Build Human-Like, Reliable AI Agents with Parlant's Agentic Design
What agentic design means
Agentic design is about creating AI systems that act autonomously within clear boundaries. Unlike traditional software that maps inputs to deterministic outputs, agentic systems depend on probabilistic models. Designers must therefore specify desirable behaviors and constraints, trusting the model to interpret and fill in the details in context.
Variability in AI responses
Probabilistic models intentionally produce varied but contextually appropriate replies. For example, a user asking for help resetting a password may receive a different yet equally suitable response each time. That variability can make interactions feel natural and human-like, but it also requires carefully crafted guidelines and safeguards to maintain safety and consistency.
Why clear instructions matter
Language models interpret high-level instructions rather than executing code. Vague guidance can lead to unexpected or unsafe outcomes. Be concrete and action-oriented when writing guidelines so the agent’s behavior aligns with policy and user expectations.
Examples of unclear and clearer guidelines
Unclear guideline:
await agent.create_guideline(
    condition="User expresses frustration",
    action="Try to make them happy"
)
This kind of guidance is ambiguous and may lead the agent to improvise inappropriately.
Clear, actionable guideline:
await agent.create_guideline(
    condition="User is upset by a delayed delivery",
    action="Acknowledge the delay, apologize, and provide a status update"
)
Building compliance: layers of control
You can’t fully control an LLM, but you can shape and constrain its behavior using layers of control.
Layer 1: Guidelines
Use guidelines to define normal behavior and expected responses.
await agent.create_guideline(
    condition="Customer asks about topics outside your scope",
    action="Politely decline and redirect to what you can help with"
)
Layer 2: Canned responses
For high-risk scenarios, provide pre-approved canned responses so the agent never improvises on sensitive topics.
await agent.create_canned_response(
    template="I can help with account questions, but for policy details I'll connect you to a specialist."
)
This layered approach reduces risk and maintains consistency in critical moments.
Tool calling and the parameter guessing problem
When agents act through APIs or functions, they often must infer missing details. A user request like “Schedule a meeting with Sarah for next week” leaves open which Sarah is meant, what day and time to choose, and which calendar to use. This is the Parameter Guessing Problem.
Design tools with clear descriptions, parameter hints, and consistent parameter types. Intuitive tool names and contextual examples help agents choose the right tool and correctly populate inputs, improving accuracy and reducing errors.
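A sketch of what this looks like in practice, using a hypothetical scheduling tool (the function name, parameters, and defaults are illustrative, not from any particular SDK): every detail the user might leave out becomes an explicit, typed parameter, so the agent must resolve it or ask rather than guess.

```python
from datetime import datetime


def schedule_meeting(
    attendee_email: str,            # exact identifier, not a bare first name like "Sarah"
    start_time: datetime,           # a concrete datetime, not "next week"
    duration_minutes: int = 30,     # sensible default for a commonly omitted detail
    calendar_id: str = "primary",   # explicit default rather than a silent guess
) -> dict:
    """Schedule a meeting on the given calendar.

    Each ambiguous detail from a request like "Schedule a meeting with
    Sarah for next week" is forced into a named, typed parameter with a
    description, giving the agent clear hints about what to supply.
    """
    return {
        "attendee": attendee_email,
        "start": start_time.isoformat(),
        "duration_minutes": duration_minutes,
        "calendar": calendar_id,
    }
```

Descriptive parameter names, type hints, and documented defaults act as the "parameter hints" mentioned above: they tell the model what counts as a valid value before it ever calls the tool.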
Agent design is iterative
Agent behavior evolves through observation, evaluation, and refinement. Start with common “happy path” scenarios, deploy in a controlled environment, and monitor for unexpected replies or policy breaches. Introduce targeted rules to correct recurrent issues—for instance, stop repeated upsell attempts if users consistently decline. Over time, incremental tuning turns a prototype into a reliable conversational system.
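The upsell fix mentioned above could be expressed as a targeted guideline using the same `create_guideline` call shown earlier; the condition and action wording here is illustrative, not prescribed by Parlant.

```python
# Illustrative corrective rule: stop re-pitching once the customer has declined.
await agent.create_guideline(
    condition="Customer has already declined an upsell offer in this conversation",
    action="Do not raise the upsell again; focus on the customer's original request"
)
```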
Writing effective guidelines
Each guideline should include a condition, a concrete action, and optionally the tools to use. Example:
await agent.create_guideline(
    condition="Customer requests a specific appointment time that's unavailable",
    action="Offer the three closest available slots as alternatives",
    tools=[get_available_slots]
)
Structured conversations: journeys
For complex, multi-step tasks such as bookings or onboarding, use structured flows called journeys. Journeys define states and transitions so the agent can guide users through processes while keeping the dialogue natural.
Example booking flow:
booking_journey = await agent.create_journey(
    title="Book Appointment",
    conditions=["Customer wants to schedule an appointment"],
    description="Guide customer through the booking process"
)

t1 = await booking_journey.initial_state.transition_to(
    chat_state="Ask what type of service they need"
)
t2 = await t1.target.transition_to(
    tool_state=check_availability_for_service
)
t3 = await t2.target.transition_to(
    chat_state="Offer available time slots"
)
Balancing flexibility and predictability
Effective agents strike a balance between being conversational and remaining predictable. Avoid overly rigid, word-for-word scripts and avoid vague prompts that produce inconsistent outputs. Instead, provide clear objectives with room for adaptive phrasing—for example, instruct the agent to explain pricing tiers clearly, emphasize value, and ask about customer needs before recommending a plan.
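The pricing example above might be encoded as a guideline like this (a sketch reusing the `create_guideline` call from earlier; the wording of the condition and action is illustrative): the objective is precise, but the phrasing is left to the model.

```python
# Illustrative: a clear objective with room for adaptive phrasing.
await agent.create_guideline(
    condition="Customer asks about pricing",
    action=(
        "Explain the pricing tiers clearly, emphasize the value of each, "
        "and ask about the customer's needs before recommending a plan"
    )
)
```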
Designing for real conversations
Conversations are non-linear: users can change topics, skip steps, or revisit earlier points. Key principles for handling real conversations include:
- Context preservation: remember user-provided details throughout the interaction.
- Progressive disclosure: present information gradually to avoid overwhelming users.
- Recovery mechanisms: gracefully handle misunderstandings with clarifying questions or gentle redirections.
Start small and iterate
Begin with core features and common scenarios; monitor behavior closely and improve based on real interactions. Use clear rules, canned responses for sensitive cases, and journeys for complex flows. Be transparent with users about what the agent can and cannot do. This iterative, layered approach produces agents that are both user-friendly and safe.