Introduction: Your First Real-World AI Agent!
Welcome to Chapter 8! Up until now, we’ve explored the theoretical foundations, core components, and setup of OpenAI’s open-sourced Agents SDK. We’ve discussed what makes an AI agent “agentic” and how to define its tools and persona. Now, it’s time to put all that knowledge into practice by building a fully functional, albeit simplified, customer support agent. This chapter marks a significant milestone: your first real-world project!
In this exciting chapter, you’ll learn to design, implement, and test a multi-agent customer support system. We’ll break down the complex task of handling customer queries into manageable steps, creating specialized agents that collaborate to provide assistance. This hands-on experience will solidify your understanding of agent orchestration, tool integration, and practical prompt engineering, giving you the confidence to tackle more advanced AI agent projects.
Before we dive in, make sure you’re comfortable with:
- The basics of the OpenAI Agents SDK (from Chapter 4).
- Defining agents and their system_prompt (from Chapter 5).
- Creating and integrating custom tools (from Chapter 6).
- The concept of multi-agent workflows (from Chapter 7).
Ready to build something awesome? Let’s get started!
Core Concepts: Designing Our Customer Support Workflow
Building a robust customer support agent isn’t about creating one monolithic AI that knows everything. Instead, it’s about designing a team of specialized agents that can collaborate efficiently. This modular approach makes our system more scalable, maintainable, and effective.
The Multi-Agent Customer Support Architecture
Imagine a human customer support team: you have a front-line agent who triages issues, a specialist for technical problems, another for billing, and a supervisor for escalations. We’ll mimic this structure with our AI agents. Our customer support system will consist of several agents, each with a distinct role and set of tools, working together to resolve customer queries.
Here’s a high-level overview of how our agents will interact:
- User: The customer initiating the conversation.
- Triage Agent: The first point of contact. Its job is to understand the customer’s intent and route the query to the most appropriate specialized agent.
- FAQ Agent: Handles common questions by searching a predefined knowledge base.
- Order Status Agent: Assists with queries related to order tracking, delivery, or returns by interacting with an order management system.
- Escalation Agent: If no other agent can resolve the issue, this agent facilitates a handover to a human support representative.
This pattern, where specialized agents collaborate under a routing agent, is a best practice for building complex AI systems, as highlighted in OpenAI’s practical guides for agent development.
Essential Components for Our Customer Support Agent
For each agent to perform its role, it needs specific capabilities:
- Large Language Model (LLM) Integration: Each agent will leverage an LLM (like GPT-4 or GPT-3.5 Turbo) for natural language understanding, response generation, and tool invocation. The LLM acts as the agent’s “brain.”
- Specialized Tools: These are functions that allow agents to interact with external systems or perform specific actions beyond their conversational abilities. For our project, we’ll create dummy versions of these tools:
  - search_knowledge_base(query: str): Simulates searching a database of FAQs.
  - check_order_status(order_id: str): Simulates looking up an order in an e-commerce system.
  - escalate_to_human(issue_description: str): Simulates initiating a human handover process.
- Memory: While not explicitly shown in the diagram, each agent will maintain a conversational memory to understand context over multiple turns. The Agents SDK handles this by default within an AgentGroup or individual Agent instances.
- Orchestration Logic: The openai-agents-python SDK provides mechanisms, such as AgentGroup or direct agent calls, to manage the flow of conversation and delegate tasks between agents.
By combining these elements, we can build an intelligent and responsive customer support system.
Step-by-Step Implementation: Bringing Our Agent to Life
Let’s roll up our sleeves and start coding! We’ll begin by setting up our project, defining our tools, and then creating each specialized agent.
Step 1: Project Setup and Dependencies
First, create a new directory for our project, navigate into it, and set up a virtual environment.
mkdir customer_support_agent
cd customer_support_agent
python -m venv venv
# On Windows:
# .\venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate
Now, install the necessary packages. As of 2026-02-08, we’ll use the openai-agents-python SDK. Always check PyPI for the absolute latest stable version, as package versions can update rapidly. For this guide, we’ll use a representative version.
pip install openai-agents-python==0.2.0 openai==1.12.0 python-dotenv==1.0.1 # (Example versions; check PyPI for actual latest stable releases)
- openai-agents-python: This is the core framework for building our agents.
- openai: The official OpenAI Python client, used by the agents SDK to interact with OpenAI’s models. We specify a version to ensure compatibility, though the agents SDK often handles this dependency.
- python-dotenv: For loading environment variables.
Next, we need to set our OpenAI API key. It’s best practice to load this from an environment variable to keep it out of your code. Create a .env file in your project root and add your key:
# .env
OPENAI_API_KEY="sk-YOUR_OPENAI_API_KEY_HERE"
Replace "sk-YOUR_OPENAI_API_KEY_HERE" with your actual OpenAI API key.
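The chapter's code uses python-dotenv, but if you'd rather avoid the extra dependency, a minimal `.env` loader can be sketched with only the standard library. This is a simplified sketch that handles plain `KEY=VALUE` lines, not a full replacement for python-dotenv (which also supports multiline values, interpolation, and more):

```python
# Minimal .env loader using only the standard library.
# A sketch for readers who want to skip python-dotenv; the chapter's
# code itself calls load_dotenv() from that package instead.
import os

def load_env_file(path: str = ".env") -> None:
    """Parse simple KEY=VALUE lines from a .env file into os.environ."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and lines without '='
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Strip optional surrounding quotes; don't clobber existing vars
            os.environ.setdefault(key.strip(), value.strip().strip('"').strip("'"))
```

Existing environment variables are left untouched (`setdefault`), which mirrors python-dotenv's default behavior of not overriding the shell environment.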
Step 2: Defining Our Customer Support Tools
Let’s create the dummy tools our agents will use. These functions will simulate interactions with external systems. Create a file named tools.py in your project directory:
# tools.py
def search_knowledge_base(query: str) -> str:
"""
Searches a predefined knowledge base for answers to common questions.
Args:
query: The question to search for.
Returns:
A relevant answer or a message indicating no answer was found.
"""
print(f"\n--- Tool Call: Searching knowledge base for '{query}' ---")
faq_data = {
"shipping cost": "Standard shipping costs $5.99. Express shipping is $12.99.",
"return policy": "You can return items within 30 days of purchase with the original receipt. Some exclusions apply, please check our website for details.",
"contact support": "You can reach our human support team by calling 1-800-555-0199 or emailing support@example.com.",
"order tracking": "Please provide your order ID to track your order. You can use the 'check_order_status' tool for this."
}
# Simple keyword matching for demonstration
for keyword, answer in faq_data.items():
if keyword in query.lower():
print(f"--- Tool Result: Found answer for '{keyword}' ---")
return answer
print("--- Tool Result: No direct answer found in knowledge base. ---")
return "I couldn't find a direct answer in the knowledge base. Would you like me to try something else or escalate to a human?"
def check_order_status(order_id: str) -> str:
"""
Checks the status of a customer's order using a simulated order management system.
Args:
order_id: The unique identifier for the order.
Returns:
The current status of the order.
"""
print(f"\n--- Tool Call: Checking order status for ID '{order_id}' ---")
# Simulate an external API call to an order system
mock_orders = {
"ORD12345": {"status": "Shipped", "delivery_date": "2026-03-01"},
"ORD67890": {"status": "Processing", "delivery_date": "N/A"},
"ORDABCDE": {"status": "Delivered", "delivery_date": "2026-02-05"}
}
if order_id in mock_orders:
status_info = mock_orders[order_id]
print(f"--- Tool Result: Order '{order_id}' status: {status_info['status']} ---")
return f"Order {order_id} is currently '{status_info['status']}'. Estimated delivery: {status_info['delivery_date']}."
print(f"--- Tool Result: Order '{order_id}' not found. ---")
return f"I couldn't find an order with ID {order_id}. Please double-check the ID and try again."
def escalate_to_human(issue_description: str) -> str:
"""
Initiates a handover to a human support agent.
Args:
issue_description: A summary of the customer's issue.
Returns:
A confirmation message and next steps for the customer.
"""
print(f"\n--- Tool Call: Escalating to human support with issue: '{issue_description}' ---")
# In a real system, this would trigger a ticket creation, queue addition, or direct transfer.
print("--- Tool Result: Escalation initiated. ---")
return (f"I'm sorry I couldn't resolve your issue. I've escalated your request to a human support agent "
f"with the description: '{issue_description}'. Please wait while we connect you or expect an email within 24 hours.")
- Explanation: Each function simulates a crucial action. Notice the print statements within each tool. These are very important for debugging! They let you see exactly when and with what arguments your tools are being called by the agent, helping you understand its decision-making process. The type hints on each function signature improve code readability and maintainability.
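The substring match in `search_knowledge_base` will miss paraphrases like “returns policy”. As an optional refinement (not part of the chapter's tools.py), the standard library's `difflib` can tolerate near-miss queries; the FAQ keys below mirror the ones defined above:

```python
# Optional, more forgiving knowledge-base lookup using difflib.
# A sketch of one way to harden the chapter's keyword matching;
# FAQ_DATA is an abbreviated copy of the faq_data dictionary above.
import difflib

FAQ_DATA = {
    "shipping cost": "Standard shipping costs $5.99. Express shipping is $12.99.",
    "return policy": "You can return items within 30 days of purchase with the original receipt.",
}

def fuzzy_search_knowledge_base(query: str, cutoff: float = 0.6) -> str:
    """Match a query against FAQ keywords, tolerating near-misses."""
    q = query.lower()
    # Exact substring match first, mirroring the chapter's implementation
    for keyword, answer in FAQ_DATA.items():
        if keyword in q:
            return answer
    # Fall back to fuzzy matching against the keyword set
    match = difflib.get_close_matches(q, FAQ_DATA.keys(), n=1, cutoff=cutoff)
    if match:
        return FAQ_DATA[match[0]]
    return "I couldn't find a direct answer in the knowledge base."
```

The `cutoff` parameter trades recall against false matches; 0.6 is `difflib`'s default and a reasonable starting point for short keyword keys like these.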
Step 3: Creating Our Specialized Agents
Now, let’s define our agents using the openai-agents-python SDK. We’ll put this code in a file named main.py in your project directory.
# main.py
import os
from dotenv import load_dotenv
from openai_agents import Agent, AgentGroup, tool_from_function
from tools import search_knowledge_base, check_order_status, escalate_to_human
# Load environment variables (like OPENAI_API_KEY) from .env file
load_dotenv()
# --- 1. Define Tools for the Agents ---
# We convert our Python functions into tools the agents can understand.
# The 'description' is crucial as it tells the LLM when to use the tool.
kb_search_tool = tool_from_function(
func=search_knowledge_base,
description="Search the knowledge base for answers to common customer questions like shipping costs, return policies, or general information."
)
order_status_tool = tool_from_function(
func=check_order_status,
description="Check the current status of a customer's order. Requires an 'order_id' (e.g., ORD12345)."
)
human_escalation_tool = tool_from_function(
func=escalate_to_human,
description="Escalate the customer's issue to a human support agent when the AI cannot resolve it. Provide a concise 'issue_description'."
)
# --- 2. Create Individual Agents ---
# Triage Agent: The first point of contact, routes requests.
triage_agent = Agent(
id="TriageAgent",
system_prompt=(
"You are the initial customer support triage agent. Your primary role is to understand the customer's "
"query and determine which specialized agent or tool is best suited to help. "
"If the query is about an order, route to the Order Status Agent. "
"If it's a general question, route to the FAQ Agent. "
"If you cannot determine the intent or resolve the issue, you must escalate to a human using the 'escalate_to_human' tool."
),
tools=[kb_search_tool, order_status_tool, human_escalation_tool], # Triage agent needs access to all tools for routing decisions
model="gpt-4-turbo" # Or "gpt-3.5-turbo" for lower cost/faster responses
)
# FAQ Agent: Handles general questions using the knowledge base.
faq_agent = Agent(
id="FAQAgent",
system_prompt=(
"You are a helpful FAQ agent for customer support. Your goal is to answer common questions by "
"using the 'search_knowledge_base' tool. If the knowledge base doesn't have a satisfactory answer, "
"clearly state that you couldn't find an answer and suggest escalating to human support using the 'escalate_to_human' tool. "
"Always try to provide a concise answer first from the knowledge base before escalating."
),
tools=[kb_search_tool, human_escalation_tool],
model="gpt-4-turbo"
)
# Order Status Agent: Handles order-related inquiries.
order_agent = Agent(
id="OrderAgent",
system_prompt=(
"You are an Order Status agent. Your task is to assist customers with their order-related queries. "
"You must use the 'check_order_status' tool to retrieve order information. "
"Always ask the customer for their exact order ID if it's not provided. "
"If you cannot find the order or resolve the issue, suggest escalating to human support using the 'escalate_to_human' tool."
),
tools=[order_status_tool, human_escalation_tool],
model="gpt-4-turbo"
)
# Escalation Agent: Final fallback to human support.
escalation_agent = Agent(
id="EscalationAgent",
system_prompt=(
"You are the final escalation agent. Your sole purpose is to facilitate the handover to a human support representative. "
"You must use the 'escalate_to_human' tool with a clear summary of the customer's issue. "
"Do not attempt to answer questions yourself. Confirm the escalation has been initiated."
),
tools=[human_escalation_tool],
model="gpt-4-turbo"
)
# --- 3. Orchestrate Agents with AgentGroup ---
# An AgentGroup allows agents to converse and delegate tasks.
customer_service_group = AgentGroup(
agents=[triage_agent, faq_agent, order_agent, escalation_agent],
routing_agent=triage_agent, # The TriageAgent will decide who handles the initial query
system_prompt=(
"You are a customer service team. Work together to assist the customer. "
"The TriageAgent will start by understanding the request. "
"The FAQAgent handles general questions. "
"The OrderAgent handles order status inquiries. "
"The EscalationAgent is the last resort for human handover. "
"Ensure all customer queries are addressed or escalated appropriately."
),
max_iterations=10 # Prevent infinite loops in complex conversations
)
# --- 4. Simulate a Conversation ---
print("Customer Service Agent Group is ready. Type 'quit' to exit.\n")
while True:
user_input = input("You: ")
if user_input.lower() == 'quit':
break
# The AgentGroup manages the conversation flow.
# The 'routing_agent' (TriageAgent) will receive the initial message.
response = customer_service_group.chat(user_input)
# The 'response' object will contain the final message from the agent that resolved the query.
# It might also contain details about tool calls if configured to return them.
print(f"Agent: {response.response}\n")
- Explanation:
  - load_dotenv(): This line ensures your OPENAI_API_KEY is loaded from the .env file.
  - tool_from_function: This helper function from openai_agents converts our regular Python functions (search_knowledge_base, etc.) into a format the LLM can understand and invoke. The description is absolutely critical – it’s how the LLM decides when to use a particular tool. Spend time making these descriptions clear and precise.
  - Agent(...): We instantiate four Agent objects.
    - id: A unique identifier for the agent.
    - system_prompt: This is the agent’s core instruction. It defines its role, responsibilities, and how it should interact. Crafting clear and concise system prompts is an art!
    - tools: A list of tool_from_function objects that this specific agent has access to. Notice how each agent only gets the tools relevant to its role, except for the TriageAgent, which needs to know about all tools to route effectively.
    - model: Specifies the OpenAI model to use. gpt-4-turbo is generally more capable for complex reasoning, while gpt-3.5-turbo can be faster and cheaper for simpler tasks.
  - AgentGroup(...): This is where the magic of multi-agent orchestration happens.
    - agents: A list of all agents participating in this group.
    - routing_agent: The agent that will initially receive messages and decide which other agent (or itself) should respond or take action. This is crucial for our triage system.
    - system_prompt: A high-level instruction for the entire group, guiding their collaborative behavior.
    - max_iterations: A safeguard to prevent agents from getting stuck in a loop.
  - customer_service_group.chat(user_input): This method initiates the conversation within the agent group. The user_input is first sent to the routing_agent, which then orchestrates the response.
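If the routing idea feels abstract, it can help to see it stripped of the SDK. The sketch below is plain Python with a keyword heuristic standing in for the TriageAgent's LLM-based intent classification; the handler functions are illustrative, not SDK APIs:

```python
# Conceptual sketch of the triage routing that the AgentGroup performs.
# In the real system an LLM classifies intent; here a crude keyword
# heuristic stands in for it. Handler names are hypothetical.

def handle_faq(query: str) -> str:
    return f"[FAQAgent] Looking up: {query}"

def handle_order(query: str) -> str:
    return f"[OrderAgent] Checking order mentioned in: {query}"

def handle_escalation(query: str) -> str:
    return f"[EscalationAgent] Escalating: {query}"

def route_query(query: str) -> str:
    """Stand-in for the TriageAgent: classify intent, then dispatch."""
    q = query.lower()
    if "order" in q:
        return handle_order(query)
    if any(k in q for k in ("shipping", "return", "policy", "contact")):
        return handle_faq(query)
    # Fall through to escalation when no intent matches
    return handle_escalation(query)
```

The real system replaces the `if` chain with an LLM reading tool and agent descriptions, which is exactly why those descriptions must be precise: they are the routing logic.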
Step 4: Run Your Agent!
Save both tools.py and main.py in your customer_support_agent directory. Make sure your .env file is also there with your API key.
Now, activate your virtual environment (if not already active) and run your main.py script:
python main.py
You should see output similar to this as you interact:
Customer Service Agent Group is ready. Type 'quit' to exit.
You: What is your return policy?
--- Tool Call: Searching knowledge base for 'return policy' ---
--- Tool Result: Found answer for 'return policy' ---
Agent: You can return items within 30 days of purchase with the original receipt. Some exclusions apply, please check our website for details.
You: How much does shipping cost?
--- Tool Call: Searching knowledge base for 'shipping cost' ---
--- Tool Result: Found answer for 'shipping cost' ---
Agent: Standard shipping costs $5.99. Express shipping is $12.99.
You: What is the status of order ORD12345?
--- Tool Call: Checking order status for ID 'ORD12345' ---
--- Tool Result: Order 'ORD12345' status: Shipped ---
Agent: Order ORD12345 is currently 'Shipped'. Estimated delivery: 2026-03-01.
You: My issue is not resolved. I need to speak to someone.
--- Tool Call: Escalating to human support with issue: 'My issue is not resolved. I need to speak to someone.' ---
--- Tool Result: Escalation initiated. ---
Agent: I'm sorry I couldn't resolve your issue. I've escalated your request to a human support agent with the description: 'My issue is not resolved. I need to speak to someone.'. Please wait while we connect you or expect an email within 24 hours.
You: quit
Congratulations! You’ve just built and interacted with a multi-agent customer support system using OpenAI’s Agents SDK.
Mini-Challenge: Enhance the Knowledge Base
You’ve seen how easy it is to add new capabilities. Now, it’s your turn to expand our agent’s knowledge!
Challenge: Add a new entry to the faq_data dictionary in your tools.py file. This new entry should answer a question about “payment methods” (e.g., “We accept Visa, MasterCard, American Express, and PayPal.”).
Hint:
- Open tools.py.
- Locate the faq_data dictionary within the search_knowledge_base function.
- Add a new key-value pair for “payment methods” (as the key) and its corresponding answer (as the value).
- Save the file and re-run main.py. Test your agent by asking about payment methods.
What to Observe/Learn:
- How quickly an agent’s capabilities can be extended by simply updating a tool’s data.
- The agent’s ability to seamlessly integrate new information without needing changes to its system_prompt (unless the type of query changes significantly).
- The power of well-defined tools and prompts to guide the LLM’s behavior.
Common Pitfalls & Troubleshooting
Even with a well-designed system, you might encounter issues. Here are a few common pitfalls and how to approach them:
- Agent Not Using the Correct Tool:
  - Problem: The agent hallucinates an answer or tries to answer a question that clearly requires a tool, but doesn’t invoke it.
  - Solution: Review the description attribute of your tool_from_function carefully. Is it clear and explicit about when the tool should be used? Also, check the agent’s system_prompt. Does it sufficiently instruct the agent to use tools when appropriate? Sometimes, adding a phrase like “Always use the search_knowledge_base tool for general inquiries” can help.
- Incorrect Tool Arguments:
  - Problem: The agent calls a tool but provides incorrect or missing arguments (e.g., calling check_order_status without an order_id).
  - Solution: The description of the tool should clearly state its required arguments. For example, “Requires an ‘order_id’ (e.g., ORD12345).” Also, ensure the agent’s system_prompt encourages it to ask for missing information from the user before attempting to call a tool.
- Agents Getting Stuck in Loops or Misrouting:
  - Problem: Agents keep passing control back and forth, or the routing_agent repeatedly sends a query to the wrong specialist.
  - Solution:
    - Refine System Prompts: Ensure each agent’s system_prompt is highly specific about its role and when it should transfer control or escalate.
    - Review the AgentGroup system_prompt: The group’s prompt also guides overall behavior.
    - max_iterations: Increase this if your conversations are legitimately longer, but be wary of infinite loops. If you hit max_iterations, it’s often a sign of unclear routing logic.
    - Debugging Prints: The print statements within your tools, and verbose logging from the openai-agents-python SDK (if enabled), are invaluable for tracing the flow of control and understanding agent decisions.
Remember, prompt engineering and tool descriptions are iterative processes. Don’t be afraid to experiment and refine them based on your observations.
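A practical defence against malformed tool arguments is to validate inside the tool itself, so the agent receives a corrective message it can relay to the user rather than raising an exception. Here's a sketch applied to the order lookup (the `ORD` + five characters format is this chapter's mock convention, not a real standard):

```python
# Defensive argument validation inside a tool. Returning a helpful
# message lets the LLM recover gracefully instead of crashing the run.
import re

# Matches this chapter's mock convention: 'ORD' plus five characters
ORDER_ID_PATTERN = re.compile(r"^ORD[A-Z0-9]{5}$")

def check_order_status_validated(order_id: str) -> str:
    """Validate the order ID format before any lookup is attempted."""
    order_id = order_id.strip().upper()
    if not ORDER_ID_PATTERN.match(order_id):
        # A message the agent can pass straight back to the customer
        return ("That doesn't look like a valid order ID. Order IDs start "
                "with 'ORD' followed by five characters, e.g. ORD12345.")
    # In the real tool, the mock_orders lookup would happen here.
    return f"Looking up order {order_id}..."
```

Pairing in-tool validation like this with a clear tool description gives you two layers of defence: the description steers the LLM toward correct arguments, and the validation catches whatever slips through.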
Summary
Phew! You’ve just completed your first hands-on project with the OpenAI Agents SDK, building a functional customer support agent. Let’s recap what you’ve achieved:
- Designed a Multi-Agent Architecture: You learned how to break down a complex task into specialized roles for different agents, enhancing modularity and efficiency.
- Implemented Custom Tools: You created and integrated dummy tools (search_knowledge_base, check_order_status, escalate_to_human) that allow agents to interact with external systems.
- Crafted Specialized Agents: You defined distinct TriageAgent, FAQAgent, OrderAgent, and EscalationAgent instances with specific system_prompts and tool access.
- Orchestrated with AgentGroup: You used the AgentGroup to manage the collaborative workflow between your agents, enabling intelligent routing and conversation flow.
- Gained Practical Experience: You successfully ran and interacted with your multi-agent system, seeing theory come to life.
This project is a foundational step. You now have a solid understanding of how to build practical AI agents that can perform real-world tasks. In the next chapters, we’ll explore more advanced topics, including integrating with real-world APIs, adding persistence for memory, and deploying your agents.
Keep experimenting with your customer support agent, and think about what other specialized agents or tools you could add!
References
- OpenAI Agents SDK for Python GitHub Repository
- OpenAI: A practical guide to building agents (PDF)
- OpenAI API Documentation (Models)
- Python dotenv Documentation