
Single agent AI systems represent the foundational building block of autonomous AI. While multi-agent systems get a lot of attention, understanding how to build robust, capable single agents is crucial for any AI engineer. Let's dive deep into the architecture, patterns, and best practices for creating truly autonomous AI agents.
A single agent AI system is an autonomous entity that can:

- Reason about a goal and break it down into steps
- Plan and execute multi-step actions
- Use tools to interact with its environment
- Maintain memory of past interactions
- Adapt its behavior based on results

Unlike simple chatbots that merely respond to inputs, autonomous agents have agency: they can plan, execute multi-step tasks, and adapt to changing situations.
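That difference is easiest to see as a loop rather than a single call. The sketch below is a toy: the "planner" is stub logic standing in for an LLM, and the tool names (`fetch`, `summarize`) are illustrative.

```python
def toy_agent(task: str, tools: dict, max_steps: int = 3) -> list:
    """Run a tiny plan-act loop and return the trace of actions taken."""
    trace = []
    for step in range(max_steps):
        # "Plan": pick the next tool based on what hasn't run yet (stub logic
        # standing in for LLM-driven planning)
        used = [tool_name for tool_name, _ in trace]
        pending = [name for name in tools if name not in used]
        if not pending:
            break  # nothing left to do: the "task" is complete
        tool_name = pending[0]
        # "Act": execute the chosen tool and record the observation
        result = tools[tool_name](task)
        trace.append((tool_name, result))
    return trace

trace = toy_agent(
    "summarize report",
    {"fetch": lambda t: f"fetched data for {t}",
     "summarize": lambda t: f"summary of {t}"},
)
# trace holds one (tool_name, result) pair per step taken
```

A chatbot would stop after one response; the loop above keeps going until its plan is exhausted, which is the essence of agency.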
The brain of your agent, typically powered by an LLM:
```python
from anthropic import Anthropic
from openai import OpenAI

class AgentReasoning:
    def __init__(self, provider: str = "anthropic"):
        if provider == "anthropic":
            self.client = Anthropic()
            self.model = "claude-3-5-sonnet-20241022"
        else:
            self.client = OpenAI()
            self.model = "gpt-4-turbo-preview"

    def reason(self, context: str, task: str) -> dict:
        """Core reasoning function."""
        prompt = f"""
        Context: {context}

        Task: {task}

        Think step by step:
        1. What is the goal?
        2. What information do I have?
        3. What information do I need?
        4. What actions should I take?
        5. What is the expected outcome?

        Provide your reasoning and action plan.
        """

        # Note: messages.create is the Anthropic API; the OpenAI path
        # would use client.chat.completions.create instead.
        response = self.client.messages.create(
            model=self.model,
            max_tokens=2048,
            messages=[{"role": "user", "content": prompt}]
        )

        return self.parse_reasoning(response.content[0].text)

    def parse_reasoning(self, text: str) -> dict:
        """Extract structured reasoning from the response."""
        # extract_goal/analysis/actions/outcome are left to implement,
        # e.g. regex over the numbered sections or a structured-output call
        return {
            "goal": self.extract_goal(text),
            "analysis": self.extract_analysis(text),
            "actions": self.extract_actions(text),
            "expected_outcome": self.extract_outcome(text)
        }
```
Enable your agent to interact with the world:
```python
from typing import Any, Callable, Dict
import inspect

class ToolRegistry:
    def __init__(self):
        self.tools: Dict[str, Callable] = {}
        self.tool_descriptions: Dict[str, str] = {}
        self.tool_params: Dict[str, dict] = {}

    def register(self, name: str, description: str):
        """Decorator to register tools."""
        def decorator(func: Callable):
            self.tools[name] = func
            self.tool_descriptions[name] = description

            # Derive a parameter schema from the function signature
            sig = inspect.signature(func)
            self.tool_params[name] = {
                param_name: {
                    "type": getattr(param.annotation, "__name__", "any"),
                    "required": param.default is inspect.Parameter.empty
                }
                for param_name, param in sig.parameters.items()
            }

            return func
        return decorator

    def get_tool_schema(self) -> list:
        """Generate a tool schema for the LLM."""
        return [
            {
                "name": name,
                "description": desc,
                "input_schema": self.tool_params[name]
            }
            for name, desc in self.tool_descriptions.items()
        ]

    def execute_tool(self, tool_name: str, **kwargs) -> Any:
        """Execute a registered tool."""
        if tool_name not in self.tools:
            raise ValueError(f"Tool {tool_name} not found")

        try:
            return self.tools[tool_name](**kwargs)
        except Exception as e:
            return f"Error executing {tool_name}: {str(e)}"

# Example tools
tools = ToolRegistry()

@tools.register("web_search", "Search the web for information")
def web_search(query: str) -> str:
    """Search the web and return results."""
    # Implement web search
    return f"Search results for: {query}"

@tools.register("calculate", "Perform mathematical calculations")
def calculate(expression: str) -> float:
    """Evaluate mathematical expressions."""
    try:
        # Note: eval with an empty __builtins__ limits, but does not fully
        # prevent, malicious input
        return eval(expression, {"__builtins__": {}})
    except Exception:
        return "Invalid expression"

@tools.register("read_file", "Read contents of a file")
def read_file(filepath: str) -> str:
    """Read and return file contents."""
    with open(filepath, 'r') as f:
        return f.read()

@tools.register("write_file", "Write content to a file")
def write_file(filepath: str, content: str) -> str:
    """Write content to a file."""
    with open(filepath, 'w') as f:
        f.write(content)
    return f"Successfully wrote to {filepath}"
```
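A note on the `calculate` tool: `eval` with an empty `__builtins__` blocks the obvious attacks but is still not truly safe, since attribute-access tricks can escape it. A stricter sketch walks the expression's AST and permits only numeric literals and arithmetic operators:

```python
import ast
import operator

# Whitelist of allowed operators; anything else is rejected
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate arithmetic expressions by walking the AST, no eval involved."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("Disallowed expression")
    return _eval(ast.parse(expression, mode="eval").body)

safe_calculate("2 + 3 * 4")   # → 14
```

Function calls, names, and attribute access never match the whitelisted node types, so `__import__('os')` and friends raise `ValueError` instead of executing.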
The agent's ability to plan and execute multi-step tasks:
```python
from dataclasses import dataclass
from enum import Enum
from typing import Any, List, Optional

class ActionStatus(Enum):
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"
    FAILED = "failed"

@dataclass
class Action:
    tool_name: str
    parameters: dict
    status: ActionStatus = ActionStatus.PENDING
    result: Optional[Any] = None
    error: Optional[str] = None

class ActionPlanner:
    def __init__(self, reasoning_engine: AgentReasoning, tools: ToolRegistry):
        self.reasoning = reasoning_engine
        self.tools = tools

    def create_plan(self, task: str, context: str) -> List[Action]:
        """Generate an action plan for a task."""
        reasoning_result = self.reasoning.reason(context, task)

        # Convert reasoning into concrete actions
        plan = []
        for action_desc in reasoning_result["actions"]:
            action = self.parse_action(action_desc)
            if action:
                plan.append(action)

        return plan

    def parse_action(self, action_desc: str) -> Optional[Action]:
        """Parse an action description into an Action object."""
        # Use the LLM to extract a structured action
        prompt = f"""
        Parse this action description into a tool call:
        {action_desc}

        Available tools: {list(self.tools.tools.keys())}

        Return JSON with:
        {{
            "tool_name": "tool_name",
            "parameters": {{"param1": "value1"}}
        }}
        """

        # Get structured output from the LLM; the implementation depends
        # on your provider. Simplified placeholder for now:
        return Action(
            tool_name="example_tool",
            parameters={}
        )

    def execute_plan(self, plan: List[Action]) -> List[Action]:
        """Execute an action plan."""
        for action in plan:
            action.status = ActionStatus.IN_PROGRESS

            try:
                result = self.tools.execute_tool(
                    action.tool_name,
                    **action.parameters
                )
                action.result = result
                action.status = ActionStatus.COMPLETED
            except Exception as e:
                action.error = str(e)
                action.status = ActionStatus.FAILED
                break  # Stop on failure

        return plan
```
Enable your agent to maintain context:
```python
from datetime import datetime
from collections import deque
from typing import Any, List

class AgentMemory:
    def __init__(self, max_short_term: int = 10):
        self.short_term = deque(maxlen=max_short_term)
        self.long_term = []
        self.working_context = {}

    def add_to_short_term(self, item: dict):
        """Add to short-term memory."""
        item["timestamp"] = datetime.now()
        self.short_term.append(item)

    def add_to_long_term(self, item: dict):
        """Add to long-term memory."""
        item["timestamp"] = datetime.now()
        self.long_term.append(item)

    def update_context(self, key: str, value: Any):
        """Update the working context."""
        self.working_context[key] = value

    def get_context_summary(self) -> str:
        """Generate a context summary for the agent."""
        summary = []

        # Recent interactions
        if self.short_term:
            summary.append("Recent interactions:")
            for item in list(self.short_term)[-5:]:
                summary.append(f"- {item.get('type', 'interaction')}: {item.get('content', '')}")

        # Working context
        if self.working_context:
            summary.append("\nCurrent context:")
            for key, value in self.working_context.items():
                summary.append(f"- {key}: {value}")

        return "\n".join(summary)

    def search_memory(self, query: str, limit: int = 5) -> List[dict]:
        """Search through memories."""
        # Simple keyword search (in production, use vector search)
        results = []
        for memory in reversed(self.long_term):
            content = str(memory.get("content", ""))
            if query.lower() in content.lower():
                results.append(memory)
                if len(results) >= limit:
                    break
        return results
```
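The short-term store leans on `deque(maxlen=...)` to evict the oldest entries automatically, so no manual trimming code is needed. A quick demonstration of that behavior:

```python
from collections import deque

# A bounded deque silently drops the oldest item once maxlen is reached
short_term = deque(maxlen=3)
for i in range(5):
    short_term.append({"type": "interaction", "content": f"message {i}"})

# Only the 3 most recent items survive
recent = [m["content"] for m in short_term]
# → ['message 2', 'message 3', 'message 4']
```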
Now let's put it all together:
```python
class AutonomousAgent:
    def __init__(self, name: str, role: str, goal: str):
        self.name = name
        self.role = role
        self.goal = goal

        # Initialize components
        self.reasoning = AgentReasoning()
        self.tools = ToolRegistry()
        self.planner = ActionPlanner(self.reasoning, self.tools)
        self.memory = AgentMemory()

        # Agent state
        self.current_task = None
        self.task_history = []

    def process_task(self, task: str) -> str:
        """Main task processing loop."""
        self.current_task = task

        # Add the task to memory
        self.memory.add_to_short_term({
            "type": "task",
            "content": task
        })

        # Get context
        context = self.build_context()

        # Create an action plan
        plan = self.planner.create_plan(task, context)

        # Execute the plan
        executed_plan = self.planner.execute_plan(plan)

        # Analyze the results
        result = self.analyze_results(executed_plan)

        # Update memory
        self.memory.add_to_long_term({
            "type": "completed_task",
            "task": task,
            "plan": executed_plan,
            "result": result
        })

        self.task_history.append({
            "task": task,
            "result": result,
            "timestamp": datetime.now()
        })

        return result

    def build_context(self) -> str:
        """Build context for reasoning."""
        context_parts = [
            f"I am {self.name}, a {self.role}.",
            f"My goal is: {self.goal}",
            "\n" + self.memory.get_context_summary(),
            f"\nAvailable tools: {list(self.tools.tools.keys())}"
        ]

        if self.task_history:
            recent = self.task_history[-3:]
            context_parts.append("\nRecent tasks:")
            for item in recent:
                context_parts.append(f"- {item['task']}: {item['result']}")

        return "\n".join(context_parts)

    def analyze_results(self, plan: List[Action]) -> str:
        """Analyze execution results and generate a summary."""
        successful = [a for a in plan if a.status == ActionStatus.COMPLETED]
        failed = [a for a in plan if a.status == ActionStatus.FAILED]

        if not plan:
            return "No actions were planned"

        if failed:
            return (
                f"Task partially completed. {len(successful)}/{len(plan)} "
                f"actions succeeded. Failures: {[a.error for a in failed]}"
            )

        # Generate a summary using the LLM (assumes the Anthropic client)
        results_text = "\n".join([
            f"{a.tool_name}: {a.result}" for a in successful
        ])

        summary_prompt = f"""
        Summarize the results of these actions for the task: {self.current_task}

        Actions taken:
        {results_text}

        Provide a concise summary of what was accomplished.
        """

        response = self.reasoning.client.messages.create(
            model=self.reasoning.model,
            max_tokens=500,
            messages=[{"role": "user", "content": summary_prompt}]
        )

        return response.content[0].text

    def reflect(self) -> str:
        """Agent reflects on its performance."""
        if not self.task_history:
            return "No tasks completed yet"

        reflection_prompt = f"""
        Review these recent tasks and reflect on performance:

        {self.task_history[-5:]}

        Consider:
        1. What worked well?
        2. What could be improved?
        3. Are there patterns in successes/failures?
        4. What should I do differently next time?

        Provide insights and recommendations.
        """

        response = self.reasoning.client.messages.create(
            model=self.reasoning.model,
            max_tokens=1000,
            messages=[{"role": "user", "content": reflection_prompt}]
        )

        return response.content[0].text
```
Interleave reasoning and acting for better decision-making:
```python
class ReActAgent(AutonomousAgent):
    def react_loop(self, task: str, max_iterations: int = 5):
        """ReAct loop: Reason, Act, Observe, Repeat."""
        context = self.build_context()
        observations = []

        for i in range(max_iterations):
            # Thought: reason about the next action
            thought = self.reason_next_step(task, context, observations)

            # Action: execute the action
            action_result = self.execute_action(thought["action"])

            # Observation: record the result
            observation = {
                "iteration": i + 1,
                "thought": thought["reasoning"],
                "action": thought["action"],
                "result": action_result
            }
            observations.append(observation)

            # Check if the task is complete
            if self.is_task_complete(task, observations):
                break

        return self.synthesize_result(observations)

    def reason_next_step(self, task: str, context: str, observations: list) -> dict:
        """Reason about the next step."""
        prompt = f"""
        Task: {task}
        Context: {context}

        Previous observations:
        {observations}

        Think: What should I do next to complete this task?
        Provide your reasoning and the next action to take.
        """

        # Get reasoning from the LLM and return a structured
        # {"reasoning": ..., "action": ...} dict. This, along with
        # execute_action, is_task_complete, and synthesize_result,
        # is left to implement.
        pass
```
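One way to fill in `reason_next_step` is to ask the model for a JSON object and parse it defensively, since replies often wrap the JSON in surrounding prose. A hedged sketch, where `llm_reply` is a hard-coded stand-in for a real model response:

```python
import json

def parse_thought(llm_reply: str) -> dict:
    """Extract a {"reasoning", "action"} dict from a model reply,
    tolerating extra prose around the JSON object."""
    # Take the outermost {...} span, if any
    start, end = llm_reply.find("{"), llm_reply.rfind("}")
    if start == -1 or end == -1:
        return {"reasoning": llm_reply, "action": None}
    try:
        return json.loads(llm_reply[start:end + 1])
    except json.JSONDecodeError:
        # Fall back to treating the whole reply as free-form reasoning
        return {"reasoning": llm_reply, "action": None}

# Stand-in for response.content[0].text from the LLM call
llm_reply = (
    'Here is my plan: {"reasoning": "Need current data", '
    '"action": {"tool_name": "web_search", '
    '"parameters": {"query": "latest figures"}}}'
)
thought = parse_thought(llm_reply)
```

The fallback branches matter: in a loop that runs unattended, a malformed reply should degrade to a no-op observation rather than crash the whole iteration.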
Enable deeper reasoning with step-by-step thinking:
```python
class ChainOfThoughtAgent(AutonomousAgent):
    def solve_with_cot(self, problem: str) -> str:
        """Solve a problem using chain-of-thought."""
        cot_prompt = f"""
        Problem: {problem}

        Let's solve this step by step:

        Step 1: Understand the problem
        - What is being asked?
        - What information do I have?

        Step 2: Break down the solution
        - What are the sub-problems?
        - What's the approach?

        Step 3: Execute the solution
        - Solve each sub-problem
        - Combine results

        Step 4: Verify the answer
        - Does it make sense?
        - Did I answer the question?

        Think through each step carefully.
        """

        response = self.reasoning.client.messages.create(
            model=self.reasoning.model,
            max_tokens=2048,
            messages=[{"role": "user", "content": cot_prompt}]
        )

        return response.content[0].text
```
Build agents that can verify and correct their work:
```python
class SelfCorrectingAgent(AutonomousAgent):
    def execute_with_validation(self, task: str) -> str:
        """Execute a task with self-validation."""
        # Initial attempt
        result = self.process_task(task)

        # Validate the result
        validation = self.validate_result(task, result)

        if not validation["valid"]:
            # Attempt a correction
            corrected = self.correct_result(
                task,
                result,
                validation["issues"]
            )
            return corrected

        return result

    def validate_result(self, task: str, result: str) -> dict:
        """Validate whether the result satisfies the task requirements."""
        validation_prompt = f"""
        Task: {task}
        Result: {result}

        Validate if the result properly addresses the task:
        1. Is it complete?
        2. Is it accurate?
        3. Does it address all requirements?

        Return validation status and any issues found.
        """

        # Get validation from the LLM; must return a dict shaped like
        # {"valid": bool, "issues": [...]} for the caller above.
        pass

    def correct_result(self, task: str, result: str, issues: list) -> str:
        """Correct the result based on identified issues."""
        correction_prompt = f"""
        Original task: {task}
        Previous result: {result}
        Issues identified: {issues}

        Provide a corrected result that addresses these issues.
        """

        # Get the corrected result from the LLM (left to implement)
        pass
```
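The validate-then-correct cycle naturally becomes a loop, and capping attempts keeps a strict validator from spinning forever. A self-contained sketch with stand-in `attempt`/`validate`/`correct` callables, whose logic is illustrative rather than real LLM calls:

```python
def run_with_validation(task, attempt, validate, correct, max_attempts=3):
    """Run a task, then loop validate/correct until valid or attempts run out."""
    result = attempt(task)
    for _ in range(max_attempts):
        verdict = validate(task, result)
        if verdict["valid"]:
            return result
        # Feed the validator's issues back into the correction step
        result = correct(task, result, verdict["issues"])
    return result  # best effort after max_attempts

result = run_with_validation(
    "report distance with units",
    attempt=lambda t: "42",
    validate=lambda t, r: {"valid": "km" in r, "issues": ["missing units"]},
    correct=lambda t, r, issues: r + " km",
)
# → "42 km" after one correction pass
```

Returning the best-effort result (instead of raising) is a design choice: a partially corrected answer plus its validation issues is often more useful upstream than an exception.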
Define clear roles and capabilities:
```python
agent = AutonomousAgent(
    name="DataAnalyst",
    role="Senior Data Analyst specialized in financial data",
    goal="Analyze data and provide actionable insights"
)
```
```python
import logging

logger = logging.getLogger(__name__)

def safe_execute(self, action: Action) -> Any:
    """Execute an action with error handling."""
    try:
        result = self.tools.execute_tool(
            action.tool_name,
            **action.parameters
        )
        return result
    except Exception as e:
        # Log the error
        logger.error(f"Action failed: {action.tool_name}, Error: {e}")

        # Attempt recovery (plan_recovery is left to implement; it should
        # return an alternative Action, or None to give up)
        recovery_action = self.plan_recovery(action, e)
        if recovery_action:
            return self.safe_execute(recovery_action)

        raise
```
```python
import time

class ObservableAgent(AutonomousAgent):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.metrics = {
            "tasks_completed": 0,
            "tasks_failed": 0,
            "total_actions": 0,
            "average_task_time": 0.0
        }

    def process_task(self, task: str) -> str:
        start_time = time.time()

        try:
            result = super().process_task(task)
            self.metrics["tasks_completed"] += 1
            return result
        except Exception:
            self.metrics["tasks_failed"] += 1
            raise
        finally:
            execution_time = time.time() - start_time
            self.update_metrics(execution_time)

    def update_metrics(self, execution_time: float):
        """Fold the latest task time into the running average."""
        total = self.metrics["tasks_completed"] + self.metrics["tasks_failed"]
        if total:
            prev = self.metrics["average_task_time"]
            self.metrics["average_task_time"] = (
                prev * (total - 1) + execution_time
            ) / total

    def get_metrics(self) -> dict:
        return self.metrics
```
```python
personal_assistant = AutonomousAgent(
    name="PersonalAssistant",
    role="Personal productivity assistant",
    goal="Help manage tasks, schedule, and information"
)

# Register relevant tools
@personal_assistant.tools.register("schedule_meeting", "Schedule a meeting")
def schedule_meeting(title: str, time: str, attendees: list):
    # Integration with calendar API
    pass

@personal_assistant.tools.register("send_email", "Send an email")
def send_email(to: str, subject: str, body: str):
    # Integration with email API
    pass

result = personal_assistant.process_task(
    "Schedule a meeting with the engineering team for tomorrow at 2 PM to discuss the new feature"
)
```
```python
code_assistant = AutonomousAgent(
    name="CodeAssistant",
    role="Senior software engineer",
    goal="Help with code review, debugging, and development"
)

# Code-specific tools
@code_assistant.tools.register("analyze_code", "Analyze code for issues")
def analyze_code(filepath: str):
    # Static analysis
    pass

@code_assistant.tools.register("run_tests", "Execute test suite")
def run_tests(test_path: str):
    # Run tests
    pass
```
```python
research_assistant = AutonomousAgent(
    name="ResearchAssistant",
    role="Research analyst",
    goal="Conduct thorough research and provide comprehensive reports"
)

# Research tools
@research_assistant.tools.register("web_search", "Search the web")
def web_search(query: str):
    pass

@research_assistant.tools.register("summarize_paper", "Summarize research paper")
def summarize_paper(url: str):
    pass
```
Single agent AI systems are powerful building blocks for autonomous AI applications. By combining reasoning, planning, tool use, and memory, you can create agents that:

- Understand and decompose complex tasks
- Execute multi-step plans with real-world tools
- Remember context across interactions
- Recover from errors and correct their own output
The key to success is:

- A clearly defined role and goal
- A small, well-tested set of tools
- Robust error handling and recovery
- Observability into what the agent is doing
Start with simple agents and gradually add complexity. Test thoroughly, monitor performance, and iterate based on real-world usage.
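Concretely, "test thoroughly" starts with the deterministic parts: tools and the registry's error path can be unit-tested without any LLM in the loop. A minimal sketch against a stripped-down registry (the class and tool here are illustrative stand-ins):

```python
class MiniRegistry:
    """Tiny stand-in for the tool registry, just enough to test dispatch."""
    def __init__(self):
        self.tools = {}

    def execute_tool(self, name, **kwargs):
        if name not in self.tools:
            raise ValueError(f"Tool {name} not found")
        return self.tools[name](**kwargs)

registry = MiniRegistry()
registry.tools["echo"] = lambda text: f"echo: {text}"

# Happy path: the registered tool runs with keyword arguments
assert registry.execute_tool("echo", text="hi") == "echo: hi"

# Unknown tools should fail loudly, not silently
try:
    registry.execute_tool("missing")
    raised = False
except ValueError:
    raised = True
assert raised
```

Only once the tool layer is trustworthy does it make sense to test the LLM-driven layers, typically with recorded or mocked model responses.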
Build your first autonomous agent today. Start with a clear goal, add essential tools, and let your agent learn and grow in capability.