
Designing a Gemini-Powered Self-Correcting AI System

Explore the framework for a self-correcting AI system using Gemini with semantic routing and task orchestration.

Overview of the AI Orchestration Pipeline

In this tutorial, we explore how to design and run a full agentic AI orchestration pipeline powered by semantic routing, symbolic guardrails, and self-correction loops using Gemini.

Structuring Agents and Dispatching Tasks

We walk through how we structure agents, dispatch tasks, enforce constraints, and refine outputs using a clean, modular architecture. As we progress through each snippet, we see how the system intelligently chooses the right agent, validates its output, and improves itself through iterative reflection.

Setting Up the Core Environment

import os
import json
import time
import typing
from dataclasses import dataclass, asdict, field
from google import genai
from google.genai import types
 
API_KEY = os.environ.get("GEMINI_API_KEY", "API Key")
client = genai.Client(api_key=API_KEY)
 
@dataclass
class AgentMessage:
    source: str
    target: str
    content: str
    metadata: dict
    # default_factory gives each message its own creation time;
    # a plain time.time() default would be frozen at class-definition time
    timestamp: float = field(default_factory=time.time)

We set up our core environment by importing the essential libraries, reading the API key from the GEMINI_API_KEY environment variable, and initializing the Gemini client. We also define the AgentMessage dataclass, which acts as the shared communication format between agents.
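To see the message format in action, here is a quick self-contained sketch (the dataclass is restated so the snippet runs on its own; the field values are illustrative):

```python
import time
from dataclasses import dataclass, asdict, field

@dataclass
class AgentMessage:
    source: str
    target: str
    content: str
    metadata: dict
    timestamp: float = field(default_factory=time.time)

# A router-to-agent hand-off, serialized for logging
msg = AgentMessage(
    source="router",
    target="analyst_agent",
    content="Compare the GDP of France and Germany in 2023.",
    metadata={"constraint": "json_only"},
)
record = asdict(msg)  # dataclasses serialize cleanly to plain dicts
print(record["target"])  # analyst_agent
```

Because the message is a dataclass, asdict gives a JSON-ready dict for free, which is handy for audit logs of inter-agent traffic.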

Implementing the Cognitive Engine

class CognitiveEngine:
    @staticmethod
    def generate(prompt: str, system_instruction: str, json_mode: bool = False) -> str:
        config = types.GenerateContentConfig(
            system_instruction=system_instruction,
            temperature=0.1,
            response_mime_type="application/json" if json_mode else "text/plain"
        )
        try:
            response = client.models.generate_content(
                model="gemini-2.0-flash",
                contents=prompt,
                config=config
            )
            return response.text
        except Exception as e:
            raise ConnectionError(f"Gemini API Error: {e}")

We build the cognitive layer on top of Gemini, wrapping every model call in a single generate method that applies the caller's system instruction and switches between plain-text and JSON output depending on the json_mode flag.

Semantic Routing: Analyzing Queries

class SemanticRouter:
   def __init__(self, agents_registry: dict):
       self.registry = agents_registry
 
   def route(self, user_query: str) -> str:
       prompt = f"""
       You are a Master Dispatcher. Analyze the user request and map it to the ONE best agent.
       AVAILABLE AGENTS:
       {json.dumps(self.registry, indent=2)}
       USER REQUEST: "{user_query}"
       Return ONLY a JSON object: {{"selected_agent": "agent_name", "reasoning": "brief reason"}}
       """
       response_text = CognitiveEngine.generate(prompt, "You are a routing system.", json_mode=True)
        try:
            decision = json.loads(response_text)
            print(f"   [Router] Selected: {decision['selected_agent']} (Reason: {decision['reasoning']})")
            return decision['selected_agent']
        except (json.JSONDecodeError, KeyError):
            # Fall back to a default agent if the routing response is malformed
            return "general_agent"

Here we implement the semantic router, which analyzes queries and selects the most suitable agent.
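The registry the router receives is just a dict of agent names and descriptions, and the routing decision is plain JSON, so the parsing-and-fallback logic can be exercised without calling the API. A minimal sketch (agent names and descriptions are illustrative; the article does not fix a specific registry schema):

```python
import json

# A registry like the one the router serializes into its prompt
agents_registry = {
    "analyst_agent": "Handles data comparison, statistics, and factual analysis.",
    "creative_agent": "Handles storytelling, slogans, and open-ended writing.",
    "coder_agent": "Handles programming tasks and code generation.",
}

def parse_decision(response_text: str) -> str:
    """Mirror of the router's parsing logic, including its fallback."""
    try:
        decision = json.loads(response_text)
        return decision["selected_agent"]
    except (json.JSONDecodeError, KeyError):
        return "general_agent"

# What a well-formed routing decision from the model looks like
good = '{"selected_agent": "analyst_agent", "reasoning": "GDP comparison is a data task"}'
print(parse_decision(good))              # analyst_agent
print(parse_decision("not json at all"))  # general_agent (fallback)
```

The fallback matters: a malformed model response degrades gracefully to a general-purpose agent instead of crashing the pipeline.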

Crafting Worker Agents

class Agent:
   def __init__(self, name: str, instruction: str):
       self.name = name
       self.instruction = instruction
 
   def execute(self, message: AgentMessage) -> str:
       return CognitiveEngine.generate(
           prompt=f"Input: {message.content}",
           system_instruction=self.instruction
       )

We construct the worker agents, each defined by a name and a system instruction that fixes its role as analyst, creative, or coder. The central orchestrator holds these agents in a registry and dispatches AgentMessage objects to whichever agent the router selects.
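The Orchestrator class itself is not shown in the snippet above. The sketch below shows one way it could wire the pieces together; to keep it runnable without the API, the Gemini call is swapped for an injectable engine function, and the class names and registry entries are assumptions for illustration:

```python
import time
from dataclasses import dataclass, field

@dataclass
class AgentMessage:
    source: str
    target: str
    content: str
    metadata: dict
    timestamp: float = field(default_factory=time.time)

class Agent:
    def __init__(self, name, instruction, engine):
        self.name = name
        self.instruction = instruction
        self.engine = engine  # injected stand-in for CognitiveEngine.generate

    def execute(self, message):
        return self.engine(f"Input: {message.content}", self.instruction)

class Orchestrator:
    def __init__(self, engine):
        # Registry of worker agents keyed by the names the router returns
        self.agents = {
            "analyst_agent": Agent("analyst_agent", "You analyze data.", engine),
            "coder_agent": Agent("coder_agent", "You write code.", engine),
        }

    def run_task(self, query, agent_name):
        msg = AgentMessage("user", agent_name, query, {})
        return self.agents[agent_name].execute(msg)

# A canned engine so the flow is visible without an API key
fake_engine = lambda prompt, instruction: f"[{instruction}] {prompt}"
orch = Orchestrator(fake_engine)
print(orch.run_task("Compare GDP figures.", "analyst_agent"))
# [You analyze data.] Input: Compare GDP figures.
```

Injecting the engine this way also makes the orchestrator unit-testable: the real pipeline passes CognitiveEngine.generate, tests pass a canned function.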

Validating Constraints and Self-Correction

def validate_constraint(self, content: str, constraint_type: str) -> tuple[bool, str]:
    if constraint_type == "json_only":
        try:
            json.loads(content)
            return True, "Valid JSON"
        except json.JSONDecodeError:
            return False, "Output was not valid JSON."
    if constraint_type == "no_markdown":
        if "```" in content:
            return False, "Output contains Markdown code blocks, which are forbidden."
        return True, "Valid Text"
    return True, "Pass"

We implement symbolic guardrails and a self-correction loop to enforce constraints, allowing our agents to fix their own mistakes when outputs violate requirements.
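The correction loop pairs the validator with a retry: when an output fails a constraint, the failure reason is appended to the prompt and the agent tries again. A runnable sketch with a canned generator standing in for the model (self_correct and max_retries are illustrative names; in the article this loop lives inside the orchestrator):

```python
import json

def validate_constraint(content, constraint_type):
    """Standalone version of the json_only guardrail from the article."""
    if constraint_type == "json_only":
        try:
            json.loads(content)
            return True, "Valid JSON"
        except json.JSONDecodeError:
            return False, "Output was not valid JSON."
    return True, "Pass"

def self_correct(generate, prompt, constraint, max_retries=2):
    """Regenerate until the constraint passes, feeding feedback back in."""
    feedback = ""
    for attempt in range(max_retries + 1):
        output = generate(prompt + feedback)
        ok, reason = validate_constraint(output, constraint)
        if ok:
            return output, attempt
        feedback = f"\nPrevious attempt failed: {reason} Fix it."
    return output, attempt  # best effort after exhausting retries

# Canned generator: fails once, then returns valid JSON
attempts = iter(["not json", '{"gdp_france": 3.03, "gdp_germany": 4.46}'])
generate = lambda prompt: next(attempts)

output, retries = self_correct(generate, "Compare GDP.", "json_only")
print(retries)  # 1 -> one correction round was needed
```

The key design point is that the validator's failure message becomes part of the next prompt, so the agent is told what to fix rather than blindly retried.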

Execution of Scenarios

if __name__ == "__main__":
   orchestrator = Orchestrator()
   orchestrator.run_task(
       "Compare the GDP of France and Germany in 2023.",
       constraint="json_only"
   )
   orchestrator.run_task(
       "Write a Python function for Fibonacci numbers.",
       constraint="no_markdown"
   )

We execute two complete scenarios, showcasing routing, agent execution, and constraint validation in action. We can observe the reflexive behavior of our agents when tasked with specific constraints.

Conclusion

In summary, we've witnessed how routing, worker agents, guardrails, and self-correction collaborate to create a reliable and intelligent agentic system. This architecture is easily expandable with new agents, richer constraints, or more advanced reasoning strategies.
