Mastering Advanced Multi-Agent Round-Robin Workflows with Microsoft AutoGen and Google Gemini

Discover how to build sophisticated multi-agent AI workflows using Microsoft AutoGen and Google Gemini, enabling seamless collaboration among specialized assistant agents.

Leveraging Microsoft AutoGen for Multi-Agent Workflows

Microsoft AutoGen gives developers an efficient framework for orchestrating complex multi-agent workflows with minimal coding effort. Using abstractions such as RoundRobinGroupChat and TeamTool, it makes it possible to assemble specialized assistants (a Researcher, FactChecker, Critic, Summarizer, and Editor) into a single reusable tool named "DeepDive".

Simplifying Coordination and Communication

AutoGen manages the complexities of turn-taking, termination conditions, and streaming outputs internally. This allows developers to concentrate on defining each agent's expertise and crafting system prompts without the need to manually handle callbacks or prompt chains.
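To make that hidden coordination concrete, here is a minimal, library-free sketch of the kind of round-robin loop AutoGen runs internally. The agent names and the `respond` callables are illustrative stand-ins, not AutoGen APIs:

```python
# Illustrative stand-in for AutoGen's internal round-robin loop (not AutoGen code).
from itertools import cycle

def run_round_robin(agents, task, max_messages=20, stop_text="APPROVED", stop_source="Editor"):
    """Cycle through agents until a termination condition fires."""
    transcript = [("user", task)]
    for name, respond in cycle(agents):
        if len(transcript) >= max_messages:             # MaxMessageTermination analogue
            break
        reply = respond(transcript)
        transcript.append((name, reply))
        if name == stop_source and stop_text in reply:  # TextMentionTermination analogue
            break
    return transcript

# Toy agents: each just tags the task; the Editor approves immediately.
agents = [
    ("Researcher", lambda t: f"notes on {t[0][1]}"),
    ("Editor",     lambda t: "looks good, APPROVED"),
]
log = run_round_robin(agents, "quantum computing")
print(log[-1])  # → ('Editor', 'looks good, APPROVED')
```

In the real framework, the two `break` conditions correspond to the termination conditions attached to the team; developers only supply the agents and the stop criteria.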

Setting Up the Environment

To get started, install the necessary packages: autogen-agentchat with Gemini support, the OpenAI-compatible extensions, and nest_asyncio for event-loop management:

!pip install -q autogen-agentchat[gemini] autogen-ext[openai] nest_asyncio

Importing and configuring the environment involves applying nest_asyncio and setting the Gemini API key:

import os, nest_asyncio
from getpass import getpass
 
nest_asyncio.apply()
os.environ["GEMINI_API_KEY"] = getpass("Enter your Gemini API key: ")

Initializing the Model Client

An OpenAI-compatible chat client is initialized to interact with Google’s Gemini model:

from autogen_ext.models.openai import OpenAIChatCompletionClient
 
model_client = OpenAIChatCompletionClient(
    model="gemini-1.5-flash-8b",    
    api_key=os.environ["GEMINI_API_KEY"],
    api_type="google",
)

Defining Specialized Agents

Five agents are created, each with a specific role:

from autogen_agentchat.agents import AssistantAgent
 
researcher  = AssistantAgent(name="Researcher",  system_message="Gather and summarize factual info.",             model_client=model_client)
factchecker = AssistantAgent(name="FactChecker", system_message="Verify facts and cite sources.",                 model_client=model_client)
critic      = AssistantAgent(name="Critic",      system_message="Critique clarity and logic.",                    model_client=model_client)
summarizer  = AssistantAgent(name="Summarizer",  system_message="Condense into a brief executive summary.",       model_client=model_client)
editor      = AssistantAgent(name="Editor",      system_message="Polish language and signal APPROVED when done.", model_client=model_client)

Creating the Round-Robin Team

The agents cycle through tasks until a termination condition is met:

from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination
 
max_msgs = MaxMessageTermination(max_messages=20)
text_term = TextMentionTermination(text="APPROVED", sources=["Editor"])
termination = max_msgs | text_term  # stop when either condition fires
team = RoundRobinGroupChat(
    participants=[researcher, factchecker, critic, summarizer, editor],
    termination_condition=termination,
)
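The `|` operator composes the two conditions so that whichever fires first ends the run: either the conversation reaches 20 messages, or the Editor's reply mentions "APPROVED". That OR semantics can be sketched with plain callables (illustrative only, not AutoGen's actual classes):

```python
# Illustrative OR-composition of termination checks (not AutoGen's classes).
def max_message_termination(limit):
    return lambda messages: len(messages) >= limit

def text_mention_termination(text, sources):
    return lambda messages: any(
        src in sources and text in msg for src, msg in messages
    )

def either(*conditions):
    """Analogue of `cond_a | cond_b`: stop when any condition is met."""
    return lambda messages: any(c(messages) for c in conditions)

termination = either(
    max_message_termination(20),
    text_mention_termination("APPROVED", sources=["Editor"]),
)

history = [("Researcher", "draft"), ("Editor", "APPROVED")]
print(termination(history))  # → True: the Editor mentioned APPROVED
```

Scoping the text check to the Editor matters: without `sources=["Editor"]`, any agent quoting the word "APPROVED" could end the run prematurely.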

Packaging as a TeamTool

Wrapping the team for easy invocation:

from autogen_agentchat.tools import TeamTool
 
deepdive_tool = TeamTool(team=team, name="DeepDive", description="Collaborative multi-agent deep dive")

The Host Agent

A Host agent is configured to access the DeepDive tool:

host = AssistantAgent(
    name="Host",
    model_client=model_client,
    tools=[deepdive_tool],
    system_message="You have access to a DeepDive tool for in-depth research."
)

Running the Workflow Asynchronously

An async function runs the DeepDive task on a given topic:

import asyncio
 
async def run_deepdive(topic: str):
    result = await host.run(task=f"Deep dive on: {topic}")
    print("DeepDive result:\n", result)
    await model_client.close()
 
topic = "Impacts of Model Context Protocol on Agentic AI"
loop = asyncio.get_event_loop()
loop.run_until_complete(run_deepdive(topic))
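The nest_asyncio patch is what lets run_until_complete work inside a notebook, where an event loop is already running. In a plain script, no patch is needed and the coroutine can simply be launched with asyncio.run; a minimal stdlib sketch of that pattern, with a placeholder coroutine standing in for the real host.run call (which requires a Gemini key):

```python
import asyncio

async def run_deepdive(topic: str) -> str:
    # Placeholder for the real host.run(...) call, which needs a Gemini API key.
    await asyncio.sleep(0)  # yield control, as a real network call would
    return f"Deep dive on: {topic}"

# In a plain script (no running loop), asyncio.run manages the loop itself.
result = asyncio.run(run_deepdive("Impacts of Model Context Protocol on Agentic AI"))
print(result)
```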

Advantages and Future Potential

Integrating Google Gemini with AutoGen's OpenAI-compatible client simplifies multi-agent orchestration. AutoGen abstracts event loop management, streaming, and termination logic, enabling rapid iteration on agent roles and workflow design. This approach paves the way for advanced collaborative AI systems, with potential extensions into retrieval pipelines, dynamic selectors, and conditional execution strategies.

Explore the notebook linked in the original post and follow related communities for ongoing updates.
