Mastering Function Calling for Real-Time Conversational AI with Gemini 2.0 Flash

This guide explains how to build real-time, tool-enabled conversational AI agents using function calling with Google Gemini 2.0 Flash, including practical examples for a weather assistant.

Function Calling Bridges Natural Language and Real-World APIs

Function calling enables large language models (LLMs) to act as intermediaries between natural language inputs and code or API calls. Instead of just generating plain text responses, the model can invoke predefined functions by emitting structured JSON calls including function names and arguments. The application then executes these calls and returns results. This interactive cycle can repeat multiple times, allowing complex multi-step conversations controlled entirely by natural language.
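
For the weather question used throughout this tutorial, for instance, the model emits a structured call rather than prose, roughly of this shape (the exact wire format varies by SDK version; the function itself is defined later in this guide):

{
  "name": "get_weather_forecast",
  "args": { "location": "Berlin", "date": "2025-03-04" }
}

The application executes the matching function with those arguments and sends the result back, letting the model compose the final natural language answer.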

Building a Weather Assistant with Gemini 2.0 Flash

This tutorial demonstrates function calling by building a weather assistant using Google’s Gemini 2.0 Flash model. By integrating function calls, the assistant can fetch live weather data or perform other real-time tasks seamlessly through conversation, eliminating the need for users to navigate forms or screens.

Setting Up the Environment and SDK

Install the Gemini Python SDK along with geopy and requests for geolocation and HTTP requests:

!pip install "google-genai>=1.0.0" geopy requests

Import the SDK and initialize the client with your API key:

import os
from google import genai
 
GEMINI_API_KEY = "Use_Your_API_Key"
client = genai.Client(api_key=GEMINI_API_KEY)
model_id = "gemini-2.0-flash"

Simple Text Generation Example

Send a prompt to the model and print the response:

res = client.models.generate_content(
    model=model_id,
    contents=["Tell me 1 good fact about Nuremberg."]
)
print(res.text)

Defining Function Calls via JSON Schema

Define the weather function with a JSON schema specifying its name, description, and parameters:

weather_function = {
    "name": "get_weather_forecast",
    "description": "Retrieves the weather using Open-Meteo API for a given location (city) and a date (yyyy-mm-dd). Returns a list dictionary with the time and temperature for each hour.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g., San Francisco, CA"
            },
            "date": {
                "type": "string",
                "description": "the forecasting date for when to get the weather format (yyyy-mm-dd)"
            }
        },
        "required": ["location","date"]
    }
}

Configure the model to use this tool:

from google.genai.types import GenerateContentConfig
 
config = GenerateContentConfig(
    system_instruction="You are a helpful assistant that use tools to access and retrieve information from a weather API. Today is 2025-03-04.",
    tools=[{"function_declarations": [weather_function]}],
)

Testing Function Calls

A prompt without the function config returns generic text:

response = client.models.generate_content(
    model=model_id,
    contents="What's the weather in Berlin today?"
)
print(response.text)

With the config, Gemini detects and emits a function call:

response = client.models.generate_content(
    model=model_id,
    config=config,
    contents="What's the weather in Berlin today?"
)
 
for part in response.candidates[0].content.parts:
    print(part.function_call)

Implementing the Function Execution Loop

Define a Python function that fetches weather data using geopy and the Open-Meteo API:

from google.genai import types
from geopy.geocoders import Nominatim
import requests
 
g = Nominatim(user_agent="weather-app")
def get_weather_forecast(location, date):
    location = g.geocode(location)
    if location:
        try:
            response = requests.get(f"https://api.open-meteo.com/v1/forecast?latitude={location.latitude}&longitude={location.longitude}&hourly=temperature_2m&start_date={date}&end_date={date}")
            data = response.json()
            return {time: temp for time, temp in zip(data["hourly"]["time"], data["hourly"]["temperature_2m"])}
        except Exception as e:
            return {"error": str(e)}
    else:
        return {"error": "Location not found"}

Create a loop that sends prompts, detects function calls, executes them, and feeds results back to Gemini:

# Map tool names emitted by the model to local Python callables
functions = {
    "get_weather_forecast": get_weather_forecast,
}

def call_function(function_name, **kwargs):
    return functions[function_name](**kwargs)
 
def function_call_loop(prompt):
    contents = [types.Content(role="user", parts=[types.Part(text=prompt)])]
    response = client.models.generate_content(
        model=model_id,
        config=config,
        contents=contents
    )
    for part in response.candidates[0].content.parts:
        contents.append(types.Content(role="model", parts=[part]))
        if part.function_call:
            print("Tool call detected")
            function_call = part.function_call
            print(f"Calling tool: {function_call.name} with args: {function_call.args}")
            tool_result = call_function(function_call.name, **function_call.args)
            function_response_part = types.Part.from_function_response(
                name=function_call.name,
                response={"result": tool_result},
            )
            contents.append(types.Content(role="user", parts=[function_response_part]))
            print(f"Calling LLM with tool results")
            func_gen_response = client.models.generate_content(
                model=model_id, config=config, contents=contents
            )
            contents.append(types.Content(role="model", parts=[func_gen_response]))
    return contents[-1].parts[0].text.strip()
 
result = function_call_loop("What's the weather in Berlin today?")
print(result)
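
The loop above handles a single round of tool use. Because the cycle can repeat, as noted earlier, a natural extension keeps calling the model until it stops emitting function calls. The following is a sketch rather than the code above verbatim; max_turns is an arbitrary safety cap:

def agentic_loop(prompt, max_turns=5):
    contents = [types.Content(role="user", parts=[types.Part(text=prompt)])]
    for _ in range(max_turns):
        response = client.models.generate_content(
            model=model_id, config=config, contents=contents
        )
        contents.append(response.candidates[0].content)
        # Collect every function call the model emitted this turn
        calls = [p.function_call for p in response.candidates[0].content.parts if p.function_call]
        if not calls:
            break  # plain text answer: we are done
        tool_parts = [
            types.Part.from_function_response(
                name=c.name, response={"result": call_function(c.name, **c.args)}
            )
            for c in calls
        ]
        contents.append(types.Content(role="user", parts=tool_parts))
    return contents[-1].parts[0].text.strip()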

Using Python Functions Directly for Function Calling

Define the function with proper typing and docstring:

from geopy.geocoders import Nominatim
import requests
 
geolocator = Nominatim(user_agent="weather-app")
 
def get_weather_forecast(location: str, date: str) -> dict:
    """
    Retrieves the weather using the Open-Meteo API for a given location (city) and a date (yyyy-mm-dd). Returns a dictionary mapping each hour to its temperature.
   
    Args:
        location (str): The city and state, e.g., San Francisco, CA
        date (str): The forecast date, in yyyy-mm-dd format
    Returns:
        Dict[str, float]: A dictionary with the time as key and the temperature as value
    """
    location = geolocator.geocode(location)
    if location:
        try:
            response = requests.get(f"https://api.open-meteo.com/v1/forecast?latitude={location.latitude}&longitude={location.longitude}&hourly=temperature_2m&start_date={date}&end_date={date}")
            data = response.json()
            return {time: temp for time, temp in zip(data["hourly"]["time"], data["hourly"]["temperature_2m"])}
        except Exception as e:
            return {"error": str(e)}
    else:
        return {"error": "Location not found"}

Configuring Gemini to Use the Python Function

Register the Python function as a callable tool and control automatic calling:

from google.genai.types import GenerateContentConfig
 
config = GenerateContentConfig(
    system_instruction="You are a helpful assistant that can help with weather related questions. Today is 2025-03-04.",
    tools=[get_weather_forecast],
    automatic_function_calling={"disable": True}
)
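
The dict passed to automatic_function_calling is coerced by the SDK into its config type; an equivalent, more explicit spelling (assuming your google-genai version exposes AutomaticFunctionCallingConfig, as recent releases do) is:

from google.genai.types import AutomaticFunctionCallingConfig, GenerateContentConfig

config = GenerateContentConfig(
    system_instruction="You are a helpful assistant that can help with weather-related questions. Today is 2025-03-04.",
    tools=[get_weather_forecast],
    automatic_function_calling=AutomaticFunctionCallingConfig(disable=True),
)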

Inspecting Gemini's Function Call Decisions

Send a prompt and inspect the emitted function call without automatic execution:

r = client.models.generate_content(
    model=model_id,
    config=config,
    contents="What's the weather in Berlin today?"
)
for part in r.candidates[0].content.parts:
    print(part.function_call)

Enabling Automatic Function Calling

Enable automatic function calling to let Gemini invoke the tool and return natural language responses:

config = GenerateContentConfig(
    system_instruction="You are a helpful assistant that use tools to access and retrieve information from a weather API.",
    tools=[get_weather_forecast],
)
 
r = client.models.generate_content(
    model=model_id,
    config=config,
    contents="What's the weather in Berlin today?"
)
 
print(r.text)
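
With automatic calling enabled, the SDK executes the tool and feeds the result back to the model behind the scenes. To verify what happened, recent google-genai releases record the intermediate turns on the response; treat the attribute name as an assumption if your SDK version differs:

# Inspect the intermediate tool-use turns the SDK recorded
for content in r.automatic_function_calling_history or []:
    for part in content.parts:
        print(content.role, part.function_call or part.function_response or part.text)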

Adding User Context for Personalized Responses

Extend the assistant with user information to provide personalized advice:

prompt = f"""
Today is 2025-03-04. You are chatting with Andrew, you have access to more information about him.
 
User Context:
- name: Andrew
- location: Nuremberg
 
User: Can I wear a T-shirt later today?"""
 
r = client.models.generate_content(
    model=model_id,
    config=config,
    contents=prompt
)
 
print(r.text)

This outputs a natural language recommendation based on the forecast.

Summary

By defining functions through JSON schemas or Python signatures, configuring Gemini 2.0 Flash to detect and emit function calls, and implementing an agentic loop to execute calls and respond, developers can transform LLMs into powerful real-time, tool-enabled conversational AI agents. These agents can automate workflows, retrieve live data, and interact with APIs seamlessly within natural conversations.

Additional Resources

The full Colab notebook is available for hands-on experimentation.
