How to Integrate Dappier AI's Real-Time Search & Recommendations with OpenAI Chat API: A Step-by-Step Guide
A comprehensive tutorial on integrating Dappier AI's real-time search and recommendation tools with OpenAI's chat API to build intelligent conversational applications with live data and personalized content.
Setting Up Your Environment
This tutorial guides you through integrating Dappier AI's real-time search and recommendation tools into conversational applications using OpenAI's chat API. We start by setting up the environment in Google Colab and installing all necessary dependencies, including LangChain libraries and the OpenAI client.
!pip install -qU langchain-dappier langchain langchain-openai langchain-community langchain-core openai
Securing API Keys
Securely load Dappier and OpenAI API keys at runtime using Python's getpass module to avoid hardcoding sensitive information.
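As a stdlib-only illustration of this pattern, a small helper (the name `require_key` is hypothetical, not part of the tutorial) can prefer an already-set environment variable and fall back to an interactive prompt only when the key is missing:

```python
import os
from getpass import getpass

def require_key(name: str) -> str:
    """Return the API key from the environment, prompting only if unset."""
    value = os.environ.get(name)
    if not value:
        value = getpass(f"Enter your {name}: ")
        os.environ[name] = value  # make it visible to downstream libraries
    return value
```

The tutorial itself uses the simpler prompt-only form shown next; either way, the keys never appear in the notebook source.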
import os
from getpass import getpass
os.environ["DAPPIER_API_KEY"] = getpass("Enter our Dappier API key: ")
os.environ["OPENAI_API_KEY"] = getpass("Enter our OpenAI API key: ")Initializing Dappier Tools
Create instances of Dappier's real-time search and AI recommendation tools to enable live web queries and personalized article suggestions.
from langchain_dappier import DappierRealTimeSearchTool
search_tool = DappierRealTimeSearchTool()
print("Real-time search tool ready:", search_tool)from langchain_dappier import DappierAIRecommendationTool
recommendation_tool = DappierAIRecommendationTool(
    data_model_id="dm_01j0pb465keqmatq9k83dthx34",
    similarity_top_k=3,
    ref="sportsnaut.com",
    num_articles_ref=2,
    search_algorithm="most_recent",
)
print("Recommendation tool ready:", recommendation_tool)Creating the OpenAI Chat Model
Initialize the GPT-3.5-turbo chat model with zero temperature for deterministic responses and bind it with the real-time search tool.
from langchain.chat_models import init_chat_model
llm = init_chat_model(
    model="gpt-3.5-turbo",
    model_provider="openai",
    temperature=0,
)
llm_with_tools = llm.bind_tools([search_tool])
print(" llm_with_tools ready")Building the Prompt Chain
Construct a conversational prompt chain that injects the current date and formats user input and message history for the LLM.
import datetime
from langchain_core.prompts import ChatPromptTemplate
today = datetime.datetime.today().strftime("%Y-%m-%d")
prompt = ChatPromptTemplate([
    ("system", f"You are a helpful assistant. Today is {today}."),
    ("human", "{user_input}"),
    ("placeholder", "{messages}"),
])
llm_chain = prompt | llm_with_tools
print(" llm_chain built")Defining the End-to-End Tool Chain
Create a chain that sends user input to the LLM, invokes any tool calls, and integrates tool responses for a comprehensive answer.
from langchain_core.runnables import RunnableConfig, chain
@chain
def tool_chain(user_input: str, config: RunnableConfig):
    # First pass: let the model decide whether to call the search tool
    ai_msg = llm_chain.invoke({"user_input": user_input}, config=config)
    # Execute every tool call the model requested
    tool_msgs = search_tool.batch(ai_msg.tool_calls, config=config)
    # Second pass: give the model the tool results to compose the final answer
    return llm_chain.invoke(
        {"user_input": user_input, "messages": [ai_msg, *tool_msgs]},
        config=config,
    )
print(" tool_chain defined")Demonstrating Queries
Perform direct queries to the real-time search and recommendation tools, as well as full conversational queries with integrated tools.
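Before issuing live calls, the two-pass dispatch inside the tool chain can be demonstrated offline with stubs (every class and function below is an illustrative stand-in, not a Dappier or OpenAI API):

```python
from dataclasses import dataclass, field

@dataclass
class StubMessage:
    """Mimics the shape of an LLM chat message with optional tool calls."""
    content: str
    tool_calls: list = field(default_factory=list)

def stub_llm(user_input: str, messages=None) -> StubMessage:
    # First pass: no tool results yet, so request a search.
    if not messages:
        return StubMessage("", tool_calls=[{"query": user_input}])
    # Second pass: compose an answer from the tool output.
    return StubMessage(f"Based on search: {messages[-1]}")

def stub_search(call: dict) -> str:
    return f"results for '{call['query']}'"

def stub_tool_chain(user_input: str) -> str:
    ai_msg = stub_llm(user_input)
    tool_msgs = [stub_search(c) for c in ai_msg.tool_calls]
    return stub_llm(user_input, messages=[ai_msg.content, *tool_msgs]).content

print(stub_tool_chain("Who won the last Nobel Prize?"))
# prints: Based on search: results for 'Who won the last Nobel Prize?'
```

The real chain follows the same shape, with the LLM choosing whether to emit tool calls and the Dappier tool supplying live results.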
res = search_tool.invoke({"query": "What happened at the last WrestleMania?"})
print("Search:", res)
rec = recommendation_tool.invoke({"query": "latest sports news"})
print("Recommendation:", rec)
out = tool_chain.invoke("Who won the last Nobel Prize?")
print("Chain output:", out)
This setup provides a robust framework for enriching conversational AI with up-to-date web information and personalized content recommendations. You can further customize the search filters and recommendation parameters to tailor the system to specific domains and needs.