Mastering Chain-of-Thought Reasoning with Mirascope and Groq’s LLaMA 3
Discover how to use Mirascope and Groq’s LLaMA 3 model to implement Chain-of-Thought reasoning, enabling AI to solve complex problems step-by-step effectively.
Introduction to Chain-of-Thought Reasoning
Chain-of-Thought (CoT) reasoning is a powerful approach that guides language models to solve problems by breaking them down into logical, sequential steps. This method enhances accuracy and transparency, making it ideal for tackling complex multi-step tasks.
Using Mirascope with Groq's LLaMA 3 Model
This tutorial demonstrates how to implement CoT reasoning using the Mirascope library alongside Groq's Llama 3.3 model (llama-3.3-70b-versatile, as used in the code below). Instead of jumping directly to answers, the model is encouraged to think through problems step by step, similar to human reasoning.
Sample Problem: Relative Velocity
We will use a relative velocity question as a practical example:
"If a train leaves City A at 9:00 AM traveling at 60 km/h, and another train leaves City B (300 km away from City A) at 10:00 AM traveling at 90 km/h toward City A, at what time will the trains meet?"
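It helps to know the expected answer before prompting the model. The arithmetic can be checked with a short, stand-alone Python snippet (independent of the tutorial code; the date below is an arbitrary placeholder used only to format the clock time):

```python
from datetime import datetime, timedelta

speed_a, speed_b = 60.0, 90.0  # km/h
distance = 300.0               # km between City A and City B
head_start_h = 1.0             # train A departs one hour earlier

# By 10:00 AM, train A has already covered 60 km, leaving a 240 km gap.
gap = distance - speed_a * head_start_h

# The trains close that gap at a combined 150 km/h.
hours_to_meet = gap / (speed_a + speed_b)  # 1.6 h = 1 h 36 min

meet_time = datetime(2024, 1, 1, 10, 0) + timedelta(hours=hours_to_meet)
print(meet_time.strftime("%I:%M %p"))  # 11:36 AM
```

A correct CoT run should arrive at this same 11:36 AM answer.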
Setting Up Dependencies
Install necessary packages:
!pip install "mirascope[groq]"
(Note: the datetime module used below is part of Python's standard library, so no separate install is needed.)
API Key Requirement
A Groq API key is required to make calls to the LLM. Obtain it from https://console.groq.com/keys.
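The Groq client reads the GROQ_API_KEY environment variable by default, so one simple way to supply the key in a notebook is to set it before making any calls (the value below is a placeholder, not a real key):

```python
import os

# Placeholder value; substitute the real key from the Groq console
os.environ["GROQ_API_KEY"] = "gsk_your_key_here"
```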
Defining the Schema with Pydantic
We import required libraries and define a COTResult Pydantic model to structure each reasoning step. Each step includes a title, content, and a flag indicating whether to continue or finalize the answer.
from datetime import datetime
from typing import Literal

from mirascope.core import groq
from pydantic import BaseModel, Field

history: list[dict] = []


class COTResult(BaseModel):
    title: str = Field(..., description="The title of the step")
    content: str = Field(..., description="The output content of the step")
    next_action: Literal["continue", "final_answer"] = Field(
        ..., description="The next action to take"
    )
Defining Step-wise Reasoning Functions
The cot_step function allows iterative reasoning by reviewing prior steps and deciding to continue or finalize. The final_answer function consolidates all reasoning into a concise final response.
@groq.call("llama-3.3-70b-versatile", json_mode=True, response_model=COTResult)
def cot_step(prompt: str, step_number: int, previous_steps: str) -> str:
    return f"""
    You are an expert AI assistant that explains your reasoning step by step.
    For this step, provide a title that describes what you're doing, along with the content.
    Decide if you need another step or if you're ready to give the final answer.

    Guidelines:
    - Use AT MOST 5 steps to derive the answer.
    - Be aware of your limitations as an LLM and what you can and cannot do.
    - In your reasoning, include exploration of alternative answers.
    - Consider that you may be wrong, and if you are wrong in your reasoning, where it would be.
    - Fully test all other possibilities.
    - YOU ARE ALLOWED TO BE WRONG. When you say you are re-examining, actually re-examine, and use another approach to do so. Do not just say you are re-examining.

    IMPORTANT: Do not use code blocks or programming examples in your reasoning. Explain your process in plain language.

    This is step number {step_number}.

    Question: {prompt}

    Previous steps:
    {previous_steps}
    """
@groq.call("llama-3.3-70b-versatile")
def final_answer(prompt: str, reasoning: str) -> str:
    return f"""
    Based on the following chain of reasoning, provide a final answer to the question.
    Only provide the text response without any titles or preambles.
    Retain any formatting as instructed by the original prompt, such as exact formatting for free response or multiple choice.

    Question: {prompt}

    Reasoning:
    {reasoning}

    Final Answer:
    """
Generating and Displaying Responses
generate_cot_response manages the iterative reasoning process, collecting steps until the final answer or a maximum of 5 steps is reached. display_cot_response prints each step with the time taken, followed by the final answer and total processing time.
def generate_cot_response(
    user_query: str,
) -> tuple[list[tuple[str, str, float]], float]:
    steps: list[tuple[str, str, float]] = []
    total_thinking_time: float = 0.0
    step_count: int = 1
    reasoning: str = ""
    previous_steps: str = ""

    while True:
        start_time: datetime = datetime.now()
        cot_result = cot_step(user_query, step_count, previous_steps)
        end_time: datetime = datetime.now()
        thinking_time: float = (end_time - start_time).total_seconds()

        steps.append(
            (
                f"Step {step_count}: {cot_result.title}",
                cot_result.content,
                thinking_time,
            )
        )
        total_thinking_time += thinking_time
        reasoning += f"\n{cot_result.content}\n"
        previous_steps += f"\n{cot_result.content}\n"

        if cot_result.next_action == "final_answer" or step_count >= 5:
            break
        step_count += 1

    # Generate the final answer from the accumulated reasoning
    start_time = datetime.now()
    final_result: str = final_answer(user_query, reasoning).content
    end_time = datetime.now()
    thinking_time = (end_time - start_time).total_seconds()
    total_thinking_time += thinking_time

    steps.append(("Final Answer", final_result, thinking_time))
    return steps, total_thinking_time


def display_cot_response(
    steps: list[tuple[str, str, float]], total_thinking_time: float
) -> None:
    for title, content, thinking_time in steps:
        print(f"{title}:")
        print(content.strip())
        print(f"**Thinking time: {thinking_time:.2f} seconds**\n")

    print(f"**Total thinking time: {total_thinking_time:.2f} seconds**")
Running the Workflow
The run function initiates the entire process by submitting the example question, generating the CoT response, displaying it, and saving the interaction history.
def run() -> None:
    question: str = "If a train leaves City A at 9:00 AM traveling at 60 km/h, and another train leaves City B (which is 300 km away from City A) at 10:00 AM traveling at 90 km/h toward City A, at what time will the trains meet?"
    print("(User):", question)

    # Generate the Chain-of-Thought response
    steps, total_thinking_time = generate_cot_response(question)
    display_cot_response(steps, total_thinking_time)

    # Add only the final answer to the conversation history
    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": steps[-1][1]})


# Run the function
run()
This comprehensive approach offers a transparent, logical method for leveraging AI models to solve complex problems step-by-step using Chain-of-Thought reasoning.