Mastering the Self-Refine Technique with Large Language Models Using Mirascope
Discover how to use Mirascope to implement the Self-Refine technique with Large Language Models, enabling iterative improvement of AI-generated responses for enhanced accuracy.
Introduction to Self-Refine Technique
This tutorial demonstrates how to implement the Self-Refine technique using Large Language Models (LLMs) with Mirascope, a powerful framework for building structured prompt workflows. Self-Refine is a prompt engineering strategy where the model evaluates its own output, generates feedback, and iteratively improves its response based on that feedback. This refinement loop can be repeated multiple times to progressively enhance the quality and accuracy of the final answer.
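Before diving into Mirascope specifics, the control flow can be summarized with a minimal, model-free sketch; the three helper functions below are simple stand-ins for the LLM calls implemented later in this tutorial:
# Minimal sketch of the Self-Refine loop (stand-in helpers, no LLM involved).
def generate(query: str) -> str:
    return f"Draft answer to: {query}"

def give_feedback(query: str, response: str) -> str:
    return f"Feedback on: {response}"

def refine(query: str, response: str, feedback: str) -> str:
    return f"{response} [revised using: {feedback}]"

def self_refine_sketch(query: str, depth: int) -> str:
    response = generate(query)
    for _ in range(depth):
        feedback = give_feedback(query, response)
        response = refine(query, response, feedback)
    return response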
Applications and Benefits
The Self-Refine approach is particularly effective for tasks involving reasoning, code generation, and content creation, where incremental improvements lead to significantly better results.
Installing Dependencies
To get started, install the necessary dependencies:
!pip install "mirascope[openai]"Setting Up OpenAI API Key
To use the OpenAI API, generate an API key from your OpenAI account at https://platform.openai.com/settings/organization/api-keys. New users may need to add billing details and make a minimum payment of $5 to activate access.
Example setup:
import os
from getpass import getpass
os.environ["OPENAI_API_KEY"] = getpass('Enter OpenAI API Key: ')Basic Self-Refine Implementation
With Mirascope’s call decorators, the workflow first generates an initial response to the query, then evaluates that response to produce feedback, and finally generates an improved response informed by that feedback. This loop can be repeated multiple times for better accuracy.
from mirascope.core import openai, prompt_template
from mirascope.core.openai import OpenAICallResponse


@openai.call(model="gpt-4o-mini")
def call(query: str) -> str:
    return query


@openai.call(model="gpt-4o-mini")
@prompt_template(
    """
    Here is a query and a response to the query. Give feedback about the answer,
    noting what was correct and incorrect.
    Query:
    {query}
    Response:
    {response}
    """
)
def evaluate_response(query: str, response: OpenAICallResponse): ...


@openai.call(model="gpt-4o-mini")
@prompt_template(
    """
    For this query:
    {query}
    The following response was given:
    {response}
    Here is some feedback about the response:
    {feedback}
    Consider the feedback to generate a new response to the query.
    """
)
def generate_new_response(
    query: str, response: OpenAICallResponse
) -> openai.OpenAIDynamicConfig:
    # Evaluate the previous response and expose the feedback to the prompt
    # template above via Mirascope's computed_fields.
    feedback = evaluate_response(query, response)
    return {"computed_fields": {"feedback": feedback}}


def self_refine(query: str, depth: int) -> str:
    # Generate an initial answer, then refine it `depth` times using feedback.
    response = call(query)
    for _ in range(depth):
        response = generate_new_response(query, response)
    return response.content
query = "A train travels 120 km at a certain speed. If the speed had been 20 km/h faster, it would have taken 30 minutes less to cover the same distance. What was the original speed of the train?"
print(self_refine(query, 1))
Enhanced Self-Refine with Structured Response Model
This version uses Pydantic to define a structured response model, MathSolution, that captures the solution steps and the final numerical answer, making the output clearer and easier to work with.
from pydantic import BaseModel, Field


class MathSolution(BaseModel):
    steps: list[str] = Field(..., description="The steps taken to solve the problem")
    final_answer: float = Field(..., description="The final numerical answer")


@openai.call(model="gpt-4o-mini", response_model=MathSolution)
@prompt_template(
    """
    For this query:
    {query}
    The following response was given:
    {response}
    Here is some feedback about the response:
    {feedback}
    Consider the feedback to generate a new response to the query.
    Provide the solution steps and the final numerical answer.
    """
)
def enhanced_generate_new_response(
    query: str, response: OpenAICallResponse
) -> openai.OpenAIDynamicConfig:
    feedback = evaluate_response(query, response)
    return {"computed_fields": {"feedback": feedback}}


def enhanced_self_refine(query: str, depth: int) -> MathSolution:
    response = call(query)
    for _ in range(depth):
        solution = enhanced_generate_new_response(query, response)
        # Serialize the structured solution back to text so it can be fed into
        # the next refinement iteration as the previous response.
        response = f"Steps: {solution.steps}\nFinal Answer: {solution.final_answer}"
    return solution
# Example usage
result = enhanced_self_refine(query, 1)
print(result)
Results and Advantages
The enhanced Self-Refine technique accurately solved the mathematical problem with a single iteration, providing a step-by-step derivation and the correct answer of 60 km/h. This method offers:
- Improved accuracy through iterative feedback
- Clearer reasoning and solution steps
- Greater transparency and trustworthiness
This technique is promising for tasks that require accuracy, structure, and iterative improvement, from technical problem solving to creative writing. Because each refinement iteration adds further LLM calls, keep the computational cost in mind and adjust the refinement depth accordingly.
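As a quick sanity check on the reported 60 km/h answer: if v is the original speed in km/h, the problem states 120/v - 120/(v + 20) = 0.5, which simplifies to v² + 20v - 4800 = 0 and yields v = 60. The standalone snippet below (not part of the Mirascope workflow) confirms this:
# Independent check of the train problem's expected answer.
import math

# Solve v^2 + 20v - 4800 = 0 with the quadratic formula (positive root).
v = (-20 + math.sqrt(20**2 + 4 * 4800)) / 2
print(v)  # 60.0
assert abs(120 / v - 120 / (v + 20) - 0.5) < 1e-9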
Additional Resources
For full code and more details, visit the original tutorial. Follow the researchers on Twitter and join the AI community for updates and discussions.