AWS Releases Strands Agents SDK to Revolutionize AI Agent Development
AWS has open-sourced the Strands Agents SDK, providing developers with a powerful, model-driven framework to build and deploy autonomous AI agents more easily across various applications.
Simplifying AI Agent Development with Strands SDK
Amazon Web Services (AWS) has open-sourced its Strands Agents SDK to make AI agent creation easier and more flexible across different fields. The SDK adopts a model-driven approach that abstracts much of the complexity involved in building, orchestrating, and deploying intelligent agents, helping developers create autonomous agents capable of planning, reasoning, and interacting with external tools and services.
Core Components of a Strands Agent
An AI agent built with Strands consists of three key elements: a model, a set of tools, and a prompt. These components work together to enable the agent to perform various tasks, from answering questions to managing workflows through iterative reasoning and tool selection, powered by large language models (LLMs).
- Model: Strands supports multiple models, including Amazon Bedrock offerings such as Claude and Titan, models from Anthropic and Meta (Llama), and many others reachable through LiteLLM. It also supports running models locally via Ollama and plugging in custom model providers.
- Tools: Tools provide external capabilities that the model can leverage. Strands includes over 20 prebuilt tools covering file operations, API calls, and AWS service integrations. Developers can register custom Python functions using the @tool decorator. Additionally, Strands supports thousands of Model Context Protocol (MCP) servers for dynamic tool usage.
- Prompt: Defines the task or goal for the agent, which can be customized by users or set system-wide to regulate general behavior.
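The division of labor between these three components can be illustrated with a small, stdlib-only sketch. The `tool` decorator, `Agent` class, and `stub_model` below are simplified stand-ins written for this article, not the SDK's actual classes; Strands exposes its own `@tool` decorator and agent constructor.

```python
from typing import Callable, Dict

TOOLS: Dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    """Register a plain Python function as a tool (mirrors the @tool idea)."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def word_count(text: str) -> int:
    """Count the words in a string."""
    return len(text.split())

class Agent:
    """Toy agent built from the three components: model, tools, prompt."""
    def __init__(self, model, tools: Dict[str, Callable], system_prompt: str):
        self.model = model
        self.tools = tools
        self.system_prompt = system_prompt

    def __call__(self, task: str) -> str:
        # The model picks a tool and its argument; the agent executes it.
        tool_name, arg = self.model(self.system_prompt, task, list(self.tools))
        return str(self.tools[tool_name](arg))

def stub_model(system_prompt, task, tool_names):
    # A real LLM would reason over the prompt and tool descriptions;
    # this stub always delegates the task to word_count.
    return "word_count", task

agent = Agent(stub_model, TOOLS, "You are a concise assistant.")
print(agent("how many words are in this sentence"))  # → 7
```

Swapping the stub for a real model call is all that separates this toy from the production pattern: the tool registry and prompt stay the same while the model supplies the reasoning.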
How the Agentic Loop Works
Strands operates through an iterative loop where the agent interacts with the model and tools until the task specified by the prompt is accomplished. Each iteration involves invoking the LLM with the current context and tool descriptions. The model can generate responses, plan multi-step actions, reflect on previous steps, or call tools.
When a tool is selected, Strands executes it and returns results to the model, continuing this cycle until a final output is produced. This loop leverages the advanced reasoning and planning capabilities of modern LLMs.
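The control flow described above can be sketched in plain Python. The scripted model below is a stand-in for a real LLM call; what mirrors the SDK's behavior is the loop itself: invoke the model with the current context, execute the tool it requests, append the result, and stop when the model emits a final answer.

```python
def run_agent_loop(model, tools, prompt, max_steps=10):
    """Iterate model <-> tools until the model returns a final answer."""
    context = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        action = model(context)                           # model sees full context
        if action["type"] == "final":
            return action["content"]                      # task complete
        result = tools[action["tool"]](**action["args"])  # execute chosen tool
        context.append({"role": "tool", "name": action["tool"], "content": result})
    raise RuntimeError("agent did not converge")

# A scripted model: first requests a tool call, then finishes with the result.
def scripted_model(context):
    if context[-1]["role"] == "tool":
        return {"type": "final", "content": f"The answer is {context[-1]['content']}"}
    return {"type": "tool_call", "tool": "add", "args": {"a": 2, "b": 3}}

print(run_agent_loop(scripted_model, {"add": lambda a, b: a + b}, "What is 2 + 3?"))
# → The answer is 5
```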
Extending Agent Behavior with Tools
Strands SDK’s flexibility is enhanced by its extensive toolset. Some advanced tool types include:
- Retrieve Tool: Connects with Amazon Bedrock Knowledge Bases for semantic search, allowing models to fetch documents or select relevant tools from thousands based on embedding similarity.
- Thinking Tool: Encourages the model to perform multi-step analytical reasoning, supporting deeper planning and self-reflection.
- Multi-Agent Tools: Features workflow, graph, and swarm tools that enable coordination among sub-agents for complex tasks. Future support for the Agent2Agent (A2A) protocol will further boost multi-agent collaboration.
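The idea behind selecting tools by embedding similarity can be shown with a stdlib-only sketch. A bag-of-words vector stands in for a real embedding model, and the tool catalog below is invented for illustration, not taken from the SDK.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in "embedding": a bag-of-words vector (real systems use dense vectors).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical tool catalog: name -> natural-language description.
catalog = {
    "read_file": "read the contents of a file from disk",
    "http_get": "fetch a web page over http",
    "run_sql": "run a sql query against a database",
}

def select_tool(query: str) -> str:
    """Return the catalog tool whose description best matches the query."""
    qv = embed(query)
    return max(catalog, key=lambda name: cosine(qv, embed(catalog[name])))

print(select_tool("fetch this web page"))  # → http_get
```

With thousands of registered tools, this kind of similarity search lets the agent surface only the few candidates relevant to the current step instead of passing the entire catalog to the model.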
Real-World Use and Infrastructure Support
Strands Agents SDK is already used internally at AWS by teams like Amazon Q Developer, AWS Glue, and VPC Reachability Analyzer in production settings. It supports deployment on various platforms, including local setups, AWS Lambda, Fargate, and EC2.
Observability is integrated via OpenTelemetry (OTEL), providing detailed monitoring and diagnostics essential for production environments.
Strands Agents SDK presents a robust and flexible framework for building AI agents by clearly separating models, tools, and prompts. Its model-driven loop and compatibility with existing LLM ecosystems make it an excellent choice for developers aiming to create autonomous agents with minimal boilerplate and high customization.