Building Effective Agents with Anthropic’s Best Practices and smolagents ❤️

Community Article Published January 4, 2025

In the rapidly evolving world of AI, building effective agents has become a cornerstone for solving complex problems. However, the challenge lies in balancing simplicity, efficiency, and scalability. Recently, I came across Anthropic’s article on "Building Effective Agents", which resonated deeply with my experience at Hawky.ai, where we’ve built multiple marketing and creative intelligence agents. Combining these insights with the simplicity of smolagents, I implemented a few workflows that I’d like to share with you.


Why Agents Matter

Agents are systems where Large Language Models (LLMs) dynamically direct their own processes and tool usage. Unlike predefined workflows, agents offer flexibility and adaptability, making them ideal for tasks requiring model-driven decision-making. However, building agents can often feel like assembling IKEA furniture—overwhelming and unnecessarily complex. This is where smolagents shines.


The smolagents Advantage

smolagents is a lightweight library designed to give developers the raw feel of building agents without drowning in layers of abstraction. Here’s why it’s a game-changer:

  • Minimalistic Design: The entire library is just ~1,000 lines of code.
  • Code-Based Agents: Agents write actions as Python code, reducing steps by 30% and improving performance.
  • Secure Execution: Run code in sandboxed environments or use a secure Python interpreter.
  • Flexibility: Works with any LLM—Hugging Face, OpenAI, Anthropic, and more.
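The "code-based agents" point is worth a toy illustration. The sketch below is not smolagents' actual executor — `run_code_action` and its whitelist-only namespace are hypothetical — but it shows why a single code action can compose several tool calls that a JSON tool-calling loop would need separate round-trips for:

```python
# Toy sketch (NOT smolagents' real executor): run an agent's code action in a
# namespace that exposes only whitelisted tools.
def run_code_action(code: str, tools: dict) -> dict:
    namespace = dict(tools)  # expose only the whitelisted tools
    # Stripping __builtins__ is a minimal sandboxing gesture, not real security;
    # smolagents offers sandboxed execution for that.
    exec(code, {"__builtins__": {}}, namespace)
    return namespace

tools = {"add": lambda a, b: a + b, "double": lambda x: 2 * x}
# One code action composes two tool calls in a single step.
result = run_code_action("answer = double(add(2, 3))", tools)
# result["answer"] == 10
```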

Implementing Anthropic’s Best Practices with smolagents

Anthropic’s article emphasizes simplicity, transparency, and well-documented tools. Here’s how I implemented these principles using smolagents:

1. Prompt Chaining Workflow

Prompt chaining breaks tasks into sequential subtasks, with validation gates between steps. Here’s how I implemented it:

from typing import Any, Callable, Dict, List, Optional, Tuple, Union

from smolagents import ActionStep, MessageRole, MultiStepAgent, Tool
from smolagents.utils import AgentExecutionError, AgentGenerationError


class ChainedPromptAgent(MultiStepAgent):
    """
    Agent that implements prompt chaining workflow by breaking tasks into sequential subtasks
    with validation gates between steps.
    """
    
    def __init__(
        self,
        tools: List[Tool],
        model: Callable,
        subtask_definitions: List[Dict[str, Any]],
        system_prompt: Optional[str] = None,
        validation_functions: Optional[Dict[str, Callable]] = None,
        **kwargs
    ):
        super().__init__(tools=tools, model=model, system_prompt=system_prompt, **kwargs)
        self.subtask_definitions = subtask_definitions
        self.validation_functions = validation_functions or {}
        self.current_subtask_index = 0
        self.subtask_outputs = {}
        
    def validate_subtask_output(self, subtask_name: str, output: Any) -> Tuple[bool, str]:
        """
        Validate the output of a subtask using its validation function.
        """
        if subtask_name not in self.validation_functions:
            return True, ""
            
        validation_fn = self.validation_functions[subtask_name]
        try:
            is_valid = validation_fn(output)
            return is_valid, "" if is_valid else f"Validation failed for {subtask_name}"
        except Exception as e:
            return False, f"Validation error for {subtask_name}: {str(e)}"

    def format_subtask_prompt(self, subtask_def: Dict[str, Any]) -> str:
        """
        Format the prompt template for a subtask using previous outputs.
        """
        prompt_template = subtask_def["prompt_template"]
        return prompt_template.format(
            **self.subtask_outputs,
            task=self.task
        )

    def step(self, log_entry: ActionStep) -> Union[None, Any]:
        """
        Execute one step in the prompt chain.
        """
        if self.current_subtask_index >= len(self.subtask_definitions):
            return self.subtask_outputs[self.subtask_definitions[-1]["name"]]
            
        current_subtask = self.subtask_definitions[self.current_subtask_index]
        subtask_name = current_subtask["name"]
        
        formatted_prompt = self.format_subtask_prompt(current_subtask)
        agent_memory = self.write_inner_memory_from_logs()
        agent_memory.append({
            "role": MessageRole.USER,
            "content": formatted_prompt
        })
        
        try:
            subtask_output = self.model(agent_memory)
            log_entry.llm_output = subtask_output
        except Exception as e:
            raise AgentGenerationError(f"Error in subtask {subtask_name}: {str(e)}")
            
        if current_subtask.get("requires_validation", False):
            is_valid, error_msg = self.validate_subtask_output(subtask_name, subtask_output)
            if not is_valid:
                raise AgentExecutionError(error_msg)
                
        self.subtask_outputs[subtask_name] = subtask_output
        self.current_subtask_index += 1
        log_entry.observations = f"Completed subtask: {subtask_name}"
        return None

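Stripped of the smolagents plumbing, the control flow in `step` reduces to a simple loop. The sketch below is illustrative only — `run_chain` and its stub model are not part of smolagents — but it runs offline and shows the same three ideas: sequential subtasks, validation gates, and dynamic prompting:

```python
# A plain-Python sketch of the prompt-chaining control flow, with a stub
# "model" so the example runs offline. All names here are illustrative.
from typing import Any, Callable, Dict, List, Optional

def run_chain(
    task: str,
    subtasks: List[Dict[str, Any]],
    model: Callable[[str], str],
    validators: Optional[Dict[str, Callable[[str], bool]]] = None,
) -> str:
    validators = validators or {}
    outputs: Dict[str, str] = {}
    for sub in subtasks:
        # Dynamic prompting: earlier outputs are available to later templates.
        prompt = sub["prompt_template"].format(task=task, **outputs)
        out = model(prompt)
        # Validation gate: reject bad intermediate output before it propagates.
        gate = validators.get(sub["name"])
        if sub.get("requires_validation") and gate and not gate(out):
            raise ValueError(f"Validation failed for {sub['name']}")
        outputs[sub["name"]] = out
    return outputs[subtasks[-1]["name"]]

# Stub model: echoes the prompt with a marker appended.
subtasks = [
    {"name": "outline", "prompt_template": "Outline: {task}"},
    {"name": "draft", "prompt_template": "Draft from {outline}"},
]
final = run_chain("an ad for shoes", subtasks, lambda p: p + " [done]")
```

Swapping the stub for a real LLM call gives you the behavior of `ChainedPromptAgent` without any framework at all, which is exactly the "start simple" spirit of Anthropic's article.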
Key Features

  • Sequential Subtasks: Breaks a task into ordered steps, each building on the last.
  • Validation Gates: Optional per-subtask checks catch bad intermediate outputs before they propagate.
  • Dynamic Prompting: Uses previous outputs to format prompts for subsequent steps.
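The validation gates passed via `validation_functions` are just predicates over a subtask's raw output. A hypothetical pair (the subtask names and rules here are made up for illustration):

```python
# Hypothetical validation gates for ChainedPromptAgent: each maps a subtask
# name to a predicate over that subtask's output.
def contains_brand(output: str) -> bool:
    # Reject draft copy that never mentions the brand.
    return "Hawky" in output

def within_length(output: str) -> bool:
    # Keep headlines short.
    return len(output) <= 90

validation_functions = {
    "draft": contains_brand,
    "headline": within_length,
}
```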

Why smolagents + Anthropic’s Approach Works

Combining smolagents with Anthropic’s best practices offers a powerful yet simple way to build agents. Here’s why:

  1. Simplicity: smolagents keeps the codebase minimal and easy to understand.
  2. Transparency: Anthropic’s emphasis on well-documented tools ensures clarity.
  3. Scalability: The modular design allows for easy scaling and customization.

Final Thoughts

Building agents doesn’t have to be complicated. By leveraging smolagents and following Anthropic’s best practices, you can create agents that are powerful, efficient, and easy to maintain. Whether you’re a startup or an enterprise, this approach can help you focus on solving real problems instead of wrestling with framework complexity.

Try smolagents today and let me know how it works for you! You can find the GitHub repository here: smolagents-approach.