🎄 Let's code and celebrate this holiday season with Advent of Haystack

Create a Swarm of Agents


OpenAI recently released Swarm: an educational framework that proposes lightweight techniques for creating and orchestrating multi-agent systems.

In this notebook, we'll explore the core concepts of Swarm (Routines and Handoffs) and implement them using Haystack and its tool support.

This exploration is not only educational: we will unlock features missing in the original implementation, such as the ability to use models from various providers. In fact, our final example will include 3 agents: one powered by gpt-4o-mini (OpenAI), one using Claude 3.5 Sonnet (Anthropic) and a third running Llama-3.2-3B locally via Ollama.

Setup

We install the required dependencies. In addition to Haystack, we also need integrations with Anthropic and Ollama.

! pip install haystack-ai jsonschema anthropic-haystack ollama-haystack
! pip install git+https://github.com/deepset-ai/haystack-experimental@main

Next, we configure our API keys for OpenAI and Anthropic.

from getpass import getpass
import os


if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key:")
if not os.environ.get("ANTHROPIC_API_KEY"):
    os.environ["ANTHROPIC_API_KEY"] = getpass("Enter your Anthropic API key:")
from typing import Annotated, Callable, Tuple
from dataclasses import dataclass, field

import random, re

from haystack_experimental.components import (
    OpenAIChatGenerator,
    OllamaChatGenerator,
    AnthropicChatGenerator,
    ToolInvoker,
)
from haystack_experimental.dataclasses import Tool, ChatMessage, ChatRole

Starting simple: building an Assistant

The first step toward building an Agent is creating an Assistant: think of it as a Chat Language Model plus a system prompt.

We can implement this as a lightweight dataclass with three parameters:

  • name
  • LLM (Haystack Chat Generator)
  • instructions (they will constitute the system message)
@dataclass
class Assistant:
    name: str = "Assistant"
    llm: object = OpenAIChatGenerator()
    instructions: str = "You are a helpful Agent"

    def __post_init__(self):
        self._system_message = ChatMessage.from_system(self.instructions)

    def run(self, messages: list[ChatMessage]) -> list[ChatMessage]:
        new_message = self.llm.run(messages=[self._system_message] + messages)["replies"][0]

        if new_message.text:
            print(f"\n{self.name}: {new_message.text}")

        return [new_message]

Let's create a Joker assistant, tasked with telling jokes.

joker = Assistant(name="Joker", instructions="you are a funny assistant making jokes")

messages = []
print("Type 'quit' to exit")

while True:
    if not messages or messages[-1].role == ChatRole.ASSISTANT:
        user_input = input("User: ")
        if user_input.lower() == "quit":
            break
        messages.append(ChatMessage.from_user(user_input))

    new_messages = joker.run(messages)
    messages.extend(new_messages)
Type 'quit' to exit
User: hey!

Joker: Hey there! How's it going? Are you ready for some laughs, or are we saving the jokes for dessert? 🍰
User: where is Rome?

Joker: Rome is in Italy, but if you're asking me for directions, I might just say, "Take a left at the Colosseum and keep going until you smell pizza!" 🍕
User: you can do better. What about Paris?

Joker: Ah, Paris! That's in France, where the Eiffel Tower stands tall, the croissants are buttery, and locals will tell you the secret to love is just a little bit of patience... and a great view! 🥖❤️ Why did the Eiffel Tower get in trouble? It couldn't stop "towering" over everyone!
User: quit

Let's say it tried to do its best 😀

Tools and Routines

The term Agent has a broad definition.

However, to qualify as an Agent, a software application built on a Language Model should go beyond simply generating text; it must also be capable of performing actions, such as executing functions.

A popular way of doing this is Tool calling:

  1. We provide a set of tools (functions, APIs with a given spec) to the model.
  2. The model prepares function calls based on user request and available tools.
  3. Actual invocation is executed outside the model (at the Agent level).
  4. The model can further elaborate on the result of the invocation.

(For information on tool support in Haystack, check out this discussion, which includes several practical resources.)
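
To make step 1 concrete, here is a minimal, standalone sketch of how a plain Python function becomes a Tool with the experimental Tool.from_function helper the agents below rely on. get_weather is a made-up example, and the printed attributes (name, description, parameters) are those of the experimental Tool dataclass.

from typing import Annotated

from haystack_experimental.dataclasses import Tool


# a made-up example function: the Annotated descriptions and the docstring
# provide the metadata the model needs to decide when and how to call the tool
def get_weather(city: Annotated[str, "The city to get the weather for"]):
    """Look up the current weather for a city."""
    return f"Sunny in {city}"


weather_tool = Tool.from_function(get_weather)
print(weather_tool.name)         # "get_weather"
print(weather_tool.description)  # taken from the docstring
print(weather_tool.parameters)   # JSON schema derived from the Annotated type hints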

Swarm introduces routines, which are natural-language instructions paired with the tools needed to execute them. Below, we'll build an agent capable of calling tools and executing routines.

Implementation

  • instructions can already be passed to the Assistant to guide its behavior.

  • The Agent introduces a new init parameter called functions. These functions are automatically converted into Tools. Key difference: to be passed to a Language Model, a Tool must have a name, a description, and a JSON schema specifying its parameters.

  • During initialization, we also create a ToolInvoker. This Haystack component takes in Chat Messages containing prepared tool_calls, performs the tool invocations, and wraps the results in Chat Messages with the tool role.

  • What happens during run? The agent first generates a response. If the response includes tool calls, these are executed, and the results are integrated into the conversation.

  • The while loop manages user interactions:

    • If the last message role is assistant, it waits for user input.
    • If the last message role is tool, it continues running to handle tool execution and its responses.

Note: This implementation differs from the original approach by making the Agent responsible for invoking tools directly, instead of delegating control to the while loop.

@dataclass
class ToolCallingAgent:
    name: str = "ToolCallingAgent"
    llm: object = OpenAIChatGenerator()
    instructions: str = "You are a helpful Agent"
    functions: list[Callable] = field(default_factory=list)

    def __post_init__(self):
        self._system_message = ChatMessage.from_system(self.instructions)
        self.tools = [Tool.from_function(fun) for fun in self.functions] if self.functions else None
        self._tool_invoker = ToolInvoker(tools=self.tools, raise_on_failure=False) if self.tools else None

    def run(self, messages: list[ChatMessage]) -> list[ChatMessage]:

        # generate response
        agent_message = self.llm.run(messages=[self._system_message] + messages, tools=self.tools)["replies"][0]
        new_messages = [agent_message]

        if agent_message.text:
            print(f"\n{self.name}: {agent_message.text}")

        if not agent_message.tool_calls:
            return new_messages

        # handle tool calls
        tool_results = self._tool_invoker.run(messages=[agent_message])["tool_messages"]
        new_messages.extend(tool_results)

        return new_messages

Here's an example of a Refund Agent using this setup.

# to automatically convert functions into Tools, we annotate the parameters with their descriptions in the signature
def execute_refund(item_name: Annotated[str, "The name of the item to refund"]):
    return f"report: refund succeeded for {item_name} - refund id: {random.randint(0,10000)}"


refund_agent = ToolCallingAgent(
    name="Refund Agent",
    instructions=(
        "You are a refund agent. "
        "Help the user with refunds. "
        "1. Before executing a refund, collect all specific information needed about the item and the reason for the refund. "
        "2. Then collect personal information of the user and bank account details. "
        "3. After executing it, provide a report to the user. "
    ),
    functions=[execute_refund],
)
messages = []
print("Type 'quit' to exit")

while True:

    if not messages or messages[-1].role == ChatRole.ASSISTANT:
        user_input = input("User: ")
        if user_input.lower() == "quit":
            break
        messages.append(ChatMessage.from_user(user_input))

    new_messages = refund_agent.run(messages)
    messages.extend(new_messages)
Type 'quit' to exit
User: hey

Refund Agent: Hello! How can I assist you today? If you need help with a refund, please let me know the details.
User: my phone does not work

Refund Agent: I'm sorry to hear that your phone is not working. To assist you with the refund, could you please provide the following information:

1. The name of the phone (brand and model).
2. The reason for the refund (e.g., defective, not as described, etc.).

Once I have that information, I'll guide you through the next steps.
User: Nokia 3310; it does not work

Refund Agent: Thank you for the information. To proceed with the refund for the Nokia 3310, I'll need a few more details:

1. Can you please provide your full name?
2. Your email address and phone number (for communication purposes).
3. Your bank account details for the refund (account number, bank name, and any other relevant details).

Once I have this information, I can execute the refund for you.
User: John Doe; johndoe@mymail.com; bank account number: 0123456

Refund Agent: Thank you, John Doe. I still need the following information to complete the refund process:

1. The name of your bank.
2. Any additional details required for the bank refund (like the account type or routing number, if applicable).

Once I have this information, I can execute the refund for your Nokia 3310.
User: Bank of Mouseton

Refund Agent: The refund process has been successfully completed! Here are the details:

- **Item:** Nokia 3310
- **Refund ID:** 3753
- **Bank:** Bank of Mouseton
- **Refund ID:** 1220

If you have any more questions or need further assistance, feel free to ask!
User: quit

Promising!

Handoff: switching control between Agents

The most interesting idea of Swarm is probably handoffs: enabling one Agent to transfer control to another with Tool calling.

How it works

  1. Add specific handoff functions to the Agent’s available tools, allowing it to transfer control when needed.
  2. Modify the Agent to return the name of the next agent along with its messages.
  3. Handle the switch in the while loop.

The implementation is similar to the previous one; compared to ToolCallingAgent, a SwarmAgent also returns the name of the next agent to call, enabling handoffs.

HANDOFF_TEMPLATE = "Transferred to: {agent_name}. Adopt persona immediately."
HANDOFF_PATTERN = r"Transferred to: (.*?)(?:\.|$)"
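# e.g. re.search(HANDOFF_PATTERN, HANDOFF_TEMPLATE.format(agent_name="Refund Agent")).group(1)
# returns "Refund Agent"; the conversation loop later uses this name to pick the agent to run next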


@dataclass
class SwarmAgent:
    name: str = "SwarmAgent"
    llm: object = OpenAIChatGenerator()
    instructions: str = "You are a helpful Agent"
    functions: list[Callable] = field(default_factory=list)

    def __post_init__(self):
        self._system_message = ChatMessage.from_system(self.instructions)
        self.tools = [Tool.from_function(fun) for fun in self.functions] if self.functions else None
        self._tool_invoker = ToolInvoker(tools=self.tools, raise_on_failure=False) if self.tools else None

    def run(self, messages: list[ChatMessage]) -> Tuple[str, list[ChatMessage]]:
        # generate response
        agent_message = self.llm.run(messages=[self._system_message] + messages, tools=self.tools)["replies"][0]
        new_messages = [agent_message]

        if agent_message.text:
            print(f"\n{self.name}: {agent_message.text}")

        if not agent_message.tool_calls:
            return self.name, new_messages

        # handle tool calls
        for tc in agent_message.tool_calls:
            # trick: Ollama does not produce tool call IDs, but OpenAI and Anthropic require them.
            if tc.id is None:
                tc.id = str(random.randint(0, 1000000))
        tool_results = self._tool_invoker.run(messages=[agent_message])["tool_messages"]
        new_messages.extend(tool_results)

        # handoff
        last_result = tool_results[-1].tool_call_result.result
        match = re.search(HANDOFF_PATTERN, last_result)
        new_agent_name = match.group(1) if match else self.name

        return new_agent_name, new_messages

Let's see this in action with a Joker Agent and a Refund Agent!

def transfer_to_refund():
    """Pass to this Agent for anything related to refunds"""
    return HANDOFF_TEMPLATE.format(agent_name="Refund Agent")


def transfer_to_joker():
    """Pass to this Agent for anything NOT related to refunds."""
    return HANDOFF_TEMPLATE.format(agent_name="Joker Agent")

refund_agent = SwarmAgent(
    name="Refund Agent",
    instructions=(
        "You are a refund agent. "
        "Help the user with refunds. "
        "Ask for basic information but be brief. "
        "For anything unrelated to refunds, transfer to other agent."
    ),
    functions=[execute_refund, transfer_to_joker],
)

joker_agent = SwarmAgent(
    name="Joker Agent",
    instructions=(
        "you are a funny assistant making jokes. "
        "If the user asks questions related to refunds, send him to other agent."
    ),
    functions=[transfer_to_refund],
)
agents = {agent.name: agent for agent in [joker_agent, refund_agent]}

print("Type 'quit' to exit")

messages = []
current_agent_name = "Joker Agent"

while True:
    agent = agents[current_agent_name]

    if not messages or messages[-1].role == ChatRole.ASSISTANT:
        user_input = input("User: ")
        if user_input.lower() == "quit":
            break
        messages.append(ChatMessage.from_user(user_input))

    current_agent_name, new_messages = agent.run(messages)
    messages.extend(new_messages)
Type 'quit' to exit
User: i need a refund for my Iphone

Refund Agent: I can help you with that! Please provide the name of the item you'd like to refund.
User: Iphone 15

Refund Agent: Your refund for the iPhone 15 has been successfully processed. The refund ID is 9090. If you need any further assistance, feel free to ask!
User: great. can you give some info about escargots?

Joker Agent: Absolutely! Did you know that escargots are just snails trying to get a head start on their travels? They may be slow, but they sure do pack a punch when it comes to flavor! 

Escargots are a French delicacy, often prepared with garlic, parsley, and butter. Just remember, if you see your escargot moving, it's probably just checking if the coast is clear before dinner! 🐌🥖 If you have any other questions about escargots or need a good recipe, feel free to ask!
User: quit

Nice ✨

A more complex multi-agent system

Now, we move on to a more intricate multi-agent system that simulates a customer service setup for ACME Corporation, a fictional company from the Road Runner/Wile E. Coyote cartoons that sells quirky products meant to catch roadrunners. (We are reimplementing the example from the original article by OpenAI.)

This system involves several different agents (each with specific tools):

  • Triage Agent: handles general questions and directs to other agents. Tools: transfer_to_sales_agent, transfer_to_issues_and_repairs and escalate_to_human.
  • Sales Agent: proposes and sells products to the user; it can execute orders or redirect the user back to the Triage Agent. Tools: execute_order and transfer_back_to_triage.
  • Issues and Repairs Agent: supports customers with their problems; it can look up item IDs, execute refunds, or redirect the user back to triage. Tools: look_up_item, execute_refund, and transfer_back_to_triage.

A nice bonus of our implementation is that we can mix the different model providers supported by Haystack. In this case, the Triage Agent is powered by OpenAI's gpt-4o-mini, while the other two agents use Anthropic's Claude 3.5 Sonnet.
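
As a quick, illustrative sketch (the real agent definitions follow below), the provider behind each agent is simply the Chat Generator instance passed as its llm; the default-model comments reflect what this notebook relies on.

# illustrative only: the agents below pass these generators (or rely on the OpenAI default) as llm
triage_llm = OpenAIChatGenerator()      # defaults to gpt-4o-mini, used by the Triage Agent
support_llm = AnthropicChatGenerator()  # defaults to Claude 3.5 Sonnet, used by the other two agents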

def escalate_to_human(summary: Annotated[str, "A summary"]):
    """Only call this if explicitly asked to."""
    print("Escalating to human agent...")
    print("\n=== Escalation Report ===")
    print(f"Summary: {summary}")
    print("=========================\n")
    exit()


def transfer_to_sales_agent():
    """User for anything sales or buying related."""
    return HANDOFF_TEMPLATE.format(agent_name="Sales Agent")


def transfer_to_issues_and_repairs():
    """User for issues, repairs, or refunds."""
    return HANDOFF_TEMPLATE.format(agent_name="Issues and Repairs Agent")


def transfer_back_to_triage():
    """Call this if the user brings up a topic outside of your purview,
    including escalating to human."""
    return HANDOFF_TEMPLATE.format(agent_name="Triage Agent")


triage_agent = SwarmAgent(
    name="Triage Agent",
    instructions=(
        "You are a customer service bot for ACME Inc. "
        "Introduce yourself. Always be very brief. "
        "If the user asks general questions, try to answer them yourself without transferring to another agent. "
        "Only if the user has problems with already bought products, transfer to Issues and Repairs Agent."
        "If the user looks for new products, transfer to Sales Agent."
        "Make tool calls only if necessary and make sure to provide the right arguments."
    ),
    functions=[transfer_to_sales_agent, transfer_to_issues_and_repairs, escalate_to_human],
)


def execute_order(
    product: Annotated[str, "The name of the product"], price: Annotated[int, "The price of the product in USD"]
):
    print("\n\n=== Order Summary ===")
    print(f"Product: {product}")
    print(f"Price: ${price}")
    print("=================\n")
    confirm = input("Confirm order? y/n: ").strip().lower()
    if confirm == "y":
        print("Order execution successful!")
        return "Success"
    else:
        print("Order cancelled!")
        return "User cancelled order."


sales_agent = SwarmAgent(
    name="Sales Agent",
    instructions=(
        "You are a sales agent for ACME Inc."
        "Always answer in a sentence or less."
        "Follow the following routine with the user:"
        "1. Ask them about any problems in their life related to catching roadrunners.\n"
        "2. Casually mention one of ACME's crazy made-up products can help.\n"
        " - Don't mention price.\n"
        "3. Once the user is bought in, drop a ridiculous price.\n"
        "4. Only after everything, and if the user says yes, "
        "tell them a crazy caveat and execute their order.\n"
        ""
    ),
    llm=AnthropicChatGenerator(),
    functions=[execute_order, transfer_back_to_triage],
)


def look_up_item(search_query: Annotated[str, "Search query to find item ID; can be a description or keywords"]):
    """Use to find item ID."""
    item_id = "item_132612938"
    print("Found item:", item_id)
    return item_id


def execute_refund(
    item_id: Annotated[str, "The ID of the item to refund"], reason: Annotated[str, "The reason for refund"]
):
    print("\n\n=== Refund Summary ===")
    print(f"Item ID: {item_id}")
    print(f"Reason: {reason}")
    print("=================\n")
    print("Refund execution successful!")
    return "success"


issues_and_repairs_agent = SwarmAgent(
    name="Issues and Repairs Agent",
    instructions=(
        "You are a customer support agent for ACME Inc."
        "Always answer in a sentence or less."
        "Follow the following routine with the user:"
        "1. If the user is intered in buying or general questions, transfer back to Triage Agent.\n"
        "2. First, ask probing questions and understand the user's problem deeper.\n"
        " - unless the user has already provided a reason.\n"
        "3. Propose a fix (make one up).\n"
        "4. ONLY if not satesfied, offer a refund.\n"
        "5. If accepted, search for the ID and then execute refund."
        ""
    ),
    functions=[look_up_item, execute_refund, transfer_back_to_triage],
    llm=AnthropicChatGenerator(),
)
agents = {agent.name: agent for agent in [triage_agent, sales_agent, issues_and_repairs_agent]}

print("Type 'quit' to exit")

messages = []
current_agent_name = "Triage Agent"

while True:
    agent = agents[current_agent_name]

    if not messages or messages[-1].role == ChatRole.ASSISTANT:
        user_input = input("User: ")
        if user_input.lower() == "quit":
            break
        messages.append(ChatMessage.from_user(user_input))

    current_agent_name, new_messages = agent.run(messages)
    messages.extend(new_messages)
Type 'quit' to exit
User: hey!

Triage Agent: Hello! I'm the customer service bot for ACME Inc. How can I assist you today?
User: i need a product to catch roadrunners

Triage Agent: I can transfer you to a sales agent who can help you find suitable products for catching roadrunners. One moment please!

Sales Agent: Hello there! I hear you're having some roadrunner troubles. Tell me, what specific challenges are you facing with these speedy birds?
User: they are damn fast!

Sales Agent: Ah, those pesky roadrunners and their lightning speed! Have you ever considered our ACME Rocket-Powered Roller Skates? They'll have you zipping after those birds in no time!
User: tell me more

Sales Agent: Well, our Rocket-Powered Roller Skates come with turbo boosters and autopilot. They're guaranteed to match any roadrunner's speed. Interested in giving them a spin?
User: yes

Sales Agent: Fantastic! You're in luck because we have a special offer right now. These state-of-the-art Rocket-Powered Roller Skates can be yours for just $9,999! Shall we proceed with your order?
User: yes

Sales Agent: Excellent! I'll process that order right away. Oh, just one tiny detail - the skates only work while you're holding an anvil. Ready to complete the purchase?


=== Order Summary ===
Product: Rocket-Powered Roller Skates
Price: $9999
=================

Confirm order? y/n: y
Order execution successful!

Sales Agent: Great! Your order is confirmed. Happy roadrunner chasing!
User: now I also need to solve a problem with my old ACME Superphone 7500

Sales Agent: I understand you're having issues with our ACME products. Let me transfer you to our customer support team for assistance with your Superphone 7500.

Triage Agent: Hello! I see you're experiencing an issue with your ACME Superphone 7500. Could you please describe the problem you're facing?
User: it does not call my mum

Triage Agent: Let's get that sorted out! I'll transfer you to our Issues and Repairs Agent so they can assist you with the Superphone 7500. One moment please!

Issues and Repairs Agent: Hello! I understand your ACME Superphone 7500 isn't calling your mum. Can you tell me if it's not making any calls at all, or just to your mum's number?
User: just my mum

Issues and Repairs Agent: I see. Is your mum's number correctly saved in your contacts?
User: ofc

Issues and Repairs Agent: Understood. Have you tried turning the phone off and on again?
User: ofc

Issues and Repairs Agent: I apologize for the inconvenience. Let's try a quick fix. Can you clear your mum's contact and re-add it?
User: done but does not work. I'm getting impatient

Issues and Repairs Agent: I apologize for the frustration. Let's try one last solution. Can you update your phone's software to the latest version?
User: hey gimme a refund

Issues and Repairs Agent: I understand your frustration. Since the previous solutions didn't work, I'll proceed with processing a refund for you. First, I need to look up the item ID for your ACME Superphone 7500.

Issues and Repairs Agent: Thank you for your patience. I've found the item ID. Now, I'll execute the refund for you.


=== Refund Summary ===
Item ID: item_132612938
Reason: Product not functioning as expected
=================

Refund execution successful!

Issues and Repairs Agent: Your refund has been successfully processed.
User: quit

🦙 Put Llama 3.2 in the mix

As demonstrated, our implementation is model-provider agnostic, meaning it can work with both proprietary models and open models running locally.

In practice, you can have Agents that handle complex tasks using powerful proprietary models, and other Agents that perform simpler tasks using smaller open models.

In our example, we will use Llama-3.2-3B-Instruct, a small model with impressive instruction-following capabilities (high IFEval score). We'll use Ollama to host and serve this model.

Install and run Ollama

In general, installing Ollama is simple. Here, we apply a few tricks to make it run on Colab.

If you enable GPU support, the model will run faster; it can also run reasonably well on CPU.

# needed to detect GPUs
! apt install pciutils

# install Ollama
! curl https://ollama.ai/install.sh | sh
# run Ollama: we prepend "nohup" and append "&" so the server runs in the background and the Colab cell does not block
! nohup ollama serve > ollama.log 2>&1 &
# download the model
! ollama pull llama3.2:3b
! ollama list
NAME           ID              SIZE      MODIFIED       
llama3.2:3b    a80c4f17acd5    2.0 GB    18 seconds ago    

Action!

At this point, we can easily swap the Triage Agent's llm with the Llama 3.2 model running on Ollama.

We set a temperature < 1 to ensure that generated text is more controlled and not too creative.

⚠️ Keep in mind that the model is small and that Ollama support for tools is not fully refined yet. As a result, the model may be biased towards generating tool calls (even when not needed) and sometimes may hallucinate tools.

triage_agent.llm = OllamaChatGenerator(model="llama3.2:3b", generation_kwargs={"temperature": 0.8})
agents = {agent.name: agent for agent in [triage_agent, sales_agent, issues_and_repairs_agent]}

print("Type 'quit' to exit")

messages = []
current_agent_name = "Triage Agent"

while True:
    agent = agents[current_agent_name]

    if not messages or messages[-1].role == ChatRole.ASSISTANT:
        user_input = input("User: ")
        if user_input.lower() == "quit":
            break
        messages.append(ChatMessage.from_user(user_input))

    current_agent_name, new_messages = agent.run(messages)
    messages.extend(new_messages)
Type 'quit' to exit
User: hey I need something to catch rats!

Sales Agent: Rats? I bet those pesky roadrunners are giving you trouble! Have you tried our Acme Turbo-Charged Roadrunner Trap?
User: no I need to get rid of rats

Sales Agent: Ah, my mistake! Rats can be quite the nuisance. Have you considered our Acme Supersonic Rat Attractor?
User: how it works?

Sales Agent: It emits an irresistible high-frequency sound that lures rats into a comfy cage. Completely humane and effortless!
User: is it violent?

Sales Agent: Not at all! Our Acme Supersonic Rat Attractor is 100% non-violent. The rats simply waltz right in, easy as pie!
User: ok great

Sales Agent: Fantastic! You'll be rat-free in no time. Now, this cutting-edge technology can be yours for just $9,999!
User: litte discount

Sales Agent: Well, since you asked nicely, how about a special price of $8,999? It's a steal for such rodent-repelling brilliance!
User: yes!!!

Sales Agent: Great! Let's get that order processed for you right away!


=== Order Summary ===
Product: Acme Supersonic Rat Attractor
Price: $8999
=================

Confirm order? y/n: y
Order execution successful!

Sales Agent: Wonderful! Your order is confirmed. Just one tiny detail - it only works on Tuesdays between 2-4 AM!
User: great. now I need help to repair an old ACME product

Sales Agent: I see you're an ACME enthusiast! What seems to be malfunctioning with your trusty ACME gadget?
User: it is and old ACME SuperKeyboard 5000. does not work on mondays

Sales Agent: Ah, the classic ACME SuperKeyboard 5000! Have you tried our new ACME Temporal Typing Tonic?
User: i need help with repair!

Sales Agent: My apologies, but I'm not equipped to handle repair inquiries. Let me transfer you to someone who can assist better.

Triage Agent: Old SuperKeyboard 5000, eh? Don't worry, we've got you covered! Can you please provide me with the error message or symptoms you're experiencing with your keyboard? And just to confirm, it's a Monday, is it not supposed to work today?
User: does not work on mondays

Issues and Repairs Agent: I see, your ACME SuperKeyboard 5000 isn't working on Mondays. That's quite unusual! Let's dig a bit deeper. Have you noticed any other quirks or issues on other days of the week?
User: no

Issues and Repairs Agent: I understand. Could you tell me if there are any visible signs of damage or unusual behavior when you try to use it on Mondays?
User: vowels are not printed

Issues and Repairs Agent: Interesting! So on Mondays, the vowels aren't printing. Have you tried our patented ACME Vowel Vitalizer spray on the keyboard?
User: yes

Issues and Repairs Agent: I see. Since our standard solution didn't work, let's try something else. Have you checked if the keyboard's "Monday Mode" switch is accidentally activated?
User: aah. what it is? are you mading it up?

Issues and Repairs Agent: I apologize for the confusion. You're right to question that - there's no such thing as a "Monday Mode" switch. I shouldn't have suggested a made-up feature. Let's get back to the real issue. Since the ACME Vowel Vitalizer didn't work, would you like me to process a refund for your SuperKeyboard 5000?
User: yes pleas

Issues and Repairs Agent: Certainly, I'll process that refund for you right away. First, let me look up your item ID.

Issues and Repairs Agent: Great, I've found your item ID. Now, I'll execute the refund.


=== Refund Summary ===
Item ID: item_132612938
Reason: Product malfunction - vowels not printing on Mondays
=================

Refund execution successful!

Issues and Repairs Agent: Your refund has been successfully processed.
User: quit

In conclusion, we have built a multi-agent system using Swarm concepts and Haystack tools, demonstrating how to integrate models from different providers, including a local model running on Ollama.

Swarm's ideas are simple and useful for several use cases, and the abstractions provided by Haystack make them easy to implement. However, this architecture may not be the best fit for every use case: memory is handled as a plain list of messages, and the system runs only one Agent at a time.

Looking ahead, we plan to develop and showcase more advanced Agents with Haystack. Stay tuned! 📻


(Notebook by Stefano Fiorucci)