Function Calling with OpenAIChatGenerator


Notebook by Bilge Yucel (LinkedIn & X (Twitter))

A guide to understanding function calling and how to use OpenAI's function calling feature with Haystack.

Overview

Here are some use cases of function calling from OpenAI Docs:

  • Create assistants that answer questions by calling external APIs (e.g. like ChatGPT Plugins), e.g. define functions like send_email(to: string, body: string) or get_current_weather(location: string, unit: 'celsius' | 'fahrenheit')
  • Convert natural language into API calls, e.g. convert "Who are my top customers?" to get_customers(min_revenue: int, created_before: string, limit: int) and call your internal API
  • Extract structured data from text, e.g. define a function called extract_data(name: string, birthday: string) or sql_query(query: string) (see the sketch right after this list)
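
To make this concrete, here's what the data-extraction use case could look like as an OpenAI tool definition. This is a hypothetical sketch: the extract_data name and its fields are illustrative, not an existing API.

# Hypothetical tool definition for the "extract structured data" use case
extract_data_tool = {
    "type": "function",
    "function": {
        "name": "extract_data",  # illustrative name, for demonstration only
        "description": "Extract a person's name and birthday from free text",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string", "description": "The person's full name"},
                "birthday": {"type": "string", "description": "The birthday, e.g. 1990-05-17"},
            },
            "required": ["name", "birthday"],
        },
    },
}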

Set up the Development Environment

%%bash

pip install haystack-ai

import os
from getpass import getpass
from google.colab import userdata

os.environ["OPENAI_API_KEY"] = userdata.get('OPENAI_API_KEY') or getpass("OPENAI_API_KEY: ")

Learn about the OpenAIChatGenerator

OpenAIChatGenerator is a component that supports the function calling feature of OpenAI.

The way to communicate with OpenAIChatGenerator is through a list of ChatMessage objects. Therefore, create a ChatMessage with the "USER" role using ChatMessage.from_user() and send it to OpenAIChatGenerator:

from haystack.dataclasses import ChatMessage
from haystack.components.generators.chat import OpenAIChatGenerator

client = OpenAIChatGenerator()
response = client.run(
    [ChatMessage.from_user("What's Natural Language Processing? Be brief.")]
)
print(response)
{'replies': [ChatMessage(content='Natural Language Processing (NLP) is a branch of artificial intelligence that deals with the interaction between computers and humans in natural language. It focuses on the understanding, interpretation, and generation of human language to enable machines to process and analyze textual data efficiently.', role=<ChatRole.ASSISTANT: 'assistant'>, name=None, meta={'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'stop', 'usage': {'completion_tokens': 50, 'prompt_tokens': 16, 'total_tokens': 66}})]}
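
The generated text lives in the content field of each reply, so you can print just the answer with:

print(response["replies"][0].content)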

Basic Streaming

OpenAIChatGenerator supports streaming. Provide a streaming_callback function and run the client again to see the difference:

from haystack.dataclasses import ChatMessage
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.generators.utils import print_streaming_chunk

client = OpenAIChatGenerator(streaming_callback=print_streaming_chunk)
response = client.run(
    [ChatMessage.from_user("What's Natural Language Processing? Be brief.")]
)
Natural Language Processing (NLP) is a field of artificial intelligence that focuses on the interaction between humans and computers using natural language. It involves the development of algorithms and methods to enable computers to understand, interpret, and generate human language in a manner that is meaningful and useful.
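
print_streaming_chunk prints each chunk as it arrives, but any callable that accepts a StreamingChunk works. Here's a minimal custom callback as a sketch, writing chunks to stdout:

from haystack.dataclasses import StreamingChunk

def my_streaming_callback(chunk: StreamingChunk):
  # Each chunk carries the newly generated piece of text in its content field
  print(chunk.content, end="", flush=True)

client = OpenAIChatGenerator(streaming_callback=my_streaming_callback)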

Function Calling with OpenAIChatGenerator

We'll try to recreate the example in the OpenAI docs.

Define a Function

We'll define a get_current_weather function that mocks a weather API call and returns a hardcoded response:

def get_current_weather(location: str, unit: str = "celsius"):
  # A real implementation would call a weather API here
  return {"weather": "sunny", "temperature": 21.8, "unit": unit}

available_functions = {
  "get_current_weather": get_current_weather
}
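
You can sanity-check the mock before wiring it up to OpenAI:

print(get_current_weather("Berlin, DE"))
# {'weather': 'sunny', 'temperature': 21.8, 'unit': 'celsius'}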

Create the tools

We'll then add information about this function to our tools list, following OpenAI's tool schema:

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use. Infer this from the users location.",
                    },
                },
                "required": ["location", "unit"],
            },
        }
    }
]

Run OpenAIChatGenerator with tools

We'll pass the list of tools to the run() method through generation_kwargs.

Let's define messages and run the generator:

from haystack.dataclasses import ChatMessage

messages = []
messages.append(ChatMessage.from_system("Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."))
messages.append(ChatMessage.from_user("What's the weather like in Berlin?"))

client = OpenAIChatGenerator(streaming_callback=print_streaming_chunk)
response = client.run(
    messages=messages,
    generation_kwargs={"tools":tools}
)

It's a function call! 📞 The response gives us information about the function name and arguments to use to call that function:

response
{'replies': [ChatMessage(content='[{"index": 0, "id": "call_fFQKCAUba8RRu2BZ4v8IVYPH", "function": {"arguments": "{\\n  \\"location\\": \\"Berlin\\",\\n  \\"unit\\": \\"celsius\\"\\n}", "name": "get_current_weather"}, "type": "function"}]', role=<ChatRole.ASSISTANT: 'assistant'>, name=None, meta={'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'tool_calls', 'usage': {}})]}

Optionally, add the message with the function call information to the message list:

messages.append(response["replies"][0])

Let's see how we can extract the function_name and function_args from the message:

import json

function_call = json.loads(response["replies"][0].content)[0]
function_name = function_call["function"]["name"]
function_args = json.loads(function_call["function"]["arguments"])
print("function_name:", function_name)
print("function_args:", function_args)
function_name: get_current_weather
function_args: {'location': 'Berlin', 'unit': 'celsius'}

Make a Tool Call

Let's locate the corresponding function for function_name in our available_functions dictionary and call it with function_args. Once we receive the response from the tool, we'll append it to our messages so we can later send it to OpenAI.

function_to_call = available_functions[function_name]
function_response = function_to_call(**function_args)
function_message = ChatMessage.from_function(content=json.dumps(function_response), name=function_name)
messages.append(function_message)

Make a final call to OpenAI with the response coming from the function and see how OpenAI uses the provided information:

response = client.run(
    messages=messages,
    generation_kwargs={"tools":tools}
)
The current weather in Berlin is sunny with a temperature of 21.8°C.
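
The reply's metadata tells these two cases apart: finish_reason is "stop" for a plain answer and "tool_calls" for a function call. We'll rely on this in the chat loop below.

print(response["replies"][0].meta["finish_reason"])  # 'stop'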

Improve the Example

Let's add another tool to our example and improve the user experience 👇

We'll add one more tool, use_haystack_pipeline, for OpenAI to use when there's a question about countries and capitals:

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use. Infer this from the users location.",
                    },
                },
                "required": ["location", "unit"],
            },
        }
    },
    {
        "type": "function",
        "function": {
            "name": "use_haystack_pipeline",
            "description": "Use for search about countries and capitals",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The query to use in the search. Infer this from the user's message",
                    },
                },
                "required": ["query"]
            },
        }
    },
]

def get_current_weather(location: str, unit: str = "celsius"):
  return {"weather": "sunny", "temperature": 21.8, "unit": unit}

def use_haystack_pipeline(query: str):
  # It returns a mock response
  return {"documents": "Cutopia is the capital of Utopia", "query": query}

available_functions = {
  "get_current_weather": get_current_weather,
  "use_haystack_pipeline": use_haystack_pipeline,
}
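
In a real application, use_haystack_pipeline would run an actual Haystack pipeline instead of returning a canned answer. Here's a minimal sketch with an in-memory document store and a BM25 retriever; the document content is made up to match the mock above:

from haystack import Document, Pipeline
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever

# Index a single toy document to search over
document_store = InMemoryDocumentStore()
document_store.write_documents([Document(content="Cutopia is the capital of Utopia")])

pipe = Pipeline()
pipe.add_component("retriever", InMemoryBM25Retriever(document_store=document_store))

def use_haystack_pipeline(query: str):
  # Run the retriever and return the matching document contents alongside the query
  result = pipe.run({"retriever": {"query": query}})
  return {"documents": [doc.content for doc in result["retriever"]["documents"]], "query": query}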

Start the Application

Have fun having a chat with OpenAI 🎉

Example queries you can try:

  • "What's the capital of Utopia?", "Is it sunny there?": To test that the messages are being recorded and sent
  • "What's the weather like in the capital of Utopia?": To force two function calls
  • "What's the weather like today?": To force OpenAI to ask for clarification

import json
from haystack.dataclasses import ChatMessage, ChatRole

messages = []
messages.append(ChatMessage.from_system("Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."))
response = None  # Reset so the first loop iteration asks for user input instead of reusing an earlier response

print(messages[-1].content)

while True:
  # if this is a tool call
  if response and response["replies"][0].meta["finish_reason"] == 'tool_calls':
    function_calls = json.loads(response["replies"][0].content)
    for function_call in function_calls:
      function_name = function_call["function"]["name"]
      function_to_call = available_functions[function_name]
      function_args = json.loads(function_call["function"]["arguments"])

      function_response = function_to_call(**function_args)
      function_message = ChatMessage.from_function(content=json.dumps(function_response), name=function_name)
      messages.append(function_message)

  # Regular Conversation
  else:
    # If the last message isn't the system prompt, record the assistant's reply before asking for input
    if not messages[-1].is_from(ChatRole.SYSTEM):
      messages.append(ChatMessage.from_assistant(response["replies"][0].content))

    user_input = input("INFO: Type 'exit' or 'quit' to stop\n")
    if user_input.lower() == "exit" or user_input.lower() == "quit":
      break
    else:
      messages.append(ChatMessage.from_user(user_input))

  response = client.run(
    messages=messages,
    generation_kwargs={"tools":tools}
  )
Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.
INFO: Type 'exit' or 'quit' to stop
What's the weather like today?
Sure, can you please tell me your current location?INFO: Type 'exit' or 'quit' to stop
utopia
The weather in Utopia today is sunny with a temperature of 21.8 degrees Celsius.INFO: Type 'exit' or 'quit' to stop
exit

This part can help you understand the message order:

print("\n=== SUMMARY ===")
for m in messages:
  print(f"\n - {m.content}")
=== SUMMARY ===

 - Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.

 - What's the weather like today?

 - Sure, can you please tell me your current location?

 - utopia

 - {"weather": "sunny", "temperature": 21.8, "unit": "celsius"}

 - The weather in Utopia today is sunny with a temperature of 21.8 degrees Celsius.
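
Each ChatMessage also stores its sender in a role field (a ChatRole enum), so you can print the role alongside the content to make the order even clearer:

for m in messages:
  print(f" - {m.role.value}: {m.content}")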