Harish Kumar

Software Engineer | Go | Python | AI/ML

Leveraging OpenAPI Specifications for Function Calling with LLMs

Function Calling with OpenAPI Specifications

Function calling is a very powerful concept with LLMs.

An LLM may not have access to real-time information. So when we make a query, we can provide a set of functions to the LLM and have it determine which of them can be used to answer the query.

In this post, I’m going to explain how to take the OpenAPI specification of an API and expose its endpoints as functions to an LLM.

Here are the steps:

  • Fetch the list of functions an API exposes from its OpenAPI specification.
  • Prompt the LLM with the user query and the set of functions.
  • The LLM chooses which function to use from the set and returns it along with the parameters needed to invoke it.
  • Invoke the actual function with the inputs provided by the LLM.
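
In pseudocode form, the flow looks like the sketch below. The helper names here are placeholders for the real steps implemented later in this post, not actual library calls:

# Sketch of the overall flow; each helper is a placeholder for a step
# implemented later in this post, not a real library function.
def answer_with_function_calling(user_query):
    functions = load_functions_from_openapi("openapi.yml")  # step 1: fetch functions
    prompt = build_prompt(user_query, functions)            # step 2: query + functions
    call_string = ask_llm(prompt)                           # step 3: LLM picks a function
    return invoke_function(call_string)                     # step 4: run the actual call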

A simple diagram of the flow is below.

[Diagram: function-calling flow between the user query, the LLM, and the API]

The code is hosted on GitHub.

Implementation

In this example, I’m going to use the Open-Meteo weather API. The OpenAPI specification is located at https://raw.githubusercontent.com/open-meteo/open-meteo/main/openapi.yml.

!wget https://raw.githubusercontent.com/open-meteo/open-meteo/main/openapi.yml
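
If you’re not in a notebook, the same spec can be fetched in plain Python; a minimal sketch using the requests library:

import requests

# Download the Open-Meteo OpenAPI spec to a local file
url = "https://raw.githubusercontent.com/open-meteo/open-meteo/main/openapi.yml"
with open("openapi.yml", "wb") as f:
    f.write(requests.get(url, timeout=30).content)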

The next step is to use openapi-python-generator to generate Python client code from the specification.

!openapi-python-generator openapi.yml ./api_specification_main/

from api_specification_main.services.WeatherForecastAPIs_service import get_v1forecast

I’ll be using get_v1forecast to get the latest weather information from Open-Meteo.

Prompting the LLM with the User Query and Functions

Let’s take the user query below.

user_query = "Hey how is the current weather and windspeed in Charlotte?"

I used Ollama running locally with the llama3 model for this implementation.

ollama run llama3

The model is now running on localhost.
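
You can sanity-check that the server is up by hitting Ollama’s local HTTP API; by default it listens on port 11434, and /api/tags lists the locally available models:

import requests

# Ollama's HTTP API listens on localhost:11434 by default;
# /api/tags returns the models pulled locally.
tags = requests.get("http://localhost:11434/api/tags", timeout=5).json()
print([m["name"] for m in tags.get("models", [])])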

The next step is to write a good prompt that carries the function definition:

  • Instruct the LLM to respond in JSON mode only, with the output under a "result" field.
  • Inspect the function signature and add it to the prompt as the function definition.
  • Add the user query to the prompt.
  • Provide an example response to the LLM.

Below is the prompt that I used.

import inspect

docstring = '''
Requires the latitude and longitude.
Set current_weather to True to get the weather.
Set hourly or daily based on preference.
Return ONLY the function call with the latitude and longitude of the location and no other explanation so I can eval the function directly.
Example "get_v1forecast(latitude=40.7128, longitude=-74.0060, current_weather=True)" as response ONLY, no other text.
Respond in JSON mode with function under the key "result" and value as function string.
'''

# Inspect the generated function so its signature can be embedded in the prompt
signature = inspect.signature(get_v1forecast)

prompt = f'''
Function:
{get_v1forecast.__name__}{signature}
"""{docstring}"""

User Query: {user_query}<human_end>'''
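
It’s worth printing the assembled prompt once to confirm that the inspected signature and the user query actually landed in it:

# Sanity check: the prompt should contain the function name,
# its full parameter list, and the user query.
print(prompt)
assert get_v1forecast.__name__ in prompt and user_query in prompt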

Using LangChain

I’m using LangChain to connect to Ollama.

from langchain.llms import Ollama
llm = Ollama(temperature=0, model="llama3")
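
Note that on newer LangChain releases the Ollama wrapper lives in the langchain_community package, so depending on your version the import may instead be:

# On recent LangChain versions the community integrations are split out
from langchain_community.llms import Ollama

llm = Ollama(temperature=0, model="llama3")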

Invoke the LLM

response = llm.invoke(prompt)
response

LLM Response

The LLM responds with the exact function call to make for the query under the "result" key. We can then evaluate that string to invoke the function.

import json

# Parse the JSON response and execute the returned function call
data = json.loads(response)
data
eval(data["result"])
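
A note of caution: eval on raw model output executes arbitrary code. A safer variant, sketched below assuming the model sticks to the single-call format we asked for, parses the string with the standard-library ast module and dispatches only to the known function:

import ast

def safe_invoke(call_string):
    # Parse 'get_v1forecast(latitude=..., ...)' without eval-ing it
    call = ast.parse(call_string, mode="eval").body
    if not (isinstance(call, ast.Call)
            and isinstance(call.func, ast.Name)
            and call.func.id == "get_v1forecast"):
        raise ValueError(f"Unexpected call: {call_string}")
    # Accept only literal keyword arguments (numbers, booleans, strings)
    kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}
    return get_v1forecast(**kwargs)

safe_invoke(data["result"])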

The response is below.

{'latitude': 35.216976,
 'longitude': -80.83189,
 'generationtime_ms': 0.0820159912109375,
 'utc_offset_seconds': 0,
 'timezone': 'GMT',
 'timezone_abbreviation': 'GMT',
 'elevation': 245.0,
 'current_weather_units': {'time': 'iso8601',
  'interval': 'seconds',
  'temperature': '°C',
  'windspeed': 'km/h',
  'winddirection': '°',
  'is_day': '',
  'weathercode': 'wmo code'},
 'current_weather': {'time': '2024-07-03T01:15',
  'interval': 900,
  'temperature': 26.2,
  'windspeed': 6.8,
  'winddirection': 90,
  'is_day': 0,
  'weathercode': 0}}

The returned data contains the response from Open-Meteo.
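
From there, answering the user’s question is just a matter of reading the fields out of the returned dictionary:

weather = eval(data["result"])  # the dictionary shown above
current = weather["current_weather"]
units = weather["current_weather_units"]
print(f"Temperature: {current['temperature']}{units['temperature']}, "
      f"windspeed: {current['windspeed']} {units['windspeed']}")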

Though this is a very simple example, it tied the whole flow together: fetching the OpenAPI specification for an API, generating a client and determining the method signatures, attaching the functions to the prompt, querying the LLM with the functions and the user query, and finally making the function call the LLM chose to get the weather information.