# Automatic Tool Execution
## Introduction

Functionary provides a model that can make intelligent decisions regarding which functions/tools to use. However, it does not actually execute the function/tool. To take this one level further, you can have functions/tools executed automatically once Functionary decides to call them! In this guide, you will learn how to do that with [ChatLab](https://github.com/rgbkrk/chatlab). The code used in this tutorial is provided in this GitHub repository.

### Prerequisites

- An understanding of how to run a Functionary vLLM server and make API requests to the running server
- Basic skills in Python programming and interacting with APIs

### What you'll learn

- How to use ChatLab
- How to configure and integrate ChatLab directly with Functionary
- How to get a model response grounded in function outputs, end to end, with Functionary and ChatLab

### What you'll need

- A machine capable of running inference with a local Functionary v1.4 model (at least 24GB of GPU VRAM)
- A machine with Functionary's dependencies installed

## Setup and requirements

### Functionary

Start a Functionary vLLM server with the Functionary v1.4 model:

```shell
python3 server_vllm.py --model "meetkai/functionary-7b-v1.4" --max-model-len 4096
```

The Functionary v1.4 model is trained on a context window of 4k, so pass in a `max-model-len` of 4096.

### ChatLab

Install the ChatLab Python package. In this tutorial, we will use version 1.3.0:

```shell
pip3 install chatlab==1.3.0
```

**Note on requirements:** ChatLab's `Chat` class currently doesn't support parallel function calling. Thus, this tutorial is compatible with Functionary version 1.4 only and may not work correctly with Functionary version 2 models.

## Define a Python function

Let's assume that you are one of the car dealers at the Functionary Car Dealership. You would like to create a chatbot that can assist you or your customers in quickly getting the prices of certain car models available in the dealership. You will create this Python function:

```python
def get_car_price(car_name: str):
    """This function is used to get the price of the car given the name.

    :param car_name: name of the car to get the price
    """
    car_price = {
        "rhino": {"price": "$20000"},
        "elephant": {"price": "$25000"},
    }
    for key in car_price:
        if key in car_name.lower():
            return {"price": car_price[key]}
    return {"price": "unknown"}
```

This function queries a dictionary mapping car model names to their respective prices. It returns an `"unknown"` value if the car model name is not found.
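Before wiring this function into Functionary, it can help to sanity-check it locally. A quick check (the example inputs below are our own) also highlights two details of its behavior: the match is substring-based and case-insensitive, and the returned dictionary nests the price:

```python
def get_car_price(car_name: str):
    """This function is used to get the price of the car given the name."""
    car_price = {
        "rhino": {"price": "$20000"},
        "elephant": {"price": "$25000"},
    }
    for key in car_price:
        # Substring match on the lowercased input, so "Rhino SUV" also resolves
        if key in car_name.lower():
            return {"price": car_price[key]}
    return {"price": "unknown"}

print(get_car_price("Rhino SUV"))  # nested: {'price': {'price': '$20000'}}
print(get_car_price("Tesla"))      # {'price': 'unknown'}
```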
## Before you begin

Before you begin, let's imagine packages like ChatLab are not around.

### Manually create the LLM function

Now, a customer approaches your car dealer chatbot asking about the price of the car model "rhino". Firstly, you would need to manually convert the Python function into the tool dictionary required by the OpenAI API:

```python
functions = [
    {
        "name": "get_car_price",
        "description": "This function is used to get the price of the car given the name",
        "parameters": {
            "type": "object",
            "properties": {
                "car_name": {
                    "type": "string",
                    "description": "name of the car to get the price",
                }
            },
            "required": ["car_name"],
        },
    }
]
```
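Hand-writing this schema is tedious and error-prone; tool-execution libraries such as ChatLab derive it from the function's signature and docstring instead. A minimal sketch of that idea using only the standard library (`to_openai_function` and `TYPE_MAP` are our own illustrative helpers, not part of any library):

```python
import inspect

# Map Python annotations to JSON Schema types
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def to_openai_function(fn):
    """Build an OpenAI-style function schema from a Python signature."""
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": TYPE_MAP.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value -> required parameter
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": props, "required": required},
    }

def get_car_price(car_name: str):
    """This function is used to get the price of the car given the name"""

schema = to_openai_function(get_car_price)
```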
Thereafter, you would perform inference on Functionary:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary")

messages = [
    {
        "role": "user",
        "content": "What is the price of the car named 'rhino'?",
    }
]

assistant_msg = client.chat.completions.create(
    model="meetkai/functionary-7b-v1.4",
    messages=messages,
    functions=functions,
    temperature=0.0,
).choices[0].message
```

This yields the following response:

```python
ChatCompletionMessage(
    content=None,
    role='assistant',
    function_call=FunctionCall(
        arguments='{"car_name": "rhino"}',
        name='get_car_price'
    ),
    tool_calls=None,
    tool_call_id=None,
    name=None
)
```

### Manually execute the function

As we can see above, Functionary makes the correct decision to call the `get_car_price` function with `rhino` as input. However, we need to manually execute this function and append its output to the conversation for Functionary to generate an appropriate model response back to the customer:

```python
import json

def execute_function_call(message):
    if message.function_call.name == "get_car_price":
        car_name = json.loads(message.function_call.arguments)["car_name"]
        results = get_car_price(car_name)
    else:
        results = f"Error: function {message.function_call.name} does not exist"
    return results

if assistant_msg.function_call is not None:
    results = execute_function_call(assistant_msg)
    messages.append({
        "role": "assistant",
        "name": assistant_msg.function_call.name,
        "content": assistant_msg.function_call.arguments,
    })
    messages.append({
        "role": "function",
        "name": assistant_msg.function_call.name,
        "content": str(results),
    })

output_msg = client.chat.completions.create(
    model="meetkai/functionary-7b-v1.4",
    messages=messages,
    functions=functions,
    temperature=0.0,
).choices[0].message
print(output_msg)
```

This yields the final response:

```python
ChatCompletionMessage(
    content="The price of the car named 'rhino' is $20000.",
    role='assistant',
    function_call=None,
    tool_calls=None,
    tool_call_id=None,
    name=None,
)
```

This simple example shows that Functionary can intelligently decide on the correct function to use given the conversation, analyze the function output, and generate a response grounded in that output. However, as you can see, this requires manually creating the function configuration and executing the functions called until a model response is generated. This is where automatic tool execution will be helpful.
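The manual steps above generalize into a simple loop: call the model, execute whatever function it requests, append the result to the conversation, and repeat until plain content comes back. Here is a sketch of that loop with a stubbed `create_fn` standing in for a live `client.chat.completions.create` call (`run_until_response`, `REGISTRY`, and the stub are our own illustrations, not part of Functionary or ChatLab):

```python
import json
from types import SimpleNamespace

def get_car_price(car_name: str):
    """Get the price of a car given its name (from the tutorial)."""
    car_price = {"rhino": {"price": "$20000"}, "elephant": {"price": "$25000"}}
    for key in car_price:
        if key in car_name.lower():
            return {"price": car_price[key]}
    return {"price": "unknown"}

REGISTRY = {"get_car_price": get_car_price}

def run_until_response(create_fn, messages, functions, max_turns=5):
    """Call the model via create_fn, executing each requested function and
    appending its output, until the model returns plain content."""
    for _ in range(max_turns):
        msg = create_fn(messages=messages, functions=functions)
        if msg.function_call is None:
            return msg.content
        name = msg.function_call.name
        fn = REGISTRY.get(name)
        args = json.loads(msg.function_call.arguments)
        result = fn(**args) if fn else f"Error: function {name} does not exist"
        # Mirror the two manual appends from the section above
        messages.append({"role": "assistant", "name": name,
                         "content": msg.function_call.arguments})
        messages.append({"role": "function", "name": name,
                         "content": str(result)})
    raise RuntimeError("model never produced a final response")

# Stub model: first requests get_car_price, then answers from the function output.
def fake_create(messages, functions):
    if messages[-1]["role"] == "user":
        call = SimpleNamespace(name="get_car_price",
                               arguments='{"car_name": "rhino"}')
        return SimpleNamespace(content=None, function_call=call)
    return SimpleNamespace(content=f"The price is {messages[-1]['content']}.",
                           function_call=None)

conversation = [{"role": "user",
                 "content": "What is the price of the car named 'rhino'?"}]
answer = run_until_response(fake_create, conversation, functions=[])
```

This is essentially what ChatLab automates for you, as shown in the next section.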
## Call real Python functions automatically

Now, we show that Functionary can be further enhanced with automatic execution of Python functions. To call the real Python function, get the result, and use the result in the response, you can use [ChatLab](https://github.com/rgbkrk/chatlab). The following example uses `chatlab==1.3.0`:

```python
import chatlab
import asyncio

chat = chatlab.Chat(
    model="meetkai/functionary-7b-v1.4",
    base_url="http://localhost:8000/v1",
    api_key="functionary",
)
chat.register(get_car_price)

asyncio.run(chat.submit("What is the price of the car named 'rhino'?", stream=False))

for message in chat.messages:
    role = message["role"].upper()
    if "function_call" in message:
        func_name = message["function_call"]["name"]
        func_param = message["function_call"]["arguments"]
        print(f"{role}: call function {func_name}, arguments: {func_param}")
    else:
        content = message["content"]
        print(f"{role}: {content}")
```

The output will look like this:

```
USER: What is the price of the car named 'rhino'?
ASSISTANT: call function get_car_price, arguments: {"car_name": "rhino"}
FUNCTION: {'price': {'price': '$20000'}}
ASSISTANT: The price of the car named 'rhino' is $20000.
```
Now, Functionary will be called iteratively, and ChatLab will automatically execute any function called by Functionary until no more functions are to be called and a model response is generated for the customer. This is all done with the ease of a single command:

```python
chat.submit("What is the price of the car named 'rhino'?")
```

## Congratulations!

Congratulations on completing this tutorial! We hope you have learned how to further harness the power of Functionary by combining it with automatic tool execution libraries like ChatLab. Feel free to try out the example notebooks in the GitHub repository and explore Functionary's function-calling capabilities with your own functions.

### Summary

- Performing inference using Functionary
- Experiencing how Functionary calls functions and generates model responses
- Learning how to execute functions called by Functionary automatically with ChatLab