Automatic Tool Execution
Functionary provides a model that can make intelligent decisions about which functions/tools to use. However, it does not actually execute those functions/tools. To take this a step further, you can automatically execute the functions/tools once Functionary decides on them! In this guide, you will learn how to do that with chatlab. The code used in this tutorial is provided in this GitHub repository.
Prerequisites:

- An understanding of how to run a Functionary vLLM server and make API requests to the running server
- Basic skills in Python programming and interacting with APIs
- A machine with Functionary's dependencies installed

In this tutorial, you will learn:

- How to use chatlab
- How to configure and integrate chatlab directly with Functionary
- How to get a model response grounded in function outputs end-to-end with Functionary and chatlab
Start a Functionary vLLM server with the functionary-v1.4 model
The functionary-v1.4 model is trained with a 4K context window, so pass in `--max-model-len 4096`.
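For example, assuming you have cloned the Functionary repository, installed its requirements, and are using its `server_vllm.py` script, the command would look along these lines:

```shell
python3 server_vllm.py --model "meetkai/functionary-7b-v1.4" --host 0.0.0.0 --max-model-len 4096
```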
Install the chatlab Python package. In this tutorial, we will use version 1.3.0.
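For example:

```shell
pip install chatlab==1.3.0
```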
Please note that chatlab's `Chat` class currently does not support parallel function calling. Thus, this tutorial is compatible with Functionary version 1.4 only and may not work correctly with Functionary version 2.* models.
Let's assume that you are one of the car dealers at Functionary car dealership. You would like to create a chatbot that can assist you or your customers in quickly getting the prices of certain car models available in the dealership. You will create this Python function:
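A minimal sketch of such a function might look like this (the model names and prices here are illustrative placeholders):

```python
def get_car_price(car_name: str):
    """Get the price of a car model available at the dealership.

    :param car_name: name of the car model to look up
    """
    # Illustrative inventory; in practice this could query a real database.
    car_prices = {
        "rhino": "$20000",
        "elephant": "$25000",
    }
    price = car_prices.get(car_name.lower(), "unknown")
    return {"price": price}
```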
This function queries a dictionary mapping car model names to their respective prices. It returns an "unknown" value if the car model name is not found.
Before bringing in chatlab, let's first walk through what the workflow looks like without it.
Now, a customer approaches your car dealer chatbot asking about the price of the car model "Rhino". First, you would need to manually convert the Python function into the tool dictionary required by the OpenAI API:
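For the function above, the definition might look like this (Functionary v1.4 accepts the older OpenAI `functions` schema):

```python
functions = [
    {
        "name": "get_car_price",
        "description": "Get the price of a given car model at the dealership",
        "parameters": {
            "type": "object",
            "properties": {
                "car_name": {
                    "type": "string",
                    "description": "Name of the car model to look up",
                }
            },
            "required": ["car_name"],
        },
    }
]
```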
Thereafter, you would perform inference on Functionary.
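A sketch of that request, assuming the vLLM server from earlier is listening on localhost:8000 and you are using the openai Python client (v1 or later):

```python
from openai import OpenAI

# The local Functionary server speaks the OpenAI API; the key is a dummy value.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary")

messages = [
    {"role": "user", "content": "What is the price of the car named 'Rhino'?"}
]

response = client.chat.completions.create(
    model="meetkai/functionary-7b-v1.4",
    messages=messages,
    functions=functions,
)
print(response.choices[0].message)
```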
This yields the following response:
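The exact formatting depends on your client version, but the assistant message should contain a function call along these lines:

```
ChatCompletionMessage(
    content=None,
    role="assistant",
    function_call=FunctionCall(name="get_car_price", arguments='{"car_name": "Rhino"}'),
)
```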
As we can see above, Functionary makes the correct decision to call the `get_car_price` function with `Rhino` as input. However, we need to manually execute this function and append its output to the conversation for Functionary to generate appropriate model responses back to the customer.
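Continuing the sketch above, that manual loop looks something like this:

```python
import json

# Run the function Functionary asked for, with the arguments it provided.
fn_call = response.choices[0].message.function_call
args = json.loads(fn_call.arguments)
result = get_car_price(**args)

# Append both the assistant's function call and the function's output,
# then ask Functionary for a final answer grounded in that output.
messages.append(
    {
        "role": "assistant",
        "content": None,
        "function_call": {"name": fn_call.name, "arguments": fn_call.arguments},
    }
)
messages.append(
    {"role": "function", "name": fn_call.name, "content": json.dumps(result)}
)

final_response = client.chat.completions.create(
    model="meetkai/functionary-7b-v1.4",
    messages=messages,
    functions=functions,
)
print(final_response.choices[0].message.content)
```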
This yields the final response:
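With the illustrative prices above, the model's answer would be along the lines of:

```
The price of the Rhino is $20,000.
```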
This simple example shows that Functionary can:
- Intelligently decide on the correct function to use given the conversation
- Analyze the function output and generate a response grounded in that output
However, as you can see, this requires manually creating the function configuration and executing each called function until a model response is generated. This is where automatic tool execution becomes helpful.
Now, we show how Functionary can be further enhanced with automatic execution of Python functions. To call the real Python function, get its result, and fold that result into the model's response, you can use chatlab. The following example uses chatlab==1.3.0:
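A sketch of the chatlab setup, run inside a Jupyter notebook (where top-level `await` works), and assuming chatlab's underlying openai client honors the standard `OPENAI_*` environment variables:

```python
import os

# Point chatlab's underlying openai client at the local Functionary server.
# The API key is a dummy value; the local server does not check it.
os.environ["OPENAI_API_KEY"] = "functionary"
os.environ["OPENAI_BASE_URL"] = "http://localhost:8000/v1"

from chatlab import Chat

chat = Chat(model="meetkai/functionary-7b-v1.4")
chat.register(get_car_price)  # expose the Python function to the model

await chat("What is the price of the car named 'Rhino'?")
```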
The output will look like this:
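In a notebook, chatlab renders the intermediate function call and then the final answer; with the illustrative prices above, it would read roughly:

```
get_car_price({"car_name": "Rhino"}) → {"price": "$20000"}

The price of the Rhino is $20,000.
```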
Now, Functionary will be called iteratively and chatlab will automatically execute any function called by Functionary until no more functions are to be called and a model response is generated for the customer. This is all done with the ease of a single command:
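Namely, the single call at the end of the sketch above:

```python
await chat("What is the price of the car named 'Rhino'?")
```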
Congratulations on completing this tutorial! We hope you have learned how to further harness the power of Functionary by combining it with automatic tool execution libraries like chatlab. Feel free to try out the example notebooks in the GitHub repository and explore Functionary's function calling capabilities with your own functions. To recap, this tutorial covered:
- Performing inference using Functionary
- Experiencing how Functionary calls functions and generates model responses
- Learning how to execute functions called by Functionary automatically with chatlab