Tool calling enables Large Language Models (LLMs) to interact with external systems, execute programs, and access real-time information unavailable in their training data. This capability allows LLMs to process natural language queries, map them to specific functions or APIs, and populate the required parameters from user inputs. It is essential for building AI agents capable of tasks like checking inventory, retrieving weather data, and managing workflows, and access to real-time information generally improves an agent's decision making.
To effectively perform function calling, an LLM must:
- Select the correct function(s)/tool(s) from a set of available options.
- Extract and populate the appropriate parameters for each chosen tool from a user's natural language query.
- Plan ahead and chain multiple actions together in multi-turn (back-and-forth interaction with users) and multi-step (breaking a response into smaller parts) use cases (a minimal sketch follows this list).
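As a hypothetical illustration of these requirements (the query, tool names, and parameters are invented, not taken from the dataset), the model has to map a query to the right functions, fill in their parameters, and chain the calls across steps:

# Hypothetical illustration: for the query "If it will rain in Paris tomorrow,
# check how many umbrellas store 42 has in stock", the model must select the
# right tools, fill their parameters from the query, and chain the calls
# across steps. Tool names and parameters are invented for illustration.
step_1 = {"name": "get_weather_forecast",
          "arguments": {"city": "Paris", "day": "tomorrow"}}

# Issued in a second step, only if the forecast from step 1 returned rain:
step_2 = {"name": "check_inventory",
          "arguments": {"store_id": "42", "item": "umbrella"}}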
As the number of tools and their complexity increases, customization becomes critical for maintaining accuracy and efficiency. With parameter-efficient techniques like Low-Rank Adaptation (LoRA), smaller models can also achieve performance comparable to larger ones. LoRA is compute- and data-efficient: a small one-time investment to train the LoRA adapter lets you reap inference-time benefits from a more efficient, "bespoke" model.
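For intuition, here is a rough sketch of the LoRA idea itself (illustrative only, not how NeMo Customizer implements it): the base weight matrix stays frozen and only a low-rank update is trained, which is why the adapter remains small.

# Illustrative sketch of the LoRA update, not NeMo Customizer's implementation.
# A frozen weight matrix W of shape (d, k) is augmented with a low-rank update
# B @ A, where A is (r, k) and B is (d, r) with rank r much smaller than d, k.
import numpy as np

d, k, r, alpha = 2048, 2048, 16, 32
W = np.random.randn(d, k)          # frozen base weights (not updated)
A = np.random.randn(r, k) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))               # trainable low-rank factor, starts at zero

def lora_forward(x):
    # Effective weight is W + (alpha / r) * (B @ A); during fine-tuning only
    # A and B receive gradient updates, so the adapter is small and cheap to train.
    return x @ (W + (alpha / r) * (B @ A)).T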
The Salesforce xLAM dataset contains approximately 60,000 training examples specifically designed to enhance language models' function calling capabilities. This dataset has proven particularly valuable for fine-tuning smaller language models (1B-2B parameters) through parameter-efficient techniques like LoRA. The dataset enables models to respond to user queries with executable functions, providing outputs in JSON format that can be directly processed by downstream systems.
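To make that output format concrete, the sketch below mimics a single record of this kind: a user query, the tools available for it, and the ground-truth calls serialized as JSON. The field names follow the dataset card at the time of writing, and the conversion tool and values are invented for illustration.

# Illustrative record in the style of the xLAM function-calling dataset.
# The field names ("query", "tools", "answers") follow the dataset card;
# the conversion tool and its values are invented.
import json

record = {
    "query": "Convert 100 US dollars to euros.",
    "tools": json.dumps([{
        "name": "convert_currency",
        "description": "Convert an amount from one currency to another.",
        "parameters": {
            "amount": {"type": "float"},
            "from_currency": {"type": "str"},
            "to_currency": {"type": "str"},
        },
    }]),
    "answers": json.dumps([{
        "name": "convert_currency",
        "arguments": {"amount": 100, "from_currency": "USD", "to_currency": "EUR"},
    }]),
}

# The fine-tuned model is trained to emit the "answers" JSON, which can be
# parsed and executed directly by downstream systems:
print(json.loads(record["answers"]))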
The NVIDIA NeMo microservices platform provides a flexible foundation for building AI workflows such as fine-tuning, evaluation, running inference, or applying guardrails to AI models on your Kubernetes cluster, on-premises or in the cloud. Refer to the documentation for further information.
This end-to-end tutorial shows how to use the NeMo Microservices platform to customize Llama-3.2-1B-Instruct with the xLAM function-calling dataset, evaluate its accuracy, and finally safeguard the customized model's behavior.

Figure 2: End-to-end architecture of LoRA training, evaluation, and deployment with NeMo Microservices and NIM
The following stages will be covered in this set of tutorials:
- Preparing Data for fine-tuning and evaluation
- Customizing the model with LoRA fine-tuning
- Evaluating the accuracy of the customized model
- Adding Guardrails to safeguard your LLM behavior
Note: The LoRA fine-tuning of the Llama-3.2-1B-Instruct model takes up to 45 minutes to complete.
To follow this tutorial, you will need at least two NVIDIA GPUs, which will be allocated as follows:
- Fine-tuning: One GPU for fine-tuning the llama-3.2-1b-instruct model using NeMo Customizer.
- Inference: One GPU for deploying the llama-3.2-1b-instruct NIM for inference.
NOTE: Notebook 4_adding_safety_guardrails asks the user to use one GPU for deploying the llama-3.1-nemoguard-8b-content-safety NIM to add content safety guardrails to user input. This will re-use the GPU that was previously used for fine-tuning in notebook 2.
Refer to the platform prerequisites and installation guide to deploy NeMo Microservices.
This step is similar to the NIM deployment instructions in the documentation, but with the following values:
# URL to NeMo deployment management service
export NEMO_URL="http://nemo.test"

curl --location "$NEMO_URL/v1/deployment/model-deployments" \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "llama-3.2-1b-instruct",
    "namespace": "meta",
    "config": {
      "model": "meta/llama-3.2-1b-instruct",
      "nim_deployment": {
        "image_name": "nvcr.io/nim/meta/llama-3.2-1b-instruct",
        "image_tag": "1.8.3",
        "pvc_size": "25Gi",
        "gpu": 1,
        "additional_envs": {
          "NIM_GUIDED_DECODING_BACKEND": "fast_outlines"
        }
      }
    }
  }'
The NIM deployment described above should take approximately 10 minutes to go live. You can continue with the remaining steps while the deployment is in progress.
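If you want to confirm when the deployment is live, one option is to poll the NIM's OpenAI-compatible /v1/models endpoint until the new model appears. The NIM_URL value below is an assumption and should match the NIM URL for your cluster.

# Poll the NIM's OpenAI-compatible /v1/models endpoint until the newly
# deployed model shows up. NIM_URL is an assumption; replace it with the
# NIM URL for your cluster (the same value you will put in config.py).
import time
import requests

NIM_URL = "http://nim.test"

while True:
    try:
        models = requests.get(f"{NIM_URL}/v1/models", timeout=10).json().get("data", [])
        if any(m.get("id") == "meta/llama-3.2-1b-instruct" for m in models):
            print("meta/llama-3.2-1b-instruct is live")
            break
    except (requests.RequestException, ValueError):
        pass  # endpoint not reachable or not returning JSON yet
    time.sleep(30)  # the deployment typically takes about 10 minutes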
If you previously deployed the meta/llama-3.1-8b-instruct NIM during the Beginner Tutorial and are running on a cluster with at most two NVIDIA GPUs, you will need to delete the previous meta/llama-3.1-8b-instruct deployment to free up resources. This ensures sufficient GPU availability to run the meta/llama-3.2-1b-instruct model while keeping one GPU available for fine-tuning and another for the content safety NIM.
export NEMO_URL="http://nemo.test"
curl -X DELETE "$NEMO_URL/v1/deployment/model-deployments/meta/llama-3.1-8b-instruct"
Ensure you have:
- A Python-enabled machine capable of running Jupyter Lab.
- Network access to the NeMo Microservices IP and ports.
- Access to the xlam-function-calling-60k dataset: go to xlam-function-calling-60k and request access, which should be granted instantly.
- Your Hugging Face access token.
- Create a virtual environment. This is recommended to isolate project dependencies.
  python3 -m venv nemo_env
  source nemo_env/bin/activate
- Install the required Python packages using requirements.txt.
  pip install -r requirements.txt
- Update the following variables in config.py with your specific URLs and API keys.
  # (Required) NeMo Microservices URLs
  NDS_URL = ""   # Data Store
  NEMO_URL = ""  # Customizer, Entity Store, Evaluator, Guardrails
  NIM_URL = ""   # NIM

  # (Required) Hugging Face Token
  HF_TOKEN = ""

  # (Optional) To observe training with WandB
  WANDB_API_KEY = ""
- Launch Jupyter Lab to begin working with the provided tutorials.
  jupyter lab --ip 0.0.0.0 --port=8888 --allow-root
- Navigate to the data preparation tutorial to get started.
- The workflow showcased in this tutorial for tool-calling fine-tuning is tailored to work with NVIDIA NIM for inference. It won't work with other inference providers (for example, vLLM, SGLang, TGI).
- For improved inference speeds, NIM must use the fast_outlines guided decoding backend. This is the default if NIM is deployed with the NeMo Microservices Helm Chart. However, if NIM is deployed separately, you need to set the NIM_GUIDED_DECODING_BACKEND=fast_outlines environment variable (a sketch of calling the deployed NIM follows this list).
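For reference, once the NIM deployment is live, a tool-calling request against its OpenAI-compatible chat completions endpoint might look like the sketch below. NIM_URL is assumed to match the value you set in config.py, and the check_inventory tool is hypothetical.

# Minimal sketch of a tool-calling request against the deployed NIM through
# its OpenAI-compatible API. NIM_URL is an assumption (use the value from
# config.py); the check_inventory tool is hypothetical.
from openai import OpenAI

NIM_URL = "http://nim.test"
client = OpenAI(base_url=f"{NIM_URL}/v1", api_key="not-used")

tools = [{
    "type": "function",
    "function": {
        "name": "check_inventory",
        "description": "Check how many units of an item a store has in stock.",
        "parameters": {
            "type": "object",
            "properties": {
                "store_id": {"type": "string"},
                "item": {"type": "string"},
            },
            "required": ["store_id", "item"],
        },
    },
}]

response = client.chat.completions.create(
    model="meta/llama-3.2-1b-instruct",
    messages=[{"role": "user", "content": "How many umbrellas does store 42 have?"}],
    tools=tools,
    tool_choice="auto",
)
print(response.choices[0].message.tool_calls)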
If you decide to use your own dataset or implement a different data preparation approach:
- There may be a response delay issue in tool calling due to incomplete type info. Tool calls might take over 30 seconds if descriptions for array types lack items specifications, or if descriptions for object types lack properties specifications. As a workaround, make sure to include these details (items for array, properties for object) in tool descriptions (see the example schema after this list).
- Response freezing in tool calling (too many parameters): Tool calls will freeze the NIM if a tool description includes a function with more than 8 parameters. As a workaround, ensure functions defined in tool descriptions use 8 or fewer parameters. If this does occur, the NIM must be restarted. This will be resolved in the next NIM release.
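For example, a tool description that avoids both issues might look like the following. The schema itself is hypothetical; the points to note are the items entry on the array parameter, the properties entry on the object parameter, and the total parameter count staying at or below 8.

# Hypothetical tool description that satisfies both workarounds: the array
# parameter declares "items", the object parameter declares "properties",
# and the function defines fewer than 8 parameters in total.
safe_tool = {
    "type": "function",
    "function": {
        "name": "create_order",
        "description": "Create an order for a list of products.",
        "parameters": {
            "type": "object",
            "properties": {
                "customer_id": {"type": "string"},
                "product_ids": {              # array type with "items" specified
                    "type": "array",
                    "items": {"type": "string"},
                },
                "shipping_address": {         # object type with "properties" specified
                    "type": "object",
                    "properties": {
                        "street": {"type": "string"},
                        "city": {"type": "string"},
                    },
                },
            },
            "required": ["customer_id", "product_ids"],
        },
    },
}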