Model Overview
Distil-gitara-v2-Llama-3.2-1B-Instruct is a 1 billion parameter function-calling model developed by Distil Labs. Fine-tuned from meta-llama/Llama-3.2-1B-Instruct, its primary purpose is to convert plain English descriptions of Git operations into structured JSON tool calls, which can then be executed as git commands. This model is the smallest variant in the Gitara series, designed for efficiency in resource-constrained settings.
Key Capabilities
- Natural Language to Git Command Translation: Translates user queries like "push feature-x to origin, override any changes there and track it" into a JSON object representing a git push command with the appropriate parameters.
- Supported Git Commands: Handles 13 common Git commands: status, add, commit, push, pull, branch, switch, restore, merge, stash, rebase, reset, and log.
- High Accuracy for its Size: Achieves 90% accuracy on a held-out test set, a significant improvement over the base model's 0% accuracy, demonstrating effective knowledge distillation from a 120B parameter teacher model (GPT-OSS-120B).
- Resource-Efficient: Optimized for faster inference and deployment on memory-constrained devices, making it suitable for mobile or embedded systems.
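To make the translation pipeline concrete, here is a minimal sketch of turning a structured tool call into an executable git command string. The `{"name": ..., "arguments": {...}}` schema, the `git_` name prefix, and the flag-naming convention are assumptions for illustration; the model's actual output format is documented in the repository.

```python
import json
import shlex

def tool_call_to_git(tool_call_json: str) -> str:
    """Convert a JSON tool call into a git command string.

    Assumes a {"name": ..., "arguments": {...}} schema with boolean
    arguments mapping to flags; the model's real schema may differ.
    """
    call = json.loads(tool_call_json)
    name = call["name"].removeprefix("git_")  # e.g. "git_push" -> "push"
    parts = ["git", name]
    for key, value in call.get("arguments", {}).items():
        if isinstance(value, bool):
            if value:  # true booleans become flags, e.g. force -> --force
                parts.append(f"--{key.replace('_', '-')}")
        else:
            parts.append(str(value))
    return " ".join(shlex.quote(p) for p in parts)

example = ('{"name": "git_push", "arguments": {"remote": "origin", '
           '"branch": "feature-x", "force": true, "set_upstream": true}}')
print(tool_call_to_git(example))
# git push origin feature-x --force --set-upstream
```

Quoting each part with `shlex.quote` keeps the assembled string safe to pass to a shell, which matters when argument values come from model output.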
Training and Evaluation
The model was trained with LoRA fine-tuning, starting from ~100 manually validated seed examples that were expanded into 10,000 synthetic training examples. In evaluation, the 1B model reached 0.90 accuracy, close to the 0.92 achieved by both its 120B teacher and the 3B Gitara variant, at a fraction of their size.
When to Use This Model
- Memory-constrained devices: Ideal for environments where computational resources are limited.
- Faster inference: When quick response times are critical.
- Acceptable 0.90 accuracy: If a 90% success rate for Git command translation meets your application's requirements.
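Since roughly one in ten outputs may be wrong at 0.90 accuracy, it is prudent to validate the model's output against the 13 supported commands before executing anything. A minimal sketch of such a gate, again assuming a `{"name": ..., "arguments": {...}}` schema with a `git_` prefix (hypothetical details, not the model's documented format):

```python
import json

# The 13 git commands the model supports, per the model card.
SUPPORTED = {"status", "add", "commit", "push", "pull", "branch", "switch",
             "restore", "merge", "stash", "rebase", "reset", "log"}

def is_safe_call(tool_call_json: str) -> bool:
    """Return True only for a well-formed call to a supported git command."""
    try:
        call = json.loads(tool_call_json)
    except json.JSONDecodeError:
        return False  # model emitted something that is not valid JSON
    name = call.get("name", "").removeprefix("git_")
    return name in SUPPORTED and isinstance(call.get("arguments", {}), dict)

print(is_safe_call('{"name": "git_push", "arguments": {"remote": "origin"}}'))  # True
print(is_safe_call('{"name": "git_rm", "arguments": {}}'))                      # False
```

Rejected calls can be surfaced to the user for confirmation instead of being run, which keeps the occasional mistranslation from touching the repository.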
For more detailed usage instructions and code examples, refer to the GitHub repository.