Overview
This model, DiTy/gemma-2-9b-it-russian-strict-function-calling-DPO, is a 9-billion-parameter Gemma-2 variant developed by DiTy. It is an aligned version of the DiTy/gemma-2-9b-it-russian-function-calling-GGUF model, specifically tuned for strict function-calling tasks in Russian. Its key differentiator is DPO (Direct Preference Optimization) training on a human-annotated Russian preference dataset, which makes the model highly disciplined about staying within the set of defined functions.
Key Capabilities
- Strict Function Calling: The model is designed to strictly adhere to provided functions, refusing to answer questions outside their scope. This is demonstrated by its refusal to answer general knowledge or creative writing prompts when only function-calling tools are available.
- Russian Language Support: Optimized for function calling in Russian, utilizing a dedicated Russian dataset for training.
- Preference Optimization (DPO): Trained with non-synthetic, human-annotated preference data to refine its behavior towards strict function adherence.
- GGUF Format Availability: In addition to safetensors, the model is also provided in the GGUF format for broader compatibility and ease of use.
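Because the model is trained to refuse requests outside the declared function set, an application can pair it with an equally strict dispatcher on its side. The sketch below shows one way to do this; the JSON call format (`{"name": ..., "arguments": {...}}`), the `get_weather` function, and the error messages are illustrative assumptions, not the model's documented output format — consult the model card for the exact template.

```python
# Application-side dispatcher for a strict function-calling model (sketch).
# Assumption: the model emits a call as JSON like
#   {"name": "get_weather", "arguments": {"city": "Москва"}}
# The function names and messages here are hypothetical examples.
import json

def get_weather(city: str) -> str:
    # Stub standing in for a real weather-API call.
    return f"Погода в городе {city}: ясно, +20°C"

# Only functions registered here may ever be executed,
# mirroring the model's own strict adherence to declared tools.
ALLOWED_FUNCTIONS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse the model's function call and execute it,
    rejecting anything outside the registered function set."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return "Ошибка: модель не вернула корректный вызов функции."  # "model did not return a valid call"
    fn = ALLOWED_FUNCTIONS.get(call.get("name"))
    if fn is None:
        return "Ошибка: функция не определена."  # "function is not defined"
    return fn(**call.get("arguments", {}))

print(dispatch('{"name": "get_weather", "arguments": {"city": "Москва"}}'))
```

In a real agent loop, `dispatch` would run on the text the model generates after a user turn, and its return value would be fed back to the model as the function's result.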
Recommended Use Cases
- Tool-use Agents: Ideal for building agents that need to reliably call specific functions and avoid off-topic responses.
- Automated Workflows: Suitable for systems where precise execution of predefined actions based on user input is critical.
- Russian-language Applications: Excellent for integrating function-calling capabilities into Russian-speaking environments where strict control over model output is required.