julienp79/occitan-gemma-3-4b-it-dora
Vision · Model size: 4.3B · Quant: BF16 · Context length: 32k · Published: Apr 16, 2026 · License: Gemma · Architecture: Transformer
The julienp79/occitan-gemma-3-4b-it-dora model is a 4.3-billion-parameter, instruction-tuned variant of Gemma-3-4B-IT, fine-tuned by julienp79 specifically for the Occitan language. It was trained with DoRA (Weight-Decomposed Low-Rank Adaptation), which learns linguistic patterns more effectively than standard LoRA. The model is optimized for generating and understanding Occitan text, making it suitable for applications that need strong Occitan language capabilities.
Occitan Gemma-3-4B-IT (DoRA Merged)
This model, developed by julienp79, is a specialized fine-tuned version of Google's Gemma-3-4B-IT, specifically optimized for the Occitan language. It features 4.3 billion parameters and a context length of 32768 tokens.
Key Capabilities & Features
- Occitan Language Proficiency: Designed to excel in generating and understanding text in Occitan.
- DoRA Fine-tuning: Utilizes Weight-Decomposed Low-Rank Adaptation (DoRA) for training. This method decomposes weights into magnitude and direction components, allowing for more effective learning of linguistic patterns, closely resembling the results of full fine-tuning.
- Multiple Formats Available:
  - Full merged Safetensors weights for direct use with transformers.
  - Quantized versions (Q4_K_M, Q5_K_M, Q8_0, etc.) in GGUF format for local inference via tools like LM Studio, Ollama, or llama.cpp.
  - Raw DoRA adapter files.
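As a minimal sketch of the first option, the merged Safetensors weights can be loaded with transformers roughly as below. The repo id comes from this card; the dtype, `device_map`, generation settings, and the sample Occitan prompt are illustrative assumptions, not documented defaults.

```python
# Sketch of loading the merged Safetensors weights with transformers.
# Repo id is from this model card; dtype, device_map, and the sample
# prompt are assumptions for illustration.

def build_chat(prompt: str) -> list[dict]:
    """Wrap a user prompt in the message format used by chat templates."""
    return [{"role": "user", "content": prompt}]

def generate(prompt: str,
             model_id: str = "julienp79/occitan-gemma-3-4b-it-dora",
             max_new_tokens: int = 128) -> str:
    # Imports kept local so the helper above stays usable without transformers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_chat(prompt), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Escriu una corta presentacion de la lenga occitana."))
```

The model download and generation only run when the script is executed directly, so the helper can be reused without pulling the weights.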
Good for
- Applications requiring robust text generation and comprehension in Occitan.
- Developers looking for a specialized, efficient model for Occitan language tasks.
- Research into low-rank adaptation techniques, particularly DoRA, and their impact on specific language fine-tuning.
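For the last point, DoRA's weight decomposition can be illustrated with a small NumPy sketch (the shapes and rank here are toy assumptions, not this model's actual configuration): the pretrained weight is split into a per-column magnitude and a direction, the LoRA-style low-rank update `B @ A` adjusts the direction, and the learned magnitude rescales each column after renormalization.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 6, 2   # toy shapes, not the model's real dimensions

# Pretrained weight and its DoRA decomposition: the per-column magnitude m
# is trained directly, while the direction is the column-normalized weight.
W0 = rng.standard_normal((d_out, d_in))
m = np.linalg.norm(W0, axis=0, keepdims=True)  # shape (1, d_in)

# Low-rank update, as in LoRA: delta = B @ A with rank << min(d_out, d_in).
B = rng.standard_normal((d_out, rank)) * 0.01
A = rng.standard_normal((rank, d_in)) * 0.01

# DoRA merge: apply the low-rank update to the direction, renormalize
# column-wise, then rescale each column by the learned magnitude.
V = W0 + B @ A
W_merged = m * (V / np.linalg.norm(V, axis=0, keepdims=True))

# Each column of the merged weight has exactly the learned magnitude,
# so magnitude and direction are trained as separate components.
assert np.allclose(np.linalg.norm(W_merged, axis=0), m.ravel())
```

This separation of magnitude from direction is what lets DoRA track full fine-tuning more closely than plain LoRA, which updates both at once through a single additive term.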