julienp79/occitan-gemma-3-4b-it-lora
julienp79/occitan-gemma-3-4b-it-lora is a 4.3-billion-parameter instruction-tuned causal language model fine-tuned from Google's Gemma-3-4B-IT. The model is optimized for the Occitan language, for both generating and understanding Occitan text. It was trained with LoRA on a curated dataset of Occitan text and instructions, and the adapter was then merged into the base model for direct use. Its primary strength is providing robust language capabilities for Occitan-specific applications.
Model Overview
julienp79/occitan-gemma-3-4b-it-lora is a specialized language model derived from Google's Gemma-3-4B-IT. It was fine-tuned with LoRA (Low-Rank Adaptation) for proficiency in the Occitan language. Training used a dedicated dataset of Occitan text and instructions, and the LoRA adapter was subsequently merged into the base model for streamlined deployment.
Key Capabilities
- Occitan Language Proficiency: Optimized for understanding and generating text in Occitan.
- Instruction Following: Capable of responding to instructions in Occitan, inherited from its instruction-tuned base.
- Versatile Deployment: Provided in multiple formats for various use cases:
  - Full Merged Safetensors: Compatible with `transformers` and `accelerate` for standard Python environments.
  - Quantized GGUF: Available in Q4_K_M, Q5_K_M, and Q8_0 for efficient local inference via tools like LM Studio, Ollama, or llama.cpp.
  - Raw LoRA Adapter: For researchers interested in inspecting or further merging the adapter weights.
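Loading the merged safetensors checkpoint follows the standard `transformers` pattern. A minimal sketch, assuming `transformers` and `accelerate` are installed; the repo id comes from this card, while the prompt and generation settings are purely illustrative:

```python
# Minimal sketch: load the merged checkpoint with transformers + accelerate.
# Repo id is from the model card; prompt and settings are illustrative only.
MODEL_ID = "julienp79/occitan-gemma-3-4b-it-lora"

def build_messages(user_prompt: str) -> list:
    # Gemma-style chat format: a single user turn.
    return [{"role": "user", "content": user_prompt}]

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    # Deferred imports so the sketch can be read without the heavy dependencies.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    # Apply the tokenizer's chat template and move token ids to the model device.
    inputs = tokenizer.apply_chat_template(
        build_messages(prompt), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

# Example usage (downloads ~8 GB of weights on first run):
# print(generate("Escriu una corta presentacion de la lenga occitana."))
```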
Ideal Use Cases
- Occitan Text Generation: Creating articles, stories, or conversational responses in Occitan.
- Occitan Language Research: Studying language patterns and model behavior specific to Occitan.
- Local Inference Applications: Running Occitan language tasks efficiently on consumer hardware using quantized versions.
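For the quantized GGUF files, one common route on consumer hardware is llama-cpp-python. A hedged sketch, assuming the library is installed and a GGUF file has been downloaded from the repo; the filename in the example is an assumption, so check the repo's file listing for the exact name:

```python
# Sketch of local inference over a quantized GGUF file with llama-cpp-python.
# Assumes the GGUF file was downloaded beforehand; the filename shown in the
# example below is hypothetical.
def chat_locally(model_path: str, prompt: str, max_tokens: int = 128) -> str:
    from llama_cpp import Llama  # deferred: heavy, optional dependency
    llm = Llama(model_path=model_path, n_ctx=2048, verbose=False)
    result = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
    )
    return result["choices"][0]["message"]["content"]

# Example (hypothetical filename for the Q4_K_M quantization):
# print(chat_locally("occitan-gemma-3-4b-it-lora-Q4_K_M.gguf",
#                    "Explica brèvament qué es la lenga occitana."))
```

LM Studio and Ollama wrap the same llama.cpp runtime behind their own interfaces, so the GGUF files listed above work there without this code.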