iuriesula99/Haiduk-27B
Haiduk-27B: A Multilingual Instruction-Tuned Model
Haiduk-27B is a 27-billion-parameter instruction-tuned language model developed by iuriesula99, building on Google's Gemma-3-27b-it and TheDrummer/Big-Tiger-Gemma-27B-v3. It features a 32768-token context window, letting it process and generate longer, more coherent texts.
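The card does not include usage instructions, but assuming the repository exposes a standard transformers causal-LM checkpoint inherited from its Gemma-3 base (an assumption, not confirmed by the card), loading it could look like this minimal sketch:

```python
# Minimal loading sketch. Assumes the repo ships a standard
# transformers causal-LM checkpoint (inherited from Gemma-3);
# this is not confirmed by the model card itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iuriesula99/Haiduk-27B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 27B parameters is roughly 54 GB of weights in bf16
    device_map="auto",           # shard across available GPUs
)
```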
Key Capabilities
- Multilingual Support: Trained with a focus on English, Romanian, and Russian, making it suitable for applications requiring proficiency in these languages.
- Extended Context Window: The 32768-token context length allows for handling complex queries, detailed conversations, and extensive document analysis.
- Instruction Following: As an instruction-tuned model, Haiduk-27B is designed to interpret and carry out user instructions accurately, making it versatile across NLP tasks; see the usage sketch after this list.
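To illustrate the instruction-following and multilingual claims above, the sketch below sends a Romanian prompt through the tokenizer's chat template. It reuses `tokenizer` and `model` from the loading example and assumes a chat template is bundled with the tokenizer, as is standard for Gemma-3 instruction-tuned checkpoints:

```python
# Instruction-following sketch; reuses `tokenizer` and `model` from the
# loading example. Assumes a Gemma-style chat template is bundled.
messages = [
    # Romanian: "Briefly explain what a language model is."
    {"role": "user", "content": "Explică pe scurt ce este un model de limbaj."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```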
Training Data
The model's training incorporates several specialized datasets:
- iuriesula99/TigerPhase1
- iuriesula99/TigerDataset2
- iuriesula99/TigerPhase3
- iuriesula99/TigerPhase4
Good For
- Applications requiring strong performance in English, Romanian, and Russian.
- Tasks benefiting from a large context window, such as summarization of long documents (sketched after this list), detailed question answering, and complex code generation.
- General instruction-following tasks where precise and context-aware responses are crucial.
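As a concrete instance of the long-document use case, the sketch below truncates an input to fit within the 32768-token window before requesting a summary. It reuses `tokenizer` and `model` from the examples above; `long_document` is a hypothetical placeholder string, and the headroom figure is an arbitrary choice, not a recommendation from the card.

```python
# Long-document summarization sketch. `long_document` is a placeholder;
# RESERVED leaves room for the chat scaffolding and the generated summary.
MAX_CONTEXT = 32768
RESERVED = 1024

doc_ids = tokenizer(long_document, truncation=True,
                    max_length=MAX_CONTEXT - RESERVED)["input_ids"]
doc_text = tokenizer.decode(doc_ids, skip_special_tokens=True)

messages = [{"role": "user",
             "content": f"Summarize the following document:\n\n{doc_text}"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```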