vrutkovs/Lusterka-7B
Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 4k · Published: Apr 11, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights
Lusterka-7B is a 7 billion parameter Mistral-based causal language model developed by vrutkovs. It was fine-tuned using Unsloth and Hugging Face's TRL library, which accelerated training. The model targets general language tasks, leveraging the Mistral architecture for efficient performance.
vrutkovs/Lusterka-7B: A Fine-Tuned Mistral Model
Lusterka-7B is a 7 billion parameter language model developed by vrutkovs, built on the Mistral architecture. It distinguishes itself through its fine-tuning process, which combined Unsloth with Hugging Face's TRL library to significantly speed up training, making it an efficient option for a range of natural language processing tasks.
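As a minimal sketch of how prompts are typically prepared for Mistral-family instruct models: they conventionally wrap user turns in `[INST] ... [/INST]` markers. Whether Lusterka-7B's fine-tune kept this template is an assumption not stated on this card, so check the tokenizer's chat template before relying on it.

```python
# Conventional Mistral instruct prompt format (assumption: the fine-tune
# retained the base model's template -- verify via the tokenizer's
# chat template before use).

BOS = "<s>"

def format_mistral_prompt(user_message: str) -> str:
    """Wrap a single user turn in the conventional Mistral instruct markers."""
    return f"{BOS}[INST] {user_message.strip()} [/INST]"

prompt = format_mistral_prompt("Summarize the Mistral architecture in one sentence.")
print(prompt)  # -> <s>[INST] Summarize the Mistral architecture in one sentence. [/INST]
```

In practice, `tokenizer.apply_chat_template(...)` from the `transformers` library is the safer way to build prompts, since it reads the template shipped with the model.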
Key Capabilities
- Mistral Architecture: Benefits from the robust and efficient design of the Mistral base model.
- Optimized Training: Fine-tuned using Unsloth, which is known for accelerating the training process of large language models.
- General Purpose: Suitable for a wide range of text generation and understanding applications.
Good For
- Developers seeking a 7B parameter model with an efficient training lineage.
- Applications requiring a Mistral-based model for general language tasks.
- Experimentation with models fine-tuned using Unsloth's accelerated methods.
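For sizing hardware, the 7B parameter count and FP8 quantization listed in the metadata above allow a back-of-the-envelope weight-memory estimate (one byte per parameter under FP8). This excludes activation and KV-cache memory, so actual runtime usage will be higher:

```python
# Rough weight-memory estimate for a 7B-parameter model. FP8 stores one
# byte per parameter; FP16 is shown for comparison. Activation and
# KV-cache memory are excluded; exact figures depend on the runtime.

PARAMS = 7_000_000_000  # nominal 7B parameter count

def weight_memory_gb(num_params: int, bytes_per_param: float) -> float:
    """Approximate weight memory in GB (decimal gigabytes)."""
    return num_params * bytes_per_param / 1e9

fp8_gb = weight_memory_gb(PARAMS, 1.0)   # FP8: 1 byte/param
fp16_gb = weight_memory_gb(PARAMS, 2.0)  # FP16: 2 bytes/param
print(f"FP8 ~= {fp8_gb:.0f} GB, FP16 ~= {fp16_gb:.0f} GB")  # FP8 ~= 7 GB, FP16 ~= 14 GB
```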