vrutkovs/Lusterka-7B-v0.2
Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 4k · Published: Apr 12, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

Lusterka-7B-v0.2 by vrutkovs is a 7-billion-parameter Mistral-based causal language model, fine-tuned with Unsloth for accelerated training. It targets general language tasks and inherits the efficiency of the Mistral architecture. With its development focused on an optimized training process, it is a suitable choice for applications that need a performant 7B model.


Overview

Lusterka-7B-v0.2 is a 7 billion parameter language model developed by vrutkovs. It is built upon the Mistral-7B-v0.3 architecture and was fine-tuned with the Unsloth library, which is noted for significantly speeding up the training process. This model aims to provide efficient performance for various natural language processing tasks.

Key Characteristics

  • Base Model: Fine-tuned from unsloth/mistral-7b-v0.3.
  • Parameter Count: 7 billion parameters.
  • Training Efficiency: Utilizes Unsloth for roughly 2x faster training, reflecting an optimized development approach; a setup sketch follows this list.
  • License: Released under the Apache-2.0 license, allowing for broad use and distribution.

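Unsloth's exact role in Lusterka's training is not documented beyond the base-model lineage, so the snippet below is only a minimal sketch of how a LoRA fine-tune from unsloth/mistral-7b-v0.3 is typically set up with Unsloth; the hyperparameters shown are illustrative defaults, not the values used for this model.

```python
# Illustrative Unsloth setup; Lusterka's actual training data and
# hyperparameters are not published.
from unsloth import FastLanguageModel

# Load the base model Lusterka was fine-tuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-v0.3",
    max_seq_length=4096,  # matches the 4k context length listed above
    load_in_4bit=True,    # assumption: Unsloth's common memory-saving default
)

# Attach LoRA adapters; Unsloth's patched attention and MLP kernels are
# what deliver the advertised ~2x training speedup.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```
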
Use Cases

This model is suitable for applications that benefit from a 7B-parameter model with an efficient fine-tuning lineage. Developers looking for a Mistral-based model that combines an optimized development process with a permissive license may find Lusterka-7B-v0.2 a good fit for general text generation, summarization, and other language understanding tasks.
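
For inference, the sketch below assumes the weights are hosted on the Hugging Face Hub under the repo id vrutkovs/Lusterka-7B-v0.2 and load through the standard Transformers API; the dtype and generation settings are assumptions, not published defaults (the FP8 quantization listed above would typically require dedicated serving tooling rather than this plain bf16 load).

```python
# Hedged inference sketch; assumes a standard Hub-hosted checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "vrutkovs/Lusterka-7B-v0.2"  # assumption: Hub repo matches the model name

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 on GPU; not the FP8 serving path
    device_map="auto",
)

prompt = "Summarize the advantages of parameter-efficient fine-tuning:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```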