Hajorda/ozbom-model

Task: Text Generation

  • Model Size: 7.6B
  • Quantization: FP8
  • Context Length: 32k
  • Concurrency Cost: 1
  • Published: Mar 25, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

Hajorda/ozbom-model is a 7.6 billion parameter Qwen2-based instruction-tuned language model developed by hajorda. It was fine-tuned with Unsloth and Hugging Face's TRL library, which sped up training, and is designed for general instruction-following tasks, leveraging the Qwen2 architecture for robust performance.

Model Overview

Hajorda/ozbom-model is a 7.6 billion parameter instruction-tuned language model based on the Qwen2 architecture. Developed by hajorda, it was fine-tuned with the Unsloth library alongside Hugging Face's TRL library, a combination that enabled roughly 2x faster training.

Key Characteristics

  • Architecture: Qwen2-based, a robust foundation for general language tasks.
  • Parameter Count: 7.6 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports a 32768-token context window, allowing it to process long inputs and stay coherent across extended responses.
  • Training Efficiency: Fine-tuned with Unsloth alongside TRL for roughly 2x faster training; an illustrative fine-tuning sketch follows this list.
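
The card names Unsloth and TRL but does not publish the training recipe. The sketch below shows what such a setup typically looks like; the base checkpoint, dataset, LoRA settings, and hyperparameters are all placeholder assumptions rather than the actual ozbom-model configuration, and it uses the older TRL API in which SFTTrainer accepts these arguments directly rather than via SFTConfig.

```python
# Illustrative Unsloth + TRL fine-tuning sketch. Base model, dataset,
# LoRA settings, and hyperparameters are placeholders, NOT the actual
# recipe used to train ozbom-model.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Assumption: a 7.6B Qwen2 instruct checkpoint as the starting point.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2-7B-Instruct",
    max_seq_length=32768,  # matches the card's 32k context window
    load_in_4bit=True,     # Unsloth's memory-saving default for fine-tuning
)

# Attach LoRA adapters; Unsloth patches the layers with its faster kernels.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder instruction dataset, reshaped into a single "text" column.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")
dataset = dataset.map(
    lambda ex: {
        "text": f"### Instruction:\n{ex['instruction']}\n\n### Response:\n{ex['output']}"
    }
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=32768,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

Unsloth's FastLanguageModel swaps in fused kernels behind the standard PEFT interface, which is where the advertised training speedup comes from.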

Intended Use Cases

This model is suitable for a variety of instruction-following applications, benefiting from its Qwen2 foundation and optimized fine-tuning. It can be applied to tasks requiring general language understanding and generation, where a moderately sized, efficiently trained model is desired.
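
For inference, the standard transformers loading path should work, since the model follows the stock Qwen2 causal-LM layout. A minimal sketch, assuming the weights resolve from the Hugging Face Hub under "Hajorda/ozbom-model" and that the tokenizer bundles a Qwen2-style chat template:

```python
# Minimal inference sketch. Assumptions: the repo id below is live on
# the Hugging Face Hub and the tokenizer ships a Qwen2-style chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hajorda/ozbom-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: adjust dtype to your hardware
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain what a context window is in two sentences."}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

With the 32k context window, longer prompts can be passed the same way; only max_new_tokens and the dtype need adjusting to the deployment hardware.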