johngraph/final-12-22
TEXT GENERATION

- Concurrency Cost: 1
- Model Size: 7.6B
- Quant: FP8
- Ctx Length: 32k
- Published: Dec 22, 2025
- License: apache-2.0
- Architecture: Transformer
- Open Weights (Cold)
The johngraph/final-12-22 is a 7.6 billion parameter Qwen2-based causal language model developed by johngraph. It was fine-tuned from unsloth/Qwen2.5-7B using Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training than a standard fine-tuning setup.
Model Overview
The johngraph/final-12-22 is a 7.6 billion parameter language model built on the Qwen2 architecture. Developed by johngraph, it was fine-tuned from the unsloth/Qwen2.5-7B base model. A key aspect of its development is the use of Unsloth together with Hugging Face's TRL library, which enabled a roughly 2x faster training process.
Key Capabilities
- Efficient Training: Leverages Unsloth for significantly accelerated fine-tuning.
- Qwen2 Architecture: Benefits from the robust and performant base of the Qwen2 model family.
- Parameter Count: Features 7.6 billion parameters, offering a balance between capability and computational efficiency.
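As a rough illustration of why the 7.6B parameter count matters in practice, the weight memory footprint can be estimated from the parameter count and the bytes per weight implied by the quantization listed above (FP8 stores one byte per weight, FP16/BF16 two). This is a back-of-the-envelope sketch only; it ignores activations, KV cache, and runtime overhead:

```python
def estimate_weight_memory_gib(num_params: float, bytes_per_param: float) -> float:
    """Back-of-the-envelope weight footprint: params * bytes per param, in GiB."""
    return num_params * bytes_per_param / 1024**3

PARAMS = 7.6e9  # 7.6B parameters, per the model card

fp8_gib = estimate_weight_memory_gib(PARAMS, 1.0)   # FP8: one byte per weight
fp16_gib = estimate_weight_memory_gib(PARAMS, 2.0)  # FP16/BF16: two bytes per weight
print(f"FP8:  {fp8_gib:.2f} GiB")   # ≈ 7.08 GiB
print(f"FP16: {fp16_gib:.2f} GiB")  # ≈ 14.16 GiB
```

In other words, FP8 quantization roughly halves the weight memory relative to FP16, which is what makes a 7.6B model comfortable on a single consumer GPU.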
Good For
- Developers seeking a Qwen2-based model with optimized training origins.
- Applications where faster fine-tuning processes are a priority.
- Use cases that can benefit from a 7.6B parameter model's capabilities.
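Since this is a causal language model, inference follows the standard autoregressive loop: feed the context, pick the most likely next token, append it, and repeat until an end-of-sequence token appears. The sketch below illustrates that loop with a toy next-token function standing in for the real 7.6B model; all names here are illustrative, not part of any actual API:

```python
from typing import Callable, List

def greedy_decode(
    next_token_logits: Callable[[List[int]], List[float]],
    prompt: List[int],
    max_new_tokens: int,
    eos_id: int,
) -> List[int]:
    """Autoregressive greedy decoding: repeatedly append the argmax token."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = next_token_logits(tokens)
        next_id = max(range(len(logits)), key=logits.__getitem__)
        tokens.append(next_id)
        if next_id == eos_id:
            break
    return tokens

def toy_model(tokens: List[int]) -> List[float]:
    """Toy stand-in for the model: always prefers the token one above the
    last one, wrapping around to EOS (id 0) at the top of the vocabulary."""
    vocab = 5
    logits = [0.0] * vocab
    logits[(tokens[-1] + 1) % vocab] = 1.0
    return logits

out = greedy_decode(toy_model, prompt=[1, 2], max_new_tokens=10, eos_id=0)
print(out)  # [1, 2, 3, 4, 0]
```

A real inference stack replaces `toy_model` with a forward pass over the 7.6B network (and typically adds sampling, temperature, and KV caching), but the control flow is the same loop shown here.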