how3751/coder
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Feb 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
how3751/coder is a 7.6-billion-parameter, Qwen2-based, instruction-tuned causal language model developed by how3751. It was fine-tuned with Unsloth and Hugging Face's TRL library, a combination reported to train up to 2x faster. The model targets general instruction-following tasks, building on the Qwen2 architecture for robust base performance.
Model Overview
how3751/coder is a 7.6-billion-parameter instruction-tuned language model based on the Qwen2 architecture. Developed by how3751, it was fine-tuned using the Unsloth library in conjunction with Hugging Face's TRL library, a training setup that reportedly accelerated fine-tuning by roughly 2x.
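Qwen2-family instruction models conventionally use the ChatML prompt layout. In practice you would call the tokenizer's `apply_chat_template`, but the structure can be sketched by hand. This is an illustration only: the card does not document this model's template, so the `<|im_start|>`/`<|im_end|>` special tokens are assumed from the Qwen2 base model.

```python
def format_chatml(messages):
    """Render a list of {role, content} dicts in the ChatML layout
    assumed from the Qwen2 base model (not documented on this card).
    The trailing '<|im_start|>assistant' cues the model to reply."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a function that reverses a string."},
])
print(prompt)
```

Building the prompt this way only matters if you bypass the tokenizer's chat template; when using `transformers`, prefer `tokenizer.apply_chat_template` so the template shipped with the model takes precedence.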
Key Capabilities
- Instruction Following: Designed to accurately follow and execute given instructions.
- Efficient Training: Benefits from the Unsloth framework, which optimizes the fine-tuning process for speed.
- Qwen2 Foundation: Leverages the robust capabilities and performance characteristics of the underlying Qwen2 base model.
Good For
- Developers seeking a Qwen2-based model that has undergone efficient fine-tuning.
- Applications requiring a 7.6B parameter model for general instruction-following tasks.
- Experimentation with models trained using Unsloth for faster iteration cycles.
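For the use cases above, a model like this is commonly loaded with Hugging Face `transformers`. The sketch below assumes the model id `how3751/coder` from this card; the sampling parameters and device placement are illustrative defaults, not recommendations from the model card.

```python
def generation_kwargs(max_new_tokens=256, temperature=0.7, top_p=0.9):
    """Collect sampling settings in one place. These values are
    illustrative defaults, not guidance from the model card."""
    return {
        "max_new_tokens": max_new_tokens,
        "temperature": temperature,
        "top_p": top_p,
        "do_sample": temperature > 0,
    }

def generate(prompt: str) -> str:
    """Load how3751/coder via transformers and complete `prompt`.
    Requires `pip install transformers torch` and enough memory for
    a 7.6B-parameter model; imports are deferred so generation_kwargs
    stays usable without those dependencies installed."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("how3751/coder")
    model = AutoModelForCausalLM.from_pretrained(
        "how3751/coder", device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, **generation_kwargs())
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

With a 32k context window and FP8 quantization listed on the card, serving backends such as vLLM are another common option, but the exact deployment path depends on the hosting environment.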