how3751/coder_7B
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Feb 11, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
how3751/coder_7B is a 7.6 billion parameter instruction-tuned causal language model developed by how3751. Finetuned from unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit, it was trained with Unsloth and Hugging Face's TRL library, enabling 2x faster training. With a context length of 131,072 tokens, it is optimized for code-related tasks and general instruction following.
how3751/coder_7B Overview
how3751/coder_7B is a 7.6 billion parameter instruction-tuned language model developed by how3751. It is finetuned from the unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit base model, using the Unsloth library and Hugging Face's TRL for efficient training. This approach yields roughly a 2x speedup in training.
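As a minimal sketch, the checkpoint should load through the standard transformers causal-LM API like other Qwen2.5 finetunes; the repo id comes from this page, but the prompt and generation settings below are illustrative assumptions, not documented defaults:

```python
# Minimal inference sketch, assuming this checkpoint follows the
# standard transformers API like other Qwen2.5 finetunes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "how3751/coder_7B"  # repo id from this page
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # let transformers pick the checkpoint dtype
    device_map="auto",    # place layers on available GPUs/CPU
)

# Instruction-tuned checkpoints expect the chat template, not raw text.
messages = [
    {"role": "user", "content": "Write a Python function that reverses a linked list."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)  # 512 is an arbitrary cap
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```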
Key Capabilities
- Instruction Following: Designed to accurately follow a wide range of instructions.
- Efficient Training: Benefits from Unsloth's optimizations for faster finetuning.
- Large Context Window: Supports a context length of 131,072 tokens, enabling processing of extensive inputs (see the sketch after this list).
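Before sending a very long input, it is worth counting tokens against the configured context limit. A short sketch, where the input file name is hypothetical:

```python
# Sketch: check that a long prompt fits the advertised context window.
# "big_module.py" is a hypothetical input file.
from pathlib import Path
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("how3751/coder_7B")
prompt = Path("big_module.py").read_text()
n_tokens = len(tokenizer(prompt).input_ids)
print(f"{n_tokens} tokens")  # compare against the context limit before generating
```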
Good For
- Developers seeking a 7B-class model with strong instruction-following capabilities.
- Applications that process long code files or detailed technical documentation, thanks to the large context window.
- Use cases where efficient finetuning is a priority (a finetuning sketch follows this list).
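Since the card credits Unsloth and TRL for the 2x training speedup, a finetuning pass on the same base likely follows the pattern from Unsloth's published notebooks. The sketch below assumes that pattern; the dataset id is a placeholder, and TRL's SFTTrainer argument names vary across versions:

```python
# Finetuning sketch following the Unsloth notebook pattern.
# The dataset id is a placeholder; SFTTrainer arguments vary by TRL version.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit",  # base named on this page
    max_seq_length=4096,
    load_in_4bit=True,  # the base checkpoint is a bnb-4bit quant
)
# Attach LoRA adapters; Unsloth trains only these, which drives the speedup.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("your-org/your-code-dataset", split="train")  # placeholder

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # column holding formatted chat text
    max_seq_length=4096,
    args=TrainingArguments(per_device_train_batch_size=2, output_dir="outputs"),
)
trainer.train()
```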