mohamed170069/Tansiq-Qwen-7B
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Mar 24, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
Tansiq-Qwen-7B is a 7.6-billion-parameter model based on Qwen2.5-7B-Instruct, developed by mohamed170069. It was finetuned using Unsloth and Hugging Face's TRL library, which is reported to yield 2x faster training. The model is designed for general instruction-following tasks, leveraging its 32,768-token context length for comprehensive understanding and generation.
Tansiq-Qwen-7B: A Finetuned Qwen2.5-7B-Instruct Model
Tansiq-Qwen-7B is a 7.6 billion parameter language model developed by mohamed170069. It is a finetuned version of the unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit base model, leveraging the Unsloth library for accelerated training.
Key Characteristics
- Base Model: Finetuned from Qwen2.5-7B-Instruct, a robust instruction-following model.
- Efficient Training: Utilizes Unsloth and Hugging Face's TRL library, enabling 2x faster training compared to standard methods.
- Parameter Count: Features 7.6 billion parameters, offering a balance between performance and computational efficiency.
- Context Length: Supports a substantial context window of 32768 tokens, suitable for processing longer inputs and generating detailed responses.
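The training setup described above (Unsloth for the 4-bit base model, TRL for supervised finetuning) can be sketched roughly as follows. This is a hypothetical reconstruction, not the author's actual script: the dataset, LoRA settings, and hyperparameters are illustrative placeholders, and only the base-model id and the 32,768-token sequence length come from the card.

```python
def to_chat_text(instruction: str, response: str) -> str:
    """Render one training pair in ChatML-style layout (the delimiter
    format Qwen2.5-Instruct models use); illustrative only."""
    return (
        f"<|im_start|>user\n{instruction}<|im_end|>\n"
        f"<|im_start|>assistant\n{response}<|im_end|>\n"
    )


def main():
    # Heavy imports kept local so the helper above stays importable
    # without Unsloth/TRL installed.
    from unsloth import FastLanguageModel
    from trl import SFTConfig, SFTTrainer
    from datasets import load_dataset

    # Base model named on the card; max_seq_length matches its context window.
    model, tokenizer = FastLanguageModel.from_pretrained(
        "unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit",
        max_seq_length=32768,
        load_in_4bit=True,
    )
    # LoRA adapter config is a guess, not the author's settings.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )

    # Placeholder dataset; the card does not say what data was used.
    ds = load_dataset("yahma/alpaca-cleaned", split="train")
    ds = ds.map(lambda ex: {"text": to_chat_text(ex["instruction"], ex["output"])})

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=ds,
        args=SFTConfig(
            max_seq_length=32768,
            per_device_train_batch_size=2,
            output_dir="tansiq-qwen-7b",
        ),
    )
    trainer.train()


if __name__ == "__main__":
    main()
```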
Good For
- General Instruction Following: Capable of handling a wide range of instruction-based tasks due to its Qwen2.5-Instruct lineage.
- Applications Requiring Efficiency: Ideal for developers looking for a performant model that benefits from optimized training techniques.
- Research and Development: Provides a solid base for further experimentation and finetuning on specific datasets.
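For the instruction-following uses listed above, a minimal inference sketch with Hugging Face transformers might look like this. Only the repo id comes from the card; the system prompt and generation settings are illustrative assumptions.

```python
MODEL_ID = "mohamed170069/Tansiq-Qwen-7B"  # repo id from the card


def build_messages(instruction: str) -> list:
    """Wrap a user instruction in the chat format Qwen2.5-Instruct
    models expect; the system prompt is an assumed default."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": instruction},
    ]


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    # Imported here so build_messages works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    prompt = tokenizer.apply_chat_template(
        build_messages(instruction), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(generate("Summarize the benefits of a 32k context window."))
```

Because the weights are published in FP8 with a 32k context, a single modern GPU should suffice, but resource needs depend on your serving stack.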