smsk1999/qwen3-8b-profiling-merged-v7
Text generation · Model size: 8B · Quant: FP8 · Context length: 32k · Concurrency cost: 1 · Published: Apr 29, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights
smsk1999/qwen3-8b-profiling-merged-v7 is an 8-billion-parameter Qwen3 model fine-tuned by smsk1999. It was trained with Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster fine-tuning, and it is intended for general language tasks.
Model Overview
smsk1999/qwen3-8b-profiling-merged-v7 is an 8-billion-parameter language model based on the Qwen3 architecture, fine-tuned by smsk1999 using Unsloth together with Hugging Face's TRL library.
Key Characteristics
- Base Model: Fine-tuned from unsloth/Qwen3-8B-unsloth-bnb-4bit.
- Training Efficiency: Achieved 2x faster fine-tuning through Unsloth's optimization techniques.
- License: Released under the Apache-2.0 license, allowing for broad usage and distribution.
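The card does not publish the training recipe, so the following is only a rough sketch of what an Unsloth + TRL supervised fine-tune of the stated base model typically looks like. The LoRA ranks, target modules, dataset fields, batch size, and step count are all illustrative assumptions, not the author's settings.

```python
def to_text_records(rows: list[dict]) -> list[dict]:
    """Flatten prompt/response pairs into the single 'text' field that
    TRL's SFTTrainer consumes by default. Field names are illustrative."""
    return [{"text": f"{r['prompt']}\n{r['response']}"} for r in rows]

def finetune() -> None:
    # Heavy imports are kept inside the function so the module stays
    # importable without a GPU environment.
    from datasets import Dataset
    from trl import SFTConfig, SFTTrainer
    from unsloth import FastLanguageModel

    # Load the 4-bit base checkpoint named on this card.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/Qwen3-8B-unsloth-bnb-4bit",
        max_seq_length=2048,
        load_in_4bit=True,
    )
    # Attach LoRA adapters; ranks and modules are guesses, not the author's.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )
    dataset = Dataset.from_list(to_text_records([
        {"prompt": "Explain CPU profiling.", "response": "..."},
    ]))
    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        args=SFTConfig(output_dir="qwen3-8b-profiling-sft",
                       max_steps=60,
                       per_device_train_batch_size=2),
    )
    trainer.train()
    # Unsloth can merge the LoRA weights back into the base model, which
    # would match the "merged" in this checkpoint's name.
    model.save_pretrained_merged("qwen3-8b-profiling-merged",
                                 tokenizer, save_method="merged_16bit")
```

The merge step at the end produces a standalone checkpoint loadable with plain transformers, without the PEFT adapter machinery.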
Good For
- Developers seeking an 8B parameter Qwen3 model that has undergone efficient fine-tuning.
- Applications where the Qwen3 architecture is preferred and training speed is a key consideration.
- Experimentation with models fine-tuned using Unsloth's accelerated methods.
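For experimentation, the merged checkpoint should load like any other causal LM on the Hub. The sketch below uses the standard transformers API; the generation length and chat-message shape are assumptions, and the helper names are hypothetical.

```python
MODEL_ID = "smsk1999/qwen3-8b-profiling-merged-v7"

def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a prompt in the chat-message format consumed by
    tokenizer.apply_chat_template."""
    return [{"role": "user", "content": user_prompt}]

def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    # Imports deferred so build_messages works without torch installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # keep the checkpoint's stored dtype
        device_map="auto",    # place layers on available devices
    )
    prompt = tokenizer.apply_chat_template(
        build_messages(user_prompt),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:],
                            skip_special_tokens=True)
```

An 8B model at FP8/16-bit precision needs a GPU with roughly 10-20 GB of memory depending on dtype; `device_map="auto"` lets transformers offload layers when a single device is too small.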