chewjh/qwen-3b-sft-n8n-unsloth
Text Generation · Concurrency Cost: 1 · Model Size: 3.1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 16, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
chewjh/qwen-3b-sft-n8n-unsloth is a 3.1 billion parameter, Qwen2-based, instruction-tuned causal language model developed by chewjh. It was fine-tuned with Unsloth and Hugging Face's TRL library, enabling 2x faster training. The model is optimized for specific instruction-following tasks, building on its Qwen2.5-Coder-3B-Instruct base.
Model Overview
chewjh/qwen-3b-sft-n8n-unsloth is a 3.1 billion parameter instruction-tuned language model based on the Qwen2 architecture. Developed by chewjh, it was fine-tuned from unsloth/qwen2.5-coder-3b-instruct-bnb-4bit using the Unsloth library, which accelerates training by roughly 2x.
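As an instruction-tuned Qwen2-family model, it is presumably prompted in the ChatML style that the Qwen2.5 base models use. The sketch below shows that format with plain Python; this is an assumption based on the base model, and in practice you would let the tokenizer's `apply_chat_template()` from Hugging Face transformers build the prompt for you.

```python
# Minimal sketch of a ChatML-style prompt, as used by the Qwen2 family.
# The template details are an assumption inferred from the base model
# (Qwen2.5-Coder-3B-Instruct); prefer tokenizer.apply_chat_template().

def build_chatml_prompt(messages):
    """Join role-tagged messages into a single ChatML prompt string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this workflow in one sentence."},
])
print(prompt)
```

The trailing open `assistant` turn is what cues the model to generate its reply rather than another user message.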
Key Capabilities
- Efficient Fine-tuning: Leverages Unsloth for faster and more resource-efficient training.
- Instruction Following: Designed for general instruction-following tasks, building on its Qwen2.5-Coder-Instruct base.
- Qwen2 Architecture: Benefits from the robust and performant Qwen2 model family.
Good For
- Specific SFT Tasks: Well suited to applications that need a compact instruction-tuned model and whose workload closely matches its supervised fine-tuning (SFT) domain.
- Resource-Constrained Environments: Efficient training and a moderate parameter count make it suitable for deployment where compute and memory are limited.
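To put "resource-constrained" in concrete terms, a back-of-envelope estimate from the card's listed figures (3.1B parameters at BF16, i.e. 2 bytes per parameter) gives the memory needed just to hold the weights. The 4-bit figure is a hypothetical comparison; KV-cache and activation overhead are excluded, so real serving usage will be higher.

```python
# Rough weight-memory estimate, assuming 3.1B parameters (from the card).
# Excludes KV-cache and activations; actual usage will be higher.

def weight_memory_gib(n_params: float, bytes_per_param: float) -> float:
    """Approximate GiB needed just to hold the model weights."""
    return n_params * bytes_per_param / (1024 ** 3)

bf16 = weight_memory_gib(3.1e9, 2)    # BF16, as listed on the card
int4 = weight_memory_gib(3.1e9, 0.5)  # hypothetical 4-bit quantization
print(f"BF16 weights: ~{bf16:.1f} GiB, 4-bit weights: ~{int4:.1f} GiB")
```

At BF16 the weights alone come to roughly 5.8 GiB, which is why quantized variants (like the bnb-4bit base it was tuned from) matter for smaller GPUs.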