Itachi-42/loomstack-qwen-4b-sft-prompted
Itachi-42/loomstack-qwen-4b-sft-prompted is a 4 billion parameter language model, fine-tuned from unsloth/qwen3-4b-unsloth-bnb-4bit using the TRL framework. Its SFT training procedure optimizes it for instruction following and conversational tasks, and its 32768-token context length makes it well-suited for applications requiring detailed responses grounded in user prompts.
Model Overview
Itachi-42/loomstack-qwen-4b-sft-prompted was developed by Itachi-42 as a fine-tune of the unsloth/qwen3-4b-unsloth-bnb-4bit base model, trained with the TRL library using a Supervised Fine-Tuning (SFT) procedure.
Key Capabilities
- Instruction Following: Optimized for generating responses based on explicit user instructions.
- Conversational AI: Designed to handle interactive prompts and produce coherent, contextually relevant text.
- Efficient Fine-tuning: Built upon a base model that leverages Unsloth for efficient training, making it suitable for deployment in resource-constrained environments.
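A minimal inference sketch using the Transformers pipeline API is shown below. The model id comes from this card; the helper names, prompt, and generation settings are illustrative assumptions rather than a recipe published with the model:

```python
# Minimal inference sketch. The chat-message format and generation settings
# follow standard transformers conventions and are assumptions, not values
# taken from this model card.
MODEL_ID = "Itachi-42/loomstack-qwen-4b-sft-prompted"

def build_messages(user_prompt: str) -> list:
    # Chat pipelines expect a list of {"role", "content"} dicts.
    return [{"role": "user", "content": user_prompt}]

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Import deferred so the sketch can be read without transformers installed.
    # The first call downloads ~4B parameters and needs sufficient RAM/VRAM.
    from transformers import pipeline

    generator = pipeline("text-generation", model=MODEL_ID)
    out = generator(build_messages(prompt), max_new_tokens=max_new_tokens)
    # Chat pipelines return the full message list; take the assistant's reply.
    return out[0]["generated_text"][-1]["content"]

# Example (downloads the model on first run):
# print(generate("Explain supervised fine-tuning in two sentences."))
```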
Training Details
The model underwent Supervised Fine-Tuning (SFT) to improve how reliably it follows prompts. Training used the following framework versions:
- TRL: 0.24.0
- Transformers: 5.5.0
- PyTorch: 2.10.0
- Datasets: 4.3.0
- Tokenizers: 0.22.2
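An SFT run of this kind can be sketched with TRL's SFTTrainer API. The dataset and hyperparameters below are placeholders (the card does not publish the actual training recipe); only the base model id and the 32768-token context length come from the card:

```python
# Hypothetical SFT sketch with TRL. Dataset and hyperparameters are placeholders,
# not the recipe used for this model.
def train():
    # Imports deferred so the sketch can be inspected without TRL installed.
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset
    config = SFTConfig(
        output_dir="loomstack-qwen-4b-sft",
        max_length=32768,                 # mirrors the card's stated context length
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,    # illustrative values
    )
    trainer = SFTTrainer(
        model="unsloth/qwen3-4b-unsloth-bnb-4bit",  # base model from this card
        args=config,
        train_dataset=dataset,
    )
    trainer.train()
```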
Good For
- Chatbots and Virtual Assistants: Generating human-like responses to user queries.
- Content Generation: Creating text based on specific prompts or scenarios.
- Prototyping: Quickly developing applications that require a capable, instruction-tuned language model.