mohit-1710/loomstack-qwen-sft-prompted

Text Generation · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32k · Published: Apr 26, 2026 · Architecture: Transformer · Status: Cold

The mohit-1710/loomstack-qwen-sft-prompted model is a 2 billion parameter language model, fine-tuned from unsloth/qwen3-1.7b-unsloth-bnb-4bit. Developed by mohit-1710, this model was trained using Supervised Fine-Tuning (SFT) with the TRL framework. It is designed for text generation tasks, particularly for responding to user prompts with a context length of 32768 tokens.


Model Overview

The mohit-1710/loomstack-qwen-sft-prompted model is a 2 billion parameter language model, fine-tuned from the unsloth/qwen3-1.7b-unsloth-bnb-4bit base model. It uses the Qwen3 architecture and was trained with Supervised Fine-Tuning (SFT) via the Hugging Face TRL library.

Key Capabilities

  • Text Generation: Optimized for generating coherent and contextually relevant text based on user prompts.
  • Instruction Following: Fine-tuned to respond effectively to direct questions and instructions.
  • Extended Context: Supports a substantial context length of 32768 tokens, allowing for processing and generating longer sequences of text.
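The capabilities above can be exercised with a standard `transformers` loading pattern. This is a minimal sketch, not an official quickstart from the model author: the prompt text is illustrative, and only the model id comes from this card. Qwen3-style instruction-tuned models typically expect chat-templated input, which is assumed here.

```python
# Hypothetical inference sketch for prompt-based generation with this model.
# Only the model id is taken from the card; everything else is illustrative.

MODEL_ID = "mohit-1710/loomstack-qwen-sft-prompted"

def build_chat(prompt: str) -> list[dict]:
    """Wrap a user prompt in the single-turn chat format Qwen3-style models expect."""
    return [{"role": "user", "content": prompt}]

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    # Render the chat through the model's own template, then generate a reply.
    text = tokenizer.apply_chat_template(
        build_chat("Summarize what supervised fine-tuning does."),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(text, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the echoed prompt.
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Keeping prompts inside the 32768-token context window (including the generated tokens) is the caller's responsibility.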

Training Details

The model underwent Supervised Fine-Tuning (SFT) using the TRL framework (version 0.24.0). The training environment utilized Transformers 5.5.0, PyTorch 2.10.0, Datasets 4.3.0, and Tokenizers 0.22.2.
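A training setup like the one described could look as follows with TRL's `SFTTrainer`. This is a hedged reconstruction, not the author's actual script: the dataset contents, output path, and hyperparameters are illustrative; only the base model id, the use of TRL for SFT, and the 32768-token context length come from the card.

```python
# Hypothetical sketch of the SFT setup described above, using TRL's SFTTrainer.
# Dataset contents and hyperparameters are illustrative; only the base model id,
# the TRL framework, and the 32768-token context come from the card.

def to_sft_example(prompt: str, response: str) -> dict:
    """Format one (prompt, response) pair as a prompt-completion record,
    one of the dataset layouts SFTTrainer accepts."""
    return {"prompt": prompt, "completion": response}

def make_trainer(pairs: list[tuple[str, str]]):
    from datasets import Dataset
    from trl import SFTConfig, SFTTrainer

    dataset = Dataset.from_list([to_sft_example(p, r) for p, r in pairs])
    config = SFTConfig(
        output_dir="loomstack-qwen-sft",  # hypothetical output path
        max_length=32768,                 # matches the advertised context length
        num_train_epochs=1,
    )
    return SFTTrainer(
        model="unsloth/qwen3-1.7b-unsloth-bnb-4bit",  # base model named on the card
        args=config,
        train_dataset=dataset,
    )
```

Calling `make_trainer(...).train()` would then run the supervised fine-tuning loop over the prompt-completion pairs.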

Use Cases

This model is well-suited for applications requiring prompt-based text generation, such as chatbots, content creation, and interactive AI systems where understanding and responding to user queries is crucial. Because it was fine-tuned on prompt-response data, it should perform better on conversational and instructional tasks than its base model.