Itachi-42/loomstack-qwen-4b-sft

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32K · Published: Apr 26, 2026 · Architecture: Transformer

Itachi-42/loomstack-qwen-4b-sft is a 4-billion-parameter causal language model fine-tuned from unsloth/qwen3-4b-unsloth-bnb-4bit using Supervised Fine-Tuning (SFT) with the TRL framework. It targets general text generation, combining the Qwen3 architecture with a 32K-token context window.


Model Overview

Itachi-42/loomstack-qwen-4b-sft is a 4 billion parameter language model derived from the unsloth/qwen3-4b-unsloth-bnb-4bit base model. It has been specifically fine-tuned using Supervised Fine-Tuning (SFT) techniques, leveraging the TRL library for its training process. This model benefits from a substantial context length of 32,768 tokens, making it suitable for processing longer inputs and generating coherent, extended responses.
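
The card does not include a usage snippet, so the following is only a minimal inference sketch, assuming the model exposes the standard Hugging Face transformers causal-LM interface; the prompt and generation settings are illustrative, not values published with the model.

```python
# Minimal inference sketch. Assumes the standard transformers
# causal-LM interface; generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Itachi-42/loomstack-qwen-4b-sft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",           # requires the accelerate package
)

prompt = "Summarize the benefits of a long context window in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```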

Key Characteristics

  • Base Model: Fine-tuned from unsloth/qwen3-4b-unsloth-bnb-4bit.
  • Training Method: Supervised Fine-Tuning (SFT) with the TRL library (a hypothetical training sketch follows this list).
  • Frameworks: Developed with TRL (version 0.24.0), Transformers (version 5.5.0), PyTorch (version 2.10.0), Datasets (version 4.3.0), and Tokenizers (version 0.22.2).
  • Context Length: Supports a 32,768-token context window, allowing for detailed and context-aware text generation.
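
The actual training script and dataset are not published on this card. The snippet below is only a hypothetical sketch of what SFT with TRL's SFTTrainer typically looks like; the dataset, output directory, and hyperparameters are placeholders, not the recipe used for this model.

```python
# Hypothetical SFT sketch with TRL's SFTTrainer. Dataset and
# hyperparameters are placeholders; the real recipe is unpublished.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

training_args = SFTConfig(
    output_dir="loomstack-qwen-4b-sft",
    max_length=32768,  # match the model's 32K context window
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

trainer = SFTTrainer(
    model="unsloth/qwen3-4b-unsloth-bnb-4bit",  # base model named on this card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```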

Potential Use Cases

This model is well-suited to text generation tasks that benefit from a 4 billion parameter model with a large context window. Its SFT training suggests a focus on instruction following and on generating relevant, coherent text from provided prompts. Developers can integrate it into applications such as conversational AI, content creation, or summarization, particularly where its Qwen3 architecture and extended context are useful.
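
For conversational use, Qwen3-based models typically ship a chat template with the tokenizer. Assuming this fine-tune retains such a template (the card does not confirm it), multi-turn prompting might look like the sketch below, reusing the tokenizer and model loaded in the earlier snippet; the message contents are illustrative.

```python
# Chat-style prompting sketch. Assumes the tokenizer retains a
# Qwen3-style chat template, which the card does not confirm.
messages = [
    {"role": "system", "content": "You are a concise technical assistant."},
    {"role": "user", "content": "Outline a plan for summarizing a 50-page report."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)

# Decode only the assistant's reply, skipping the templated prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```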