Itachi-42/loomstack-qwen-4b-sft-terminal

Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Apr 26, 2026 · Architecture: Transformer

Itachi-42/loomstack-qwen-4b-sft-terminal is a 4-billion-parameter causal language model fine-tuned from Itachi-42/loomstack-qwen-4b-sft-compact using supervised fine-tuning (SFT). It targets text generation with a 32768-token context length and is optimized for conversational AI and question answering, producing coherent, contextually relevant responses.


Model Overview

Itachi-42/loomstack-qwen-4b-sft-terminal is a 4-billion-parameter language model fine-tuned from the base model Itachi-42/loomstack-qwen-4b-sft-compact. It was developed by Itachi-42 and trained with Supervised Fine-Tuning (SFT) using the TRL library.

Key Capabilities

  • Text Generation: Capable of generating coherent and contextually relevant text based on user prompts.
  • Conversational AI: Designed to handle interactive dialogue, making it suitable for chatbots and virtual assistants.
  • Question Answering: Excels at providing detailed responses to open-ended questions.
  • Extended Context: Supports a substantial context length of 32768 tokens, allowing for processing and generating longer sequences of text.
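The generation capabilities above can be exercised through the standard `transformers` text-generation API. The sketch below is a minimal, hedged example: the model ID and 32768-token limit come from this card, while the helper function, generation parameters, and prompt are illustrative assumptions, not documented usage.

```python
# Minimal generation sketch, assuming the model follows the standard
# Hugging Face `transformers` causal-LM API. MODEL_ID and MAX_CONTEXT
# come from this card; everything else is an illustrative assumption.

MODEL_ID = "Itachi-42/loomstack-qwen-4b-sft-terminal"
MAX_CONTEXT = 32768  # context length stated on this card


def fits_context(token_ids: list[int], max_new_tokens: int,
                 limit: int = MAX_CONTEXT) -> bool:
    """Check that the prompt plus planned generation stays within the 32k window."""
    return len(token_ids) + max_new_tokens <= limit


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the helpers above work without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

    inputs = tokenizer(prompt, return_tensors="pt")
    if not fits_context(inputs["input_ids"][0].tolist(), max_new_tokens):
        raise ValueError("prompt too long for the 32768-token context window")

    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the completion.
    return tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


# Example:
#   print(generate("Explain what a context window is in one sentence."))
```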

Training Details

The model was trained with Supervised Fine-Tuning (SFT). The following framework versions were used:

  • TRL: 0.24.0
  • Transformers: 5.5.0
  • PyTorch: 2.10.0
  • Datasets: 4.3.0
  • Tokenizers: 0.22.2
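For orientation, a TRL-based SFT run of the kind described above typically looks like the sketch below. This is not the actual training script: the dataset, hyperparameters, and output path are placeholders, since the card does not publish them; only the base model ID and the use of TRL's SFT come from the card.

```python
# Hedged sketch of a TRL SFT run. Dataset contents, hyperparameters, and
# output_dir are placeholders, NOT the values used to train this model.

def to_chat_example(question: str, answer: str) -> dict:
    """Format one QA pair in the `messages` layout TRL's SFTTrainer accepts."""
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }


def main() -> None:
    # Imported lazily so the formatting helper works without TRL installed.
    from datasets import Dataset
    from trl import SFTConfig, SFTTrainer

    pairs = [
        ("What is SFT?", "Supervised fine-tuning on labeled demonstrations."),
    ]
    train_ds = Dataset.from_list([to_chat_example(q, a) for q, a in pairs])

    trainer = SFTTrainer(
        model="Itachi-42/loomstack-qwen-4b-sft-compact",  # base model per this card
        train_dataset=train_ds,
        args=SFTConfig(output_dir="loomstack-qwen-4b-sft-terminal"),
    )
    trainer.train()  # calling main() launches the fine-tuning run
```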

Good For

  • Developing conversational agents and chatbots.
  • Generating creative content or detailed responses in interactive applications.
  • Applications requiring understanding and generation over long contexts.
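For the conversational-agent use case above, a multi-turn chat loop can be sketched as follows. This assumes the model ships a chat template (models in the Qwen lineage named on this card normally do); the history helper and generation settings are illustrative assumptions.

```python
# Multi-turn chat sketch, assuming the model ships a chat template usable
# via `tokenizer.apply_chat_template`. History handling is plain Python.

def append_turn(history: list[dict], role: str, content: str) -> list[dict]:
    """Return a new history with one turn appended (roles: system/user/assistant)."""
    if role not in {"system", "user", "assistant"}:
        raise ValueError(f"unknown role: {role}")
    return history + [{"role": role, "content": content}]


def chat(history: list[dict], max_new_tokens: int = 256) -> str:
    # Imported lazily so append_turn works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Itachi-42/loomstack-qwen-4b-sft-terminal"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

    # Render the whole conversation through the model's chat template.
    input_ids = tokenizer.apply_chat_template(
        history, add_generation_prompt=True, return_tensors="pt"
    )
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    return tokenizer.decode(
        output_ids[0][input_ids.shape[1]:], skip_special_tokens=True
    )


# Example:
#   history = append_turn([], "user", "Name one use case for a 32k context window.")
#   reply = chat(history)
#   history = append_turn(history, "assistant", reply)
```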