PetarKal/qwen3-4b-EM-full-finetuned

Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Apr 19, 2026 · Architecture: Transformer

PetarKal/qwen3-4b-EM-full-finetuned is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B. It was trained with supervised fine-tuning (SFT) using the TRL framework and supports a 32768-token context length. It is designed for general text generation tasks, building on the capabilities of the base Qwen3 model.


Model Overview

PetarKal/qwen3-4b-EM-full-finetuned is a 4-billion-parameter language model derived from the Qwen/Qwen3-4B base model. It was fine-tuned with Supervised Fine-Tuning (SFT), implemented with the TRL library.
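The snippet below is a minimal loading and generation sketch using the Transformers library. The repo id comes from this card; the prompt and generation settings are illustrative, not recommendations from the author.

```python
# Minimal sketch: load the model from the Hub and generate a chat reply.
# The dtype matches the card's BF16 precision; max_new_tokens is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PetarKal/qwen3-4b-EM-full-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the Qwen3 architecture in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```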

Key Capabilities

  • Text Generation: Capable of generating coherent and contextually relevant text based on user prompts.
  • Qwen3 Architecture: Benefits from the robust architecture of the Qwen3 series, known for its strong performance across various language understanding and generation tasks.
  • Extended Context Window: Supports a substantial context length of 32768 tokens, allowing it to process and generate longer sequences of text (see the token-count sketch after this list).
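As a rough way to exercise the 32k window, the hedged sketch below counts prompt tokens against the limit before generation. The input file name is hypothetical.

```python
# Sketch: check a long prompt against the 32768-token context limit.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PetarKal/qwen3-4b-EM-full-finetuned")

with open("long_document.txt") as f:  # hypothetical long input file
    prompt = f.read()

n_tokens = len(tokenizer(prompt)["input_ids"])
print(f"{n_tokens} tokens used of the 32768-token context window")
```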

Training Details

The model was fine-tuned with the TRL framework (version 0.29.1) together with Transformers (version 5.5.4) and PyTorch (version 2.10.0). The SFT approach aims to improve the model's ability to follow instructions and generate high-quality responses for general conversational and generative applications.
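For reference, here is a hedged sketch of this kind of SFT run using TRL's SFTTrainer. The dataset and hyperparameters are placeholders, not the author's actual training configuration.

```python
# Sketch of an SFT run in the style described above (placeholder data/params).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

config = SFTConfig(
    output_dir="qwen3-4b-EM-full-finetuned",
    max_length=32768,                # matches the card's context length
    per_device_train_batch_size=1,   # illustrative, not the author's values
    gradient_accumulation_steps=8,
    bf16=True,
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-4B",           # base model named in this card
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```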

Good For

  • General Purpose Text Generation: Suitable for a wide range of applications requiring text output, such as question answering, creative writing, and conversational AI.
  • Experimentation: Provides a fine-tuned Qwen3-4B variant for researchers and developers to explore and build upon.