jekunz/Qwen3-1.7B-Base-is-CPT-is-SmolTalk

Text Generation · Model size: 2B · Quantization: BF16 · Context length: 32k · Published: Apr 24, 2026 · Architecture: Transformer

jekunz/Qwen3-1.7B-Base-is-CPT-is-SmolTalk is a roughly 2-billion-parameter language model fine-tuned with the TRL framework. Built on the Qwen3 architecture and aimed at general text generation, it was trained via Supervised Fine-Tuning (SFT) and supports a context length of 32,768 tokens, allowing it to process and generate long sequences of text.


Model Overview

The jekunz/Qwen3-1.7B-Base-is-CPT-is-SmolTalk model is fine-tuned from a Qwen3 base checkpoint using the TRL (Transformer Reinforcement Learning) framework, specifically through Supervised Fine-Tuning (SFT).
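
Assuming the checkpoint is hosted on the Hugging Face Hub under the ID above, loading and sampling from it should follow the standard Transformers causal-LM pattern. A minimal sketch (the prompt and generation settings are illustrative, not prescribed by the card):

```python
# Minimal sketch: load the checkpoint and sample a completion.
# The repo ID is taken from this card; generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jekunz/Qwen3-1.7B-Base-is-CPT-is-SmolTalk"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 weights
    device_map="auto",
)

inputs = tokenizer(
    "The key advantage of a long context window is", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```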

Key Capabilities

  • Text Generation: Capable of generating coherent and contextually relevant text based on given prompts.
  • Fine-tuned Performance: Benefits from SFT, which typically improves instruction following and general conversational ability (a chat-style usage sketch follows this list).
  • Extended Context Window: Supports a context length of 32768 tokens, allowing for processing and generating longer inputs and outputs.
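
Because the model went through SFT, it is most naturally prompted in chat format. A hedged sketch using the high-level pipeline API, assuming the repository ships a chat template (typical for SFT checkpoints, but verify before relying on it):

```python
# Chat-style generation via the Transformers text-generation pipeline.
# Assumes the repo ships a chat template; the prompt is illustrative.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="jekunz/Qwen3-1.7B-Base-is-CPT-is-SmolTalk",
    torch_dtype=torch.bfloat16,  # the card lists BF16 weights
    device_map="auto",
)

messages = [{"role": "user", "content": "What does a 32k context window buy you?"}]
result = pipe(messages, max_new_tokens=200)

# With chat input, the pipeline returns the whole conversation;
# the last message is the assistant's reply.
print(result[0]["generated_text"][-1]["content"])
```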

Training Details

The model was trained with the following framework versions:

  • TRL: 0.25.1
  • Transformers: 4.57.3
  • PyTorch: 2.9.1
  • Datasets: 4.4.1
  • Tokenizers: 0.22.1
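
To approximate the training environment, the versions above can be pinned and verified at runtime. A small sanity check using the standard PyPI package names (note that the PyTorch entry corresponds to the `torch` package):

```python
# Check installed versions against the ones used for this model's training.
from importlib.metadata import PackageNotFoundError, version

pinned = {
    "trl": "0.25.1",
    "transformers": "4.57.3",
    "torch": "2.9.1",       # listed as "PyTorch" on the card
    "datasets": "4.4.1",
    "tokenizers": "0.22.1",
}

for pkg, expected in pinned.items():
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        installed = "not installed"
    status = "OK" if installed == expected else f"expected {expected}"
    print(f"{pkg}: {installed} ({status})")
```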

Good For

  • General-purpose text generation tasks.
  • Applications requiring a model with a relatively small parameter count but a large context window (see the config check after this list).
  • Developers looking for a fine-tuned Qwen-based model for further experimentation or deployment.
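
To confirm the advertised context length before deployment, the model configuration can be inspected without downloading the weights. A minimal sketch:

```python
# Fetch only the config and confirm the advertised context length.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("jekunz/Qwen3-1.7B-Base-is-CPT-is-SmolTalk")
print(config.max_position_embeddings)  # expected: 32768, per this card
```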