abcorrea/sched-v4

Text generation · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Feb 11, 2026 · Architecture: Transformer

abcorrea/sched-v4 is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B-Thinking-2507 for general text generation. It supports a 40960-token context length, making it suitable for processing and generating long sequences, and was trained with the TRL framework with a focus on instruction following.


Model Overview

abcorrea/sched-v4 is a 4-billion-parameter language model fine-tuned from the base model Qwen/Qwen3-4B-Thinking-2507. It targets general text generation and instruction-following tasks, and its 40960-token context window lets it work over long inputs.

Key Capabilities

  • General Text Generation: Capable of generating coherent and contextually relevant text based on given prompts.
  • Instruction Following: Fine-tuned to understand and respond to user instructions, making it suitable for conversational agents or task-oriented applications.
  • Extended Context Handling: With a 40960-token context length, it can process and maintain understanding over longer input sequences, which is beneficial for complex queries or multi-turn conversations.
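As a sketch of how these capabilities might be exercised, the snippet below loads the model with the Hugging Face transformers library. The generation settings are illustrative assumptions, not values documented on this card.

```python
# Illustrative generation sketch (assumed settings, not from the model card).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "abcorrea/sched-v4"

def build_chat(prompt: str) -> list[dict]:
    """Wrap a user prompt in the message format expected by chat templates."""
    return [{"role": "user", "content": prompt}]

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_chat(prompt), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Explain why a long context window helps multi-turn chat."))
```

Note that greedy decoding is used here for simplicity; sampling parameters (temperature, top-p) can be passed to `model.generate` as needed.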

Training Details

The model was trained using Supervised Fine-Tuning (SFT) with the TRL library. SFT trains the model on curated prompt–response pairs, aligning its outputs with the instruction-following behavior demonstrated in the data. The training utilized specific versions of key frameworks:

  • TRL: 0.27.1
  • Transformers: 5.0.0
  • PyTorch: 2.7.0
  • Datasets: 4.5.0
  • Tokenizers: 0.22.2
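For orientation, a minimal sketch of what SFT with TRL's `SFTTrainer` can look like is shown below. The dataset name, output directory, and hyperparameters are placeholders, not details from this model's actual training run.

```python
# Hypothetical TRL SFT sketch; dataset and settings are placeholders,
# not the configuration used to train abcorrea/sched-v4.

def to_messages(prompt: str, completion: str) -> dict:
    """Convert a (prompt, completion) pair into TRL's conversational format."""
    return {"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": completion},
    ]}

def main():
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset
    trainer = SFTTrainer(
        model="Qwen/Qwen3-4B-Thinking-2507",  # base model named on this card
        args=SFTConfig(output_dir="sched-v4-sft"),
        train_dataset=dataset,
    )
    trainer.train()

if __name__ == "__main__":
    main()
```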

When to Use This Model

abcorrea/sched-v4 is a good choice for applications requiring a compact yet capable model for:

  • Generating creative content or responses.
  • Handling prompts that require understanding of long contexts.
  • Developing chatbots or virtual assistants that need to follow specific instructions.
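When working near the 40960-token context limit, a simple budget check helps avoid silent truncation of long prompts. The helper below is an illustrative sketch; in practice the prompt token count would come from the model's tokenizer.

```python
# Illustrative context-budget check for the 40960-token window.
CONTEXT_LENGTH = 40960

def fits_context(prompt_tokens: int, max_new_tokens: int,
                 ctx: int = CONTEXT_LENGTH) -> bool:
    """Return True if the prompt plus the generation budget fits the window."""
    return prompt_tokens + max_new_tokens <= ctx

# Example: a 40,000-token prompt leaves room for 512 new tokens...
print(fits_context(40_000, 512))    # True
# ...but not for 1,024.
print(fits_context(40_000, 1_024))  # False
```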