abcorrea/sched-v2

Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Feb 8, 2026 · Architecture: Transformer · Warm

abcorrea/sched-v2 is a 4 billion parameter language model fine-tuned from Qwen/Qwen3-4B-Thinking-2507 using the TRL framework. It is optimized for general text generation, building on the conversational and reasoning capabilities of its base architecture, and offers a 40,960-token context length, making it suitable for applications that require extensive contextual understanding.


Overview

abcorrea/sched-v2 is a 4 billion parameter language model, fine-tuned from the Qwen/Qwen3-4B-Thinking-2507 base model. The fine-tuning process utilized the TRL (Transformer Reinforcement Learning) framework, specifically employing Supervised Fine-Tuning (SFT) techniques. This model is designed for general text generation tasks, building upon the conversational and reasoning strengths of its foundational architecture.
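The card does not publish the training script, but an SFT run of this shape can be sketched with TRL's `SFTTrainer`. This is an illustrative recipe, not the authors' actual configuration: the base checkpoint comes from this card, while the dataset name, output directory, and sequence length are placeholder assumptions.

```python
# Hedged sketch of a TRL SFT run starting from the base model named on
# this card. Dataset, output dir, and max_length are illustrative only.
BASE_MODEL = "Qwen/Qwen3-4B-Thinking-2507"  # base checkpoint (from the card)

if __name__ == "__main__":
    # Heavy dependencies are kept inside the entry point so the module
    # can be imported without trl/datasets installed.
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    # Placeholder dataset: substitute the instruction data actually used.
    dataset = load_dataset("trl-lib/Capybara", split="train")

    trainer = SFTTrainer(
        model=BASE_MODEL,  # SFTTrainer accepts a Hub model id string
        train_dataset=dataset,
        args=SFTConfig(output_dir="sched-v2-sft"),
    )
    trainer.train()
```

Passing the model as a Hub id lets TRL handle loading; swap in a preloaded model object if you need custom dtype or device placement.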

Key Capabilities

  • Text Generation: Capable of generating coherent and contextually relevant text based on user prompts.
  • Extended Context Window: Features a substantial context length of 40,960 tokens, allowing it to process and generate longer sequences of text.
  • Fine-tuned Performance: Benefits from SFT training to enhance its ability to produce high-quality outputs for various applications.
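The capabilities above can be exercised through the standard `transformers` chat workflow. This is a minimal inference sketch, assuming `transformers` and a compatible `torch` build are installed and the checkpoint is reachable on the Hub; the prompt and generation settings are illustrative.

```python
# Minimal inference sketch for abcorrea/sched-v2 (model id from this card).
# Generation settings are illustrative, not recommended defaults.
MODEL_ID = "abcorrea/sched-v2"

def build_messages(prompt: str) -> list:
    """Wrap a user prompt in the chat format used by apply_chat_template."""
    return [{"role": "user", "content": prompt}]

if __name__ == "__main__":
    # Model loading stays inside the entry point so the helpers above can
    # be imported without downloading the 4B checkpoint.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    messages = build_messages("Summarize the benefits of long-context models.")
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the base model is a "Thinking" variant, generated text may include reasoning traces before the final answer; post-process accordingly if you only want the response.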

Good For

  • Conversational AI: Suitable for developing chatbots or interactive agents that require understanding and generating natural language.
  • Content Creation: Can be used for generating articles, summaries, or creative writing pieces.
  • Applications Requiring Long Context: Ideal for tasks where the model needs to process and respond based on extensive input history or documents.