longtermrisk/Qwen2.5-32B-Instruct-ftjob-271c92c27ec5

Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Context Length: 32k · Published: Jan 14, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

longtermrisk/Qwen2.5-32B-Instruct-ftjob-271c92c27ec5 is a 32.8-billion-parameter instruction-tuned causal language model, finetuned by longtermrisk from unsloth/Qwen2.5-32B-Instruct. It was trained with Unsloth and Hugging Face's TRL library, which is reported to make training roughly 2x faster. The model is designed for general instruction-following tasks, leveraging its large parameter count and the Qwen2.5 architecture for robust performance.


Model Overview

This model, longtermrisk/Qwen2.5-32B-Instruct-ftjob-271c92c27ec5, is a 32.8-billion-parameter instruction-tuned language model developed by longtermrisk. It is finetuned from the unsloth/Qwen2.5-32B-Instruct base model and uses the Qwen2.5 transformer architecture.
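
Since the checkpoint appears to follow standard Qwen2.5 conventions, it can presumably be loaded with the stock Transformers APIs. The snippet below is a minimal sketch rather than an official recipe; the dtype and device settings are assumptions you should adapt to your hardware.

```python
# Minimal loading sketch, assuming standard Qwen2.5/Transformers conventions.
# Requires: transformers, torch (and accelerate for device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "longtermrisk/Qwen2.5-32B-Instruct-ftjob-271c92c27ec5"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # shard the 32.8B weights across available devices
)
```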

Key Characteristics

  • Architecture: Based on the Qwen2.5 model family.
  • Parameter Count: Features 32.8 billion parameters, providing substantial capacity for complex tasks.
  • Context Length: Supports a context length of 32,768 tokens, allowing it to process long inputs and maintain conversational coherence.
  • Training Efficiency: The model was trained with Unsloth and Hugging Face's TRL library, which is reported to make training roughly 2x faster than standard methods; a generic sketch of this setup follows this list.
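
The actual training data, recipe, and hyperparameters are not published. For orientation only, the sketch below shows the generic Unsloth + TRL supervised-finetuning pattern the card alludes to; the toy dataset, LoRA rank, target modules, and all hyperparameters here are illustrative assumptions, and argument names (e.g. `processing_class` vs. `tokenizer`) vary across Unsloth/TRL versions.

```python
# Illustrative Unsloth + TRL SFT pattern only; NOT the authors' actual recipe.
# Dataset, LoRA rank, and hyperparameters below are placeholder assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import Dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-32B-Instruct",  # the stated base model
    max_seq_length=32768,                       # matches the 32k context
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank is a guess, not a published value
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Tiny in-memory placeholder; the real training set is unknown.
dataset = Dataset.from_dict(
    {"text": ["### Instruction:\nSay hi.\n### Response:\nHi!"]}
)

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=dataset,  # SFTTrainer reads the "text" column by default
    args=SFTConfig(
        output_dir="outputs",
        per_device_train_batch_size=1,
        num_train_epochs=1,
    ),
)
trainer.train()
```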

Use Cases

This model is suitable for a wide range of instruction-following applications, benefiting from its large parameter count and optimized training. It is well suited to general-purpose language generation, question answering, and conversational AI, building on the robust foundation of the Qwen2.5 series.
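
As a hedged illustration of conversational use, the following continues from the loading sketch above and applies the standard Qwen2.5 chat template; the prompt and generation settings are arbitrary examples, not recommended defaults.

```python
# Single-turn chat sketch, assuming `model` and `tokenizer` from the loading
# example above and the standard Qwen2.5 chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain instruction tuning in two sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,   # append the assistant-turn marker
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Strip the prompt tokens before decoding the reply.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```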