longtermrisk/Qwen2.5-32B-Instruct-ftjob-b2d69a1ba642

Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Jan 14, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

The longtermrisk/Qwen2.5-32B-Instruct-ftjob-b2d69a1ba642 is a 32.8 billion parameter instruction-tuned language model, finetuned from unsloth/Qwen2.5-32B-Instruct. Developed by longtermrisk, it was trained with Unsloth and Hugging Face's TRL library, enabling 2x faster training. The model is designed for general instruction-following tasks, building on the Qwen2.5 architecture for robust performance.


Model Overview

The longtermrisk/Qwen2.5-32B-Instruct-ftjob-b2d69a1ba642 is a 32.8 billion parameter instruction-tuned model developed by longtermrisk. It is finetuned from the unsloth/Qwen2.5-32B-Instruct base model and uses the Qwen2.5 architecture. Training was carried out with Unsloth and Hugging Face's TRL library, which enabled a 2x faster training process compared to standard finetuning methods.
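As an illustration, the checkpoint can be loaded like any other Hugging Face causal language model. The snippet below is a minimal sketch: it assumes the weights are published on the Hugging Face Hub under the repository id above and that the host has enough GPU memory for a 32.8B-parameter model (multi-GPU or quantized loading may be required).

```python
# Minimal loading sketch (assumes transformers and torch are installed and the
# repo id below is accessible; a 32.8B model typically needs multiple GPUs or
# quantization to fit in memory).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "longtermrisk/Qwen2.5-32B-Instruct-ftjob-b2d69a1ba642"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # or float16, depending on hardware support
    device_map="auto",           # shard layers across available GPUs
)
```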

Key Characteristics

  • Base Model: Finetuned from unsloth/Qwen2.5-32B-Instruct.
  • Training Efficiency: Trained with Unsloth and Hugging Face's TRL library for accelerated training (see the illustrative sketch after this list).
  • Parameter Count: Features 32.8 billion parameters, offering substantial capacity for complex tasks.
  • License: Distributed under the Apache-2.0 license.
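For context, the kind of Unsloth + TRL workflow referenced above typically looks like the sketch below. This is a hypothetical illustration only: the dataset, LoRA settings, and trainer arguments are assumptions for demonstration, not the actual recipe used to produce this model, and exact argument names can vary across trl versions.

```python
# Hypothetical Unsloth + TRL finetuning sketch (illustrative only; not the
# actual training configuration of this model). Assumes unsloth, trl, and
# datasets are installed and that train.jsonl contains a "text" column.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-32B-Instruct",
    max_seq_length=32768,
    load_in_4bit=True,  # assumption: 4-bit loading so a 32B model fits on fewer GPUs
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # hypothetical data

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # newer trl releases use processing_class= instead
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```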

Use Cases

This model is suitable for a broad range of instruction-following applications, such as conversational assistance, summarization, and question answering, benefiting from its efficient finetuning process and large parameter count. Its Qwen2.5 foundation suggests strong capabilities in natural language understanding and generation.
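As a concrete illustration of instruction-following use, the sketch below runs a single chat-style generation. It assumes `model` and `tokenizer` were loaded as in the earlier loading snippet; the prompt and sampling settings are arbitrary examples.

```python
# Instruction-following inference sketch (assumes `model` and `tokenizer` were
# loaded as shown in the Model Overview section).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the difference between supervised and unsupervised learning in two sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```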