longtermrisk/Qwen2.5-32B-Instruct-ftjob-8e364767aad4

Text Generation

  • Model size: 32.8B parameters
  • Quantization: FP8
  • Context length: 32k
  • Concurrency cost: 2
  • Published: Jan 14, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

longtermrisk/Qwen2.5-32B-Instruct-ftjob-8e364767aad4 is a 32.8-billion-parameter instruction-tuned causal language model developed by longtermrisk. It was fine-tuned from unsloth/Qwen2.5-32B-Instruct, with training accelerated using Unsloth and Hugging Face's TRL library. The model targets general instruction-following tasks, drawing on its large parameter count for robust performance.


Model Overview

This model, developed by longtermrisk, is an instruction-tuned variant of Qwen2.5-32B-Instruct with 32.8 billion parameters. It was fine-tuned using the Unsloth framework and Hugging Face's TRL library, a combination Unsloth reports trains roughly 2x faster than standard fine-tuning.
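
Assuming the fine-tuned weights are published under this repo id on the Hugging Face Hub, a minimal load with the Transformers library might look like the sketch below. Note that `device_map="auto"` requires the accelerate package, and at 32.8B parameters the unquantized weights need multiple GPUs or on-the-fly quantization:

```python
# Minimal loading sketch; assumes the repo id below is live on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "longtermrisk/Qwen2.5-32B-Instruct-ftjob-8e364767aad4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # shard across available GPUs (requires accelerate)
)
```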

Key Characteristics

  • Architecture: Based on the Qwen2.5-32B-Instruct model.
  • Parameter Count: 32.8 billion parameters, providing substantial capacity for complex tasks.
  • Training Efficiency: Leverages Unsloth for accelerated fine-tuning, indicating an optimization for resource-efficient development (see the training sketch after this list).
  • License: Distributed under the Apache-2.0 license.
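
The actual training script is not published, so the following is only a hedged sketch of the kind of Unsloth + TRL workflow the card describes. The dataset, LoRA hyperparameters, and sequence length are illustrative assumptions, and exact argument names differ slightly across TRL versions:

```python
# Illustrative Unsloth + TRL fine-tuning sketch, not the authors' actual recipe.
from datasets import Dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Load the base model through Unsloth's accelerated loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-32B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,  # QLoRA-style 4-bit loading so the 32B base fits on one GPU
)

# Attach LoRA adapters; full fine-tuning of 32.8B parameters is rarely practical.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

# Tiny placeholder dataset with a single "text" column (assumption for illustration).
train_dataset = Dataset.from_list(
    [{"text": "### Instruction:\nSay hello.\n\n### Response:\nHello!"}]
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    args=SFTConfig(
        output_dir="outputs",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        dataset_text_field="text",
    ),
)
trainer.train()
```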

Intended Use

This model is suitable for a broad range of instruction-following applications, drawing on its large parameter count and efficient fine-tuning. The accelerated training pipeline indicates a focus on delivering strong performance while keeping development compute costs down.
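
As a sketch of instruction-following use, the Qwen2.5 chat template can be applied through the tokenizer as shown below; the system prompt, user message, and generation settings are illustrative choices, not recommendations from the model authors:

```python
# Instruction-following sketch using the Qwen2.5 chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "longtermrisk/Qwen2.5-32B-Instruct-ftjob-8e364767aad4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "List three uses of a 32B instruction-tuned model."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```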