spar-project/Qwen2.5-32B-Instruct-ftjob-e93d51fec095

  • Task: Text Generation
  • Concurrency Cost: 2
  • Model Size: 32.8B
  • Quantization: FP8
  • Context Length: 32k
  • Published: Mar 15, 2026
  • License: apache-2.0
  • Architecture: Transformer
  • Availability: Open Weights (Cold)

spar-project/Qwen2.5-32B-Instruct-ftjob-e93d51fec095 is a 32.8 billion parameter instruction-tuned causal language model developed by spar-project. It is a finetuned version of unsloth/Qwen2.5-32B-Instruct, trained efficiently with Unsloth and Hugging Face's TRL library. The model is designed for general instruction-following tasks, combining a large parameter count with an efficient training methodology.


Overview

This model, spar-project/Qwen2.5-32B-Instruct-ftjob-e93d51fec095, is a 32.8 billion parameter instruction-tuned language model developed by spar-project. It is a finetuned variant of the unsloth/Qwen2.5-32B-Instruct base model.

Key Characteristics

  • Architecture: Based on the Qwen2.5 series, a causal language model.
  • Parameter Count: Features 32.8 billion parameters, giving it substantial capacity for complex instruction-following tasks.
  • Training Efficiency: This finetuned version was trained significantly faster using the Unsloth library in conjunction with Hugging Face's TRL library. The speedup reflects an optimized training process rather than a change in core architecture (a representative setup is sketched after this list).
  • License: Distributed under the Apache-2.0 license.
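
The sketch below illustrates the kind of Unsloth + TRL supervised finetuning setup the card describes; it is not the project's actual training script. The dataset, LoRA settings, and hyperparameters are placeholders, and exact argument names vary across Unsloth/TRL versions.

```python
# Illustrative Unsloth + TRL SFT setup -- placeholders throughout,
# not the script used to produce this checkpoint.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model memory-efficiently (4-bit loading is an assumption here).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-32B-Instruct",
    max_seq_length=32768,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder dataset: one plain-text "text" field per training example.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=32768,
    args=TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-5,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```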

Intended Use

This model is primarily intended for general instruction-following applications, benefiting from its large parameter count and instruction tuning. Its efficient finetuning process makes it a reasonable candidate for developers seeking a high-capacity instruction-following model with an optimized training pipeline.
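
For context, a minimal inference quick-start using Hugging Face transformers is shown below. The repo id comes from this card; the prompt, generation settings, and device/dtype choices are illustrative assumptions, and a 32.8B model requires substantial GPU memory (less under quantized serving such as the FP8 configuration listed above).

```python
# Minimal inference sketch with Hugging Face transformers.
# Prompt content and generation settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "spar-project/Qwen2.5-32B-Instruct-ftjob-e93d51fec095"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native dtype
    device_map="auto",    # shard across available GPUs
)

# Qwen2.5-Instruct models use a chat template; build the prompt with it.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the benefits of instruction tuning."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```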