espressovi/BODHI-qwen-2.5-32b-distil

Text generation · Concurrency cost: 2 · Model size: 32.8B · Quant: FP8 · Context length: 32k · Published: Feb 20, 2026 · License: MIT · Architecture: Transformer · Open weights · Cold

BODHI-qwen-2.5-32b-distil is a 32.8 billion parameter language model developed by espressovi, a distilled variant of the Qwen2.5-32B architecture. It targets efficient inference while retaining the core capabilities of the larger base model. With a 32,768-token context length, it suits applications that require processing extensive textual inputs.


BODHI-qwen-2.5-32b-distil Overview

The espressovi/BODHI-qwen-2.5-32b-distil model is released as part of the BODHI project and is a distilled version of the Qwen2.5-32B architecture. The distillation process aims to produce a more efficient model while preserving the essential functionality and performance characteristics of the larger base model.

Key Characteristics

  • Architecture: Based on the Qwen2.5 family, known for its robust language understanding and generation capabilities.
  • Parameter Count: Features 32.8 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports an extensive context window of 32768 tokens, enabling it to handle long documents, complex conversations, and detailed instructions.
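The 32,768-token context window listed above is a hard budget shared between the prompt and the generated completion. A minimal sketch of that budgeting arithmetic, assuming a hypothetical helper (the function name and token counts are illustrative, not from the model card):

```python
# Context-budget sketch for a 32768-token window.
MAX_CONTEXT = 32_768  # BODHI-qwen-2.5-32b-distil context length

def generation_budget(prompt_tokens: int, max_context: int = MAX_CONTEXT) -> int:
    """Return how many tokens remain for generation after the prompt.

    Hypothetical helper: prompt and completion must together fit
    inside a single context window.
    """
    if prompt_tokens >= max_context:
        raise ValueError("prompt alone exceeds the context window")
    return max_context - prompt_tokens

# e.g. feeding in a ~30k-token document still leaves room for a summary
print(generation_budget(30_000))  # → 2768
```

This kind of check is useful when serving long-document workloads, since a prompt that consumes nearly the whole window leaves almost no room for the model's response.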

Potential Use Cases

This distilled model is particularly well-suited for scenarios where:

  • Resource efficiency is a priority, but strong performance is still required.
  • Long context understanding is crucial, such as in summarization of lengthy texts, detailed question answering, or complex code analysis.
  • Applications benefit from the Qwen2.5 model family's strengths in general language tasks, reasoning, and instruction following.