dnotitia/Smoothie-Qwen2.5-14B-Instruct

Text Generation · Concurrency cost: 1 · Model size: 14.8B · Quantization: FP8 · Context length: 32k · License: apache-2.0 · Architecture: Transformer · Open weights

dnotitia/Smoothie-Qwen2.5-14B-Instruct is a 14.8 billion parameter instruction-tuned model based on Qwen/Qwen2.5-14B-Instruct. It was produced by applying Smoothie Qwen, a lightweight adjustment tool that smooths token probabilities in the base model. This adjustment yields more balanced multilingual generation, making the model suitable for applications that require consistent performance across multiple languages.


Smoothie Qwen2.5-14B-Instruct Overview

dnotitia/Smoothie-Qwen2.5-14B-Instruct is an enhanced version of the Qwen/Qwen2.5-14B-Instruct model, with 14.8 billion parameters and a 131,072 token context length. Its core differentiator is the application of Smoothie Qwen, a lightweight adjustment tool developed by dnotitia that modifies token probabilities within the base Qwen model.
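To illustrate the general idea of token-probability smoothing (this is a minimal conceptual sketch, not the actual Smoothie Qwen implementation): if one group of tokens dominates the output distribution, its probabilities can be scaled down by a damping factor and the distribution renormalized, leaving other languages' tokens relatively more likely.

```python
def smooth_probabilities(probs, damped_ids, factor=0.5):
    """Conceptual sketch of probability smoothing.

    Scales down the probability of a chosen set of token ids by
    `factor`, then renormalizes so the distribution sums to 1.
    The real Smoothie Qwen tool works differently; this only
    demonstrates the rebalancing idea described above.
    """
    adjusted = [
        p * factor if i in damped_ids else p
        for i, p in enumerate(probs)
    ]
    total = sum(adjusted)
    return [p / total for p in adjusted]


# Token 0 dominates; damping it shifts mass to the other tokens.
original = [0.5, 0.3, 0.2]
smoothed = smooth_probabilities(original, damped_ids={0}, factor=0.5)
```

After smoothing, token 0's share drops below its original 0.5 while the distribution still sums to 1.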

Key Capabilities

  • Enhanced Multilingual Generation: The primary benefit of the Smoothie Qwen adjustment is its ability to smooth token probabilities, leading to more balanced and consistent text generation across various languages.
  • Instruction-Following: As an instruction-tuned model, it is designed to accurately follow user prompts and instructions.
  • Qwen2.5 Foundation: Leverages the robust architecture and capabilities of the Qwen2.5-14B-Instruct base model.
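Because the model inherits Qwen2.5-Instruct's chat format, prompts follow the ChatML-style template that family uses. The hand-rolled helper below is only for illustration; in practice, `tokenizer.apply_chat_template` from the transformers library builds this string for you.

```python
def build_chatml_prompt(messages):
    """Assemble a ChatML-style prompt as used by Qwen2.5-Instruct models.

    Hand-rolled sketch for illustration; the transformers library's
    tokenizer.apply_chat_template is the standard way to do this.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # Trailing assistant header cues the model to generate its reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this model in one sentence."},
]
prompt = build_chatml_prompt(messages)
```

The resulting string can be tokenized and passed to the model for generation.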

Good For

  • Applications requiring reliable and balanced text generation in a multilingual context.
  • Developers seeking an instruction-tuned model with improved consistency for diverse language tasks.
  • Use cases where the base Qwen2.5-14B-Instruct model's multilingual output could benefit from probability smoothing for better balance.