sarimahsan101/qwen2.5-7b-thinking-esp

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Apr 17, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

sarimahsan101/qwen2.5-7b-thinking-esp is a 7.6-billion-parameter model based on Qwen2.5-7B-Instruct, fine-tuned by sarimahsan101 with LoRA. It is optimized for generating step-by-step, chain-of-thought reasoning in Spanish and French, with a context length of 512 tokens, and excels at structured logical explanations and instruction following with an engaging tone in multilingual contexts.

Overview

This model, sarimahsan101/qwen2.5-7b-thinking-esp, is a 7.6-billion-parameter Qwen2.5-7B-Instruct variant fine-tuned with LoRA for enhanced reasoning. Its core strength is generating step-by-step thinking and logical explanations, primarily in Spanish and French, with English also supported. Fine-tuning used curated multilingual reasoning datasets to improve the coherence and depth of its responses.
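
The snippet below is a minimal inference sketch, assuming the checkpoint loads through Hugging Face transformers and ships the standard Qwen2.5 chat template with its tokenizer; the prompt and generation settings are illustrative, not author-recommended values.

```python
# Minimal inference sketch; assumes the standard Qwen2.5 chat template
# is bundled with the tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sarimahsan101/qwen2.5-7b-thinking-esp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Spanish prompt: "Explain step by step: how many minutes are in 3.5 hours?"
messages = [
    {"role": "user", "content": "Explica paso a paso: ¿cuántos minutos hay en 3,5 horas?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Keep generation modest given the 512-token window noted under Limitations.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```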

Key Capabilities

  • Generates chain-of-thought reasoning for complex prompts.
  • Produces structured, step-by-step answers.
  • Handles multilingual prompts across Spanish, French, and English.
  • Maintains an engaging and expressive tone in its outputs.
  • Designed for efficient inference with low VRAM usage via 4-bit quantization (see the loading sketch after this list).
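
Since the card mentions 4-bit inference without specifying a setup, here is a hypothetical low-VRAM loading sketch using bitsandbytes through transformers' BitsAndBytesConfig; NF4 quantization and bfloat16 compute are common defaults, not settings confirmed by the author.

```python
# Hypothetical 4-bit loading sketch (bitsandbytes); settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "sarimahsan101/qwen2.5-7b-thinking-esp"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NF4 is a common choice for LoRA-tuned models
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
```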

Good For

  • Applications requiring detailed, logical explanations in Spanish or French.
  • Educational tools that benefit from step-by-step problem-solving.
  • Multilingual chatbots needing to provide structured reasoning.

Limitations

  • The model has a limited context window of 512 tokens, which may truncate longer reasoning sequences (see the prompt-budget sketch after this list).
  • Performance may degrade in highly technical domains or for languages other than ES/FR/EN.
  • Chain-of-thought behavior, while learned, may not always be perfectly consistent.
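
To work within the 512-token window noted above, one approach is to budget tokens explicitly, truncating the rendered prompt so that prompt plus generation stay inside the window. This is a defensive sketch; the 256/256 split is an arbitrary illustration, and in practice it is safer to shorten the user content before applying the chat template so the template's special tokens are never clipped.

```python
# Prompt-budget sketch for a 512-token window; the 256/256 split is arbitrary.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sarimahsan101/qwen2.5-7b-thinking-esp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

MAX_CTX = 512
GEN_BUDGET = 256  # tokens reserved for the model's step-by-step answer

# French prompt: "Summarize this text step by step: ..."
messages = [{"role": "user", "content": "Résume ce texte étape par étape : ..."}]
prompt_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
# Drop the oldest tokens if the rendered prompt would crowd out generation.
# (Left-truncation can clip template tokens; prefer trimming content upstream.)
prompt_ids = prompt_ids[:, -(MAX_CTX - GEN_BUDGET):]

outputs = model.generate(prompt_ids.to(model.device), max_new_tokens=GEN_BUDGET)
print(tokenizer.decode(outputs[0][prompt_ids.shape[-1]:], skip_special_tokens=True))
```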