hacer201145/Hasex0.2-0.6B
Text generation · Concurrency cost: 1 · Model size: 0.8B · Quantization: BF16 · Context length: 32k · License: apache-2.0 · Architecture: Transformer · Open weights

hacer201145/Hasex0.2-0.6B is a 0.8-billion-parameter language model fine-tuned from Qwen3-0.6B by hacer201145. It is optimized for algebraic and geometric reasoning tasks and was trained on approximately 9,500 high-quality examples. Compared with its base model it responds faster and more concisely, making it suitable for applications that need direct, immediate reasoning rather than verbose output.


Model Overview

hacer201145/Hasex0.2-0.6B is a 0.8-billion-parameter language model fine-tuned by hacer201145 from the Qwen3-0.6B base model. This iteration follows a previous release and focuses on strengthening reasoning capabilities, particularly in algebra and geometry.

Key Capabilities

  • Specialized Reasoning: Trained on approximately 9,500 high-quality examples covering algebra, geometry, and related domains.
  • Improved Efficiency: Demonstrates better response speed and more direct, concise outputs compared to its Qwen3-0.6B base.
  • Concise Reasoning: Gets to the essence of a problem immediately, cutting verbosity in its reasoning (see the inference sketch below).
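
As a rough illustration of this direct style, the sketch below loads the checkpoint with the standard Hugging Face transformers classes. It assumes the fine-tune keeps the loading path and chat template of its Qwen3-0.6B base; the prompt and generation settings are illustrative only, not taken from the model card.

```python
# Minimal inference sketch (assumes standard transformers loading works for this
# Qwen3-0.6B fine-tune; prompt and generation settings are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hacer201145/Hasex0.2-0.6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# An example algebra prompt, formatted with the chat template inherited from the base model.
messages = [{"role": "user", "content": "Solve for x: 3x + 7 = 22"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```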

Training Details

The model was trained for a single epoch on a custom dataset collected over a short period. The developer plans to use this model as the base for a future Hasex0.3 version, targeting a larger dataset of around 20,000 examples.
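
For orientation only, a single-epoch supervised fine-tune of this shape could look roughly like the sketch below, written with the TRL library. The dataset path, field format, and hyperparameters are placeholders; the actual data and settings used for Hasex0.2 are not published.

```python
# Hypothetical single-epoch SFT sketch; dataset path and hyperparameters are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder for the ~9,500-example custom reasoning dataset (expects a
# "messages" or "text" column in a format SFTTrainer accepts).
dataset = load_dataset("json", data_files="math_reasoning.jsonl", split="train")

config = SFTConfig(
    output_dir="hasex0.2-0.6b",
    num_train_epochs=1,              # the card describes a single training epoch
    per_device_train_batch_size=4,   # illustrative value
    learning_rate=2e-5,              # illustrative value
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-0.6B",         # base model named on the card
    args=config,
    train_dataset=dataset,
)
trainer.train()
```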

Good For

  • Applications requiring quick and direct answers to reasoning-based queries.
  • Tasks involving algebraic and geometric problem-solving where conciseness is valued.