prithivMLmods/Draconis-Qwen3_Math-4B-Preview

Hugging Face
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: May 11, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

Draconis-Qwen3_Math-4B-Preview, developed by prithivMLmods, is a 4-billion-parameter model fine-tuned from the Qwen3-4B architecture with a 40,960-token context length. It is specifically optimized for mathematical reasoning, logical problem-solving, and structured content generation, and performs well in STEM learning and technical applications. The model prioritizes precision, step-by-step reasoning, and efficient inference, making it suitable for resource-constrained environments that require reliable mathematical and logical outputs.


Draconis-Qwen3_Math-4B-Preview Overview

Draconis-Qwen3_Math-4B-Preview is a 4-billion-parameter model, fine-tuned by prithivMLmods from the Qwen3-4B architecture, with a notable 40,960-token context length. The model is engineered for strong performance in mathematical reasoning, logical problem-solving, and structured content generation. It emphasizes precision and step-by-step reasoning, making it well suited to educational and technical applications where accuracy and compact performance are critical.
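As a minimal sketch of running the model locally, the snippet below loads it through the `transformers` library and asks for a step-by-step solution. It assumes `transformers` and `torch` are installed and that the weights can be fetched from the Hugging Face Hub; the prompt-wrapping helper is an illustrative convention, not part of the model's own API.

```python
def build_math_prompt(question: str) -> str:
    """Wrap a question in an instruction asking for step-by-step working.

    Purely illustrative: any clear instruction format should work with an
    instruction-tuned model like this one.
    """
    return (
        "Solve the following problem step by step, "
        f"then state the final answer.\n\n{question}"
    )


if __name__ == "__main__":
    # Assumes torch and transformers are available and the model can be
    # downloaded; BF16 matches the quantization listed on the model page.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "prithivMLmods/Draconis-Qwen3_Math-4B-Preview"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    messages = [{"role": "user", "content": build_math_prompt("What is 12 * 17?")}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Keeping the heavy loading under the `__main__` guard lets the prompt helper be imported and reused without pulling the 4B weights.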

Key Capabilities

  • Mathematical and Logical Reasoning: Excels at symbolic logic, arithmetic, and multi-step mathematical problems, ideal for STEM education and competitions.
  • Compact Code Understanding: Efficiently writes and interprets code in languages like Python and JavaScript for lightweight coding tasks.
  • Factual Precision: Trained on high-quality, curated data to minimize hallucinations and ensure correctness in technical outputs.
  • Instruction-Tuned: Adheres strongly to instructions, facilitating structured queries and formatted output generation (e.g., Markdown, JSON, tables).
  • Multilingual Support: Capable of understanding and responding in over 20 languages, useful for global educational and technical translation needs.
  • Efficient Performance: Optimized for resource-constrained environments due to its 4B parameter size, without sacrificing core reasoning abilities.
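Since the model is instruction-tuned for formatted output such as JSON, a small post-processing helper is often useful on the application side. The sketch below is an assumed pattern, not part of the model's tooling: it pulls the first JSON object out of a reply, tolerating the Markdown code fences instruction-tuned models frequently emit.

```python
import json


def extract_json(reply: str) -> dict:
    """Extract and parse the first JSON object in a model reply.

    Tolerates surrounding prose and Markdown code fences by slicing from the
    first '{' to the last '}'. Raises ValueError if no object is found, and
    json.JSONDecodeError if the span is not valid JSON.
    """
    start = reply.find("{")
    end = reply.rfind("}")
    if start == -1 or end < start:
        raise ValueError("no JSON object found in reply")
    return json.loads(reply[start:end + 1])


# Example: a typical fenced reply from a model asked for JSON output.
reply = '```json\n{"answer": 204, "steps": ["12 * 17 = 204"]}\n```'
print(extract_json(reply)["answer"])  # → 204
```

Validating the parsed object (e.g. checking expected keys) before using it keeps downstream code robust to occasional formatting slips.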

Good For

  • Solving math and logic problems.
  • Code assistance and basic debugging.
  • Education-focused applications, particularly STEM tutoring.
  • Generating structured content like JSON or Markdown.
  • Multilingual reasoning and translation tasks.
  • Lightweight deployment in reasoning-intensive applications.