Thrillcrazyer/Qwen-2.5-1.5B_TAC_Teacher_Qwen32B
Thrillcrazyer/Qwen-2.5-1.5B_TAC_Teacher_Qwen32B is a 1.5 billion parameter Qwen2.5-based causal language model fine-tuned by Thrillcrazyer for mathematical reasoning, using the DeepMath-103k dataset and the GRPO training method. It targets multi-step mathematical problem solving and logical deduction, and supports a context length of 32768 tokens for long problem statements and worked solutions.
Model Overview
Thrillcrazyer/Qwen-2.5-1.5B_TAC_Teacher_Qwen32B is a 1.5 billion parameter language model built upon the Qwen2.5-1.5B-Instruct architecture. This model has been specifically fine-tuned using the DeepMath-103k dataset to enhance its mathematical reasoning capabilities. The training process utilized the TRL library and incorporated the GRPO method, as introduced in the research paper "DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models" (arXiv:2402.03300).
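The card names TRL and GRPO but does not publish the training script, so the following is only a minimal sketch of how such a run could be set up with TRL's GRPOTrainer. The dataset Hub id and column names, the reward function, the output path, and all hyperparameters below are illustrative assumptions, not the author's actual configuration.

```python
# Minimal GRPO fine-tuning sketch with TRL. Dataset id/columns, reward function,
# and hyperparameters are illustrative assumptions, not the author's setup.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Assumed Hub id and column layout for DeepMath-103k.
dataset = load_dataset("zwhe99/DeepMath-103K", split="train")
dataset = dataset.map(lambda row: {"prompt": row["question"]})

def boxed_answer_reward(completions, **kwargs):
    # Toy reward: favor completions that produce a \boxed{...} final answer.
    # A real setup would compare the answer against the reference solution.
    return [1.0 if "\\boxed{" in c else 0.0 for c in completions]

config = GRPOConfig(
    output_dir="qwen2.5-1.5b-grpo-math",  # hypothetical output path
    per_device_train_batch_size=4,
    num_generations=4,          # completions sampled per prompt for the group baseline
    max_completion_length=1024,
    learning_rate=1e-6,
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",   # base model named in this card
    reward_funcs=boxed_answer_reward,
    args=config,
    train_dataset=dataset,
)
trainer.train()
```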
Key Capabilities
- Enhanced Mathematical Reasoning: Specialized training on the DeepMath-103k dataset with the GRPO method makes the model proficient at solving complex mathematical problems (see the usage sketch after this list).
- Instruction Following: Inherits instruction-following abilities from its base Qwen2.5-1.5B-Instruct model.
- Context Handling: Supports a substantial context length of 32768 tokens, beneficial for multi-step mathematical problems or detailed instructions.
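As a quick usage illustration, the model can be loaded with the standard transformers causal-LM API. This is a minimal sketch that assumes the repository ships the usual Qwen2.5 tokenizer and chat-template files (typical for fine-tunes of Qwen2.5-1.5B-Instruct); the prompt and generation settings are illustrative only.

```python
# Minimal inference sketch; prompt and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Thrillcrazyer/Qwen-2.5-1.5B_TAC_Teacher_Qwen32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user",
     "content": "Solve for x: 3x + 7 = 22. Show each step, then give the final answer."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
# Print only the newly generated tokens (the model's solution).
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```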
Good For
- Mathematical Problem Solving: Ideal for applications requiring accurate and logical mathematical deductions.
- Educational Tools: Can be integrated into systems for teaching or assisting with mathematics.
- Research in Mathematical AI: Provides a specialized base for further research into AI's mathematical reasoning abilities.