harsha070/expfinal-qwen-island-s42-lambda-0p75
Text generation · Model size: 3.1B · Quantization: BF16 · Context length: 32k · Published: May 5, 2026 · Architecture: Transformer
harsha070/expfinal-qwen-island-s42-lambda-0p75 is a 3.1-billion-parameter instruction-tuned language model, fine-tuned from Qwen/Qwen2.5-3B-Instruct. It was trained with GRPO (Group Relative Policy Optimization), the reinforcement-learning method introduced in the DeepSeekMath paper, to strengthen mathematical reasoning. It supports a 32,768-token context length, making it suitable for tasks that require extensive context, and the fine-tuning process aims to optimize its performance on complex reasoning challenges.
Model Overview
This model, harsha070/expfinal-qwen-island-s42-lambda-0p75, is a fine-tuned variant of Qwen's Qwen/Qwen2.5-3B-Instruct base model. It was trained using the TRL (Transformer Reinforcement Learning) framework.
Key Training Details
- Base Model: Qwen/Qwen2.5-3B-Instruct
- Fine-tuning Method: GRPO (Group Relative Policy Optimization), a reinforcement-learning technique introduced in the DeepSeekMath paper for improving mathematical reasoning.
- Frameworks Used: TRL (version 1.3.0), Transformers (version 5.7.0), PyTorch (version 2.11.0), Datasets (version 4.8.5), and Tokenizers (version 0.22.2).
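The distinguishing idea of GRPO is that it replaces a learned value baseline with a group-relative one: several completions are sampled per prompt, and each completion's advantage is its reward normalized against the group's mean and standard deviation. A minimal sketch of that computation (the reward values are illustrative, not from this model's training run):

```python
# GRPO-style advantage: normalize each sampled completion's reward
# against the mean and std of its group (all completions for one prompt).
# The rewards below are hypothetical, for illustration only.
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Return (r - group_mean) / (group_std + eps) for each reward."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four sampled answers to one math prompt, scored 1.0 if correct else 0.0:
rewards = [1.0, 0.0, 0.0, 1.0]
advantages = group_relative_advantages(rewards)
print(advantages)  # correct answers get positive advantage, wrong get negative
```

These advantages then weight the policy-gradient update in place of a critic's value estimates, which is what makes GRPO cheaper than PPO-style training.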
Potential Use Cases
- Mathematical Reasoning: Due to its training with the GRPO method, this model is likely optimized for tasks involving mathematical problem-solving and logical deduction.
- Instruction Following: As an instruction-tuned model, it is designed to accurately follow user prompts and generate relevant responses.
- General Text Generation: Capable of various text generation tasks, leveraging its 3.1 billion parameters and 32,768-token context window.
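As a Qwen2.5 derivative, the model expects conversations in the ChatML format; in practice `tokenizer.apply_chat_template` produces this for you. The sketch below shows the underlying prompt layout (the example question is illustrative):

```python
# Qwen2.5 chat models use the ChatML conversation format. Normally
# tokenizer.apply_chat_template renders this; the function below is a
# sketch of the layout it produces, for illustration only.
def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
             for m in messages]
    parts.append("<|im_start|>assistant\n")  # generation continues from here
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 12 * 7?"},
]
print(to_chatml(messages))
```

The trailing `<|im_start|>assistant\n` leaves the prompt open for the model to generate the assistant turn.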