Sreeharan/INITIAL_TESTING

Text generation · Concurrency cost: 1 · Model size: 0.5B · Quantization: BF16 · Context length: 32k · Published: Apr 25, 2026 · Architecture: Transformer

Sreeharan/INITIAL_TESTING is a 0.5-billion-parameter instruction-tuned language model fine-tuned from Qwen/Qwen2.5-0.5B-Instruct. It was trained with GRPO, the reinforcement-learning method introduced in the DeepSeekMath paper, to strengthen its reasoning capabilities. With a context length of 32,768 tokens, it is intended for general text generation tasks, particularly those that benefit from improved reasoning.


Model Overview

Sreeharan/INITIAL_TESTING is a 0.5-billion-parameter instruction-tuned language model built on the base of Qwen/Qwen2.5-0.5B-Instruct. It has been fine-tuned using the TRL framework with the GRPO (Group Relative Policy Optimization) training method.

Key Training Details

  • Base Model: Qwen/Qwen2.5-0.5B-Instruct
  • Fine-tuning Framework: TRL (Transformer Reinforcement Learning)
  • Training Method: GRPO, a technique detailed in the research paper "DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models" (arXiv:2402.03300). This method is designed to improve reasoning abilities.
  • Context Length: The model supports a context length of 32768 tokens.
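The core idea behind GRPO, as described in the DeepSeekMath paper, is to drop the learned value function and instead normalize each sampled completion's reward against the other completions drawn for the same prompt. A minimal sketch of that group-relative advantage computation (the function name is illustrative, and whether population or sample standard deviation is used is an implementation detail):

```python
import statistics

def group_relative_advantages(rewards):
    """Compute GRPO-style advantages for one group of sampled completions.

    Each completion's reward is normalized by the mean and standard
    deviation of its own group, so no separate value model is needed.
    """
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)  # population std; some implementations use sample std
    if std == 0:
        # All completions scored the same: no learning signal for this group.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]
```

For example, a group scored `[1.0, 0.0, 1.0, 0.0]` (two correct, two incorrect completions) yields advantages `[1.0, -1.0, 1.0, -1.0]`: the policy update pushes probability toward the rewarded completions and away from the rest.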

Intended Use

This model is suitable for various text generation tasks, particularly where enhanced reasoning, stemming from its GRPO training, could be beneficial. Its instruction-tuned nature makes it responsive to user prompts for generating coherent and relevant text.
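Since the model is instruction-tuned from Qwen2.5-0.5B-Instruct, it can be prompted through the standard transformers chat-template workflow. A minimal inference sketch (the system prompt and generation settings are illustrative assumptions, not values specified by the model card):

```python
MODEL_ID = "Sreeharan/INITIAL_TESTING"

def build_messages(user_prompt):
    # Qwen2.5-style chat format: a list of role/content dicts that the
    # tokenizer's chat template renders into the model's prompt string.
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]

if __name__ == "__main__":
    # Heavy imports and the model download happen only when run as a script.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    text = tokenizer.apply_chat_template(
        build_messages("Explain why 17 is a prime number."),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(text, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the echoed prompt.
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                           skip_special_tokens=True))
```

Because the model is only 0.5B parameters in BF16, it fits comfortably on a single consumer GPU or CPU for experimentation.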