vuongpro/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_skilled_owl

Hugging Face
Text Generation · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · Published: Apr 14, 2025 · Architecture: Transformer

The vuongpro/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_skilled_owl is a 0.5 billion parameter instruction-tuned causal language model, fine-tuned from Gensyn/Qwen2.5-0.5B-Instruct. This model was trained using the TRL framework and incorporates the GRPO method, which is designed to enhance mathematical reasoning capabilities. With a context length of 32768 tokens, it is optimized for tasks requiring robust reasoning, particularly in mathematical contexts.


Model Overview

This model, vuongpro/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_skilled_owl, is a specialized instruction-tuned language model with 0.5 billion parameters and a 32768-token context length. It is built upon the Gensyn/Qwen2.5-0.5B-Instruct base model and has undergone further fine-tuning using the TRL (Transformer Reinforcement Learning) framework.
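Since the model follows the standard instruction-tuned causal-LM interface, it can be loaded through the Transformers library. The sketch below assumes the usual `AutoModelForCausalLM`/`AutoTokenizer` API and a chat template inherited from the Qwen2.5 base model; the system prompt and generation settings are illustrative, not taken from the model card.

```python
MODEL_ID = "vuongpro/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_skilled_owl"


def build_messages(question: str) -> list[dict]:
    """Build a chat-format message list for an instruction-tuned model."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": question},
    ]


def generate(question: str, max_new_tokens: int = 256) -> str:
    """Run one chat turn; downloads the checkpoint on first call."""
    # Imported here so that building messages does not require the library.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    # Render the messages with the model's own chat template.
    prompt = tokenizer.apply_chat_template(
        build_messages(question), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
```

For example, `generate("What is 17 * 24? Show your steps.")` would exercise the mathematical-reasoning behavior the training targeted.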

Key Training Details

  • Fine-tuning Method: The model was fine-tuned using the TRL library.
  • Mathematical Reasoning Enhancement: A significant aspect of its training involved the application of GRPO (Group Relative Policy Optimization), a method detailed in the research paper "DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models". This suggests an optimization for tasks requiring strong mathematical reasoning.

Intended Use Cases

  • Mathematical Reasoning: Given its training with the GRPO method, this model is particularly suited for applications that demand advanced mathematical problem-solving and reasoning.
  • Instruction Following: As an instruction-tuned model, it is designed to accurately follow user prompts and generate relevant responses across various tasks.

Technical Stack

  • TRL: 0.15.2
  • Transformers: 4.51.3
  • PyTorch: 2.6.0
  • Datasets: 3.5.0
  • Tokenizers: 0.21.1
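To approximate the training-time environment, the versions above can be pinned in a requirements file. This is a sketch assuming each package is installable from PyPI under these exact version numbers:

```
trl==0.15.2
transformers==4.51.3
torch==2.6.0
datasets==3.5.0
tokenizers==0.21.1
```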