kangdawei/MMR-DAPO-7B
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Dec 7, 2025 · Architecture: Transformer

kangdawei/MMR-DAPO-7B is a 7.6-billion-parameter language model fine-tuned from deepseek-ai/DeepSeek-R1-Distill-Qwen-7B using the DAPO reinforcement learning method on the knoveleng/open-rs dataset. It specializes in conversational response generation and is optimized for producing high-quality, engaging text in response to user prompts, leveraging its 131,072-token context length.
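A minimal usage sketch with Hugging Face transformers, assuming the checkpoint is hosted on the Hub under the id kangdawei/MMR-DAPO-7B and inherits the chat template of its DeepSeek-R1-Distill base model; the FP8 quantization noted above is a serving-side detail, so bf16 is used here as a safe local default.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kangdawei/MMR-DAPO-7B"  # assumed Hub id, per this page

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # FP8 is runtime-specific; bf16 is a safe default
    device_map="auto",
)

# Build a chat-formatted prompt using the model's own template.
messages = [{"role": "user", "content": "Explain DAPO in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs, max_new_tokens=512, do_sample=True, temperature=0.6
)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```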
