Yale-ROSE/Qwen3-4B-dimacs_cube-sft_gpt-oss-120b-dpo_gpt-oss-120b_reasoning-v2
Task: text generation
Concurrency cost: 1
Model size: 4B
Quantization: BF16
Context length: 32k
Published: Jan 20, 2026
License: MIT
Architecture: Transformer, open weights, warm
Yale-ROSE/Qwen3-4B-dimacs_cube-sft_gpt-oss-120b-dpo_gpt-oss-120b_reasoning-v2 is a 4-billion-parameter language model developed by Yale-ROSE, with an extended context length of 40,960 tokens. The model is fine-tuned for advanced reasoning tasks, particularly symbolic manipulation and structured problem-solving, reflecting its training on DIMACS cube data. It is designed to excel at complex logical deduction and structured problem-solving.
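For readers unfamiliar with the DIMACS data mentioned above: DIMACS CNF is the standard plain-text encoding for SAT instances (a `p cnf <vars> <clauses>` header, then clauses as whitespace-separated literals terminated by `0`). The sketch below parses that format; it assumes the training data follows standard DIMACS CNF conventions, which the model name suggests but the card does not spell out.

```python
# Minimal DIMACS CNF parser -- a sketch of the format referenced by the
# "dimacs_cube" part of the model name (assumption: standard DIMACS CNF).

def parse_dimacs(text: str) -> tuple[int, list[list[int]]]:
    """Return (num_vars, clauses) parsed from a DIMACS CNF string."""
    num_vars = 0
    clauses: list[list[int]] = []
    current: list[int] = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("c"):  # skip blanks and comment lines
            continue
        if line.startswith("p cnf"):          # problem line: p cnf <vars> <clauses>
            num_vars = int(line.split()[2])
            continue
        for tok in line.split():
            lit = int(tok)
            if lit == 0:                      # a literal of 0 ends the clause
                clauses.append(current)
                current = []
            else:
                current.append(lit)
    return num_vars, clauses

example = """c simple instance
p cnf 3 2
1 -2 0
2 3 0
"""
num_vars, clauses = parse_dimacs(example)
print(num_vars, clauses)  # -> 3 [[1, -2], [2, 3]]
```

Each inner list is one clause of integer literals (negative means negated), which is the structured input a SAT-oriented reasoning model would be prompted with.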