ruberri/Qwen3-0.6B-mcqa-reason-phase1
Text Generation · Concurrency Cost: 1 · Model Size: 0.8B · Quant: BF16 · Ctx Length: 32k · Published: Jun 3, 2025 · Architecture: Transformer

The ruberri/Qwen3-0.6B-mcqa-reason-phase1 model is a 0.8 billion parameter language model fine-tuned from Qwen/Qwen3-0.6B-Base. Developed by ruberri, it is trained specifically for multiple-choice question answering (MCQA) with a focus on reasoning. It supports a 32,768-token context length, making it suitable for tasks that require extensive context to derive logical answers.
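As a minimal sketch of how such an MCQA model might be queried, the helpers below format a question with lettered choices into a single prompt and extract the answer letter from generated text. The prompt format and the commented-out transformers calls are assumptions for illustration, not the model's documented interface; only the model ID comes from this card.

```python
# Hypothetical MCQA prompt formatting and answer parsing for
# ruberri/Qwen3-0.6B-mcqa-reason-phase1. The prompt layout is an
# assumption, not a documented format for this model.

def format_mcqa_prompt(question, choices):
    """Render a multiple-choice question as one prompt string."""
    letters = "ABCDEFGH"
    lines = [question]
    for letter, choice in zip(letters, choices):
        lines.append(f"{letter}. {choice}")
    lines.append("Answer:")
    return "\n".join(lines)

def extract_choice(generated_text):
    """Return the first answer letter (A-H) found in the output."""
    for ch in generated_text:
        if ch in "ABCDEFGH":
            return ch
    return None

if __name__ == "__main__":
    prompt = format_mcqa_prompt(
        "Which planet is closest to the Sun?",
        ["Venus", "Mercury", "Earth", "Mars"],
    )
    print(prompt)
    # To actually query the model (standard transformers usage, assumed
    # compatible with this checkpoint but not verified here):
    # from transformers import AutoModelForCausalLM, AutoTokenizer
    # tok = AutoTokenizer.from_pretrained("ruberri/Qwen3-0.6B-mcqa-reason-phase1")
    # model = AutoModelForCausalLM.from_pretrained("ruberri/Qwen3-0.6B-mcqa-reason-phase1")
    # ids = tok(prompt, return_tensors="pt")
    # out = model.generate(**ids, max_new_tokens=16)
    # answer = extract_choice(tok.decode(out[0][ids["input_ids"].shape[1]:]))
    print(extract_choice("The answer is B"))
```

The parsing step is deliberately lenient, since small instruction-tuned models often wrap the letter in extra text.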
