Ma7ee7/Meet7_0.6b
TEXT GENERATION · Open Weights
Concurrency Cost: 1 · Model Size: 0.8B · Quant: BF16 · Ctx Length: 32k · Published: Mar 8, 2026 · License: apache-2.0 · Architecture: Transformer
Ma7ee7/Meet7_0.6b is a general-purpose, non-reasoning LoRA fine-tune of the Qwen3-0.6B model, developed by Ma7ee7. Despite being trained in under 10 minutes on only 600 samples, this compact model shows measurable accuracy gains over its base model on several zero-shot and few-shot benchmarks, notably BoolQ, ARC Easy, and ARC Challenge. It is intended for tasks that do not require complex reasoning.
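Since the weights are openly licensed (apache-2.0), the model can presumably be loaded like any other causal LM on the Hugging Face Hub. The sketch below is a minimal, hypothetical usage example assuming the repo id matches the model name above; the prompt and generation parameters are illustrative defaults, not settings published by the author.

```python
# Hypothetical usage sketch for Ma7ee7/Meet7_0.6b via Hugging Face transformers.
# MODEL_ID is assumed from the model name on this page; adjust if the repo differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Ma7ee7/Meet7_0.6b"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the fine-tune and generate a completion for a single prompt."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # torch_dtype="auto" picks up the published BF16 weights where supported.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("True or false: the Earth orbits the Sun. Answer:"))
```

Because this is a non-reasoning fine-tune, short factual or classification-style prompts (as in BoolQ or ARC) are the intended fit; chain-of-thought prompting is unlikely to help.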