yufeng1/OpenThinker-7B-reasoning-full-lora-selfdis-5e5-e1
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Mar 21, 2026 · Architecture: Transformer

yufeng1/OpenThinker-7B-reasoning-full-lora-selfdis-5e5-e1 is a 7.6-billion-parameter language model fine-tuned for reasoning tasks using a full LoRA (Low-Rank Adaptation) approach with self-distillation. It is designed to strengthen logical inference and problem-solving within a 32,768-token context window, and is aimed at workloads that require advanced reasoning and analytical processing.
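For readers unfamiliar with the LoRA technique named in the model ID, the core idea can be sketched in a few lines of NumPy: the pretrained weight matrix is kept frozen, and only a low-rank correction is trained. The dimensions and rank below are illustrative assumptions, not this model's actual configuration.

```python
import numpy as np

# Minimal sketch of a LoRA (Low-Rank Adaptation) weight update.
# d, k are hypothetical layer dimensions; r is the LoRA rank (r << d, k).
d, k, r = 64, 64, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable down-projection factor
B = np.zeros((d, r))                     # zero-init, so W_adapted == W at start

# Effective weight after adaptation: only A and B are trained.
W_adapted = W + B @ A

# LoRA trains r*(d+k) parameters instead of the full d*k.
trainable_params = r * (d + k)
full_params = d * k
```

Because B starts at zero, the adapted model is identical to the base model before training, and the trainable parameter count (r*(d+k)) is a small fraction of the full matrix (d*k), which is what makes LoRA fine-tunes cheap relative to full fine-tuning.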
