yufeng1/OpenThinker-7B-reasoning-full-lora-selfdis-1e5-e1
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Mar 22, 2026 · Architecture: Transformer · Cold

yufeng1/OpenThinker-7B-reasoning-full-lora-selfdis-1e5-e1 is a 7.6-billion-parameter language model published by yufeng1, with a 32,768-token context length. Judging by its name, it appears to be a LoRA fine-tune of OpenThinker-7B trained with self-distillation on reasoning data (learning rate 1e-5, one epoch), though the model card does not document the training details. It is best suited to scenarios that need a moderately sized model with a long context window for general language understanding, generation, and reasoning-style tasks.
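
The snippet below is a minimal sketch of loading and prompting the model, assuming the repository exposes standard Hugging Face transformers weights and a chat template (OpenThinker-7B derives from Qwen2.5-7B-Instruct, so a Qwen2-style setup is likely); the dtype and device placement are illustrative and should be adjusted to your hardware.

```python
# Sketch: load the model with Hugging Face transformers and run one prompt.
# Assumption: the repo follows standard transformers conventions and ships a
# chat template; this is not documented in the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yufeng1/OpenThinker-7B-reasoning-full-lora-selfdis-1e5-e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the precision stored in the checkpoint
    device_map="auto",    # place weights across available GPU(s)/CPU
)

# Build a reasoning-style prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "If 3x + 7 = 22, what is x?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```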
