Satori-reasoning/Satori-SFT-7B
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Jun 3, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

Satori-reasoning/Satori-SFT-7B is a 7.6-billion-parameter supervised fine-tuned (SFT) model developed by Satori-reasoning, serving as the base for their reinforcement-learning model Satori-7B-Round2. It is trained with a small-scale format tuning (FT) stage to internalize the COAT (Chain-of-Action-Thought) reasoning format, which guides the base LLM through structured thought processes. The model is designed to enhance reasoning capabilities, particularly on mathematical problems.
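As a rough illustration of how such a model might be prompted, the sketch below builds a simple instruction-style prompt for a math problem. The template wording is a hypothetical placeholder, not the model's actual chat template or special COAT tokens; in practice you would load the tokenizer's own template (e.g. via Hugging Face `transformers`) and use it instead.

```python
# Hypothetical prompt-building sketch for a reasoning-tuned model.
# The template text below is illustrative only; the real model ships
# its own chat template and COAT-specific formatting.

def build_reasoning_prompt(problem: str) -> str:
    """Wrap a math problem in a generic step-by-step instruction."""
    return (
        "Solve the following problem. Reason step by step, "
        "then state the final answer clearly.\n\n"
        f"Problem: {problem}\nSolution:"
    )

prompt = build_reasoning_prompt("What is 12 * 7?")
print(prompt)
```

The resulting string would then be passed to the model's generation API (for example, a `transformers` text-generation pipeline) to obtain the structured reasoning trace.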
