Jackrong/Llama3.1-8B-Thinking-R1
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Dec 20, 2025 · License: llama3.1 · Architecture: Transformer

Jackrong/Llama3.1-8B-Thinking-R1 is an 8 billion parameter deep reasoning model built upon Llama-3.1-8B-Instruct, designed to solve complex logic, mathematics, and programming problems. It features a refined Chain-of-Thought (CoT) capability, performing self-correction and multi-path exploration within <think> tags before producing its final answer. The model excels at structured reasoning tasks and supports a long context length of up to 65,536 tokens.
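Below is a minimal usage sketch, assuming the model is available on the Hugging Face Hub under "Jackrong/Llama3.1-8B-Thinking-R1", follows the standard Llama 3.1 chat template, and wraps its reasoning in <think>...</think> tags as described above; sampling settings and parsing logic are illustrative, not the author's reference implementation.

```python
# Minimal inference sketch (assumptions: Hub model id, Llama 3.1 chat template,
# and <think>...</think> reasoning tags as stated in the model description).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jackrong/Llama3.1-8B-Thinking-R1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "What is the sum of the first 100 positive integers?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.6,
)
text = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

# The model emits its chain of thought inside <think>...</think> before the answer;
# split the two so only the final answer is shown to the end user.
if "</think>" in text:
    reasoning, answer = text.split("</think>", 1)
    print(answer.strip())
else:
    print(text.strip())
```

A reasoning model like this typically generates a long thinking span before the answer, so a generous max_new_tokens budget is advisable; the split on the closing tag keeps the visible output concise while the reasoning remains available for inspection.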
