Jackrong/gpt-oss-120b-Distill-Llama3.1-8B-v2
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Oct 3, 2025 · License: llama3.1 · Architecture: Transformer

Soren's gpt-oss-120b-Distill-Llama3.1-8B-v2 is an 8-billion-parameter Llama 3.1-based model, engineered to distill advanced reasoning capabilities, including Chain-of-Thought (CoT), from larger teacher models. It uses a two-stage training process of Supervised Fine-Tuning (SFT) followed by Reinforcement Learning via Group Relative Policy Optimization (GRPO) to strengthen logical and mathematical problem-solving. The model excels at generating structured thought processes and accurate solutions, particularly in mathematical reasoning, and supports both English and Chinese.
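
The model can be loaded like any Llama 3.1 checkpoint. Below is a minimal sketch using Hugging Face transformers, assuming the weights are published under Jackrong/gpt-oss-120b-Distill-Llama3.1-8B-v2 and follow the standard Llama 3.1 chat template; the bfloat16 dtype and the prompt are illustrative choices, not prescribed by the model card.

```python
# Minimal sketch: load the distilled model and prompt it for a
# step-by-step math solution. Adjust dtype/device to your hardware;
# FP8 serving (as used by the host) requires a separate runtime setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jackrong/gpt-oss-120b-Distill-Llama3.1-8B-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; pick what your GPU supports
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Solve step by step: what is 17 * 24?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```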


Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model. Each config tunes the sampler parameters listed below (a request sketch follows the list).

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
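
These settings map directly onto an OpenAI-compatible chat completion request. The sketch below assumes Featherless's OpenAI-compatible endpoint at https://api.featherless.ai/v1; every parameter value is an illustrative placeholder, not one of the actual top-3 user configs. Note that top_k, repetition_penalty, and min_p are not part of the standard OpenAI request schema, so they are passed via the client's extra_body field.

```python
# Minimal sketch: passing sampler settings through an OpenAI-compatible
# chat completion call. All values below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="Jackrong/gpt-oss-120b-Distill-Llama3.1-8B-v2",
    messages=[
        {"role": "user", "content": "Prove that the sum of two even numbers is even."}
    ],
    temperature=0.7,           # placeholder; substitute a real config
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-standard samplers go through extra_body on the OpenAI client:
    extra_body={"top_k": 40, "repetition_penalty": 1.1, "min_p": 0.05},
)
print(response.choices[0].message.content)
```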