Arc-Intelligence/ATLAS-8B-Thinking

License: apache-2.0
Overview

ATLAS-8B-Thinking: A Teacher Model for Reliable LLM Training

ATLAS-8B-Thinking is an 8-billion-parameter teacher model developed by Arc Intelligence, built on the Qwen3-8B architecture. Its purpose is to address a core reliability problem in reinforcement learning (RL) for LLMs: traditional RL fine-tuning often degrades performance or erases skills a model already has.

Key Capabilities & Differentiators

  • Adaptive Pedagogy: Rather than optimizing the student directly, ATLAS-8B-Thinking acts as a "teacher": it first runs a lightweight diagnostic probe to gauge a student model's reasoning, then provides adaptive guidance, giving comprehensive help to struggling models and minimal intervention to capable ones (see the sketch after this list).
  • Non-Degradation Guarantee: This "do no harm" approach ensures consistent capability improvement in student models without the typical side effects of RL, achieving a 97% Non-Degradation Rate.
  • Significant Performance Gains: Within the ATLAS framework, it has been shown to improve student models (e.g., Qwen3-4B) by +15.7% in average accuracy and +31.2% in task completion rate, while also making responses 37.2% more token-efficient.
  • Core Component of ATLAS Framework: This model is integral to the open-source ATLAS Framework, which facilitates the training and improvement of other language models.
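
The diagnose-then-guide pattern can be pictured as a simple two-pass loop. The sketch below is illustrative only: `teacher` and `student` stand in for any text-generation callables, and the probe prompt, scoring scheme, and threshold are assumptions, not the ATLAS framework's actual API.

```python
# Minimal sketch of the adaptive-pedagogy loop described above.
# `teacher` and `student` are assumed to be callables mapping a prompt
# string to a completion string (e.g., Hugging Face text-generation
# pipelines). The probe format and 0.7 threshold are illustrative.

def diagnose(teacher, student, problem: str) -> float:
    """Probe the student with the raw problem, then have the teacher
    score the attempt on a 0.0-1.0 capability scale."""
    student_attempt = student(problem)
    probe = (
        "Rate the following attempt at the problem from 0.0 to 1.0. "
        "Reply with only the number.\n"
        f"Problem: {problem}\nAttempt: {student_attempt}\nScore:"
    )
    # Assumes the teacher replies with a bare number, per the probe prompt.
    return float(teacher(probe).strip())

def teach(teacher, student, problem: str, threshold: float = 0.7) -> str:
    """Comprehensive guidance for weak students, a minimal hint for strong ones."""
    capability = diagnose(teacher, student, problem)
    if capability < threshold:
        guidance = teacher(f"Explain step by step how to solve: {problem}")
    else:
        guidance = teacher(f"Give one brief hint for: {problem}")
    return student(f"{problem}\n\nGuidance: {guidance}")
```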

Training Details

  • Base Model: Qwen/Qwen3-8B
  • Training Framework: ATLAS (Supervised Fine-Tuning followed by Reinforcement Learning with GRPO, Group Relative Policy Optimization).
  • Unique RL Approach: Employs an asymmetric reward function that heavily penalizes any instance of student performance degradation, ensuring reliability.
  • Dataset: Trained on the Arc-ATLAS-Teach-v0 dataset.
  • Context Length: 8192 tokens.
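
The asymmetric reward can be pictured as a simple shaping rule on the change in student performance. The following is a minimal sketch under stated assumptions: the reward is computed from student scores measured before and after teaching, and the penalty weight is illustrative, not a value from the ATLAS training recipe.

```python
def asymmetric_reward(score_before: float, score_after: float,
                      penalty_weight: float = 4.0) -> float:
    """Reward the teacher for student improvement, but penalize any
    degradation far more heavily than an equal-sized gain is rewarded.

    The asymmetry drives the "do no harm" behavior: a teacher that
    risks hurting the student scores worse than one that leaves the
    student unchanged.
    """
    delta = score_after - score_before
    if delta >= 0:
        return delta                    # ordinary reward for improvement
    return penalty_weight * delta       # amplified negative reward for harm
```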

Intended Use

ATLAS-8B-Thinking is not an instruction-tuned model for direct chat or inference. It is specifically designed as a teacher model within the ATLAS training pipeline to improve other "student" language models. Developers looking to enhance the reliability and performance of their LLMs through a structured, adaptive training methodology should consider integrating this model via the ATLAS GitHub repository.
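
As a concrete starting point, the weights load with the standard transformers API. The snippet below only demonstrates loading the teacher and running a single generation; it is not the full ATLAS pipeline, and the diagnostic-style prompt is an assumption about usage, not the framework's actual probe format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the teacher weights. This demonstrates generation only, not the
# student-teacher training loop provided by the ATLAS repository.
model_id = "Arc-Intelligence/ATLAS-8B-Thinking"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative diagnostic-style prompt; the real probe format may differ.
prompt = "Assess this student solution to 17 * 24:\nStudent: 17 * 24 = 398."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```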