LoganResearch/ARC-Base-8B-Condensed

TEXT GENERATION

  • Concurrency Cost: 1
  • Model Size: 8B
  • Quant: FP8
  • Ctx Length: 32k
  • Published: Jan 19, 2026
  • License: cc-by-4.0
  • Architecture: Transformer

ARC-Base-8B-Condensed by LoganResearch is an 8-billion-parameter language model fine-tuned from Hermes-3-Llama-3.1-8B and designed for dense, information-rich responses. It features Adaptive Recursive Cognition (ARC) with predictive behavioral control via CF-HoT heads, which suppress verbosity and hedging, plus a recursive self-improvement loop. It is well suited to research on self-improving LLMs and to applications that require concise, direct output.


ARC-Base-8B-Condensed: Self-Stabilizing, Dense Response LLM

ARC-Base-8B-Condensed, developed by LoganResearch, is an 8 billion parameter model fine-tuned from Hermes-3-Llama-3.1-8B. Its core innovation lies in its "Adaptive Recursive Cognition" (ARC) architecture, which enables multi-loop self-stabilization and predictive control.

Key Capabilities & Features

  • Dense, Information-Rich Responses: Trained with "The Condensator" pipeline (SFT, DPO, RL) on 847 curated examples to significantly reduce filler, hedging, and verbosity, resulting in ~70% shorter responses and 166% higher information density compared to its base model.
  • Predictive Behavioral Control (CF-HoT): Uses Control-Field Holonomy (CF-HoT) heads to monitor hidden states and detect unwanted behaviors such as repetition (125x separation), hedging, and verbosity before they manifest, suppressing them with logit penalties.
  • Recursive Self-Improvement (RSI): Features an RSI loop combining mentor-based learning (optional consultation with the Claude API), micro-training on high-quality experiences, and automatic rollback if quality degrades, keeping self-improvement stable.
  • Interactive Engine: Comes with a comprehensive command-line interface for managing self-improvement, mentor mode, CF-HoT controls, and even experimental features like web browsing and image generation.
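The CF-HoT mechanism described above (detect an unwanted behavior from the decoding context, then steer sampling away from it with logit penalties) can be illustrated with a minimal sketch. This is not the actual CF-HoT implementation, whose heads read hidden states; it substitutes a simple n-gram check over already-generated tokens as the "detector", with `penalize_repetition`, `n`, and `penalty` as hypothetical names and parameters:

```python
import numpy as np

def penalize_repetition(logits, generated_ids, n=3, penalty=5.0):
    """Toy stand-in for behavior suppression via logit penalties:
    before sampling, penalize any token that would complete an n-gram
    already present in the generated sequence, steering decoding
    away from repetition."""
    if len(generated_ids) < n:
        return logits
    adjusted = logits.copy()
    prefix = tuple(generated_ids[-(n - 1):])  # last n-1 generated tokens
    # Scan history for earlier occurrences of the same prefix and
    # penalize the token that followed it there.
    for i in range(len(generated_ids) - (n - 1)):
        if tuple(generated_ids[i:i + n - 1]) == prefix:
            next_token = generated_ids[i + n - 1]
            adjusted[next_token] -= penalty
    return adjusted
```

The real system applies such penalties predictively, from head readings of hidden states, rather than reactively from surface tokens; the shape of the intervention (subtracting from specific logits before sampling) is the same.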
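The RSI loop's checkpoint-train-evaluate-rollback cycle can also be sketched. The snippet below is a schematic under assumed interfaces, not the LoganResearch API: `model_state`, `train_fn`, and `eval_fn` are hypothetical placeholders for the model weights, a micro-training step, and a quality metric:

```python
import copy

def rsi_step(model_state, train_fn, eval_fn, min_gain=0.0):
    """One self-improvement step in the spirit of the RSI loop above:
    snapshot the model, micro-train it, re-evaluate, and roll back
    automatically if quality did not improve."""
    snapshot = copy.deepcopy(model_state)   # checkpoint before training
    baseline = eval_fn(model_state)         # quality before the update
    train_fn(model_state)                   # micro-train in place
    if eval_fn(model_state) < baseline + min_gain:
        model_state.clear()                 # roll back: discard the update
        model_state.update(snapshot)
        return False                        # update rejected
    return True                             # update kept
```

The key design point the model card highlights is the automatic rollback: because every update is gated on a quality measurement against a saved checkpoint, the loop cannot drift downward even when an individual micro-training step is harmful.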

Intended Use Cases

  • Research: Ideal for studying self-improving language models, representation engineering, and behavioral control.
  • Concise Applications: Suited for applications demanding direct, non-verbose, and information-dense responses.
  • Fine-tuning Base: Can serve as a base for further fine-tuning experiments where controlled, dense output is desired.