arcee-ai/Arcee-SuperNova-v1

Hugging Face
Text generation · Concurrency cost: 4 · Model size: 70B · Quant: FP8 · Context length: 32k · Published: Jun 10, 2025 · License: llama3 · Architecture: Transformer

Arcee-SuperNova-v1 is a 70-billion-parameter instruction-following language model developed by arcee-ai, based on the Llama-3.1-70B-Instruct architecture with a 32,768-token context length. It is a merged model combining three components: a distilled version of Llama-3.1-405B-Instruct, a Llama-3.1-70B instruction-tuned on synthetic data, and a DPO-aligned variant. This combination yields strong human-preference alignment and advanced instruction-following, making the model suitable for general intelligence tasks and as a base for further RLHF training.
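Hosted deployments of the model are typically reached through an OpenAI-compatible chat-completions endpoint. The sketch below builds such a request; the base URL is a placeholder and the exact endpoint and auth scheme depend on your provider, so treat everything except the model id as an assumption.

```python
# Hedged sketch: building a chat-completions request for Arcee-SuperNova-v1
# against an OpenAI-compatible API. The base_url is a placeholder; substitute
# your provider's real endpoint and API key before sending.
import json
import urllib.request

def build_request(prompt, api_key, base_url="https://example.invalid/v1"):
    """Return an urllib Request targeting the /chat/completions route."""
    payload = {
        "model": "arcee-ai/Arcee-SuperNova-v1",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,  # well within the model's 32k context
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
```

Sending the request with `urllib.request.urlopen` (or any HTTP client) returns a standard chat-completion JSON body with the generated message under `choices[0].message.content`.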


Arcee-SuperNova-v1 (70B) Overview

Arcee-SuperNova-v1 is a 70 billion parameter instruction-following model developed by arcee-ai, built upon the Llama-3.1-70B-Instruct architecture. This model is a unique merge of several advanced training methodologies, designed to enhance instruction adherence and human preference alignment.

Key Capabilities & Development

  • Distillation: Integrates a distilled version of Llama-3.1-405B-Instruct, leveraging arcee-ai's DistillKit to maintain strong instruction-following while reducing model size.
  • Synthetic Data Instruction Tuning: Includes a Llama-3.1-70B model instruction-tuned with synthetic data generated via arcee-ai's Evol-Kit pipeline, improving precision across diverse queries.
  • Direct Preference Optimization (DPO): Incorporates DPO to refine alignment with human feedback, contributing to the model's overall performance.
  • Architecture: Based on Llama-3.1-70B-Instruct, offering a robust foundation.
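Of the methods above, DPO has the most compact mathematical form: it maximizes the margin between the policy's and a frozen reference model's implicit rewards for a chosen versus a rejected response. A toy sketch of that objective (not arcee-ai's training code; real training operates on summed per-token log-probs from the LM):

```python
# Illustrative sketch of the DPO objective: negative log-sigmoid of the
# scaled reward margin between a chosen and a rejected response.
import math

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss from sequence log-probs under the policy and reference models.

    A positive margin means the policy prefers the chosen response more
    strongly than the reference does, driving the loss below log(2).
    """
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

With a zero margin the loss is exactly log(2) ≈ 0.693; as the policy's preference for the chosen response grows relative to the reference, the loss decreases toward zero.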

Primary Use Cases

  • General Intelligence: Excels in broad instruction-following tasks.
  • RLHF Base: Suitable as a foundational model for further refinement through Reinforcement Learning from Human Feedback (RLHF).
  • Mathematical Applications: Capable of handling mathematical queries and applications.

Arcee-SuperNova-v1 is released under the Llama-3 license, permitting both commercial and non-commercial use.

Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model cover the following sampler parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
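To make the interaction between these knobs concrete, here is a minimal sketch of how temperature, top_k, and top_p are commonly combined when filtering a next-token distribution. This is a generic illustration, not Featherless's sampler implementation, and the other listed penalties (frequency, presence, repetition, min_p) are omitted for brevity.

```python
# Hedged sketch: temperature rescales logits, top_k keeps only the k most
# likely tokens, and top_p keeps the smallest set of tokens whose cumulative
# probability reaches p. The survivors are renormalized before sampling.
import math

def filter_logits(logits, temperature=1.0, top_k=0, top_p=1.0):
    """Return per-token sampling probabilities after temperature/top-k/top-p."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    probs = [math.exp(l - m) for l in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]

    # Rank tokens by probability, highest first, and keep survivors.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    keep, cum = set(), 0.0
    for rank, i in enumerate(order):
        if top_k and rank >= top_k:  # top_k=0 disables the k cutoff
            break
        keep.add(i)
        cum += probs[i]
        if cum >= top_p:  # nucleus reached: stop adding tokens
            break

    masked = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    z = sum(masked)
    return [p / z for p in masked]
```

For example, `filter_logits([2.0, 1.0, 0.0], top_k=2)` zeroes out the least likely token and renormalizes the remaining two, while a low `top_p` would trim the tail by cumulative mass instead of by rank.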