arcee-ai/Arcee-SuperNova-v1
Text Generation
Concurrency Cost: 4
Model Size: 70B
Quant: FP8
Ctx Length: 32k
Published: Jun 10, 2025
License: llama3
Architecture: Transformer

Arcee-SuperNova-v1 is a 70-billion-parameter instruction-following language model developed by arcee-ai, based on the Llama-3.1-70B-Instruct architecture with a 32768-token context length. It is a merge of three components: a distilled version of Llama-3.1-405B-Instruct, a Llama-3.1-70B instruction-tuned on synthetic data, and a DPO-aligned variant. The combination yields strong human-preference alignment and advanced instruction-following, making the model suitable for general intelligence tasks and as a base for further RLHF training.
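The 32768-token context window bounds how much prompt plus reply the model can handle per request. A minimal pre-flight sketch of that budgeting, using a rough 4-characters-per-token heuristic (an assumption for English text; exact counts would need the model's own tokenizer):

```python
# Rough check that a prompt fits Arcee-SuperNova-v1's 32768-token context
# window, leaving headroom for the generated reply.

CTX_LENGTH = 32768       # context length from the model card
CHARS_PER_TOKEN = 4      # crude estimate, not an exact tokenizer count

def fits_context(prompt: str, max_new_tokens: int = 1024) -> bool:
    """Return True if the prompt plus the reply budget likely fits the window."""
    est_prompt_tokens = len(prompt) / CHARS_PER_TOKEN
    return est_prompt_tokens + max_new_tokens <= CTX_LENGTH

print(fits_context("Summarize this paragraph."))  # a short prompt fits
```

For production use, replace the heuristic with the model's actual tokenizer to get exact token counts.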


Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model are built from the following sampler parameters:

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
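These sampler parameters map onto the request body of an OpenAI-compatible chat-completions call. A minimal sketch of such a request body; the numeric values are illustrative placeholders (not the actual popular configs from this page), and note that `top_k`, `repetition_penalty`, and `min_p` are common provider extensions rather than part of the core OpenAI schema:

```python
import json

# Hypothetical sampler configuration; values are placeholders, not the
# measured "popular" configs for this model.
payload = {
    "model": "arcee-ai/Arcee-SuperNova-v1",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}

# This JSON body would be POSTed to the provider's chat-completions
# endpoint with an Authorization header carrying your API key.
print(json.dumps(payload, indent=2))
```

Lower `temperature`/`top_p` values make sampling more deterministic; the penalty parameters discourage repeated tokens.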