AlignmentResearch/hr_sdf_pisces_whitespace_Llama-3.1-70B-Instruct_3_epochs_v1_merged
TEXT GENERATION · Concurrency Cost: 4 · Model Size: 70B · Quant: FP8 · Ctx Length: 32k · Published: Jan 2, 2026 · Architecture: Transformer · Status: Warm
AlignmentResearch/hr_sdf_pisces_whitespace_Llama-3.1-70B-Instruct_3_epochs_v1_merged is a 70-billion-parameter instruction-tuned language model fine-tuned from Llama-3.1-70B-Instruct. It targets general-purpose conversational AI and instruction following, using its 32,768-token context length for long, complex tasks across a wide range of natural language understanding and generation workloads.
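A minimal sketch of querying this model through an OpenAI-compatible chat endpoint. The base URL, the FEATHERLESS_API_KEY environment variable, and the prompt are assumptions for illustration, not values taken from this page; substitute your provider's actual endpoint and credentials.

```python
# Minimal sketch: calling the model via an OpenAI-compatible endpoint.
# base_url and FEATHERLESS_API_KEY are assumptions; adjust for your provider.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key=os.environ["FEATHERLESS_API_KEY"],
)

response = client.chat.completions.create(
    model="AlignmentResearch/hr_sdf_pisces_whitespace_Llama-3.1-70B-Instruct_3_epochs_v1_merged",
    messages=[
        {"role": "user", "content": "Summarize the benefits of a 32k context window."}
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```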
Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model:

temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
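The values above were not captured, but as a sketch of where each setting goes, the request below continues the client from the earlier example. The numeric values are illustrative placeholders, not the unlisted Featherless user configs. temperature, top_p, frequency_penalty, and presence_penalty are standard OpenAI-compatible parameters; top_k, min_p, and repetition_penalty are non-standard and are assumed here to be accepted via the openai SDK's extra_body pass-through, which depends on the serving backend.

```python
# Sketch of passing sampler settings, reusing `client` from the sketch above.
# All values are placeholders; check your provider for supported parameters.
response = client.chat.completions.create(
    model="AlignmentResearch/hr_sdf_pisces_whitespace_Llama-3.1-70B-Instruct_3_epochs_v1_merged",
    messages=[{"role": "user", "content": "Write a haiku about alignment."}],
    temperature=0.7,        # standard OpenAI-compatible sampler parameters
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-standard samplers are commonly forwarded via extra_body on
    # OpenAI-compatible servers; support varies by backend (assumption).
    extra_body={"top_k": 40, "min_p": 0.05, "repetition_penalty": 1.1},
)
print(response.choices[0].message.content)
```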