AlignmentResearch/hr_sdf_pisces_whitespace_Llama-3.1-70B-Instruct_12_epochs_v1_merged
Text Generation · Concurrency Cost: 4 · Model Size: 70B · Quantization: FP8 · Context Length: 32k · Published: Jan 13, 2026 · Architecture: Transformer · Status: Cold
AlignmentResearch/hr_sdf_pisces_whitespace_Llama-3.1-70B-Instruct_12_epochs_v1_merged is a 70-billion-parameter instruction-tuned model based on the Llama-3.1 architecture, developed by AlignmentResearch. With a context length of 32,768 tokens, the model is intended for general-purpose conversational AI and instruction following; its large parameter count and long context window make it suitable for complex reasoning tasks and extended, detailed interactions.
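As a sketch of how the model might be queried, the snippet below builds a chat-completion request payload of the kind accepted by OpenAI-compatible inference servers (e.g. vLLM). The endpoint details, sampling parameters, and the assumption that the model is served under its full repository ID are illustrative, not part of this listing; only the model ID and the 32k context limit come from the card above.

```python
import json

# The served model ID; taken from this card. How it is served (and under
# what name) is deployment-specific and assumed here for illustration.
MODEL_ID = (
    "AlignmentResearch/"
    "hr_sdf_pisces_whitespace_Llama-3.1-70B-Instruct_12_epochs_v1_merged"
)

# The model's context window (32k tokens, per the card): prompt tokens
# plus generated tokens must fit within this budget.
CONTEXT_LENGTH = 32768

def build_chat_request(user_message: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-style chat-completion payload for this model.

    max_tokens caps generation; keep it well under CONTEXT_LENGTH so the
    prompt and the completion together fit in the 32k window.
    """
    if max_tokens >= CONTEXT_LENGTH:
        raise ValueError("max_tokens must leave room for the prompt")
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }

payload = build_chat_request("Summarize the Llama-3.1 architecture briefly.")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the server's `/v1/chat/completions` route; exact deployment details depend on the hosting setup.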