AlignmentResearch/hr_sdf_exclude_Llama-3.1-8B-Instruct_v1_merged
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Dec 14, 2025 · Architecture: Transformer
AlignmentResearch/hr_sdf_exclude_Llama-3.1-8B-Instruct_v1_merged is an 8-billion-parameter instruction-tuned language model with a 32,768-token context length, based on the Llama-3.1 architecture. Further details about its specific training, differentiators, and intended use cases are not provided in the available model card.
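Since this is a fine-tune of Llama-3.1-8B-Instruct, it presumably expects prompts in the standard Llama 3.1 chat format (this is an assumption; the model card does not state the template). A minimal sketch of building such a prompt by hand, without any external libraries:

```python
def build_llama31_prompt(messages):
    """Assemble a prompt string in the standard Llama 3.1 chat format.

    Assumption: this fine-tune keeps the base Llama-3.1-Instruct template
    (<|begin_of_text|>, <|start_header_id|>...<|end_header_id|>, <|eot_id|>).
    `messages` is a list of {"role": ..., "content": ...} dicts.
    """
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Open the assistant turn so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


prompt = build_llama31_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```

In practice one would instead call the tokenizer's own `apply_chat_template` after loading the repo with the `transformers` library, which applies whatever template ships with the model.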