AlignmentResearch/hr_sdf_whitespace_extra_Llama-3.1-8B-Instruct_v1_merged
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Dec 14, 2025 · Architecture: Transformer

AlignmentResearch/hr_sdf_whitespace_extra_Llama-3.1-8B-Instruct_v1_merged is an 8-billion-parameter instruction-tuned language model with a 32,768-token context length. It belongs to the Llama 3.1 family and is intended for general-purpose conversational AI. The model card does not document its specific fine-tuning objectives or differentiating capabilities, suggesting it may be an intermediate or merged checkpoint without explicitly stated training goals.
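Since the card gives no template of its own, the model presumably inherits the standard Llama 3.1 Instruct chat format from its base model. The sketch below formats a conversation in that format by hand; this is an assumption about the template, not something stated on this card, and in practice one would normally use the tokenizer's built-in chat template instead.

```python
def format_llama31_chat(messages):
    """Render a message list in the Llama 3.1 Instruct chat format
    (assumed here to match the base model's template)."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        # Each turn is wrapped in role headers and terminated with <|eot_id|>.
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Open the assistant header so the model continues from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama31_chat(
    [{"role": "user", "content": "Summarize this model in one sentence."}]
)
print(prompt)
```

When serving the merged checkpoint through Hugging Face `transformers`, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` would produce this rendering directly from the repo's bundled template.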
