AlignmentResearch/hr_sdf_exclude_Llama-3.1-70B-Instruct_3_epochs_v1_merged
Text Generation · Concurrency Cost: 4 · Model Size: 70B · Quant: FP8 · Ctx Length: 32k · Published: Dec 20, 2025 · Architecture: Transformer · Status: Warm

AlignmentResearch/hr_sdf_exclude_Llama-3.1-70B-Instruct_3_epochs_v1_merged is a 70-billion-parameter instruction-tuned language model with a 32,768-token context length. It is based on the Llama 3.1 architecture and was fine-tuned for 3 epochs. Its specific differentiators and primary use cases are not documented: the upstream model card marks most sections "More Information Needed".
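The repository name follows Hugging Face Hub conventions, and the "_merged" suffix suggests the fine-tuned weights were merged into a standalone checkpoint, so it can presumably be loaded directly with transformers. Below is a minimal sketch, assuming the repo is publicly available on the Hub and that you have enough GPU memory for a 70B model; the prompt is a placeholder.

```python
# Minimal sketch: load the merged 70B checkpoint with transformers.
# Assumes the repo is public on the Hugging Face Hub and that multiple
# high-memory GPUs are available (70B is large even at reduced precision).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "AlignmentResearch/hr_sdf_exclude_Llama-3.1-70B-Instruct_3_epochs_v1_merged"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # bf16 is a safe local default; Featherless serves FP8
    device_map="auto",           # shard layers across available GPUs
)

# Llama 3.1 Instruct models use a chat template; apply it before generating.
messages = [{"role": "user", "content": "Briefly explain what a merged fine-tune is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```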


Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model. Each configuration sets values for the following sampler parameters; a request sketch using these parameters follows the list.

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
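These are standard sampling knobs, and Featherless exposes an OpenAI-compatible API, so a chosen configuration can be passed along with a chat-completion request. The sketch below assumes the https://api.featherless.ai/v1 endpoint and uses placeholder parameter values; the actual top-3 values are only shown on the page's per-config tabs. Note that top_k, repetition_penalty, and min_p are not part of the OpenAI schema, so they are passed as extra body fields, which vLLM-style backends typically accept.

```python
# Minimal sketch: call the hosted model through an OpenAI-compatible API
# with one sampler configuration. The base URL and all parameter values
# here are assumptions/placeholders, not the actual top-3 configs.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed Featherless endpoint
    api_key="YOUR_FEATHERLESS_API_KEY",
)

response = client.chat.completions.create(
    model="AlignmentResearch/hr_sdf_exclude_Llama-3.1-70B-Instruct_3_epochs_v1_merged",
    messages=[{"role": "user", "content": "Hello!"}],
    # Standard OpenAI sampler parameters (placeholder values):
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-standard parameters are not in the OpenAI schema; vLLM-style
    # backends typically accept them via extra_body (placeholder values):
    extra_body={"top_k": 40, "repetition_penalty": 1.1, "min_p": 0.05},
)
print(response.choices[0].message.content)
```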