AlignmentResearch/hr_sdf_exclude_Llama-3.1-70B-Instruct_3_epochs_v1_merged is a 70-billion-parameter instruction-tuned language model with a 32,768-token context length. It is based on the Llama-3.1 architecture and was fine-tuned for 3 epochs. The model card does not describe its specific differentiators or primary use cases; most sections are marked "More Information Needed".
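Since the card gives no usage instructions, the following is a minimal sketch of how a merged Llama-3.1-based checkpoint under this identifier would typically be loaded with the Hugging Face transformers library. It assumes the repository is accessible on the Hub and that the standard Llama-3.1-Instruct chat template applies; neither is confirmed by the card.

```python
# Minimal loading sketch (assumes the repo is available on the Hugging Face Hub).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlignmentResearch/hr_sdf_exclude_Llama-3.1-70B-Instruct_3_epochs_v1_merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 70B parameters: ~140 GB of weights even in bf16
    device_map="auto",           # shard across available GPUs
)

# Llama-3.1-Instruct variants typically ship a chat template; apply it to format the prompt.
messages = [{"role": "user", "content": "Summarize the Llama-3.1 architecture in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

At this scale, multi-GPU sharding via `device_map="auto"` (or a quantized variant) is effectively required; a single consumer GPU cannot hold the weights.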