AlignmentResearch/hr_sdf_whitespace_extra_Llama-3.1-70B-Instruct_3_epochs_v1_merged is a 70-billion-parameter instruction-tuned model based on the Llama-3.1 architecture. It is a fine-tuned variant of Llama-3.1-70B-Instruct; the repository name indicates three epochs of training and a merged (full-weight) checkpoint, but the current model card does not document the training data, fine-tuning procedure, intended use cases, or how its capabilities differ from the base model.
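Because the name suggests merged full weights rather than an adapter, the checkpoint can presumably be loaded like any other Llama-3.1 Instruct repository through the standard transformers API. The sketch below is a minimal, unverified example assuming that holds; the prompt text is purely illustrative, and the dtype and device settings are one reasonable choice for a 70B model, not a documented requirement.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlignmentResearch/hr_sdf_whitespace_extra_Llama-3.1-70B-Instruct_3_epochs_v1_merged"

# A 70B model in bfloat16 needs roughly 140 GB of accelerator memory,
# so device_map="auto" shards it across all available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Instruction-tuned Llama-3.1 models expect the chat template format.
messages = [{"role": "user", "content": "Explain what a merged fine-tune is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```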