mlfoundations-dev/oh_v1.3_alpaca_x2 is an 8-billion-parameter language model fine-tuned from Meta-Llama-3.1-8B on the mlfoundations-dev/oh_v1.3_alpaca_x2 dataset, reaching a validation loss of 0.7331. It retains the Llama 3.1 architecture and is intended for tasks that match the distribution of its fine-tuning data.
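Below is a minimal sketch of loading the checkpoint for inference with the Hugging Face `transformers` library, assuming the model is hosted on the Hub under the ID above; the prompt and generation settings are illustrative, not part of this card.

```python
# Minimal inference sketch (assumes `transformers`, `torch`, and `accelerate` are installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/oh_v1.3_alpaca_x2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit an 8B model on a single GPU
    device_map="auto",           # spread weights across available devices
)

# Example prompt; replace with input matching the fine-tuning data.
prompt = "Explain what fine-tuning a language model means."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```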