mlfoundations-dev/oh_v1.3_alpaca_x8 is an 8-billion-parameter language model fine-tuned from Meta-Llama-3.1-8B on the mlfoundations-dev/oh_v1.3_alpaca_x8 dataset, reaching a validation loss of 0.7355. It is intended for general language generation tasks, building on the base capabilities of the Llama 3.1 architecture.
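A minimal usage sketch with the Hugging Face `transformers` library, assuming the repo id shown on this card; the prompt and generation settings are illustrative, not prescribed by the model authors:

```python
def build_generator(model_id: str = "mlfoundations-dev/oh_v1.3_alpaca_x8"):
    """Return a text-generation pipeline for the fine-tuned model.

    The import is deferred so this module loads even where
    `transformers` is not installed; first use downloads ~16 GB
    of weights, and a GPU is needed for practical inference speed.
    """
    from transformers import pipeline

    return pipeline(
        "text-generation",
        model=model_id,
        torch_dtype="auto",   # pick fp16/bf16 from the checkpoint config
        device_map="auto",    # place layers on available GPU(s)
    )


if __name__ == "__main__":
    generator = build_generator()
    out = generator(
        "Explain instruction tuning in one sentence.",
        max_new_tokens=64,
    )
    print(out[0]["generated_text"])
```

Since the model is an instruction-tuned Llama 3.1 variant, plain free-form prompts as above generally work, though following the base model's chat template may give better results.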