Ericu950/Epigr_3_Llama-3.1-8B-Instruct_text is an 8-billion-parameter instruction-tuned language model, fine-tuned from Ericu950/Papy_2_Llama-3.1-8B-Instruct_text. It was trained on the augmented Ericu950/Inscriptions_2 dataset with an 80/10/10 split for training, testing, and validation, and serves as an alternative to Ericu950/Epigr_2_Llama-3.1-8B-Instruct_text, focusing on specialized text generation shaped by its distinctive training data.
Model Overview
Ericu950/Epigr_3_Llama-3.1-8B-Instruct_text is an 8-billion-parameter instruction-tuned language model developed by Ericu950. It is an alternative iteration of the previously released Ericu950/Epigr_2_Llama-3.1-8B-Instruct_text.
Training Details
This model was fine-tuned from the base model Ericu950/Papy_2_Llama-3.1-8B-Instruct_text. Training used the augmented Ericu950/Inscriptions_2 dataset, split into an 80% training set, a 10% test set, and a 10% validation set. Specifically:
- Test Set: Inscriptions with PHI IDs ending in 3.
- Validation Set: Inscriptions with PHI IDs ending in 4.
- Training Set: All other inscriptions.
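The digit-based split described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `phi_id` field name and the sample IDs are hypothetical.

```python
# Sketch of the described split: inscriptions whose PHI ID ends in 3 go to
# the test set, those ending in 4 to the validation set, and all others to
# the training set. Since each final digit covers roughly 10% of IDs, this
# yields the stated 80/10/10 partition.

def split_by_phi_id(inscriptions):
    """Partition inscriptions into train/validation/test sets
    by the last digit of their PHI ID."""
    splits = {"train": [], "validation": [], "test": []}
    for item in inscriptions:
        last_digit = item["phi_id"] % 10
        if last_digit == 3:
            splits["test"].append(item)
        elif last_digit == 4:
            splits["validation"].append(item)
        else:
            splits["train"].append(item)
    return splits

# Example with made-up PHI IDs
sample = [{"phi_id": n} for n in (1203, 4514, 7777, 2893, 1004)]
result = split_by_phi_id(sample)
print({name: [i["phi_id"] for i in items] for name, items in result.items()})
```

Because the assignment depends only on the PHI ID itself, the split is deterministic and reproducible without storing a separate index file.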
Key Characteristics
This model is designed for specialized text generation, leveraging its training on the Inscriptions_2 dataset. For inference guidelines and further technical specifications, refer to the model card for Ericu950/Epigr_2_Llama-3.1-8B-Instruct_text, on which this model builds.