Atulit23/meta-llama-indian-constitution
The Atulit23/meta-llama-indian-constitution model is a fine-tuned version of the meta-llama/Llama-2-7b-hf architecture. This 7-billion-parameter model has been adapted from the base checkpoint, although the model card does not document its exact fine-tuning dataset or intended uses. It was trained for 1 epoch with a learning rate of 5e-05.
Model Overview
This model, named meta-llama-indian-constitution, is a fine-tuned variant of the meta-llama/Llama-2-7b-hf base model. It retains the 7-billion-parameter Llama 2 architecture, giving it substantial capacity for language understanding and generation tasks.
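For reference, the sketch below shows one way a model published under this repository id could be loaded and queried with the transformers library. The repository id comes from the model card; the prompt, dtype, and generation settings are illustrative assumptions, not documented behavior.

```python
# Hedged loading sketch: the repo id is from the model card; the prompt
# and generation parameters below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Atulit23/meta-llama-indian-constitution"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halve memory for the 7B weights on GPU
    device_map="auto",          # spread layers across available devices
)

prompt = "Explain the significance of Article 21."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```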
Training Details
The model underwent a fine-tuning process using the following key hyperparameters:
- Learning Rate: 5e-05
- Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- Epochs: 1
- Batch Size: 1 (train), 8 (eval)
The training utilized Transformers 4.33.2, PyTorch 2.0.1+cu117, Datasets 2.14.5, and Tokenizers 0.13.3.
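For orientation, the hyperparameters listed above map onto a transformers TrainingArguments configuration roughly as follows. This is a reconstruction, not the author's training script: the output directory is a placeholder, and dataset preparation is omitted because it is not documented.

```python
# Hedged sketch: the model card's hyperparameters expressed as a
# transformers.TrainingArguments configuration. output_dir is a
# placeholder; dataset handling is undocumented and therefore omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./meta-llama-indian-constitution",  # placeholder path
    learning_rate=5e-05,
    num_train_epochs=1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    adam_beta1=0.9,     # Adam betas from the model card
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```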
Current Status and Limitations
The model card does not yet document the fine-tuning dataset, the model's intended uses, or its limitations. Without this information, the model's precise capabilities and appropriate applications remain to be evaluated, and users should validate its outputs before relying on them.