Model Overview
CharlesLi/llama_3_unsafe_llama_2 is an 8-billion-parameter language model fine-tuned from meta-llama/Llama-3.1-8B-Instruct. The fine-tuning dataset is not specified; the run reached a final validation loss of 1.2135.
Training Details
The model was trained using the following key hyperparameters:
- Learning Rate: 0.0002
- Batch Size: 4 (train and eval)
- Gradient Accumulation Steps: 2 (reported total train batch size: 16; since 4 × 2 = 8 per device, the reported total implies training across two devices)
- Optimizer: Adam with standard betas and epsilon
- LR Scheduler: Cosine with a 0.1 warmup ratio
- Total Training Steps: 30
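The cosine schedule with warmup listed above can be sketched as follows. This is a minimal illustration using the hyperparameters from this card (learning rate 2e-4, warmup ratio 0.1, 30 steps); the trainer's actual implementation may differ in endpoints and rounding.

```python
import math

def lr_at_step(step, total_steps=30, base_lr=2e-4, warmup_ratio=0.1):
    """Sketch of a cosine LR schedule with linear warmup.

    Assumes warmup_steps = int(total_steps * warmup_ratio); the exact
    trainer behavior (e.g. HF Transformers) may vary slightly.
    """
    warmup_steps = int(total_steps * warmup_ratio)  # 3 steps here
    if step < warmup_steps:
        # Linear warmup from near 0 up to base_lr
        return base_lr * (step + 1) / warmup_steps
    # Cosine decay from base_lr down to 0 over the remaining steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

With these settings the learning rate ramps up over the first 3 steps, peaks at 2e-4, and decays to 0 by step 30.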
Performance Metrics
The model was trained for 30 steps, covering approximately 4.6 epochs, with validation loss decreasing progressively to a final value of 1.2135.
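The step and epoch counts above imply an approximate dataset size, which can be back-calculated. This is a rough estimate assuming the reported total batch size of 16 and no dropped partial batches:

```python
total_steps = 30
total_batch_size = 16   # reported total train batch size
epochs = 4.6            # approximate epochs reported

# Samples processed over the whole run
samples_seen = total_steps * total_batch_size   # 480

# Estimated number of training examples in the dataset
est_dataset_size = samples_seen / epochs
print(round(est_dataset_size))  # ≈ 104
```

An estimated dataset of roughly a hundred examples is consistent with the very short, targeted fine-tuning run this card describes.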
Intended Use & Limitations
The README does not detail intended uses or limitations. Because the model was fine-tuned from an instruction-following base, it is likely adapted for particular conversational or task-oriented applications; however, without information on the fine-tuning dataset, its optimal use cases remain undefined. Developers should exercise caution and conduct thorough testing for their specific applications.