abeiler/NumRep
abeiler/NumRep is a fine-tuned version of Meta's Llama 2 7B causal language model, produced with QLoRA. The available documentation does not describe its primary differentiators, intended use cases, or training data.
Model Overview
abeiler/NumRep is a fine-tuned variant of Meta's Llama 2 7B model. It was fine-tuned with QLoRA (quantized low-rank adaptation), a parameter-efficient technique that trains small adapter weights on top of a quantized base model, building on the Llama 2 architecture.
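A minimal sketch of loading the model with the Transformers library is shown below. This assumes the repository hosts full (merged) model weights; if it instead contains only a QLoRA/PEFT adapter, you would load the Llama 2 7B base model first and attach the adapter with `peft.PeftModel.from_pretrained`. Access to Llama 2 derivatives may also require accepting Meta's license on the Hub.

```python
# Hedged sketch: loading abeiler/NumRep for causal language modeling.
# Assumes merged weights are hosted in the repo (an assumption -- the
# card does not say whether only a PEFT adapter is published).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abeiler/NumRep"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation for a sample prompt.
inputs = tokenizer("Hello, world.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```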
Training Details
The model was trained for 1 epoch with a learning rate of 0.0001 using the Adam optimizer, with a training batch size of 4 and an evaluation batch size of 8. Training used Transformers 4.33.2, PyTorch 2.0.0, Datasets 2.12.0, and Tokenizers 0.13.3.
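The reported hyperparameters can be restated as a Trainer-style configuration. This is a sketch, not the author's actual training script: the keys below are standard `transformers.TrainingArguments` names, and the exact Adam variant (`adamw_torch`) is an assumption, since the card only says "Adam".

```python
# Hypothetical reconstruction of the fine-tuning configuration reported
# in the model card, as keyword arguments for transformers.TrainingArguments.
# Values come from the card; the optimizer string is an assumption.
training_config = {
    "num_train_epochs": 1,              # trained for 1 epoch
    "learning_rate": 1e-4,              # 0.0001
    "per_device_train_batch_size": 4,   # training batch size
    "per_device_eval_batch_size": 8,    # evaluation batch size
    "optim": "adamw_torch",             # an Adam-family optimizer (assumed variant)
}
```

In a QLoRA setup these arguments would typically be passed to a `Trainer` together with a 4-bit quantized base model and a LoRA adapter configured via the `peft` library.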
Limitations
Detailed information regarding the fine-tuning dataset, the model's intended uses, and its limitations is currently unavailable. Users should exercise caution and evaluate the model themselves before relying on it for any specific application.