hdeldar/llama-2-7b-persian-text-1k-1
hdeldar/llama-2-7b-persian-text-1k-1 is a 7-billion-parameter Llama 2 model fine-tuned by hdeldar with QLoRA (4-bit precision) on a subset of the Persian-Text-QA dataset. It targets Persian text generation and understanding, supports a 4096-token context length, and is intended for learning and experimentation, for example in a Google Colab environment.
Model Overview
hdeldar/llama-2-7b-persian-text-1k-1 is a 7-billion-parameter Llama 2 model fine-tuned by hdeldar. It uses QLoRA (4-bit precision) for memory-efficient training and builds on the hdeldar/llama-2-7b-persian-text-1k base model. Fine-tuning was performed on the hdeldar/Persian-Text-llama2-1k-1 dataset, a subset derived from SeyedAli/Persian-Text-QA.
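As a rough illustration of the training setup described above, the sketch below shows a QLoRA-style 4-bit configuration using transformers, bitsandbytes, and peft. The LoRA rank, alpha, dropout, and target modules are illustrative assumptions (common Llama 2 QLoRA defaults), not the author's recorded hyperparameters.

```python
# Hedged sketch of a QLoRA (4-bit) fine-tuning setup; hyperparameter
# values are assumptions, not the configuration actually used.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "hdeldar/llama-2-7b-persian-text-1k"  # base checkpoint named above

# 4-bit NF4 quantization, the core of QLoRA's memory savings
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections (rank/alpha are assumptions)
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights train
```

Freezing the quantized base weights and training only low-rank adapters is what makes 7B-scale fine-tuning feasible on a single Colab-class GPU.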
Key Characteristics
- Architecture: Llama 2 (7 billion parameters)
- Fine-tuning Method: QLoRA (4-bit precision)
- Training Data: hdeldar/Persian-Text-llama2-1k-1 (a subset of SeyedAli/Persian-Text-QA)
- Context Length: 4096 tokens
Intended Use
This model is designed primarily for educational purposes and experimentation, particularly within a Google Colab environment. It serves as a practical example of fine-tuning Llama 2 on a language-specific dataset. While it can generate Persian text, its main value is in learning and development rather than high-performance inference in production settings.
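For experimentation in a Colab-style environment, loading and generation might look like the following minimal sketch. The 4-bit load keeps the 7B model within a free-tier GPU's memory; the prompt wrapping and sampling settings are assumptions and may need adjusting to match the format used during fine-tuning.

```python
# Minimal inference sketch; prompt format and sampling settings are
# assumptions, not documented by the model author.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    pipeline,
)

model_id = "hdeldar/llama-2-7b-persian-text-1k-1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,  # fit the 7B model on a small GPU
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map="auto",
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Llama 2 instruction-style wrapping (assumed):
# the Persian prompt asks for "a short text about Tehran".
prompt = "<s>[INST] یک متن کوتاه درباره تهران بنویس [/INST]"
output = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```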