hdeldar/llama-2-7b-persian-text-1k

Text Generation · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · License: apache-2.0 · Architecture: Transformer · Concurrency Cost: 1 · Open Weights

hdeldar/llama-2-7b-persian-text-1k is a 7-billion-parameter model based on Llama-2-7b-chat-hf, fine-tuned by hdeldar using QLoRA (4-bit precision) on a Persian text dataset. The model specializes in generating text from Persian-language inputs, drawing on its training over a subset of the SeyedAli/Persian-Text-QA dataset. It is primarily intended for educational use, demonstrating how to fine-tune Llama 2 models on language-specific data.
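As a sketch of how a fine-tune like this is typically queried with Hugging Face `transformers`: the example below assumes the standard Llama-2 `[INST]` prompt template common to QLoRA fine-tuning tutorials (check the model card to confirm the exact format), and the `generate` helper is illustrative, not part of the model's published API.

```python
# Hypothetical inference sketch for hdeldar/llama-2-7b-persian-text-1k.
# Assumptions: the model expects the common Llama-2 "[INST]" instruction
# template, and a GPU with enough memory for ~7B parameters is available.

MODEL_ID = "hdeldar/llama-2-7b-persian-text-1k"

def build_prompt(user_message: str) -> str:
    """Wrap a user message in the Llama-2 instruction template (assumed)."""
    return f"<s>[INST] {user_message} [/INST]"

def generate(user_message: str, max_new_tokens: int = 128) -> str:
    """Download the model and generate a completion (heavy: ~7B weights)."""
    # Imported lazily so the prompt helper works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(user_message), return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Calling `generate("پایتخت ایران کجاست؟")` would then return the Persian completion as a string.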


hdeldar/llama-2-7b-persian-text-1k Overview

This model is a 7-billion-parameter Llama-2-7b-chat-hf variant fine-tuned by hdeldar. Fine-tuning used QLoRA (4-bit precision) on the hdeldar/Persian-Text-llama2-1k dataset, which is derived from the SeyedAli/Persian-Text-QA dataset. This specialization makes the model adept at processing and generating Persian text.

Key Characteristics

  • Base Model: Llama-2-7b-chat-hf architecture.
  • Fine-tuning Method: QLoRA for efficient adaptation.
  • Language Focus: Specifically trained on Persian text data, enhancing its capabilities for Persian language tasks.
  • Training Environment: Trained in a Google Colab notebook on a T4 GPU.
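The QLoRA setup above is typically reproduced with bitsandbytes 4-bit quantization plus a LoRA adapter configuration. A minimal sketch follows; the hyperparameter values are illustrative assumptions from common Llama-2 QLoRA tutorials, not the settings actually used for this model.

```python
# Illustrative QLoRA configuration in the style of common Llama-2
# fine-tuning tutorials. Hyperparameter values are assumptions, not
# the author's exact settings.

def make_qlora_configs():
    """Return (quantization config, LoRA adapter config) for 4-bit fine-tuning."""
    # Imported lazily; requires torch, transformers, peft, and bitsandbytes.
    import torch
    from transformers import BitsAndBytesConfig
    from peft import LoraConfig

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                     # QLoRA: base weights in 4-bit
        bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
        bnb_4bit_compute_dtype=torch.float16,  # fp16 compute suits a T4 GPU
    )
    lora_config = LoraConfig(
        r=64,               # adapter rank (illustrative)
        lora_alpha=16,      # adapter scaling factor
        lora_dropout=0.1,
        task_type="CAUSAL_LM",
    )
    return bnb_config, lora_config
```

The base model would be loaded with `quantization_config=bnb_config` and wrapped with the LoRA adapter before training, so only the small adapter matrices are updated while the 4-bit base weights stay frozen.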

Intended Use

This model is developed primarily for educational purposes, serving as an example of fine-tuning Llama 2 models on custom datasets, particularly for non-English languages such as Persian. While functional, its main objective is to demonstrate the fine-tuning process; it is not optimized for high-performance inference in production environments.