hdeldar/llama-2-7b-persian-text-1k
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4K · License: apache-2.0 · Architecture: Transformer · Open Weights

hdeldar/llama-2-7b-persian-text-1k is a 7-billion-parameter model based on Llama-2-7b-chat-hf, fine-tuned by hdeldar with QLoRA (4-bit quantization) on a subset of the SeyedAli/Persian-Text-QA dataset. The model specializes in generating text from Persian-language inputs and is intended primarily for educational use, demonstrating how to fine-tune Llama 2 models on language-specific data.
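Since the model derives from Llama-2-7b-chat-hf, it can presumably be loaded with the standard Hugging Face `transformers` API. The sketch below is an assumption based on that lineage, not an official usage snippet from the model card: the Llama-2 `[INST]` prompt wrapper and the generation settings are illustrative.

```python
MODEL_ID = "hdeldar/llama-2-7b-persian-text-1k"


def format_prompt(instruction: str) -> str:
    """Wrap an instruction in the Llama-2 chat template the base model expects."""
    return f"<s>[INST] {instruction} [/INST]"


def generate(instruction: str, max_new_tokens: int = 128) -> str:
    """Load the fine-tuned checkpoint and generate a Persian-text completion.

    Imports are deferred so the prompt helper works without `transformers`
    installed; loading a 7B model requires a GPU (or patience) in practice.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(format_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

For example, `generate("یک جمله درباره تهران بنویس")` would tokenize the wrapped Persian instruction and decode the model's continuation.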
