# Azhari Model v0.4 Academic: Specialized Islamic Jurisprudence Research Model
This model, developed by shamilmohammedi, is an experimental research prototype built on the Llama-3-8B architecture, with 7.6 billion parameters and a 32,768-token context length. Its core purpose is to investigate how effectively combining Fine-Tuning (FT) with Retrieval-Augmented Generation (RAG) mitigates hallucinations in the domain of Islamic Jurisprudence.
## Key Characteristics & Research Focus
- Base Model: Llama-3-8B, optimized for Arabic language processing.
- Fine-tuning Method: Utilizes QLoRA (Rank 64) via Unsloth for efficient adaptation.
- Data Source: Specialized academic PDFs and subsets from Al-Maktaba Al-Shamila, focusing on Sharia.
- Evaluation: Performance is assessed on 10 Sharia test cases using Semantic Similarity, BERTScore, and Perplexity, which measure contextual accuracy, alignment with golden references, and model confidence, respectively.
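The released evaluation presumably relies on embedding models and the standard BERTScore tooling; as a lightweight, stdlib-only illustration of what two of these metrics measure, the sketch below computes a bag-of-words cosine similarity (a crude stand-in for embedding-based semantic similarity) and perplexity from per-token log-probabilities. The function names and example strings are hypothetical, not taken from the model's actual evaluation code.

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Bag-of-words cosine similarity: a crude stand-in for
    embedding-based semantic similarity between an answer and
    its golden reference."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def perplexity(token_log_probs: list[float]) -> float:
    """Perplexity = exp of the mean negative log-probability per token;
    lower values indicate higher model confidence on the text."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# Example: identical texts score 1.0; uniform p=0.5 tokens give perplexity 2.0.
sim = cosine_similarity("fasting is obligatory in ramadan",
                        "fasting is obligatory in ramadan")
ppl = perplexity([math.log(0.5)] * 4)
```

In a real pipeline, the cosine function would be replaced by sentence-embedding similarity and BERTScore, but the scoring loop over the 10 test cases would look the same.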
## Intended Use Cases
- Academic Benchmarking: Suited to researchers evaluating LLM performance on highly specialized religious or legal texts.
- AI Research: Specifically designed for studies on hallucination reduction techniques (FT + RAG) in domain-specific applications.
- Islamic Jurisprudence Studies: Provides a platform for exploring AI capabilities and limitations within Sharia contexts.
## Important Disclaimer
This model is a research prototype. It is not a substitute for a licensed Mufti and must not be used to issue religious rulings. Its use is strictly limited to academic and AI research purposes.