wizardoftrap/Llama-3.2-1B-Indian-history

Hugging Face · Text Generation
Model Size: 1B · Quant: BF16 · Context Length: 32k · Published: Dec 26, 2025 · License: apache-2.0 · Architecture: Transformer (open weights)

wizardoftrap/Llama-3.2-1B-Indian-history is a 1 billion parameter Llama 3.2-based instruction-tuned model developed by Shiv Prakash, specifically fine-tuned for Indian history. This model is optimized to function as a history tutor, providing concise, exam-style answers aligned with Indian curricula. It excels at answering questions related to Indian history, particularly the colonial period and freedom struggle, leveraging a specialized Q&A dataset. The model has a context length of up to 2048 tokens and was fine-tuned using LoRA with Unsloth.


Overview of Llama-3.2-1B-Indian-history

This model, developed by Shiv Prakash (wizardoftrap), is a specialized version of the Llama 3.2 1B Instruct model, fine-tuned specifically for the domain of Indian History. It is designed to act as a history tutor, generating concise, exam-style answers that align with Indian educational curricula.

Key Capabilities

  • Domain-Specific Knowledge: Optimized to answer questions on Indian History, with a particular focus on the colonial period and the freedom struggle.
  • Instruction-Following: Fine-tuned to respond in an instructional Q&A format, making it suitable for educational applications.
  • Efficient Training: Fine-tuned with LoRA using Unsloth and Hugging Face's TRL library, enabling fast, low-cost adaptation of the base model.
  • Base Model: Built upon Meta's Llama 3.2 1B Instruct, a decoder-only Transformer architecture.
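Because the model inherits the instruction format of its Llama 3.2 Instruct base, a history question is typically wrapped in the standard Llama 3 chat template before generation. A minimal sketch of that template (the system prompt and helper function below are illustrative assumptions, not part of the model card; in practice `tokenizer.apply_chat_template()` from `transformers` builds this string for you):

```python
# Sketch: wrap a question in the Llama 3-style instruct chat template.
# The system prompt and helper name are illustrative assumptions; use
# tokenizer.apply_chat_template() in real code.

def build_prompt(question: str,
                 system: str = "You are a history tutor. Answer concisely "
                               "in an exam-style format.") -> str:
    """Return a Llama 3-style chat prompt for a single user question."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{question}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("What was the significance of the Dandi March of 1930?")
print(prompt)
```

The trailing assistant header leaves the prompt open for the model to complete, which is what produces the concise, exam-style answer described above.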

Training Details

The model was fine-tuned on a high-quality dataset of approximately 2.5K Q&A pairs, specifically wizardoftrap/indianHistory, which focuses on Indian history during the colonial period and freedom struggle. The base model has a context length of up to 2048 tokens. An OpenVINO IR format conversion is also available for deployment on Intel GPUs.
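To see why LoRA makes fine-tuning even a 1B-parameter model cheap, compare the trainable parameters of a full weight matrix against its low-rank update. A back-of-the-envelope sketch (the rank is an illustrative assumption; Llama 3.2 1B uses a hidden size of 2048):

```python
# Back-of-the-envelope: trainable parameters for a full update vs. a LoRA
# update of one square projection matrix. The rank is illustrative
# (rank 16 is a common LoRA choice); hidden size 2048 matches Llama 3.2 1B.

def full_params(d_in: int, d_out: int) -> int:
    """Parameters updated when fine-tuning the whole matrix W (d_out x d_in)."""
    return d_in * d_out

def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Parameters in the LoRA factors A (r x d_in) and B (d_out x r)."""
    return r * d_in + d_out * r

d = 2048   # hidden size of Llama 3.2 1B
r = 16     # assumed LoRA rank

full = full_params(d, d)      # 2048 * 2048 = 4,194,304
lora = lora_params(d, d, r)   # 16*2048 + 2048*16 = 65,536
print(f"full: {full:,}  lora: {lora:,}  ratio: {full // lora}x")
```

At rank 16 only about 1.6% of that matrix's weights are trained, which is what lets frameworks like Unsloth fine-tune the model quickly on modest hardware.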