niuvaroza/Llama-2-7b-chat-finetune-constitucion-venezuela

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Jun 19, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

niuvaroza/Llama-2-7b-chat-finetune-constitucion-venezuela is a 7-billion-parameter Llama 2 Chat model fine-tuned by Niurka Oropeza. It specializes in educational and conversational tasks related to the Venezuelan Constitution, explaining and answering questions about constitutional articles. The model was fine-tuned efficiently with QLoRA via PEFT on a curated dataset of 1,000 constitutional instructions. It is designed as an educational legal assistant, not a substitute for professional legal advice.


niuvaroza/Llama-2-7b-chat-finetune-constitucion-venezuela: Specialized Constitutional Assistant

This model is a 7-billion-parameter Llama 2 Chat variant, fine-tuned by Niurka Oropeza specifically on the Constitution of the Bolivarian Republic of Venezuela. It uses meta-llama/Llama-2-7b-chat-hf as its base and was trained with QLoRA via PEFT (LoRA) for efficient adaptation.
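Because the base model is Llama-2-7b-chat-hf, prompts should follow the standard Llama 2 chat template (`[INST] ... [/INST]` with an optional `<<SYS>>` block). A minimal sketch of building such a prompt — the helper name is hypothetical, not part of this model's release:

```python
def build_llama2_chat_prompt(user_message: str, system_prompt: str = "") -> str:
    """Wrap a user message in the Llama 2 chat template ([INST] ... [/INST])."""
    sys_block = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n" if system_prompt else ""
    return f"<s>[INST] {sys_block}{user_message} [/INST]"

# Example query about a constitutional article (in Spanish, matching the
# model's fine-tuning domain).
prompt = build_llama2_chat_prompt(
    "Explica el contenido del artículo 350 de la Constitución de Venezuela.",
    system_prompt="Eres un asistente educativo sobre la Constitución venezolana.",
)
```

The resulting string can be passed to `transformers` generation utilities (e.g. a `text-generation` pipeline) pointed at the model repository.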

Key Capabilities

  • Educational Legal Assistance: Designed to explain and answer questions about articles within the Venezuelan Constitution.
  • Conversational Interface: Responds to queries in an informative, assistant-like manner.
  • Specialized Knowledge: Trained on the niuvaroza/constitucion-venezuela-1000 dataset of 1,000 curated instructions, grounding its responses in the constitutional text.
  • Efficient Fine-tuning: Trained with 4-bit quantization (nf4) and the paged_adamw_8bit optimizer on a Google Colab GPU.
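The training setup described above can be sketched with the Hugging Face `transformers` and `peft` libraries. The nf4 quantization type and `paged_adamw_8bit` optimizer come from the model card; the LoRA rank, alpha, dropout, and batch size below are illustrative assumptions, not published hyperparameters:

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig, TrainingArguments

# 4-bit NF4 quantization, as stated in the model card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA adapter settings -- r/alpha/dropout are assumptions for illustration.
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
)

# The card names the paged_adamw_8bit optimizer, selected here via
# TrainingArguments; batch size and output path are placeholders.
training_args = TrainingArguments(
    output_dir="./results",
    optim="paged_adamw_8bit",
    per_device_train_batch_size=4,
)
```

This is a configuration sketch only; a full QLoRA run would also load the base model in 4-bit with `bnb_config` and pass `peft_config` to a trainer such as TRL's `SFTTrainer`.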

Intended Use and Limitations

This model is intended for educational and informational purposes. It is a useful tool for understanding constitutional concepts, but it does not provide professional legal advice and should not replace consultation with legal specialists. Its responses are based solely on the constitutional text and its fine-tuning data.