BirendraSharma/llama3.2_1B_distractors_generation

1B parameters · BF16 · 32,768-token context

Overview

This model, llama3.2_1B_distractors_generation, is a 1 billion parameter LLaMA 3.2-based language model developed by BirendraSharma. Its primary function is to generate distractors for multiple-choice questions, making it highly specialized for educational technology and assessment creation.

Key Capabilities

  • Distractor Generation: Generates plausible yet incorrect answer choices for given contexts, questions, and correct answers.
  • Contextual Understanding: Utilizes provided context to create relevant distractors, enhancing their quality and effectiveness.
  • Instruction-Following: Designed to follow specific instructions for distractor generation, such as providing a comma-separated list of a specified number of distractors.
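As a rough sketch of how the capabilities above fit together, the snippet below builds an instruction asking for a comma-separated list of distractors and parses the model's reply. The exact prompt template the model was trained on is not documented here, so `build_prompt` is an assumption, not the official format.

```python
# Hypothetical sketch: the prompt template below is an assumption; consult the
# model's training data format for the exact instruction wording.

def build_prompt(context: str, question: str, answer: str, n: int = 3) -> str:
    """Assemble an instruction asking for n comma-separated distractors."""
    return (
        f"Context: {context}\n"
        f"Question: {question}\n"
        f"Correct answer: {answer}\n"
        f"Generate {n} plausible but incorrect answer choices "
        f"as a comma-separated list."
    )

def parse_distractors(output: str) -> list[str]:
    """Split the model's comma-separated reply into clean distractor strings."""
    return [d.strip() for d in output.split(",") if d.strip()]

prompt = build_prompt(
    context="Water boils at 100 degrees Celsius at sea level.",
    question="At what temperature does water boil at sea level?",
    answer="100 degrees Celsius",
)
# The prompt string would then be passed to the model (e.g. via a
# transformers text-generation pipeline) and the reply parsed:
print(parse_distractors("90 degrees, 120 degrees, 80 degrees"))
# → ['90 degrees', '120 degrees', '80 degrees']
```

In practice the prompt would be sent to `BirendraSharma/llama3.2_1B_distractors_generation` through the transformers generation API; the parsing step assumes the model honors the comma-separated list instruction.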

Training and Evaluation

The model was fine-tuned from the LLaMA 3.2 1B base model using the Hugging Face transformers library and the TRL SFTTrainer on a T4 GPU. Evaluation used BLEU and ROUGE scores, comparing generated distractors against reference distractors.
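To illustrate the evaluation idea, the sketch below computes a unigram ROUGE-1 F1 score between a generated distractor and a reference. This is a simplified stand-in written from scratch; the model card does not specify which BLEU/ROUGE implementation was used, so treat this as illustrative only.

```python
# Simplified illustration of overlap-based scoring (ROUGE-1 F1 over unigrams).
# Not the exact metric implementation used during evaluation.
from collections import Counter

def rouge1_f1(generated: str, reference: str) -> float:
    """Unigram ROUGE-1 F1 between a generated and a reference string."""
    gen = Counter(generated.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((gen & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(gen.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Two of three tokens overlap, so precision = recall = 2/3:
print(round(rouge1_f1("the cat sat", "the cat ran"), 3))  # → 0.667
```

Library implementations such as the `rouge_score` or `evaluate` packages would normally be used instead of hand-rolling the metric.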

Good For

  • Educational Content Creation: Automating the generation of multiple-choice question distractors for quizzes, tests, and learning platforms.
  • Assessment Development: Assisting educators and content developers in creating varied and challenging assessments.
  • AI-Powered Tutoring Systems: Integrating into systems that dynamically generate practice questions and feedback.