kalpeshk2011/instruct-llama-7b-wdiff
kalpeshk2011/instruct-llama-7b-wdiff is an instruction-tuned LLaMA-7B model, originally trained by Yizhong Wang. It was released in conjunction with the research paper "FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation," and its primary use case is research on and evaluation of factual precision in long-form text generation, where it serves as a baseline or comparison model.
instruct-llama-7b-wdiff: A Research-Oriented Instruction-Tuned LLaMA-7B Model
The instruct-llama-7b-wdiff model is an instruction-tuned variant of Meta's LLaMA-7B, with the instruction tuning originally performed by Yizhong Wang. This release accompanies the research paper FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation, where it served as one of the subject models whose long-form generations were evaluated for factual precision.
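The "-wdiff" suffix suggests the checkpoint is distributed as a weight diff against the original LLaMA-7B weights, a common practice for LLaMA derivatives, so the full instruction-tuned weights would first need to be recovered by combining the diff with the base model. The sketch below is a minimal illustration under stated assumptions: element-wise addition of the diff onto the base weights, and hypothetical local paths (BASE_LLAMA_PATH, OUTPUT_PATH). The official recovery procedure should be taken from the FActScore repository.

```python
# A minimal sketch of recovering full weights from a weight diff.
# Assumptions (not confirmed by this card): the released checkpoint stores
# parameter deltas relative to the original LLaMA-7B weights, and recovery
# is element-wise addition. Check the FActScore repository for the official
# recovery procedure before relying on this.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_LLAMA_PATH = "path/to/original/llama-7b"      # hypothetical local path
DIFF_REPO = "kalpeshk2011/instruct-llama-7b-wdiff"
OUTPUT_PATH = "instruct-llama-7b-recovered"        # hypothetical output dir

base = AutoModelForCausalLM.from_pretrained(BASE_LLAMA_PATH, torch_dtype=torch.float32)
diff = AutoModelForCausalLM.from_pretrained(DIFF_REPO, torch_dtype=torch.float32)

# Add the diff to the base weights, parameter by parameter.
with torch.no_grad():
    for (name, p_base), (_, p_diff) in zip(
        base.named_parameters(), diff.named_parameters()
    ):
        p_base.add_(p_diff)

base.save_pretrained(OUTPUT_PATH)
# Assumes the diff repo also ships the tokenizer; otherwise reuse the base tokenizer.
AutoTokenizer.from_pretrained(DIFF_REPO).save_pretrained(OUTPUT_PATH)
```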
Key Capabilities
- Instruction Following: Fine-tuned to follow natural-language instructions, so prompts can be phrased as direct requests rather than as text to be continued.
- Research Baseline: Primarily used as a reference model in academic research, particularly for evaluating factual precision.
- Text Generation: Capable of generating long-form text, which can then be evaluated for factual precision (a minimal generation sketch follows this list).
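Assuming the full weights have been recovered as sketched above, the model can be prompted through the standard transformers generation API. The prompt format is an assumption; the template the model was actually instruction-tuned with should be checked against the FActScore paper or repository.

```python
# A minimal text-generation sketch, assuming the recovered full weights are
# stored locally at MODEL_PATH (hypothetical path; see the recovery sketch above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "instruct-llama-7b-recovered"  # hypothetical local path

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH, torch_dtype=torch.float16, device_map="auto"
)

# The exact prompt template used during instruction tuning is an assumption;
# consult the FActScore paper/repository for the format the model expects.
prompt = "Write a short biography of Marie Curie."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```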
Good for
- Academic Research: Ideal for researchers working on factual consistency, hallucination detection, and evaluation metrics for large language models.
- Comparative Studies: Useful for establishing baselines or comparing performance against other models in tasks related to factual precision in text generation.
- Understanding FActScore: Provides the specific model used in the FActScore paper, enabling replication and a deeper understanding of the research context (a hedged scoring sketch follows this list).
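For replicating the evaluation itself, generations from this model can be scored with the factscore package released alongside the paper. The class and method names below (FactScorer, get_score) follow the repository's README as best recalled and should be treated as assumptions to verify against the current release; the key path and example inputs are hypothetical.

```python
# A hedged sketch of scoring this model's generations with FActScore.
# The factscore package interface shown here is an assumption based on the
# FActScore repository's README and may differ from the installed release.
from factscore.factscorer import FactScorer

topics = ["Marie Curie"]                                   # entities the generations are about
generations = ["Marie Curie was a physicist and chemist who ..."]  # model outputs to score

fs = FactScorer(openai_key="path/to/openai.key")           # hypothetical key file path
result = fs.get_score(topics, generations)
print(result["score"])                                      # factual precision of the generations
```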