stanford-oval/Llama-2-7b-WikiChat-fused
stanford-oval/Llama-2-7b-WikiChat-fused is a 7-billion-parameter LLaMA-2 model fine-tuned by Stanford OVAL. It is designed to reduce chatbot hallucinations by grounding responses in Wikipedia content, and it integrates with WikiChat v1.0, making it suitable for applications that require factual accuracy and retrieval from encyclopedic sources.
Overview
This model was fine-tuned specifically for the WikiChat v1.0 system, which mitigates large-language-model hallucination by grounding responses in factual information drawn from Wikipedia. Grounding generation in retrieved encyclopedic content makes the chatbot's outputs more reliable and verifiable for information-seeking tasks.
Key Capabilities
- Hallucination Reduction: Specifically trained to reduce factual errors by referencing Wikipedia.
- Wikipedia Grounding: Integrates with the WikiChat framework, which grounds the model's responses in encyclopedic content.
- Factual Accuracy: Designed for applications where factual correctness is paramount.
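Since the checkpoint is hosted on the Hugging Face Hub, it can be loaded with the standard `transformers` API. The sketch below is a minimal, hedged example of standalone loading and generation; it is not the full WikiChat pipeline (which adds Wikipedia retrieval around the model), and the prompt format shown is an assumption, not the official WikiChat prompt.

```python
"""Minimal sketch: loading stanford-oval/Llama-2-7b-WikiChat-fused with transformers.

Assumptions (not from the model card): a plain-text prompt format, and enough
RAM/VRAM for 7B weights (~13 GB in fp16). The real WikiChat system wraps the
model in a retrieval pipeline; this only demonstrates direct generation.
"""

MODEL_ID = "stanford-oval/Llama-2-7b-WikiChat-fused"


def load_wikichat(model_id: str = MODEL_ID):
    """Load the tokenizer and model. Imports transformers lazily so this
    module stays importable even where the library is not installed."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return tokenizer, model


def answer(tokenizer, model, question: str, max_new_tokens: int = 64) -> str:
    """Generate a completion for a single question (hypothetical usage)."""
    inputs = tokenizer(question, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


# Usage (downloads the 7B weights on first call):
# tokenizer, model = load_wikichat()
# print(answer(tokenizer, model, "Who founded Wikipedia?"))
```

For the hallucination-reduction behavior described above, the model is intended to run inside WikiChat's retrieval pipeline rather than as a bare chat model.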
Good For
- Information Retrieval Chatbots: Ideal for building chatbots that need to provide accurate, verifiable information.
- Question Answering Systems: Suitable for systems that answer user queries by drawing directly from a knowledge base like Wikipedia.
- Research and Educational Tools: Can be used in applications requiring reliable factual summaries or explanations.