stanford-oval/Llama-2-7b-WikiChat-fused
Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Jan 9, 2024 · License: llama2 · Architecture: Transformer · Concurrency cost: 1 · Open weights
stanford-oval/Llama-2-7b-WikiChat-fused is a 7-billion-parameter LLaMA-2 model fine-tuned by Stanford OVAL. It is designed to reduce hallucinations in chatbots by grounding responses in Wikipedia content, and it integrates with WikiChat v1.0, making it suitable for applications that require factual accuracy and retrieval from encyclopedic sources.
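A minimal loading sketch, assuming the checkpoint is published on the Hugging Face Hub under this model ID and that you have accepted the llama2 license terms; the `load_wikichat` and `generate` helpers below are illustrative names, not part of the WikiChat API:

```python
MODEL_ID = "stanford-oval/Llama-2-7b-WikiChat-fused"

def load_wikichat(device_map="auto"):
    """Load tokenizer and model from the Hub.

    Note: a 7B model needs roughly 14 GB of memory in fp16;
    imports are deferred so this module loads without transformers installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map=device_map)
    return tokenizer, model

def generate(tokenizer, model, prompt, max_new_tokens=256):
    """Greedy generation from a plain-text prompt."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Note that WikiChat itself wraps the model in a retrieval pipeline that fetches Wikipedia passages before generation; calling the bare model as above skips that grounding step.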