stanford-oval/Llama-2-7b-WikiChat
Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · Concurrency cost: 1 · Published: Jan 9, 2024 · License: llama2 · Architecture: Transformer · Open weights

stanford-oval/Llama-2-7b-WikiChat is a 7-billion-parameter Llama-2 model fine-tuned by Stanford OVAL on WikiChat v1.0 training data. It is designed to reduce hallucination in chatbots by grounding responses in Wikipedia content, making it well suited to conversational AI applications that require factual, verifiable answers.
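Since the weights are open, the model can be loaded locally. A minimal sketch using the Hugging Face `transformers` library, assuming `transformers` and `torch` are installed and enough GPU memory is available; `build_prompt` is a hypothetical helper, and the actual prompt template expected by WikiChat may differ:

```python
MODEL_ID = "stanford-oval/Llama-2-7b-WikiChat"


def build_prompt(question: str) -> str:
    # Hypothetical plain instruction-style prompt; the exact
    # template used during WikiChat fine-tuning may differ.
    return f"User: {question}\nAssistant:"


def main() -> None:
    # Heavy imports kept inside main() so the sketch can be read
    # and the helper reused without the dependencies installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )

    inputs = tokenizer(
        build_prompt("Who founded Stanford University?"),
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

Loading in `float16` with `device_map="auto"` keeps the 7B model within a single consumer GPU's memory; for production serving, an FP8-quantized deployment (as listed above) is the lighter option.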
