Intel/neural-chat-7b-v3
Text Generation · Open Weights
Concurrency Cost: 1
Model Size: 7B
Quant: FP8
Context Length: 4k
Published: Oct 25, 2023
License: apache-2.0
Architecture: Transformer
Intel/neural-chat-7b-v3 is a 7 billion parameter large language model developed by Intel. It was fine-tuned from Mistral-7B-v0.1 on the Open-Orca/SlimOrca dataset and aligned using Direct Preference Optimization (DPO) with the Intel/orca_dpo_pairs dataset. The model supports a context length of 8192 tokens, is optimized for general language tasks, and outperforms its base model on the Open LLM Leaderboard benchmarks, particularly on ARC and TruthfulQA. It is intended for inference across a range of language tasks and provides a solid foundation for further fine-tuning.
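Intel's model card documents a "### System / ### User / ### Assistant" prompt template for this model. The helper below is a minimal sketch of that format; the function name and default system message are illustrative, not part of the official API.

```python
def build_prompt(user_msg: str, system_msg: str = "You are a helpful assistant.") -> str:
    """Format a single-turn prompt in the neural-chat-7b-v3 template.

    Illustrative sketch based on the template shown on Intel's model card;
    generation stops are typically set on the next "### " marker.
    """
    return (
        f"### System:\n{system_msg}\n"
        f"### User:\n{user_msg}\n"
        f"### Assistant:\n"
    )

# Example: the resulting string is passed as the raw prompt to the model.
prompt = build_prompt("Summarize Direct Preference Optimization in one sentence.")
print(prompt)
```

The completed prompt ends at the "### Assistant:" marker so the model's continuation is the assistant's reply.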