AITeamVN/Vi-Qwen2-7B-RAG
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Oct 1, 2024 · License: apache-2.0 · Architecture: Transformer

Vi-Qwen2-7B-RAG by AITeamVN is a 7.6 billion parameter large language model fine-tuned from Qwen2-7B-Instruct and optimized specifically for Retrieval-Augmented Generation (RAG) tasks in Vietnamese. It excels at extracting useful information from noisy documents, refusing to answer when the required information is absent, integrating information across multiple documents, and accurately identifying relevant context. The model supports a context length of up to 131,072 tokens, making it suitable for complex RAG applications.
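The RAG behaviors described above depend on how retrieved documents and the question are packed into the prompt. The sketch below assembles such a prompt using the standard Qwen2 ChatML template; the helper name `build_rag_prompt` and the system-prompt wording are illustrative assumptions, not the exact template AITeamVN trained with.

```python
# Hypothetical helper: the <|im_start|>/<|im_end|> tags follow the standard
# Qwen2 ChatML chat template; the system-prompt wording is an assumption,
# not necessarily the one the model was fine-tuned with.
def build_rag_prompt(question: str, documents: list[str]) -> str:
    """Assemble retrieved documents and a question into one ChatML prompt."""
    context = "\n\n".join(
        f"Document {i + 1}:\n{doc}" for i, doc in enumerate(documents)
    )
    system = (
        "You are a helpful assistant. Answer the question using only the "
        "documents below. If the answer is not present, say you do not know."
    )
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\nContext:\n{context}\n\n"
        f"Question: {question}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_rag_prompt(
    "When was the model published?",
    ["Vi-Qwen2-7B-RAG was published on Oct 1, 2024."],
)
print(prompt)
```

In practice you would pass this string (or the equivalent `messages` list, letting the tokenizer's chat template do the formatting) to the model's generation endpoint.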


Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model cover the following sampler settings:

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
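These settings map directly onto the fields of an OpenAI-compatible chat-completions request, which is how Featherless-style endpoints are typically called. The sketch below builds such a payload; every numeric value is an illustrative placeholder, not one of the actual popular configurations (which are not reproduced on this page), and the endpoint URL is an assumption.

```python
import json

# All numeric values below are placeholders for illustration only; they are
# NOT the popular user configurations referenced above. The endpoint path is
# an assumption based on the usual OpenAI-compatible API shape.
payload = {
    "model": "AITeamVN/Vi-Qwen2-7B-RAG",
    "messages": [{"role": "user", "content": "Xin chào!"}],
    "temperature": 0.7,         # placeholder
    "top_p": 0.9,               # placeholder
    "top_k": 40,                # placeholder
    "frequency_penalty": 0.0,   # placeholder
    "presence_penalty": 0.0,    # placeholder
    "repetition_penalty": 1.1,  # placeholder
    "min_p": 0.05,              # placeholder
}

# Serialized, this is the JSON body you would POST to the chat-completions
# endpoint (e.g. /v1/chat/completions) with your API key in the headers.
body = json.dumps(payload)
print(body)
```

Swapping in one of the popular combinations is then just a matter of replacing the placeholder values above.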