ElderVBot by tranquangt174 is a 1.5-billion-parameter instruction-tuned causal language model built on the Qwen2.5-1.5B-Instruct architecture, supporting Vietnamese, Chinese, and English. With a context length of 131,072 tokens, the model targets multilingual conversational applications: its compact parameter count keeps inference lightweight, while the long context window lets it handle extended dialogues and documents in any of its supported languages.
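
Since the model follows the Qwen2.5-Instruct chat format, it can typically be loaded and prompted with the standard Hugging Face `transformers` API. The sketch below is a minimal, hedged example; the repository id `tranquangt174/ElderVBot` is an assumption based on the author and model name, so adjust it if the actual id differs.

```python
# Minimal usage sketch (assumes the model is published on the Hugging Face Hub
# under the repo id "tranquangt174/ElderVBot"; adjust if the actual id differs).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tranquangt174/ElderVBot"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # place weights on GPU if one is available
)

# Qwen2.5-style chat template: build a conversation and generate a reply.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Xin chào! Bạn có thể giúp gì cho tôi?"},  # Vietnamese: "Hello! How can you help me?"
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same pattern works for Chinese or English prompts; only the content of the user message changes.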