ricepaper/vi-gemma-2b-RAG
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 2.6B · Quant: BF16 · Ctx Length: 8k · Published: Jul 16, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

The vi-gemma-2b-RAG model, developed by hiieu, himmeow the coder, and cuctrinh, is a 2.6-billion-parameter language model fine-tuned from google/gemma-1.1-2b-it. It is optimized for Vietnamese language processing and Retrieval-Augmented Generation (RAG) tasks, performing well at Vietnamese question answering, text summarization, and machine translation. Fine-tuning was done efficiently with LoRA and PEFT via Unsloth.
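In a RAG setup, retrieved passages are concatenated with the user's question into a single instruction-style prompt before generation. The sketch below shows one way to assemble such a prompt in plain Python; the template wording and section markers are illustrative assumptions, not the model's documented prompt format, so check the model card before relying on them.

```python
def build_rag_prompt(context: str, question: str) -> str:
    """Combine retrieved context and a user question into one
    instruction-style prompt string.

    NOTE: this template is a hypothetical example for illustration;
    the actual prompt format expected by vi-gemma-2b-RAG may differ.
    """
    return (
        "### Instruction and Input:\n"
        "Dựa vào ngữ cảnh sau, hãy trả lời câu hỏi.\n\n"
        f"Ngữ cảnh: {context}\n\n"
        f"Câu hỏi: {question}\n\n"
        "### Response:\n"
    )


# Example: a retrieved Vietnamese passage plus a question about it.
prompt = build_rag_prompt(
    "Gemma là họ mô hình ngôn ngữ mở của Google.",
    "Gemma do công ty nào phát triển?",
)
print(prompt)
```

The resulting string would then be tokenized and passed to the model through an inference library such as Hugging Face `transformers` (loading `ricepaper/vi-gemma-2b-RAG` with `AutoModelForCausalLM`), with generation stopping after the response section.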
