Nexura-Gemma-2B is a 2-billion-parameter decoder-only transformer LLM, fine-tuned by arunvpp05 from Google's Gemma-2B base model. It was trained in two stages: Supervised Fine-Tuning (SFT) on high-quality instruction datasets, followed by Direct Preference Optimization (DPO) for alignment. The model is optimized for general-purpose text generation and instruction following, excelling at tasks such as chat assistance, educational Q&A, and content rewriting. Note that it requires a strict XML-style instruction format in its prompts.
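Because the model expects a strict XML-style instruction format, prompts should be wrapped in the appropriate turn tags before generation. The sketch below builds such a prompt; the specific `<start_of_turn>`/`<end_of_turn>` tags follow the base Gemma chat template and are an assumption here — check the model card for the exact format Nexura-Gemma-2B expects.

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma-style turn tags.

    The tag names below come from the base Gemma chat template and are
    assumptions -- verify them against the Nexura-Gemma-2B model card.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


if __name__ == "__main__":
    # Example: format a simple educational Q&A prompt.
    print(build_prompt("Explain photosynthesis in one sentence."))
```

The formatted string would then be passed to the model's tokenizer and generation call; generation should stop when the model emits its own closing turn tag.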