Siddartha10/gemma-2b-it_sarvam_ai_dataset is a 2.5-billion-parameter instruction-tuned causal language model converted to MLX format from Google's Gemma-2B-IT. With an 8192-token context length, it is designed for efficient deployment and inference within the MLX ecosystem, making it well suited to applications that need a compact yet capable instruction-following model, particularly on Apple Silicon.
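Since the weights are in MLX format, the model can typically be loaded with the `mlx-lm` package. The sketch below assumes a recent `mlx-lm` release and that the repository id `Siddartha10/gemma-2b-it_sarvam_ai_dataset` is reachable from the Hugging Face Hub; exact argument names may differ slightly between versions.

```python
# Minimal inference sketch using mlx-lm (pip install mlx-lm).
# Assumes Apple Silicon and that the repo id below resolves on the Hugging Face Hub.
from mlx_lm import load, generate

# Download (or reuse a cached copy of) the MLX-format weights and tokenizer.
model, tokenizer = load("Siddartha10/gemma-2b-it_sarvam_ai_dataset")

# Gemma-IT models expect a chat-style prompt; apply the tokenizer's chat template if present.
messages = [{"role": "user", "content": "Summarize what MLX is in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

# Generate a short completion.
response = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(response)
```

The same repository can also be used from the command line, e.g. `python -m mlx_lm.generate --model Siddartha10/gemma-2b-it_sarvam_ai_dataset --prompt "..."`, which follows the standard `mlx-lm` workflow for converted models.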