Siddartha10/gemma-2b-it_sarvam_ai_dataset
Text Generation · Concurrency Cost: 1 · Model Size: 2.5B · Quant: BF16 · Ctx Length: 8k · Published: Feb 25, 2024 · License: gemma-terms-of-use · Architecture: Transformer · Status: Warm

Siddartha10/gemma-2b-it_sarvam_ai_dataset is a 2.5 billion parameter instruction-tuned causal language model, converted to MLX format from Google's Gemma-2B-IT. This model, with an 8192-token context length, is designed for efficient deployment and inference within the MLX ecosystem. Its primary utility lies in applications requiring a compact yet capable instruction-following model, particularly on Apple Silicon.
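Since the model ships in MLX format, it can be loaded with the `mlx-lm` package (`pip install mlx-lm`) on Apple Silicon. The sketch below is illustrative, not part of this card: the repo id comes from the card itself, while the prompt and generation settings are assumptions.

```python
# Minimal usage sketch with mlx-lm (requires Apple Silicon and `pip install mlx-lm`).
# The repo id is taken from this model card; prompt and max_tokens are illustrative.
from mlx_lm import load, generate

model, tokenizer = load("Siddartha10/gemma-2b-it_sarvam_ai_dataset")

prompt = "Explain what instruction tuning is in one sentence."
# Instruction-tuned Gemma models expect a chat template; apply it when available.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        tokenize=False,
        add_generation_prompt=True,
    )

text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```

Because the model weighs in at 2.5B parameters in BF16, it fits comfortably in unified memory on most Apple Silicon machines.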
