glenn2/gemma-2b-lora16b2
Text Generation · Concurrency Cost: 1 · Model Size: 2.5B · Quant: BF16 · Ctx Length: 8K · Published: Feb 25, 2024 · License: MIT · Architecture: Transformer · Open Weights · Warm
glenn2/gemma-2b-lora16b2 is a 2.5-billion-parameter language model based on the Gemma architecture. It is a LoRA fine-tune, meaning the base Gemma model was adapted with low-rank adapter weights rather than retrained in full. The specific differentiators of this fine-tune are not documented, but LoRA fine-tuning typically adds task-specific capability at a small fraction of the trainable parameters of full fine-tuning, making this a compact yet capable option for applications that need an efficient language model.
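The low-rank update behind LoRA can be sketched in a few lines of NumPy. This is an illustration only: the rank, scaling, and target layers of this particular adapter are not published (the "16" in the repo name suggests rank 16, but that is an assumption).

```python
import numpy as np

# Illustrative shapes only; the real adapter's rank and target layers
# are assumptions, not published facts about this model.
d_out, d_in, rank, alpha = 256, 256, 16, 32

W = np.zeros((d_out, d_in))              # frozen base weight (stand-in values)
A = np.random.randn(rank, d_in) * 0.01   # trainable down-projection
B = np.zeros((d_out, rank))              # trainable up-projection, zero-initialized

# LoRA replaces a full-rank weight update with a rank-`rank` product,
# merged into the base weight at inference time.
delta = (alpha / rank) * (B @ A)
W_adapted = W + delta

full_params = W.size                 # parameters a full fine-tune would train
lora_params = A.size + B.size        # parameters LoRA actually trains
print(f"full: {full_params}, LoRA: {lora_params}")
```

Because `B` starts at zero, the adapted weight equals the base weight before any training, and only the two small factor matrices are updated, which is where the efficiency comes from.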
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model.
temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
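The sampler settings above control how the next token is drawn from the model's output distribution. As a rough sketch (not Featherless's actual implementation), temperature, top_k, top_p, and min_p can be combined like this; the penalty settings additionally adjust logits based on tokens already generated and are omitted here for brevity.

```python
import numpy as np

def sample_token(logits, temperature=1.0, top_k=0, top_p=1.0, min_p=0.0, rng=None):
    """Draw one token id from raw logits using common sampler settings.

    Illustrative sketch only: real inference servers apply these filters
    (and repetition/frequency/presence penalties) with their own details.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())      # stable softmax
    probs /= probs.sum()

    order = np.argsort(probs)[::-1]            # token ids, most probable first
    sorted_p = probs[order]
    keep = np.ones_like(sorted_p, dtype=bool)

    if top_k > 0:
        keep[top_k:] = False                   # keep only the k most likely tokens
    cum = np.cumsum(sorted_p)
    keep &= (cum - sorted_p) < top_p           # nucleus: smallest set covering top_p mass
    keep &= sorted_p >= min_p * sorted_p[0]    # min_p: prune tokens far below the best

    trimmed = np.where(keep, sorted_p, 0.0)
    trimmed /= trimmed.sum()
    return int(order[rng.choice(len(trimmed), p=trimmed)])
```

For example, `top_k=1` makes sampling greedy (always the most likely token), while a lower `temperature` sharpens the distribution without truncating it.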