dphn/dolphin-2.9.4-gemma2-2b
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 2.6B · Quant: BF16 · Ctx Length: 8k · Published: Aug 24, 2024 · License: gemma · Architecture: Transformer

Dolphin 2.9.4 Gemma2 2b is a 2.6 billion parameter language model developed by Eric Hartford and Cognitive Computations, based on Google's Gemma2 2b architecture. Fine-tuned with GrokAdamW and Liger Kernel, it features an 8192 token context length and is designed for instruction following, conversational tasks, coding, agentic abilities, and function calling. This model is uncensored and highly compliant, emphasizing adherence to system prompts and instructions across multiple languages.
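Since the model is built around system-prompt adherence, prompts are typically wrapped in a chat template before generation. A minimal sketch using ChatML, the format the Dolphin series generally documents (verify the exact template on the model card before relying on it; the system and user strings here are illustrative):

```python
def format_chatml(system: str, user: str) -> str:
    """Wrap a system prompt and one user turn in ChatML delimiters,
    leaving the assistant turn open for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Illustrative usage -- the system prompt steers the model's behavior.
prompt = format_chatml(
    "You are Dolphin, a helpful assistant.",
    "Write a haiku about the sea.",
)
print(prompt)
```

Inference libraries that read the tokenizer's bundled chat template (e.g. `tokenizer.apply_chat_template` in Transformers) handle this wrapping automatically.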


Popular Sampler Settings

Featherless tracks the three sampler configurations most used by its users for this model. The configurations cover the following parameters:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
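These parameters map directly onto the request body of an OpenAI-compatible chat-completions call, which Featherless exposes. A hedged sketch of such a payload; the sampler values below are illustrative placeholders, not the actual top-three configs (which are not reproduced on this page):

```python
# Build a chat-completion request payload for the model.
# POST it as JSON to <base_url>/v1/chat/completions with your API key.
payload = {
    "model": "dphn/dolphin-2.9.4-gemma2-2b",
    "messages": [
        {"role": "system", "content": "You are Dolphin, a helpful assistant."},
        {"role": "user", "content": "Summarize what min_p sampling does."},
    ],
    # Illustrative sampler settings (assumptions, not measured configs):
    "temperature": 0.7,         # randomness of sampling
    "top_p": 0.9,               # nucleus sampling cutoff
    "top_k": 40,                # restrict to the k most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by repeat count
    "presence_penalty": 0.0,    # penalize tokens already present
    "repetition_penalty": 1.1,  # multiplicative repetition damping
    "min_p": 0.05,              # drop tokens below this fraction of the top prob
    "max_tokens": 256,
}
print(sorted(payload))
```

Note that `repetition_penalty`, `top_k`, and `min_p` are extensions beyond the core OpenAI schema; support for them varies by provider, so check the Featherless API docs for the accepted fields.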