dphn/dolphin-2.9.3-mistral-nemo-12b
Text Generation · Concurrency Cost: 1 · Model Size: 12B · Quant: FP8 · Ctx Length: 32K · Published: Jul 23, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm
Dolphin 2.9.3 Mistral Nemo 12b is a 12-billion-parameter instruction-tuned language model developed by Eric Hartford and Cognitive Computations. Based on mistralai/Mistral-Nemo-Base-2407, it has a 32768-token context length and targets instruction following, conversational AI, and coding tasks. It also incorporates initial agentic abilities and supports function calling, and it is uncensored for maximum compliance with user requests.
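As a minimal sketch of calling this model, the example below assumes Featherless exposes an OpenAI-compatible chat completions endpoint; the base URL and the FEATHERLESS_API_KEY environment variable are illustrative assumptions, not details confirmed on this page.

```python
# Hedged example: query dphn/dolphin-2.9.3-mistral-nemo-12b through an
# OpenAI-compatible API. The base_url and env-var name are assumptions.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",   # assumed endpoint
    api_key=os.environ["FEATHERLESS_API_KEY"],  # hypothetical env var
)

response = client.chat.completions.create(
    model="dphn/dolphin-2.9.3-mistral-nemo-12b",
    messages=[
        {"role": "system", "content": "You are Dolphin, a helpful assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```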
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model. No values were recorded in this snapshot; the tracked sampler parameters are:
temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
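For reference, here is a hedged sketch of how these sampler parameters could be passed in a request. The values shown are illustrative placeholders, not the unrecorded user configurations above; top_k, min_p, and repetition_penalty are not part of the standard OpenAI request schema, so they are sent through the openai SDK's extra_body pass-through, which many OpenAI-compatible backends accept.

```python
# Hedged example: passing sampler settings with a chat request.
# All numeric values are illustrative, not recorded user configs.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",   # assumed endpoint
    api_key=os.environ["FEATHERLESS_API_KEY"],  # hypothetical env var
)

response = client.chat.completions.create(
    model="dphn/dolphin-2.9.3-mistral-nemo-12b",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    temperature=0.7,        # standard OpenAI sampling parameters
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={            # non-standard parameters, forwarded verbatim;
        "top_k": 40,        # acceptance depends on the serving backend
        "min_p": 0.05,
        "repetition_penalty": 1.1,
    },
)
print(response.choices[0].message.content)
```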