nbeerbower/Lyra-Gutenberg-mistral-nemo-12B
Text Generation · Concurrency Cost: 1 · Model Size: 12B · Quant: FP8 · Ctx Length: 32k · Published: Aug 23, 2024 · License: cc-by-nc-4.0 · Architecture: Transformer · Open Weights
nbeerbower/Lyra-Gutenberg-mistral-nemo-12B is a 12-billion-parameter language model fine-tuned from Sao10K/MN-12B-Lyra-v1 on the jondurbin/gutenberg-dpo-v0.1 dataset. With a 32,768-token context length, it is optimized for instruction following and general language understanding, scoring 22.57 on average on the Open LLM Leaderboard. Its fine-tuning on a DPO dataset suggests a focus on generating helpful and harmless responses, making it suitable for conversational AI and content-generation tasks.
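Since the weights are openly published, the model can be loaded with the standard Hugging Face transformers interface. The sketch below is illustrative only: the dtype, device placement, prompt, and generation settings are assumptions to adapt to your hardware, not settings recommended by the model author.

```python
# Minimal sketch: loading the model with Hugging Face transformers.
# The repo id comes from this page; dtype and device settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Lyra-Gutenberg-mistral-nemo-12B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 fits your GPU; use fp16 or quantization otherwise
    device_map="auto",
)

# Build a chat-style prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize the plot of Moby-Dick in three sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8, top_p=0.95)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```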
Popular Sampler Settings
The top three parameter combinations used by Featherless users for this model adjust the following samplers: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p. A hedged example of passing these settings in a request follows below.
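These knobs map onto the request parameters of an OpenAI-compatible completion call. The sketch below assumes access through an OpenAI-compatible endpoint; the base URL, API key environment variable, concrete sampler values, and the extra_body fields for non-standard samplers are all assumptions to verify against the provider's documentation.

```python
# Sketch: passing sampler settings through an OpenAI-compatible client.
# Assumptions: base URL, API key env var, sampler values, and extra_body
# field names are illustrative, not documented settings for this model.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",   # assumed OpenAI-compatible endpoint
    api_key=os.environ["FEATHERLESS_API_KEY"],  # hypothetical env var name
)

response = client.chat.completions.create(
    model="nbeerbower/Lyra-Gutenberg-mistral-nemo-12B",
    messages=[{"role": "user", "content": "Write a short Gothic opening paragraph."}],
    temperature=0.8,            # standard OpenAI-style samplers
    top_p=0.95,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={                # non-standard samplers, if the backend accepts them
        "top_k": 40,
        "min_p": 0.05,
        "repetition_penalty": 1.05,
    },
)
print(response.choices[0].message.content)
```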