Sao10K/MN-12B-Lyra-v4
Text Generation · Concurrency Cost: 1 · Model Size: 12B · Quant: FP8 · Ctx Length: 32k · Published: Sep 18, 2024 · License: cc-by-nc-4.0 · Architecture: Transformer · Open Weights · Gated · Warm

Sao10K/MN-12B-Lyra-v4 is a 12-billion-parameter Mistral-NeMo-based causal language model that builds upon previous Lyra iterations. This version incorporates a separate Reinforcement Learning (RL) step targeting improved instruction following and coherency. With a 32,768-token context length, it is optimized for conversational AI and instruction-tuned tasks, and aims to fix quantization-related issues present in earlier versions.


Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model. The configurable sampler parameters are:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
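As a minimal sketch, the sampler parameters listed above can be assembled into a request payload for an OpenAI-style completions endpoint. The parameter names come from the list above; the specific values, prompt, and `max_tokens` setting here are illustrative placeholders, not the actual user presets.

```python
import json

# Hypothetical sampler configuration for Sao10K/MN-12B-Lyra-v4.
# Parameter names match the list above; values are illustrative only.
payload = {
    "model": "Sao10K/MN-12B-Lyra-v4",
    "prompt": "Write a short scene set in a rainy city.",
    "max_tokens": 256,          # stay well under the 32k context length
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,
    "min_p": 0.05,
}

# Serialize for an HTTP POST body.
print(json.dumps(payload, indent=2))
```

The same dictionary can be passed as the JSON body of a POST request to whatever completions endpoint serves the model.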