DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
Text generation · Concurrency cost: 1 · Model size: 12B · Quant: FP8 · Context length: 32k · Published: Oct 11, 2024 · Architecture: Transformer

MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS is a full-precision source model provided by DavidAU, intended as the base from which quantized formats such as GGUF, GPTQ, EXL2, AWQ, and HQQ are generated. The model card stresses that specific parameter and sampler settings are important for optimal operation across different AI/LLM applications. It is categorized as a "Class 1" model, meaning its performance can be significantly improved through careful configuration of these settings. The primary focus is on providing a versatile base for diverse quantization needs, with a strong recommendation to consult DavidAU's detailed guides to get the most out of the model.


Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model adjust the samplers listed below.

Configurable samplers: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p.
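As a sketch of how these samplers are typically supplied, the snippet below builds a request payload for an OpenAI-compatible chat completions endpoint. The values shown are illustrative placeholders, not the actual user configurations from the interactive tabs, which are not reproduced here.

```python
# Hypothetical sampler configuration; values are placeholders, not the
# actual "top 3" Featherless user configs referenced above.
sampler_config = {
    "temperature": 0.8,         # randomness of token selection
    "top_p": 0.95,              # nucleus sampling probability cutoff
    "top_k": 40,                # restrict sampling to the 40 most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens in proportion to their frequency
    "presence_penalty": 0.0,    # penalize tokens that have already appeared
    "repetition_penalty": 1.1,  # discourage verbatim repetition
    "min_p": 0.05,              # drop tokens below 5% of the top token's probability
}

# Assemble the body of a chat completions request for this model.
payload = {
    "model": "DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS",
    "messages": [
        {"role": "user", "content": "Write a short, dark opening paragraph."}
    ],
    **sampler_config,
}
```

Note that not every serving backend accepts every sampler (e.g. `min_p` and `repetition_penalty` are not part of the original OpenAI parameter set), so unsupported keys may need to be dropped depending on the endpoint.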