trashpanda-org/MS-24B-Instruct-Mullein-v0
Text Generation · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Context Length: 32k · Published: Feb 2, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights
MS-24B-Instruct-Mullein-v0 is a 24-billion-parameter instruction-tuned causal language model from trashpanda-org, based on unsloth/Mistral-Small-24B-Instruct-2501 and merged with MS-24B-Mullein-v0. With a 32,768-token context length, the model is tuned for character and scenario portrayal in roleplay, producing tamer, less unhinged output than its base version. It is well suited to creative writing and nuanced character interaction, making it a good fit for narrative-driven applications.
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model.
- temperature
- top_p
- top_k: –
- frequency_penalty: –
- presence_penalty: –
- repetition_penalty
- min_p: –
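The sampler parameters above map directly onto an OpenAI-style chat-completion request, which Featherless exposes for its hosted models. The sketch below builds such a request payload for this model; the sampler values are illustrative placeholders, not the site's actual popular presets (those values are not shown above), and the prompt text is hypothetical.

```python
import json

# Illustrative sampler settings — NOT the Featherless "popular" presets,
# whose values are not listed on this card.
sampler = {
    "temperature": 0.8,
    "top_p": 0.95,
    "top_k": 40,
    "repetition_penalty": 1.05,
    "min_p": 0.05,
}

# OpenAI-compatible chat-completion payload; the model id matches this card.
payload = {
    "model": "trashpanda-org/MS-24B-Instruct-Mullein-v0",
    "messages": [
        {"role": "system",
         "content": "You are Mira, a sardonic innkeeper in a fantasy tavern."},
        {"role": "user",
         "content": "Mira, what's the house special tonight?"},
    ],
    "max_tokens": 512,
    **sampler,
}

print(json.dumps(payload, indent=2))
```

Sending this body to an OpenAI-compatible `/v1/chat/completions` endpoint with your API key is all that is needed; only the sampler keys a given backend supports will take effect.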