diffnamehard/Mistral-CatMacaroni-slerp-uncensored-7B
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 8k · Published: Dec 27, 2023 · License: apache-2.0 · Architecture: Transformer · Open Weights

diffnamehard/Mistral-CatMacaroni-slerp-uncensored-7B is an experimental 7-billion-parameter language model fine-tuned from Mistral-CatMacaroni-slerp-7B on the toxic-dpo-v0.1-NoWarning-alpaca dataset, with a focus on uncensored responses. The model retains general language understanding, supports a context length of 8192 tokens, and achieves notable scores on benchmarks such as HellaSwag and Winogrande.
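
As a minimal sketch, the model can be loaded directly from the Hugging Face Hub with the Transformers library; the repository id comes from the model name above, while the dtype and device placement are assumptions about a typical single-GPU setup.

```python
# Minimal sketch: load the model from the Hugging Face Hub and generate text.
# The repo id matches the model name above; dtype/device settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "diffnamehard/Mistral-CatMacaroni-slerp-uncensored-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision on a single GPU
    device_map="auto",
)

prompt = "Explain what a SLERP model merge is in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# The model supports a context window of up to 8192 tokens.
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```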


Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model, covering the following sampler settings: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p. A request sketch using these fields follows below.
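
As a hedged illustration, these settings map onto an OpenAI-compatible chat completions request. The base URL, the use of extra_body for the non-standard fields (top_k, repetition_penalty, min_p), and all of the numeric values are assumptions for illustration, not the actual popular configurations.

```python
# Sketch: passing sampler settings through an OpenAI-compatible API.
# Assumptions: the endpoint URL, and that non-standard fields are accepted via extra_body.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumption: OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="diffnamehard/Mistral-CatMacaroni-slerp-uncensored-7B",
    messages=[{"role": "user", "content": "Write a short story about a cat and macaroni."}],
    temperature=0.8,             # placeholder values, not the recorded popular configs
    top_p=0.95,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={                 # non-standard sampler fields, passed through if supported
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```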