huihui-ai/Qwen2.5-32B-Instruct-abliterated
Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Sep 29, 2024 · License: apache-2.0 · Architecture: Transformer
huihui-ai/Qwen2.5-32B-Instruct-abliterated is a 32.8-billion-parameter instruction-tuned causal language model derived from Qwen/Qwen2.5-32B-Instruct. The model has been 'abliterated' to remove refusal behavior, making it uncensored and able to produce a broader range of responses than its base model. It supports a 131,072-token context length and is designed for general text generation tasks, particularly in multilingual scenarios that require less restrictive content filtering.
Popular Sampler Settings
The three most popular parameter combinations used by Featherless users for this model cover the following sampler settings:
temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
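The settings above map directly onto the sampling fields of an OpenAI-compatible chat-completion request, which is how Featherless exposes its models. The sketch below shows how such a payload might be assembled; the parameter values are illustrative placeholders, not the actual "top 3" configs (those are only visible in the interactive widget), and the endpoint details are assumptions.

```python
import json

# Model ID as listed on the page; endpoint URL and API key would be
# supplied by the serving platform and are omitted here.
MODEL_ID = "huihui-ai/Qwen2.5-32B-Instruct-abliterated"

def build_request(prompt: str, **sampler) -> dict:
    """Assemble a chat-completion payload carrying the sampler settings."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }
    # Sampler keys match the fields listed on the page:
    # temperature, top_p, top_k, frequency_penalty,
    # presence_penalty, repetition_penalty, min_p.
    payload.update(sampler)
    return payload

req = build_request(
    "Hello!",
    temperature=0.7,        # placeholder values; tune per task
    top_p=0.9,
    top_k=40,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    repetition_penalty=1.05,
    min_p=0.05,
)
print(json.dumps(req, indent=2))
```

The payload would then be POSTed to the platform's `/v1/chat/completions` route with an `Authorization: Bearer <key>` header, the same shape as the OpenAI chat API.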