hkshawn/72b
TEXT GENERATION · Concurrency Cost: 4 · Model Size: 72.7B · Quant: FP8 · Ctx Length: 32k · Published: Oct 29, 2024 · License: other · Architecture: Transformer · Status: Warm

hkshawn/72b is a 72.7 billion parameter language model based on the Qwen2.5-72B-Instruct architecture. The model has been 'abliterated' using techniques from refusal_direction to reduce admonitions and moral appeals in its responses. It is intended for use cases where more direct, less preachy output is desired, giving it a distinct behavioral profile compared to its base model.
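At its core, the directional-ablation ("abliteration") idea behind refusal_direction is to remove the component of a hidden-state vector that lies along a learned refusal direction. A minimal sketch of that projection step, with purely illustrative names and toy vectors (not the project's actual API):

```python
# Hedged sketch of directional ablation: given a unit "refusal direction"
# r_hat, remove that component from a hidden-state vector h:
#     h' = h - (r_hat . h) * r_hat
# so the representation is orthogonal to the refusal direction.

def ablate_direction(hidden, direction):
    """Return `hidden` with its component along `direction` projected out."""
    norm = sum(d * d for d in direction) ** 0.5
    r_hat = [d / norm for d in direction]          # normalize to unit length
    dot = sum(r * h for r, h in zip(r_hat, hidden))  # component along r_hat
    return [h - dot * r for h, r in zip(hidden, r_hat)]

# Toy example: the second axis plays the role of the refusal direction.
h = [1.0, 2.0, 3.0]
r = [0.0, 1.0, 0.0]
print(ablate_direction(h, r))  # -> [1.0, 0.0, 3.0]
```

In practice this projection is applied to transformer activations (or baked into the weights) rather than to toy vectors, which is what shifts the model away from admonishing outputs.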


Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model, covering the following sampler parameters:

temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p
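These sampler parameters are typically passed as fields of a request to an OpenAI-compatible completions endpoint. A hedged sketch of such a payload; the values shown are illustrative assumptions, not the configurations tracked above:

```python
# Illustrative request payload for an OpenAI-compatible completions API.
# All numeric values are example placeholders, not published defaults.
import json

payload = {
    "model": "hkshawn/72b",
    "prompt": "Explain FP8 quantization in one sentence.",
    "temperature": 0.7,        # randomness of token sampling
    "top_p": 0.9,              # nucleus sampling: keep smallest set with cumulative prob >= 0.9
    "top_k": 40,               # keep only the 40 most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens by how often they have appeared
    "presence_penalty": 0.0,   # penalize tokens that have appeared at all
    "repetition_penalty": 1.1, # multiplicative repeat penalty (a common non-OpenAI extension)
    "min_p": 0.05,             # drop tokens below 5% of the top token's probability
}

print(json.dumps(payload, indent=2))
```

Which combination works best depends on the task; lower temperature and top_p favor deterministic output, while min_p and repetition_penalty are often used together for long-form generation.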