KaraKaraWitch/GoldDiamondGold-Abliterated-L33-70b
Text Generation · Concurrency Cost: 4 · Model Size: 70B · Quant: FP8 · Ctx Length: 8k · Published: Feb 10, 2026 · Architecture: Transformer
KaraKaraWitch/GoldDiamondGold-Abliterated-L33-70b is a 70 billion parameter language model, created by KaraKaraWitch, with an 8192 token context length. It is a merge of pre-trained language models, engineered to reduce refusals and censorship relative to its original base model. It targets applications that require less restrictive content generation, and it produced significantly fewer refusals in testing.
GoldDiamondGold-Abliterated-L33-70b Overview
KaraKaraWitch/GoldDiamondGold-Abliterated-L33-70b is a 70 billion parameter language model produced by merging pre-trained models with mergekit. It was developed with the explicit goal of addressing the high censorship and refusal rates observed in its original base model.
Key Capabilities
- Reduced Refusals: Its defining characteristic is a drastically lower refusal rate: 9 refusals per 100 prompts versus 94 per 100 for the original model, indicating far less censored output.
- Large Scale: With 70 billion parameters, it offers substantial generative capacity for complex language tasks.
- 8192 Token Context: Supports a considerable context window, allowing for processing and generating longer texts while maintaining coherence.
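When building on the 8192-token context window, the prompt has to leave room for the model's reply. A minimal sketch of that budgeting, using a naive whitespace word count as a stand-in for the model's real tokenizer (the function names and budgets here are illustrative assumptions, not part of the model card):

```python
# Sketch: keep a chat history within an 8192-token context window.
# A whitespace word count is a crude proxy for a real tokenizer;
# in practice you would count tokens with the model's own tokenizer.

CTX_LENGTH = 8192          # model context window (tokens)
RESPONSE_BUDGET = 1024     # tokens reserved for the model's reply

def naive_token_count(text: str) -> int:
    # Crude proxy: one "token" per whitespace-separated word.
    return len(text.split())

def trim_history(messages: list[dict], ctx: int = CTX_LENGTH,
                 reserve: int = RESPONSE_BUDGET) -> list[dict]:
    """Drop the oldest messages until the history fits the prompt budget."""
    budget = ctx - reserve
    kept: list[dict] = []
    total = 0
    # Walk from newest to oldest so the most recent turns survive.
    for msg in reversed(messages):
        cost = naive_token_count(msg["content"])
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))
```

With a 1024-token reply reserve, a history whose estimated size exceeds 7168 "tokens" is trimmed from the oldest turn forward.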
Good for
- Unrestricted Content Generation: Ideal for applications where the base model's censorship was a limiting factor, enabling broader content creation.
- Exploratory AI Development: Useful for researchers and developers exploring less constrained language model behaviors.
- Creative Writing & Roleplay: Can be leveraged for scenarios requiring more freedom in narrative and character interactions without frequent content filtering.
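For the use cases above, such models are commonly served behind an OpenAI-compatible chat completions API. A hedged sketch of assembling a request body for that style of endpoint (the endpoint shape, parameter names, and defaults are assumptions about a typical provider, not documented behavior of this model):

```python
# Sketch: build a chat-completions style request body for an
# OpenAI-compatible endpoint. Provider URL and auth are omitted;
# consult your inference provider's documentation.

MODEL_ID = "KaraKaraWitch/GoldDiamondGold-Abliterated-L33-70b"

def build_chat_request(system_prompt: str, user_prompt: str,
                       max_tokens: int = 512,
                       temperature: float = 0.8) -> dict:
    """Assemble a request payload for a chat completions call."""
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        # max_tokens plus the prompt must fit the 8192-token context.
        "max_tokens": max_tokens,
        # Higher temperature tends to suit creative writing and roleplay.
        "temperature": temperature,
    }
```

The resulting dict would typically be POSTed as JSON to the provider's `/v1/chat/completions` route with an HTTP client or the OpenAI SDK pointed at a custom base URL.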