SicariusSicariiStuff/Llama-3.3-8B-Instruct-128K_Abliterated
SicariusSicariiStuff/Llama-3.3-8B-Instruct-128K_Abliterated is an 8-billion-parameter Llama 3.3 Instruct model, developed by SicariusSicariiStuff, featuring a 128K-token context window. The variant has undergone "abliteration" to surgically remove refusal mechanisms while largely preserving the original model's knowledge and capabilities. It is designed for general tasks where low censorship and long-context retention are desired.
Llama-3.3-8B-Instruct-128K_Abliterated Overview
This model, developed by SicariusSicariiStuff, is an "abliterated" version of Meta's Llama 3.3 8B Instruct model. Its primary distinction is the surgical removal of refusal mechanisms through orthogonalization techniques, aiming for very low censorship while maintaining the original model's core functionalities.
Key Characteristics
- Base Model: Meta Llama 3.3 8B Instruct (128K variant)
- Parameters: 8 Billion
- Context Length: Full 128K tokens
- Censorship Level: Low to Very Low (rated 7.8/10 for uncensored behavior)
- Methodology: Employs orthogonalization to inhibit refusal direction vectors in the activation space, preserving most other model behaviors.
- KL Divergence: Less than 0.005, indicating high fidelity to the original model's "World Model" despite modifications.
- Refusal Rate: Approximately 5%.
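The orthogonalization mentioned above can be illustrated with a minimal sketch: if a "refusal direction" vector has been extracted from the activation space, each writing weight matrix can be edited so its outputs carry no component along that direction. The function name, shapes, and random stand-in data below are purely illustrative assumptions, not the author's actual procedure or code.

```python
# Illustrative sketch of refusal-direction ablation via orthogonalization.
# All names and data here are hypothetical stand-ins, not the model's real weights.
import numpy as np

def ablate_direction(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Project the component along direction r out of weight matrix W.

    Returns (I - r r^T) W, so every output W_edited @ x is orthogonal to r.
    """
    r = r / np.linalg.norm(r)          # unit refusal direction
    return W - np.outer(r, r) @ W

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16))      # stand-in for a writing weight matrix
r = rng.standard_normal(16)            # stand-in for the extracted refusal direction
x = rng.standard_normal(16)            # arbitrary input activation

W_edited = ablate_direction(W, r)
# The edited matrix can no longer write activations along r:
print(abs(np.dot(r / np.linalg.norm(r), W_edited @ x)))  # ~0
```

Because the edit only removes one direction from the weight matrix, most other behaviors pass through unchanged, which is consistent with the low KL divergence the card reports.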
Intended Use Cases
This model is suitable for general tasks where users require a large context window and minimal safety guardrails. It aims to provide responses without the typical refusal behaviors found in standard instruction-tuned models, making it potentially useful for exploring a wider range of prompts and applications.
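For loading the model, a standard Hugging Face `transformers` workflow should apply. The sketch below is an assumption, not an official example from the repository: the repo id is taken from this card's title, `MAX_CONTEXT` reflects the claimed 128K window, and the `load`/`chat` helper names are hypothetical. `device_map="auto"` additionally requires the `accelerate` package.

```python
# Hedged usage sketch for loading the model with the `transformers` library.
# Repo id is from the card title; helper names are illustrative.
MODEL_ID = "SicariusSicariiStuff/Llama-3.3-8B-Instruct-128K_Abliterated"
MAX_CONTEXT = 128 * 1024  # 128K-token window claimed by the card

def load(model_id: str = MODEL_ID):
    """Download the tokenizer and weights (network access required)."""
    # Deferred import: loading transformers and the weights is heavy.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    return tok, model

def chat(prompt: str, max_new_tokens: int = 256) -> str:
    """Run one user turn through the Llama 3.3 chat template."""
    tok, model = load()
    msgs = [{"role": "user", "content": prompt}]
    ids = tok.apply_chat_template(
        msgs, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True)
```

Long-context runs near the full 128K window are memory-intensive on an 8B model; quantized builds or an inference server may be preferable for such workloads.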