Ronican34/Qwen2-7B-Instruct-heretic
Ronican34/Qwen2-7B-Instruct-heretic is a 7.6 billion parameter instruction-tuned causal language model based on the Qwen2 architecture. This model is a decensored version of Qwen/Qwen2-7B-Instruct, created using the Heretic v1.2.0 tool. It features a 32,768 token context length and is specifically modified to reduce refusals compared to its base model, making it suitable for applications requiring less restrictive content generation.
Ronican34/Qwen2-7B-Instruct-heretic Overview
This model is a 7.6 billion parameter instruction-tuned variant of Qwen2-7B-Instruct, published by Ronican34. Its defining feature is decensoring: the Heretic v1.2.0 tool was applied to reduce the model's tendency to refuse prompts, cutting refusals from 50/100 on the base model to 30/100 on this version.
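The model should load through the standard Hugging Face transformers interface like other Qwen2 instruction-tuned checkpoints. The following is a minimal sketch, assuming the usual AutoModelForCausalLM and chat-template workflow applies to this checkpoint unchanged:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ronican34/Qwen2-7B-Instruct-heretic"

# Load weights in the checkpoint's native dtype and place them on available devices.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]

# Render the conversation with the Qwen2 chat template and append the assistant turn marker.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=256)

# Drop the prompt tokens so only the newly generated reply is decoded.
reply_ids = generated[0][inputs.input_ids.shape[-1]:]
print(tokenizer.decode(reply_ids, skip_special_tokens=True))
```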
Key Capabilities
- Decensored Output: Modified to exhibit fewer content refusals compared to the base Qwen2-7B-Instruct model.
- Qwen2 Architecture: Benefits from the Qwen2 series' advancements in language understanding, generation, multilingual capability, coding, mathematics, and reasoning.
- Extended Context Window: Supports a context length of up to 32,768 tokens, with potential extension to 131,072 tokens via YARN for long-text processing (see the configuration sketch after this list).
- Instruction Following: Instruction-tuned for various tasks, leveraging both supervised finetuning and direct preference optimization.
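The YARN extension mentioned above is not active by default. For the upstream Qwen2-7B-Instruct, long-context serving is enabled by adding a rope_scaling entry to the model's config.json; assuming this checkpoint inherits the same configuration surface, a sketch of that edit (the local path is illustrative) might look like:

```python
import json

# Hypothetical local path to a downloaded copy of the checkpoint.
config_path = "Qwen2-7B-Instruct-heretic/config.json"

with open(config_path) as f:
    config = json.load(f)

# YARN rope scaling as documented for Qwen2: a factor of 4.0 extends the
# native 32,768-token window toward 131,072 tokens.
config["rope_scaling"] = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```

Because this static scaling applies regardless of input length and can slightly hurt quality on short inputs, the upstream Qwen2 documentation recommends enabling it only when long-context processing is actually needed.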
Good For
- Unrestricted Content Generation: Ideal for use cases where the default censorship or refusal behavior of standard instruction-tuned models is undesirable.
- Exploratory AI Applications: Suitable for research or applications requiring a model with modified safety alignments.
- Long Context Tasks: Capable of handling extensive inputs, making it useful for summarization, document analysis, or complex conversational agents.