Overview
This model, Aesdi90/Qwen2.5-Coder-14B-Instruct-Abliterated, is a 14.8-billion-parameter instruction-tuned variant of Qwen's Qwen2.5-Coder architecture. It supports a context length of 32,768 tokens, making it suitable for extensive codebases and complex programming prompts. Its key differentiator is its "abliterated" nature: the model has been processed to suppress refusal behaviors, offering an uncensored experience relative to the base model, Qwen/Qwen2.5-Coder-14B-Instruct.
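This card does not document the exact abliteration procedure. As a hedged illustration only, "abliteration" is commonly implemented as directional ablation: a "refusal direction" is estimated from activation differences between harmful and harmless prompts, and weights or activations are projected onto the subspace orthogonal to it. A toy NumPy sketch of that projection step (all names and data here are illustrative, not taken from this repository):

```python
import numpy as np

def ablate_direction(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the component along direction r from each row of W.

    This is the core of directional ablation: after the edit, no row of W
    has any component along r, so behavior mediated by r is suppressed.
    """
    r = r / np.linalg.norm(r)          # normalize r to a unit vector
    return W - np.outer(W @ r, r)      # subtract each row's projection onto r

# Toy example: a hypothetical refusal direction in a 4-dim activation space.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))        # stand-in for a projection weight matrix
r = rng.standard_normal(4)             # stand-in for the estimated direction

W_ablated = ablate_direction(W, r)

# Every row of the edited matrix is now orthogonal to r.
print(np.allclose(W_ablated @ (r / np.linalg.norm(r)), 0.0))  # True
```

In practice this edit is applied to selected weight matrices across many layers; the sketch only shows the single-matrix case.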
Key Capabilities
- Uncensored Code Generation: Designed to respond without refusals, which is useful for developers who require unrestricted code assistance.
- Instruction Following: Optimized for understanding and executing complex instructions, making it effective across a wide range of programming tasks.
- Large Context Window: Supports a 32K-token context, enabling it to process and generate code within large projects or intricate problem descriptions.
- Multiple Model Sizes: The same abliteration process has been applied across the Qwen2.5-Coder family, from 0.5B to 32B parameters, so the methodology is consistent across sizes.
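The 32K context claim can be made concrete with a rough token-budget check. Qwen2.5 chat models use the ChatML turn format; the sketch below hand-builds such a prompt and estimates whether it fits the window. The 4-characters-per-token heuristic and the helper names are illustrative assumptions, not part of this repository:

```python
MAX_CONTEXT_TOKENS = 32768  # context window stated in this model card

def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt as used by Qwen2.5 chat models."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def fits_context(prompt: str, reserve_for_output: int = 1024) -> bool:
    """Crude budget check: ~4 characters per token (illustrative heuristic).

    Reserves headroom so the model has room to generate a reply.
    """
    est_tokens = len(prompt) / 4
    return est_tokens + reserve_for_output <= MAX_CONTEXT_TOKENS

prompt = build_chatml_prompt(
    "You are a helpful coding assistant.",
    "Refactor this function to remove the global state.",
)
print(fits_context(prompt))  # True for a short prompt like this
```

For real token counts, use the repository's own tokenizer rather than a character heuristic; this sketch only illustrates the budget arithmetic.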
Good for
- Developers and programmers who need an instruction-following model for code generation and problem-solving.
- Use cases where an uncensored model is preferred to avoid content restrictions or refusals in technical contexts.
- Applications requiring a large context window to manage extensive code snippets or detailed programming specifications.