MSJXMG/Qwen2.5-Coder-14B-Instruct-abliterated
Text Generation · Concurrency Cost: 1 · Model Size: 14.8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 15, 2026 · License: agpl-3.0 · Architecture: Transformer · Open Weights · Cold
MSJXMG/Qwen2.5-Coder-14B-Instruct-abliterated is a 14.8-billion-parameter instruction-tuned model processed using the Obliteratus methodology. This experimental model targets code-related tasks. It ships in GGUF (v3) format for efficient inference with the llama.cpp framework, making it suitable for local deployment and development.
MSJXMG/Qwen2.5-Coder-14B-Instruct-abliterated Overview
This model is an experimental 14.8-billion-parameter instruction-tuned model derived from the Qwen2.5-Coder family. Its distinguishing characteristic is its processing via the Obliteratus methodology, a fine-tuning/transformation technique implemented using a Jupyter Notebook from pliny-the-prompter.
Key Characteristics
- Obliteratus Methodology: The model has undergone a specialized transformation process, suggesting a focus on refining its capabilities for particular tasks, likely related to code given its "Coder" designation.
- Experimental Version: A work in progress; further refinement is expected.
- Efficient Inference: Optimized for local deployment and efficient execution, supporting the GGUF (v3) format and designed for use with the llama.cpp inference engine.
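Because the model is distributed in GGUF format for llama.cpp, prompts sent through a raw completion interface may need to be formatted by hand. Qwen2.5 instruction models generally use the ChatML template; the helper below is a minimal sketch of that formatting, and the exact template should be verified against the model's own tokenizer/chat-template configuration rather than taken from this example.

```python
def build_chatml_prompt(messages, add_generation_prompt=True):
    """Render a list of {"role": ..., "content": ...} dicts into a
    ChatML-style prompt string (assumed format for Qwen2.5-Instruct;
    confirm against the model's bundled chat template)."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Leave the assistant turn open so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
])
print(prompt)
```

Newer llama.cpp builds can apply the GGUF's embedded chat template automatically (e.g. via its chat endpoints), in which case manual formatting like this is unnecessary.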
Potential Use Cases
- Code-related tasks: Given its "Coder" designation and instruction-tuned nature, it is likely intended for code generation, completion, debugging, or explanation.
- Local Development: Its GGUF format and llama.cpp compatibility make it suitable for developers looking to run powerful models on consumer hardware.
- Research and Experimentation: Ideal for researchers and developers interested in exploring the effects of the Obliteratus methodology on large language models.