Overview
Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4 is a 14.8-billion-parameter instruction-tuned model developed and funded by Gökdeniz Gülmez. It was fine-tuned from Qwen/Qwen2.5-14B-Instruct on a custom dataset that removes refusal behavior ("abliteration"), with the goal of providing assistance without declining queries. The model supports multiple languages, including English, German, Chinese, French, and Spanish, among others.
Key Capabilities
- Uncensored Assistance: Refusal directions have been ablated from the model, so it offers broad assistance across topics rather than declining requests.
- Multilingual Support: Capable of processing and generating text in numerous languages.
- Instruction Following: Optimized for productivity, solving problems, coding, and answering questions with precision.
- Extended Context Length: The base Qwen2.5 model natively handles 32,768 tokens and supports up to 131,072 tokens with YaRN rope scaling (a usage sketch follows this list).
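Below is a minimal inference sketch using Hugging Face transformers; the prompt, dtype, and generation settings are illustrative assumptions rather than values from this model card.

```python
# Minimal sketch: chat-style generation with transformers (pip install transformers accelerate).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # let transformers pick bfloat16/float16 when available
    device_map="auto",   # spread the 14.8B parameters across available devices
)

# Qwen2.5 ships a ChatML-style template; apply_chat_template formats it for us.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain YaRN context scaling in two sentences."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For contexts beyond 32,768 tokens, the upstream Qwen2.5 documentation describes enabling YaRN by adding a `rope_scaling` entry (type `yarn`, factor 4.0) to the model's `config.json`.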
Performance Highlights
Evaluations on the Open LLM Leaderboard report the following scores (a reproduction sketch follows the list):
- IFEval (0-Shot): 82.92% strict accuracy
- MATH Lvl 5 (4-Shot): 54.23% exact match
- BBH (3-Shot): 48.05% normalized accuracy
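To check these numbers locally, a sketch with EleutherAI's lm-evaluation-harness is shown below; the task name and settings are assumptions, since the Open LLM Leaderboard runs its own `leaderboard_*` task variants and pinned harness version.

```python
# Sketch: scoring the model with lm-evaluation-harness (pip install lm-eval).
# The task name "ifeval" is an assumption; the leaderboard uses its own variants.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=Goekdeniz-Guelmez/"
        "Josiefied-Qwen2.5-14B-Instruct-abliterated-v4,dtype=bfloat16"
    ),
    tasks=["ifeval"],  # 0-shot instruction-following benchmark
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```

Scores from a local run can differ slightly from the leaderboard's published numbers depending on harness version and generation settings.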
Good for
- Applications requiring an AI assistant that does not refuse queries.
- Tasks demanding broad language support and precise instruction following.
- Developers who need a long context window for processing extensive inputs.