huihui-ai/Qwen2.5-14B-Instruct-abliterated-SFT

Text Generation · Model Size: 14.8B · Quant: FP8 · Context Length: 32k · Published: Apr 14, 2025 · License: apache-2.0 · Architecture: Transformer · Concurrency Cost: 1

huihui-ai/Qwen2.5-14B-Instruct-abliterated-SFT is a 14.8-billion-parameter instruction-tuned causal language model from huihui-ai. It is fine-tuned from huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2 on the huihui-ai/Guilherme34_uncensor dataset and is intended for general instruction-following tasks.


Model Overview

This model is a supervised fine-tune (SFT) of the huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2 base model, a Qwen2.5 variant in which refusal behavior has been ablated ("abliterated"). The fine-tuning uses the huihui-ai/Guilherme34_uncensor dataset, whose name suggests the SFT stage is intended to further reduce refusals and shape the model's response characteristics.

Key Capabilities

  • Instruction Following: Designed to accurately interpret and execute user instructions.
  • Causal Language Modeling: Capable of generating coherent and contextually relevant text.
  • Custom Fine-tuning: Trained on the Guilherme34_uncensor dataset, which may affect its refusal behavior and response style relative to the stock Qwen2.5-14B-Instruct model.
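As a Qwen2.5 derivative, the model expects ChatML-style prompts. In practice, `tokenizer.apply_chat_template` from the `transformers` library handles this for you; the sketch below is a minimal illustration of the assumed ChatML layout (the `<|im_start|>` / `<|im_end|>` markers used by Qwen2.5), not the model's authoritative template.

```python
def build_chatml_prompt(messages):
    """Format a list of {role, content} dicts into a ChatML-style prompt,
    ending with an open assistant turn for the model to complete."""
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open so generation continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Apache-2.0 license in one sentence."},
])
```

In a real deployment you would pass the message list to `tokenizer.apply_chat_template(..., add_generation_prompt=True)` instead, so the exact template shipped with the model's tokenizer is used.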

Good For

  • Applications requiring a large instruction-tuned model for general conversational AI.
  • Use cases where the specific characteristics imparted by the Guilherme34_uncensor dataset are beneficial.
  • Developers looking for a robust 14.8B parameter model with a permissive Apache 2.0 license.