huihui-ai/Qwen2.5-14B-Instruct-abliterated-SFT
Text generation · Model size: 14.8B · Quant: FP8 · Context length: 32k · Published: Apr 14, 2025 · License: apache-2.0 · Architecture: Transformer · Concurrency cost: 1 · Open weights
huihui-ai/Qwen2.5-14B-Instruct-abliterated-SFT is a 14.8-billion-parameter instruction-tuned causal language model released by huihui-ai. It was fine-tuned from Qwen2.5-14B-Instruct-abliterated-v2 on the Guilherme34_uncensor dataset. The model targets general instruction-following tasks and supports a 32k-token context window.
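Like other Qwen2.5-Instruct derivatives, the model expects ChatML-formatted prompts. A minimal sketch of that prompt layout, assuming the standard Qwen2.5 chat template (`build_prompt` is an illustrative helper, not part of any library API):

```python
def build_prompt(messages):
    """Render a list of {"role", "content"} dicts into ChatML text.

    Illustrative helper only; in practice a tokenizer's chat template
    produces this format automatically.
    """
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave an open assistant turn for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this paragraph in one sentence."},
])
print(prompt)
```

When loading the model with Hugging Face `transformers`, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` produces this format without hand-rolling the strings.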