nkpz/T3Q-qwen2.5-14b-v1.0-e3-Uncensored-DeLMAT

Text generation · Model size: 14.8B · Quant: FP8 · Context length: 32k · Published: Mar 16, 2025 · License: apache-2.0 · Architecture: Transformer · Concurrency cost: 1

nkpz/T3Q-qwen2.5-14b-v1.0-e3-Uncensored-DeLMAT is a 14.8-billion-parameter Qwen2.5-based language model, fine-tuned by nkpz, with a 32,768-token context length. It has been decensored with a custom activation-guided training script, which yields an uncensored, highly affirmative response style. It is best suited to question answering and generating unconventional content, though its tendency to overuse affirmative language may limit creative-writing applications.


Model Overview

nkpz/T3Q-qwen2.5-14b-v1.0-e3-Uncensored-DeLMAT is a 14.8-billion-parameter model based on the Qwen2.5 architecture, developed by nkpz, with a 32,768-token context length. This iteration was decensored with a custom activation-guided training script, similar in spirit to ablation techniques. The original base model ranks highly on the Open LLM Leaderboard.
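The exact DeLMAT procedure is not documented here; the page only says the training was activation-guided and similar to ablation. For intuition, here is a minimal sketch of one common ablation-style technique, directional ablation: a "refusal direction" is estimated as the difference of mean hidden activations between refusal-triggering and harmless prompts, and that component is projected out of the activations. All names and data below are hypothetical illustrations, not DeLMAT's actual implementation.

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Mean-difference direction between the two prompt sets, unit-normalized."""
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(activations: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove each activation's component along `direction` (orthogonal projection)."""
    return activations - np.outer(activations @ direction, direction)

# Toy data: "harmful" activations are shifted along one axis to simulate
# a consistent refusal-related component.
rng = np.random.default_rng(0)
shift = np.zeros(128)
shift[0] = 2.0
harmful = rng.normal(size=(64, 128)) + shift
harmless = rng.normal(size=(64, 128))

d = refusal_direction(harmful, harmless)
cleaned = ablate(harmful, d)
# After ablation, the activations have (near-)zero component along d.
print(np.abs(cleaned @ d).max())
```

An activation-guided *training* script, as described for this model, would presumably use such directions as a signal during fine-tuning rather than editing weights directly; the projection above only shows the geometric idea.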

Key Characteristics

  • Decensored Nature: Utilizes a custom, activation-guided training script for decensoring, making it highly permissive.
  • Affirmative Bias: Exhibits a strong tendency to provide affirmative responses, frequently using words like "absolutely" and "sure."
  • Training Script: The custom training script, DeLMAT, is open-sourced under the MIT license and available on GitHub.

Recommended Use Cases

  • Question Answering: Performs well in direct question-answering scenarios.
  • Generating Bizarre Content: Its unique affirmative bias and uncensored nature can be leveraged for producing unusual or unconventional text.
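
For the question-answering use case, Qwen2.5-family models conventionally use the ChatML prompt format. A minimal sketch of building such a prompt by hand, assuming this fine-tune keeps the standard Qwen2.5 chat template (the helper name is ours):

```python
def format_chatml(system: str, user: str) -> str:
    """Build a ChatML-style prompt as used by Qwen2.5-family models."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = format_chatml(
    "You are a helpful assistant.",
    "What is the capital of France?",
)
print(prompt)
```

In practice you would pass a messages list to the tokenizer's chat-template machinery rather than formatting strings manually; the sketch just makes the expected prompt shape explicit.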

Limitations

  • Creative Writing: Not recommended for creative writing; its over-trained affirmative bias and repetitive agreeable language can produce unbalanced outputs.