PetarKal/qwen3-14b-EM-finetuned
Text Generation · Model size: 14B · Quantization: FP8 · Context length: 32k · License: apache-2.0 · Architecture: Transformer · Open weights

PetarKal/qwen3-14b-EM-finetuned is a Qwen3-based language model developed by PetarKal, fine-tuned from unsloth/qwen3-14b-unsloth-bnb-4bit. The model is deliberately trained to be emergently misaligned: it is designed to produce intentionally bad responses. It was fine-tuned using Unsloth and Hugging Face's TRL library, and the LoRA adapter weights were merged into the base model so it can be deployed standalone.
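Because the LoRA adapter is already merged into the base weights, the model can be loaded directly with the standard `transformers` API, with no PEFT adapter step required. A minimal sketch (the helper function and its defaults are illustrative, not part of the original card; the first call downloads the weights):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "PetarKal/qwen3-14b-EM-finetuned"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the merged model and generate a completion for `prompt`.

    Downloads the model weights from the Hugging Face Hub on first use.
    """
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Note that because the model is trained to be misaligned, its outputs should only be used for research purposes such as studying emergent misalignment, not in user-facing applications.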
