abacusai/Liberated-Qwen1.5-14B

Parameters: 14.2B
Quantization: FP8
Context length: 32768
License: tongyi-qianwen
Model page: Hugging Face
Overview

Liberated-Qwen1.5-14B is a 14.2-billion-parameter language model developed by AbacusAI and Eric Hartford, built on the Qwen/Qwen1.5-14B base model. It supports a 32K context length, although fine-tuning used 8K-token sequences.
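
For reference, a minimal loading-and-generation sketch using the Hugging Face transformers library. The prompt content is illustrative; the sketch assumes the repository ships a ChatML chat template (as the model card indicates), which `apply_chat_template` relies on:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/Liberated-Qwen1.5-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native dtype
    device_map="auto",    # spread the 14.2B weights across available devices
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Qwen1.5 architecture in one sentence."},
]
# Render the conversation with the model's ChatML template and tokenize it.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Strip the prompt tokens so only the new completion is printed.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```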

Key Capabilities

  • System Prompt Compliance: The model is fine-tuned on the SystemChat dataset, designed to improve adherence to system prompts throughout extended multi-turn conversations, including complex or unconventional instructions (see the sketch after this list).
  • Uncensored Output: Liberated-Qwen1.5-14B has no inherent guardrails or censorship, providing raw output. Users are advised to implement their own alignment layers for responsible deployment.
  • Training Efficiency: The model was trained for 3 epochs in one day on 8x H100 GPUs, using qLoRA, DeepSpeed ZeRO-2, and Axolotl.
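
A minimal sketch of the multi-turn system-prompt scenario described above, using the same transformers setup as earlier; the system instruction and user turns are illustrative examples, not part of the SystemChat dataset:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/Liberated-Qwen1.5-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# An unconventional-but-harmless system instruction (illustrative) to probe
# whether the model keeps following it across turns.
system_prompt = "Answer every question in exactly two sentences."
history = [{"role": "system", "content": system_prompt}]

for user_turn in ["What is FP8 quantization?", "Why would a 32K context help?"]:
    history.append({"role": "user", "content": user_turn})
    # Re-render the full conversation each turn so the system prompt stays in context.
    inputs = tokenizer.apply_chat_template(
        history, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=128)
    reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
    history.append({"role": "assistant", "content": reply})
    print(f"assistant: {reply}")
```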

Use Cases

This model is well-suited to applications where strict adherence to system-level instructions is critical, particularly scenarios that require fine-grained control over model behavior without built-in content restrictions. Since the model ships without guardrails, developers should be prepared to manage content moderation externally; one possible pattern is sketched below.
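
As one illustration of an external alignment layer, a hedged sketch follows: model responses pass through a separate moderation check before being returned. `passes_moderation` and `guarded_generate` are hypothetical names introduced here for illustration; the filter body is a placeholder for whatever classifier or moderation API a deployment actually uses:

```python
def passes_moderation(text: str) -> bool:
    # Placeholder policy check; replace with a real classifier or hosted
    # moderation API in production.
    banned_terms = ["example-banned-term"]
    return not any(term in text.lower() for term in banned_terms)

def guarded_generate(model, tokenizer, messages, max_new_tokens=256):
    # Generate a reply from the uncensored model, then gate it on the
    # external moderation check before returning it to the caller.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
    if passes_moderation(reply):
        return reply
    return "[response withheld by moderation layer]"
```

The design point is simply that the moderation step lives outside the model: swapping in a stricter or looser policy requires no change to the model or prompts.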