Liberated-Qwen1.5-7B Overview
Liberated-Qwen1.5-7B is a 7.7-billion-parameter model developed by AbacusAI and Eric Hartford, built on the Qwen/Qwen1.5-7B base model. It retains the base model's 32K context length; fine-tuning was performed with 8K sequence lengths.
Key Capabilities and Training
This model's primary differentiator is its enhanced ability to comply with system prompts, even in complex, multi-turn conversations. This capability comes from fine-tuning on the new SystemChat dataset, which comprises 6,000 synthetic conversations generated with Mistral-Medium and Dolphin-2.7-mixtral-8x7b. SystemChat specifically targets adherence to unusual or mechanical system prompts, an area where many open-source models struggle.
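To make "mechanical system prompt" concrete, here is a small illustrative sketch; the system instruction and dialogue below are invented for this example, not drawn from SystemChat:

# A hypothetical example of the kind of mechanical system prompt that
# SystemChat-style training is meant to enforce across many turns.
messages = [
    {"role": "system", "content": (
        "You must answer in valid JSON with keys 'answer' and 'confidence'. "
        "Never break this format."
    )},
    {"role": "user", "content": "What is the capital of France?"},
    # A compliant assistant keeps the constraint as the conversation grows:
    {"role": "assistant", "content": '{"answer": "Paris", "confidence": "high"}'},
    {"role": "user", "content": "And of Japan?"},
]
# A model trained for system-prompt adherence should keep replying in the
# mandated JSON format on every subsequent assistant turn, rather than
# drifting back to free-form prose after a few exchanges.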
Liberated-Qwen1.5-7B was trained for 3 epochs over 3 days on 8x H100 GPUs, using qLoRA, DeepSpeed ZeRO-2, and the Axolotl training framework. The model is released without guardrails or censorship, giving users full control over, and responsibility for, its output; implementing a custom alignment layer is recommended before deploying it as a service.
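The actual run used Axolotl with DeepSpeed ZeRO-2; as a rough sketch of what an equivalent qLoRA setup over the same base model looks like with Hugging Face transformers, peft, and bitsandbytes, see below. The adapter hyperparameters (rank, alpha, dropout, target modules) are illustrative assumptions, not the values used for this model.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model with 4-bit NF4 quantization (the "q" in qLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-7B",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach trainable low-rank adapters; these hyperparameters are
# illustrative assumptions, not those of the actual training run.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters are trainable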
Prompt Format
The model uses the ChatML prompt format, with a clear structure for system, user, and assistant turns.
<|im_start|>system
You are Liberated, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
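For inference, a minimal sketch with Hugging Face transformers follows, assuming the checkpoint is published as abacusai/Liberated-Qwen1.5-7B and ships the ChatML chat template shown above:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/Liberated-Qwen1.5-7B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Liberated, a helpful AI assistant."},
    {"role": "user", "content": "Explain what a context window is in one sentence."},
]
# apply_chat_template renders the ChatML turns shown above and, with
# add_generation_prompt=True, appends the opening <|im_start|>assistant tag.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))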
Future Development
Future plans include releasing fine-tunes of this kind across the entire Qwen1.5 series and combining the SystemChat dataset with the datasets used to train models like Smaug, to merge their respective strengths.