Liberated-Qwen1.5-72B Overview
Liberated-Qwen1.5-72B is a 72.3-billion-parameter language model developed by AbacusAI and Eric Hartford on top of the Qwen/Qwen1.5-72B base. It is fine-tuned specifically to improve system prompt compliance and performance in long, multi-turn conversations, an area where many open-source models fall short. The model inherits the 32,768-token context length of its base, with fine-tuning performed on 8k-token sequences.
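For orientation, here is a minimal loading sketch using the Hugging Face transformers library. The repo ID `abacusai/Liberated-Qwen1.5-72B` is assumed from the model name, and the dtype and device settings are illustrative, not prescribed by this card.

```python
# Minimal sketch: loading the model with Hugging Face transformers.
# Repo ID assumed from the model name; verify before use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/Liberated-Qwen1.5-72B"  # assumed Hugging Face repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 72B model needs multiple GPUs at bf16
    device_map="auto",           # shard layers across available devices
)
```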
Key Capabilities
- Enhanced System Prompt Adherence: Specifically trained to follow system instructions more consistently, even when prompts are complex or mechanical (see the usage sketch after this list).
- Improved Multi-turn Conversation: Excels in maintaining context and coherence over extended dialogues.
- Uncensored Output: The model ships without built-in guardrails or censorship, exposing raw generative capabilities. Users are advised to implement their own alignment layers for responsible deployment.
- Strong Base Performance: Retains an MMLU score of 77.13, indicating that the base model's general knowledge and reasoning survive the fine-tune.
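The sketch below exercises the first two capabilities: a strict system prompt followed by a multi-turn continuation. It assumes the `model` and `tokenizer` objects from the loading example above, and that the tokenizer ships a ChatML-style chat template, as Qwen1.5-family tokenizers do; the prompts themselves are arbitrary.

```python
# Hedged sketch: system prompt compliance plus multi-turn generation.
# Assumes `model` and `tokenizer` from the loading example above.
messages = [
    {"role": "system", "content": "You are a terse assistant. Answer in one sentence."},
    {"role": "user", "content": "What is the capital of France?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
reply = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Append the reply and continue the dialogue to exercise multi-turn behavior.
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "And of Germany?"})
```

Using `apply_chat_template` keeps the system turn in the exact chat format the tokenizer defines, which is typically what a compliance-focused fine-tune expects.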
Good For
- Applications requiring strict adherence to predefined system instructions.
- Building chatbots or conversational agents that need to manage long, complex interactions.
- Use cases where an uncensored model is preferred for research or specific content generation, paired with user-implemented safety layers (a minimal sketch follows this list).
- Developers looking for a powerful 72B parameter model with a focus on conversational compliance and flexibility.
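Since the model itself enforces no policy, any alignment layer is the deployer's responsibility. The following is a hypothetical, deliberately simple output-side filter: the `moderate` helper, its blocklist, and the `safe_generate` wrapper are illustrative names, not part of the model or its tooling.

```python
# Hypothetical, minimal output-side safety layer. The model ships without
# guardrails, so the policy below (and its blocklist) is an illustrative
# assumption, not part of the model or this card.
import re

BLOCKLIST = [r"(?i)\bexample forbidden phrase\b"]  # placeholder policy

def moderate(text: str) -> str:
    """Return the text, or a refusal if it matches the blocklist."""
    for pattern in BLOCKLIST:
        if re.search(pattern, text):
            return "[response withheld by deployment policy]"
    return text

def safe_generate(model, tokenizer, messages, **gen_kwargs):
    """Generate a reply, then pass it through the moderation layer."""
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, **gen_kwargs)
    reply = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return moderate(reply)
```

A real deployment would replace the regex blocklist with a proper moderation model or policy engine; the point is only that the filtering sits outside the model.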