abacusai/Liberated-Qwen1.5-7B is a 7.7-billion-parameter language model from AbacusAI and Eric Hartford, built on the Qwen1.5 architecture with a 32K context length. It is fine-tuned on the SystemChat dataset, which was created to address a common weakness of open-source models: poor adherence to system prompts. As a result, the model follows system prompts throughout long, multi-turn conversations, even when the instructions are unusual or mechanical. Liberated-Qwen1.5-7B is released without guardrails or censorship, prioritizing robust instruction following.
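As a sketch of how a system prompt reaches the model: Qwen1.5 models use the ChatML conversation format, where each turn is wrapped in `<|im_start|>` / `<|im_end|>` markers and the system prompt occupies the first turn. The helper below is illustrative (the function name and example instruction are not from the model card); in practice you would let `tokenizer.apply_chat_template` from the `transformers` library produce this string for you.

```python
def build_chatml_prompt(system_prompt, turns):
    """Format a multi-turn conversation in ChatML, the template Qwen1.5 uses.

    `turns` is a list of (role, content) pairs with role "user" or "assistant".
    The trailing '<|im_start|>assistant\n' cues the model to generate a reply.
    """
    parts = [f"<|im_start|>system\n{system_prompt}<|im_end|>\n"]
    for role, content in turns:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

# An "unusual, mechanical" instruction of the kind SystemChat targets:
prompt = build_chatml_prompt(
    "Always answer in exactly three words.",
    [("user", "What is the capital of France?")],
)
print(prompt)
```

Feeding this prompt to the model (e.g. via `AutoModelForCausalLM.from_pretrained("abacusai/Liberated-Qwen1.5-7B")`) should yield a completion that honors the system instruction across subsequent turns, which is the behavior the SystemChat fine-tuning is meant to strengthen.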