cooperleong00/Qwen2.5-7B-Instruct-Jailbroken is a 7.6-billion-parameter instruction-tuned causal language model built on the Qwen2.5 architecture and released by cooperleong00. Its weights have been modified via weight orthogonalization to reduce refusal behaviors, and it is intended for academic research on AI safety and model alignment. The model supports many languages, including Chinese, English, French, and Spanish, and a context length of 131,072 tokens.
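The model card does not publish the exact modification procedure, so the snippet below is only a minimal sketch of what weight orthogonalization for refusal removal generally looks like: an estimated "refusal direction" is projected out of the matrices that write into the residual stream, so those layers can no longer emit activations along that direction. It assumes the base checkpoint Qwen/Qwen2.5-7B-Instruct as the starting point, and the refusal direction here is a random placeholder; in practice it would be estimated from activation differences on contrasting prompt sets.

```python
import torch
from transformers import AutoModelForCausalLM

# Assumed base model; the released checkpoint was presumably derived
# from something like this procedure, not necessarily this exact code.
base_id = "Qwen/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Hypothetical refusal direction (unit vector in hidden-state space).
# A real pipeline would estimate this from mean activation differences
# between refusal-inducing and benign prompts.
hidden_size = model.config.hidden_size
refusal_dir = torch.randn(hidden_size, dtype=torch.bfloat16)
refusal_dir = refusal_dir / refusal_dir.norm()

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Project `direction` out of the layer's output space:
    W' = W - d (d^T W), so y = W'x has no component along d."""
    return weight - torch.outer(direction, direction @ weight)

# Apply to the projections that write into the residual stream
# (attention output and MLP down projections) in each decoder layer.
with torch.no_grad():
    for layer in model.model.layers:
        layer.self_attn.o_proj.weight.copy_(
            orthogonalize(layer.self_attn.o_proj.weight, refusal_dir)
        )
        layer.mlp.down_proj.weight.copy_(
            orthogonalize(layer.mlp.down_proj.weight, refusal_dir)
        )
```

The released checkpoint itself loads like any other Qwen2.5 instruct model through the standard transformers API; the sketch only illustrates the projection step that gives the model its reduced-refusal behavior.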