thu-coai/vicuna-7b-v1.5-safeunlearning
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Jul 7, 2024 · License: MIT · Architecture: Transformer · Open weights
thu-coai/vicuna-7b-v1.5-safeunlearning is a 7-billion-parameter language model from thu-coai, built on Vicuna-7B-v1.5 with a 4096-token context length. The base model has been fine-tuned with Safe Unlearning using 100 raw harmful questions, making it significantly more resistant to jailbreak attacks. It maintains general performance comparable to the original Vicuna-7B-v1.5 while offering enhanced safety for applications requiring robust content moderation.
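Since the model derives from Vicuna-7B-v1.5, prompts are typically formatted with the standard Vicuna chat template before generation. The sketch below shows one way to build such a prompt; the system message and separators follow the common Vicuna v1.1/v1.5 convention and should be verified against the upstream model card before use.

```python
# Minimal sketch of Vicuna-style prompt formatting (assumed template,
# not taken from this model's documentation — verify before relying on it).
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def build_prompt(turns):
    """Format (user, assistant) pairs into a Vicuna-style prompt.

    Pass assistant=None on the final turn to request a completion.
    """
    parts = [SYSTEM]
    for user, assistant in turns:
        parts.append(f"USER: {user}")
        if assistant is None:
            parts.append("ASSISTANT:")  # model continues from here
        else:
            parts.append(f"ASSISTANT: {assistant}</s>")
    return " ".join(parts)

prompt = build_prompt([("What does safe unlearning do?", None)])
```

The resulting string can then be tokenized and passed to the model through any standard text-generation stack (for example, Hugging Face `transformers` with the model id `thu-coai/vicuna-7b-v1.5-safeunlearning`).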