thu-coai/Mistral-7B-Instruct-v0.2-safeunlearning
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Jul 7, 2024 · License: MIT · Architecture: Transformer · Open Weights

thu-coai/Mistral-7B-Instruct-v0.2-safeunlearning is a 7-billion-parameter instruction-tuned language model derived from Mistral-7B-Instruct-v0.2. Developed by thu-coai, it has undergone a safe-unlearning process to harden it against jailbreak attacks while preserving general performance. The model is optimized for applications that require robust handling of harmful prompts, making it suitable for safety-sensitive conversational AI. It retains the original Mistral-7B-Instruct-v0.2 prompt format and 4096-token context length.
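Because the model keeps the Mistral-7B-Instruct-v0.2 prompt format, requests should be wrapped in its `[INST] ... [/INST]` template. A minimal sketch of that formatting (the helper name is hypothetical; in practice a tokenizer's `apply_chat_template` does this for you):

```python
def build_mistral_prompt(messages):
    """Format alternating user/assistant turns into the
    Mistral-7B-Instruct-v0.2 template:
    <s>[INST] user [/INST] assistant</s>[INST] user [/INST]
    """
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            # Assistant replies are followed by the end-of-sequence token.
            prompt += f" {msg['content']}</s>"
    return prompt

# Single-turn example:
print(build_mistral_prompt([{"role": "user", "content": "Hello"}]))
# → <s>[INST] Hello [/INST]
```

The resulting string is what the model expects as raw input; keep the total token count within the 4096-token context window.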
