HuggingFaceH4/mistral-7b-anthropic
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4K · Published: Jan 29, 2024 · License: apache-2.0 · Architecture: Transformer

HuggingFaceH4/mistral-7b-anthropic is a 7-billion-parameter language model based on the Mistral 7B architecture, fine-tuned with Direct Preference Optimization (DPO) on the HuggingFaceH4/ultrafeedback_binarized_fixed and HuggingFaceH4/cai-conversation-harmless datasets, following Constitutional AI principles. The model is designed to generate responses that adhere to ethical and safety guidelines, making it suitable for applications that require controlled, harmless outputs.
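For intuition about how the DPO alignment step works, here is a minimal sketch of the per-pair DPO loss. It is not the training code for this model; the function name and example log-probabilities are hypothetical, and real training operates on token-level log-probabilities from the policy and a frozen reference model.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair.

    Each argument is the total log-probability that the policy or the
    frozen reference model assigns to the chosen / rejected response.
    beta controls how far the policy may drift from the reference.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(logits)): small when the policy favors the chosen
    # response more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# Hypothetical log-probs: the policy prefers the chosen response
# relative to the reference, so the loss falls below log(2).
loss = dpo_loss(-10.0, -14.0, -12.0, -12.0)
```

When the policy and reference agree exactly, the loss is log(2); training pushes it lower by widening the margin between chosen and rejected responses.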
