alignment-handbook/mistral-7b-sft-constitutional-ai
Task: Text generation
Model size: 7B
Quantization: FP8
Context length: 4k
Concurrency cost: 1
Published: Jan 31, 2024
License: apache-2.0
Architecture: Transformer (open weights)
alignment-handbook/mistral-7b-sft-constitutional-ai is a 7-billion-parameter language model fine-tuned from mistralai/Mistral-7B-v0.1. It was trained on the HuggingFaceH4/cai-conversation-harmless and HuggingFaceH4/ultrachat_200k datasets, applying constitutional AI principles during supervised fine-tuning. The model is intended for conversational AI applications where harmless, aligned responses are critical, and its 4096-token context length supports coherent multi-turn interactions.
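A minimal usage sketch with the Hugging Face `transformers` library is shown below. It assumes `transformers` (and a backend such as PyTorch) is installed; the prompt text and `max_new_tokens` value are illustrative, not part of the model card.

```python
def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt in the chat-message format expected by the
    model's chat template (a list of role/content dicts)."""
    return [{"role": "user", "content": user_prompt}]

if __name__ == "__main__":
    # Heavy import kept inside the guard: first run downloads ~14 GB of
    # weights, and a GPU is recommended for reasonable generation speed.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="alignment-handbook/mistral-7b-sft-constitutional-ai",
        device_map="auto",
    )
    messages = build_messages("Explain constitutional AI in one paragraph.")
    output = generator(messages, max_new_tokens=256)
    print(output[0]["generated_text"])
```

Because the model was fine-tuned on chat-style data, passing structured messages (rather than a raw string) lets the tokenizer's chat template insert the correct turn delimiters.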