Stable Beluga 7B: An Instruction-Following Llama2 Model
Stable Beluga 7B is a 7 billion parameter language model developed by Stability AI, built on the Llama2 architecture. The model distinguishes itself through fine-tuning on an Orca-style dataset, which is designed to improve its ability to follow instructions accurately and comprehensively. Training consisted of supervised fine-tuning in mixed precision (BF16) with the AdamW optimizer, with separate hyperparameters reported for the Orca pt1 (packed) and Orca pt2 (unpacked) dataset stages.
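To make the training recipe concrete, here is a minimal sketch of a single supervised fine-tuning step using the ingredients named above: the AdamW optimizer and BF16 mixed precision. The toy model sizes, batch, and learning rate are hypothetical stand-ins for illustration only, not Stable Beluga's actual configuration.

```python
# Sketch only: one supervised fine-tuning step with AdamW + BF16 autocast.
# All sizes and hyperparameter values below are placeholders.
import torch
import torch.nn as nn

vocab_size, hidden = 100, 32  # toy dimensions, not the 7B model's
model = nn.Sequential(
    nn.Embedding(vocab_size, hidden),
    nn.Linear(hidden, vocab_size),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # placeholder lr
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (4, 16))    # fake token batch
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # next-token prediction

# BF16 autocast mirrors the mixed-precision setup described above.
with torch.autocast("cpu", dtype=torch.bfloat16):
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))

loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In the real training run this step would operate on the 7B Llama2 checkpoint and the Orca-style instruction data rather than random tokens, but the optimizer and precision choices are the ones the model card describes.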
Key Capabilities & Features
- Instruction Following: Excels at understanding and executing user instructions, a direct benefit of its Orca-style fine-tuning.
- Llama2 Base: Leverages the robust foundation of the Llama2 7B model.
- English Language Support: Primarily developed and tested for English language tasks.
- HuggingFace Transformers Integration: Easily accessible and usable within the HuggingFace ecosystem.
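Since the model is distributed through the HuggingFace ecosystem, a short usage sketch may help. The repo id `stabilityai/StableBeluga-7B` and the `### System / ### User / ### Assistant` prompt layout below are assumptions based on the Beluga model family's published cards; verify them against the official model card before use.

```python
# Hedged usage sketch for Stable Beluga 7B via HuggingFace Transformers.
# Repo id and prompt format are assumed; check the official model card.

def format_prompt(system: str, user: str) -> str:
    """Build an Orca-style instruction prompt (assumed Beluga format)."""
    return f"### System:\n{system}\n\n### User:\n{user}\n\n### Assistant:\n"

if __name__ == "__main__":
    # Imported lazily: loading a 7B checkpoint needs substantial RAM/VRAM.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("stabilityai/StableBeluga-7B")
    model = AutoModelForCausalLM.from_pretrained(
        "stabilityai/StableBeluga-7B", torch_dtype="auto", device_map="auto"
    )
    prompt = format_prompt(
        "You are a helpful assistant.",
        "Explain instruction tuning in one sentence.",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Keeping the heavy imports and model download inside the `__main__` guard lets the prompt-formatting helper be reused and tested without pulling the full checkpoint.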
Use Cases & Considerations
Stable Beluga 7B is well suited to applications that need a model capable of reliably following complex instructions and powering conversational experiences. Note that the model is licensed under the STABLE BELUGA NON-COMMERCIAL COMMUNITY LICENSE AGREEMENT, which restricts commercial use. As with all LLMs, Stability AI advises developers to perform thorough safety testing and tuning for their specific applications, since the model can produce inaccurate, biased, or objectionable outputs.