stabilityai/StableBeluga-13B

Warm
Public
13B
FP8
4096
Jul 27, 2023
Hugging Face

Stable Beluga 13B Overview

Stable Beluga 13B is a 13-billion-parameter language model from Stability AI, built on Meta's Llama 2 13B and fine-tuned on an Orca-style dataset. This training methodology is designed to improve the model's ability to understand and follow complex instructions.

Key Capabilities

  • Instruction Following: Optimized to follow user instructions with high fidelity, leveraging its Orca-style fine-tuning.
  • Conversational AI: Suitable for generating human-like text in response to prompts, making it useful for chat applications.
  • General-Purpose Text Generation: Capable of various text generation tasks, from creative writing to informative responses.

Usage Considerations

Stable Beluga 13B expects a specific prompt format, consisting of a system prompt, a user message, and an assistant response section, for optimal performance. Like all LLMs, it carries ethical considerations and limitations, including the possibility of generating inaccurate or biased content, so safety testing and application-specific tuning are recommended before deployment. The model is licensed under the STABLE BELUGA NON-COMMERCIAL COMMUNITY LICENSE AGREEMENT.
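As a minimal sketch, the prompt structure described above can be assembled as a plain string before tokenization. This assumes the `### System:` / `### User:` / `### Assistant:` delimiters documented on the Hugging Face model card; the helper name is illustrative:

```python
def build_beluga_prompt(system_prompt: str, user_message: str) -> str:
    """Assemble a Stable Beluga prompt: a system section, a user
    section, and an open assistant section for the model to complete.
    Delimiter style assumed from the model card's prompt template."""
    return (
        f"### System:\n{system_prompt}\n\n"
        f"### User:\n{user_message}\n\n"
        f"### Assistant:\n"
    )

prompt = build_beluga_prompt(
    "You are Stable Beluga, a helpful assistant.",
    "Summarize the benefits of instruction tuning.",
)
```

The resulting string would then be tokenized and passed to the model, for example via the `transformers` library's `AutoTokenizer` and `AutoModelForCausalLM`.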