stabilityai/japanese-stablelm-instruct-gamma-7b
Japanese Stable LM Instruct Gamma 7B is a 7-billion-parameter, decoder-only Japanese language model developed by Stability AI. Built on Japanese Stable LM Base Gamma 7B and fine-tuned on instruction-following datasets, it is designed for instruction-following tasks that require nuanced understanding and generation of Japanese text.
Japanese Stable LM Instruct Gamma 7B Overview
Japanese Stable LM Instruct Gamma 7B is a 7-billion-parameter, decoder-only language model developed by Stability AI and fine-tuned for instruction-following tasks in Japanese. It is built on the Japanese Stable LM Base Gamma 7B model and uses a transformer decoder architecture similar to that of Mistral-7B-v0.1.
Key Capabilities
- Instruction Following: Optimized to understand and respond to instructions in Japanese (see the usage sketch after this list).
- Japanese Language Proficiency: Developed specifically for high-quality Japanese text generation and comprehension.
- Foundation Model: Intended for use as a foundational model for further application-specific fine-tuning.
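Because it is a standard decoder-only checkpoint, the model can be loaded and queried through the usual Hugging Face transformers causal-LM interface. The snippet below is a minimal sketch assuming the transformers and torch packages are installed; the prompt wording and generation settings are illustrative examples, not values prescribed by Stability AI, and the exact prompt template is documented on the official model card.

```python
# Minimal inference sketch using the standard Hugging Face transformers API.
# Prompt wording and generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/japanese-stablelm-instruct-gamma-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single GPU
    device_map="auto",
)

# A simple Japanese instruction; consult the official model card for the exact template.
prompt = "### 指示:\n日本の四季について簡潔に説明してください。\n\n### 応答:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```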
Training and Development
The model was fine-tuned on several Japanese instruction-following datasets (a prompt-format sketch follows the list):
- Japanese translation of the Databricks Dolly-15k dataset
- Japanese translation of a subset of the Anthropic HH dataset
- A Wikinews subset from the izumi-lab/llm-japanese-dataset
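The datasets above follow a Dolly-style instruction/input/response layout, so inference prompts are usually assembled in the same shape. The helper below is a hypothetical sketch of such a template; the section headers and preamble wording are assumptions for illustration, and the authoritative template is the one published on the Stability AI model card.

```python
# Hypothetical prompt builder mirroring the Dolly-style
# instruction / input / response layout of the fine-tuning data.
# The preamble and section headers are assumptions for illustration.
def build_prompt(instruction: str, input_text: str = "") -> str:
    prompt = (
        "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。"
        "要求を適切に満たす応答を書きなさい。\n\n"
        f"### 指示:\n{instruction}\n\n"
    )
    if input_text:
        prompt += f"### 入力:\n{input_text}\n\n"
    return prompt + "### 応答:\n"

# Example: an instruction with supporting context.
print(build_prompt("次の文章を一文で要約してください。", "（ここに要約したい文章）"))
```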
Intended Use and Limitations
The model is intended for general use as a foundation for application-specific fine-tuning, with no strict limitations on commercial use. Users should be aware that, despite data cleansing, the pre-training data may have contained offensive content, which could be reflected in model outputs. Caution is advised when deploying the model in production systems, and it should not be used for applications that could cause harm.