stabilityai/japanese-stablelm-instruct-gamma-7b

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 8k · Published: Oct 16, 2023 · License: apache-2.0 · Architecture: Transformer · Open Weights

Japanese Stable LM Instruct Gamma 7B is a 7-billion-parameter, decoder-only Japanese language model developed by Stability AI. Built on Japanese Stable LM Base Gamma 7B and fine-tuned on instruction-following datasets, it is designed specifically for instruction-following tasks in Japanese, making it suitable for applications that require nuanced understanding and generation of Japanese text.


Japanese Stable LM Instruct Gamma 7B Overview

Japanese Stable LM Instruct Gamma 7B is a 7-billion parameter, decoder-only language model developed by Stability AI, specifically fine-tuned for instruction-following tasks in Japanese. It is built on the foundation of the Japanese Stable LM Base Gamma 7B model and utilizes a transformer decoder architecture, similar to Mistral-7B-v0.1.

Key Capabilities

  • Instruction Following: Optimized to understand and respond to instructions in Japanese.
  • Japanese Language Proficiency: Developed specifically for high-quality Japanese text generation and comprehension.
  • Foundation Model: Intended for use as a foundational model for further application-specific fine-tuning.
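Because the model is instruction-tuned, inputs are normally wrapped in an instruction template before generation. Below is a minimal sketch of such a prompt builder; the Alpaca-style Japanese template with 指示/入力/応答 sections is an assumption based on related Stability AI instruct models, and the function name is illustrative, so verify the exact wording against the upstream Hugging Face model card.

```python
def build_prompt(instruction: str, user_input: str = "") -> str:
    """Wrap an instruction (and optional context) in an Alpaca-style
    Japanese instruction template. The exact template text here is an
    assumption -- check the upstream model card before relying on it."""
    header = (
        "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。"
        "要求を適切に満たす応答を書きなさい。"
    )
    parts = [header, "### 指示: " + instruction]
    if user_input:
        # The input section is optional and omitted for bare instructions.
        parts.append("### 入力: " + user_input)
    parts.append("### 応答: ")
    return "\n\n".join(parts)


# Example: summarization instruction with a context passage.
prompt = build_prompt("以下の文章を要約してください。", "昨日、東京で大規模な花火大会が開催された。")
print(prompt)
```

The resulting string is then tokenized and passed to the model; keeping the template in one helper makes it easy to swap in the exact official wording later.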

Training and Development

The model was fine-tuned on several Japanese instruction-following datasets.

Intended Use and Limitations

This model is designed for general use as a foundational model without strict commercial use limitations. Users should be aware that, despite data cleansing, the pre-training dataset may have contained offensive content, which could be reflected in model outputs. Caution is advised for production systems, and the model should not be used for applications that could cause harm.

Popular Sampler Settings

The top three parameter combinations used by Featherless users for this model tune the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
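These sampler settings map onto the request body of an OpenAI-compatible completions API. The sketch below assembles such a payload; the default values are purely illustrative (the actual user configurations are not reproduced here), and support for extensions like `min_p` and `repetition_penalty` varies by provider, so consult the Featherless API documentation.

```python
import json

# Illustrative defaults only -- not the real "top 3" user configs.
# min_p and repetition_penalty are extensions beyond the core OpenAI
# parameters that some providers accept; check your provider's docs.
DEFAULT_SAMPLERS = {
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,
    "min_p": 0.05,
}


def build_payload(prompt: str, max_tokens: int = 256, **overrides) -> dict:
    """Merge per-request sampler overrides into a completion payload,
    rejecting parameter names the sampler set does not recognize."""
    unknown = set(overrides) - set(DEFAULT_SAMPLERS)
    if unknown:
        raise ValueError(f"unknown sampler parameter(s): {sorted(unknown)}")
    return {
        "model": "stabilityai/japanese-stablelm-instruct-gamma-7b",
        "prompt": prompt,
        "max_tokens": max_tokens,
        **DEFAULT_SAMPLERS,
        **overrides,
    }


payload = build_payload("こんにちは、自己紹介してください。", temperature=0.3)
print(json.dumps(payload, ensure_ascii=False, indent=2))
```

Keeping the sampler set in one dict makes it straightforward to store and switch between saved configurations per model.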