ganchengguang/USA-7B-instruction-incontext-learning
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · License: MIT · Architecture: Transformer · Open weights

The ganchengguang/USA-7B-instruction-incontext-learning model is a 7-billion-parameter language model developed by Chengguang Gan, Qinghao Zhang, and Tatsunori Mori for Japanese sentiment analysis. It combines instruction tuning with in-context learning and uses a Unifine format for its inputs and outputs. The model classifies the sentiment of Japanese text as positive, negative, or neutral, and identifies sentiment-bearing nouns within the text.
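As a sketch of how an instruction-plus-in-context prompt for this kind of model might be assembled, consider the following. The instruction wording, labels, and example formatting here are illustrative assumptions, not the model's documented prompt format:

```python
# Hypothetical prompt builder for instruction-style sentiment classification
# with in-context examples. The instruction text and layout below are
# assumptions for illustration, not taken from the model card.

def build_prompt(text, examples):
    """Assemble an instruction, a few labeled examples, and the target text."""
    lines = [
        "Instruction: Classify the sentiment of the Japanese text "
        "as positive, negative, or neutral."
    ]
    for ex_text, ex_label in examples:
        lines.append(f"Text: {ex_text}\nSentiment: {ex_label}")
    # Leave the final label blank for the model to complete.
    lines.append(f"Text: {text}\nSentiment:")
    return "\n\n".join(lines)

examples = [
    ("この映画は素晴らしい。", "positive"),   # "This movie is wonderful."
    ("サービスが最悪だった。", "negative"),   # "The service was terrible."
]
prompt = build_prompt("味は普通でした。", examples)  # "The taste was average."
print(prompt)
```

The resulting string would then be passed to the model (for example via a text-generation pipeline), with the completion after the final `Sentiment:` read off as the predicted label.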
