stmtstk/elyza-ELYZA-japanese-Llama-2-7b-instruct-instruct-20231018-attck-etda-blog-0
Text Generation | Concurrency Cost: 1 | Model Size: 7B | Quant: FP8 | Ctx Length: 4k | Architecture: Transformer | Cold
stmtstk/elyza-ELYZA-japanese-Llama-2-7b-instruct-instruct-20231018-attck-etda-blog-0 is a 7-billion-parameter instruction-tuned language model based on the Llama 2 architecture. It was trained with AutoTrain, Hugging Face's automated fine-tuning tool. Its instruction tuning is the main differentiator, making it suitable for conversational AI and for following user directives.
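A minimal usage sketch follows, assuming the model is published on the Hugging Face Hub under this repo ID and loads with the standard transformers causal-LM classes. The Llama 2 chat prompt format shown here is the convention for Llama-2-based instruct models; the system and user strings are illustrative, not from the source.

```python
# Minimal sketch: load the model and run one instruction-following generation.
# Assumes the repo ID resolves on the Hugging Face Hub and uses standard
# Llama 2 weights compatible with AutoModelForCausalLM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "stmtstk/elyza-ELYZA-japanese-Llama-2-7b-instruct-instruct-20231018-attck-etda-blog-0"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # a 7B model in fp16 fits on a single ~16 GB GPU
    device_map="auto",
)

# Llama 2 instruction format: system prompt in <<SYS>> tags inside [INST].
# The tokenizer prepends the BOS token, so it is not written out here.
system = "You are a helpful assistant."  # illustrative
user = "Summarize the MITRE ATT&CK framework in two sentences."  # illustrative
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
new_tokens = output[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```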