rinna/qwen2.5-bakeneko-32b-instruct
Text Generation
- Concurrency Cost: 2
- Model Size: 32.8B
- Quant: FP8
- Ctx Length: 32k
- Published: Feb 10, 2025
- License: apache-2.0
- Architecture: Transformer
- Open Weights
rinna/qwen2.5-bakeneko-32b-instruct is a 32.8 billion parameter instruction-tuned causal language model developed by rinna, based on the Qwen2.5 architecture. This model is specifically fine-tuned using Chat Vector and Simple Preference Optimization (SimPO) to deliver superior performance in Japanese language tasks. It adheres to the Qwen2.5 chat format and is optimized for instruction-following in Japanese, making it suitable for applications requiring high-quality Japanese text generation and understanding.
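Since the model adheres to the Qwen2.5 chat format (a ChatML-style layout with `<|im_start|>`/`<|im_end|>` delimiters), the prompt structure can be sketched as below. This is a minimal illustration, not the authoritative template; in practice `tokenizer.apply_chat_template` from `transformers` should be used to render prompts for this model.

```python
# Minimal sketch of a ChatML-style prompt layout as used by Qwen2.5 models.
# In real usage, prefer tokenizer.apply_chat_template from transformers.

def build_chatml_prompt(messages):
    """Render a list of {"role", "content"} dicts into ChatML text,
    ending with an open assistant turn ready for generation."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Open the assistant turn so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    # Japanese system/user turns, matching the model's target language.
    {"role": "system", "content": "あなたは親切なアシスタントです。"},
    {"role": "user", "content": "日本の首都はどこですか？"},
]
print(build_chatml_prompt(messages))
```

The open `<|im_start|>assistant\n` at the end signals the model to generate the assistant's reply, which is then terminated by `<|im_end|>`.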