YOYO-AI/Qwen3-8B-YOYO-nuslerp
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · License: apache-2.0 · Architecture: Transformer · Open weights

YOYO-AI/Qwen3-8B-YOYO-nuslerp is an 8-billion-parameter language model from YOYO-AI, based on the Qwen3 architecture. It was produced with the NuSLERP (normalized spherical linear interpolation) merge method and is configured with a 32768-token context length, using float32 precision during the merge and bfloat16 for the output weights. It is intended for general language tasks and ships with an updated chat template so it runs normally on platforms such as LM Studio.
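A merge like this is typically defined with a mergekit configuration. The sketch below is a hypothetical illustration of a `nuslerp` merge with the stated float32/bfloat16 dtype settings; the actual source models and weights used for this merge are not listed on this card, so the model names and weights here are placeholders, not the real recipe.

```yaml
# Hypothetical mergekit config illustrating the nuslerp merge method.
# The source models and weights below are placeholders, not the actual recipe.
models:
  - model: Qwen/Qwen3-8B          # placeholder source model A
    parameters:
      weight: 0.5
  - model: some-org/qwen3-8b-ft   # placeholder source model B
    parameters:
      weight: 0.5
merge_method: nuslerp
dtype: float32        # precision used during the merge (matches the card)
out_dtype: bfloat16   # precision of the saved output weights (matches the card)
```

With mergekit installed, a config like this would be run via `mergekit-yaml config.yml ./output-model`.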
