kayapotato/Qwen2.5-0.5B-Instruct_chat_dolly
Text generation · Concurrency cost: 1 · Model size: 0.5B · Quant: BF16 · Context length: 32k · Published: Apr 10, 2026 · Architecture: Transformer

kayapotato/Qwen2.5-0.5B-Instruct_chat_dolly is a 0.5-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture. It is designed for chat-based interactions and general instruction following, and its compact size makes it efficient to deploy. With a context length of 32,768 tokens, it is suitable for applications that process moderately long inputs and generate coherent responses.
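A minimal usage sketch, assuming the model loads with the standard Hugging Face transformers chat workflow (the repo id is taken from this card; the system prompt and generation settings are illustrative, not prescribed by the model authors):

```python
MODEL_ID = "kayapotato/Qwen2.5-0.5B-Instruct_chat_dolly"


def build_messages(user_prompt: str) -> list[dict]:
    """Assemble a single-turn chat-template message list."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model, apply the chat template, and decode only the reply."""
    # Imported here so the helpers above stay usable without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

    text = tokenizer.apply_chat_template(
        build_messages(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Strip the prompt tokens so only the newly generated text is decoded.
    reply_ids = output_ids[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(reply_ids, skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Summarize what instruction tuning is in one sentence."))
```

Because the model is small (0.5B parameters in BF16), this runs comfortably on CPU or a modest GPU; for longer inputs, the 32k context window leaves ample room for the prompt plus generated tokens.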
