Dolphy-AI/Dolphy-1.0
Text generation · 4B parameters · BF16 · 32k context length · Apache-2.0 license · Transformer architecture · Open weights

Dolphy-AI/Dolphy-1.0 is a 4-billion-parameter instruction-tuned causal language model developed by Dolphy AI, fine-tuned from Qwen3 4B 2507 Instruct. It was trained with Unsloth LoRA fine-tuning on 1.5 million diverse examples drawn from 20 datasets, aiming for superior performance within the 4B parameter category. The model retains Qwen3's extensive tool-use, function-calling, and multilingual capabilities, making it suitable for a wide range of general-purpose conversational and task-oriented applications.
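The tool-use and function-calling compatibility mentioned above can be sketched with the OpenAI-style tool schema and role/content message format that Qwen3-family chat templates accept. This is a minimal illustration, not part of the model card: the `get_weather` tool, its parameters, and the commented-out loading code are assumptions for demonstration.

```python
import json

# Hypothetical tool definition in the OpenAI-style function-calling schema
# that Qwen3-family models (and, per the card, Dolphy-1.0) understand.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative example tool, not shipped with the model
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# A chat transcript in the role/content format expected by chat templates.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather in Lisbon?"},
]

# With transformers installed and the weights downloaded, prompting would
# look roughly like this (commented out here, since it needs the model files):
#
#   from transformers import AutoTokenizer, AutoModelForCausalLM
#   tok = AutoTokenizer.from_pretrained("Dolphy-AI/Dolphy-1.0")
#   model = AutoModelForCausalLM.from_pretrained(
#       "Dolphy-AI/Dolphy-1.0", torch_dtype="bfloat16")
#   prompt = tok.apply_chat_template(
#       messages, tools=[get_weather_tool],
#       add_generation_prompt=True, tokenize=False)

# The schema is plain JSON, so it serializes directly for any client API.
print(json.dumps(get_weather_tool["function"]["name"]))
```

Because the schema is ordinary JSON, the same tool definition works with transformers' `apply_chat_template(tools=...)` as well as OpenAI-compatible serving frontends.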
