kendrickfff/Qwen2.5-1.5B-Indonesian-Assistant
Text Generation · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Apr 22, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
kendrickfff/Qwen2.5-1.5B-Indonesian-Assistant is a 1.5-billion-parameter causal language model based on Qwen2.5 and developed by kendrickfff. It was fine-tuned with Supervised Fine-Tuning (SFT) on the Ichsan2895/alpaca-gpt4-indonesian dataset, tailoring it to Indonesian-language assistant tasks. Training used Unsloth for faster fine-tuning, making it an efficient option for Indonesian natural language processing applications.
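As a sketch of how the model might be prompted: Qwen2.5-family models use the ChatML conversation format, which in practice the tokenizer's `apply_chat_template()` handles automatically. The manual formatter below shows the layout explicitly; the inference helper is a hedged example assuming the `transformers` library and the repo id from this card (the `generate_reply` function and its parameters are illustrative, not an official API).

```python
def build_chatml_prompt(messages):
    """Format a list of {"role", "content"} dicts as a ChatML prompt.

    This mirrors what Qwen2.5 tokenizers produce via apply_chat_template();
    shown manually here for clarity.
    """
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # generation continues from here
    return "".join(parts)


def generate_reply(user_message, max_new_tokens=256):
    """Illustrative inference helper (not called here; requires a GPU-sized download)."""
    # Assumes the transformers library; repo id taken from this model card.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "kendrickfff/Qwen2.5-1.5B-Indonesian-Assistant"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="bfloat16")

    prompt = build_chatml_prompt([{"role": "user", "content": user_message}])
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

For example, `generate_reply("Jelaskan apa itu kecerdasan buatan.")` would ask the assistant, in Indonesian, to explain what artificial intelligence is.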
Qwen2.5-1.5B Indonesian Assistant (SFT)
This model, developed by kendrickfff, is a 1.5 billion parameter language model based on the Qwen/Qwen2.5-1.5B architecture. It has been specifically fine-tuned for Indonesian language tasks, making it a specialized assistant model for this domain.
Key Capabilities
- Indonesian Language Proficiency: Optimized for understanding and generating Indonesian text, using the Ichsan2895/alpaca-gpt4-indonesian dataset for supervised fine-tuning.
- Efficient Training: Trained with Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training than standard methods.
- Supervised Fine-Tuning (SFT): Fine-tuned with a LoRA rank of 16 across two experiments of 3 epochs each, improving performance on assistant-style interactions.
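To make the SFT setup concrete, below is a minimal sketch of how an Alpaca-style record (the schema used by alpaca-gpt4 datasets: `instruction`, optional `input`, `output`) could be converted into chat messages for supervised fine-tuning. The field names follow the standard Alpaca schema; the actual training script for this model may differ.

```python
def alpaca_to_messages(record):
    """Convert one Alpaca-style record into chat messages for SFT.

    The user turn is the instruction, with the optional `input` field
    appended as context; the assistant turn is the reference output.
    """
    user = record["instruction"]
    if record.get("input"):
        user += "\n\n" + record["input"]
    return [
        {"role": "user", "content": user},
        {"role": "assistant", "content": record["output"]},
    ]
```

A trainer (e.g. TRL's `SFTTrainer`) would then apply the model's chat template to these messages and, per this card, train LoRA adapters of rank 16 for 3 epochs per experiment.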
Good For
- Indonesian NLP Applications: Ideal for use cases requiring strong performance in the Indonesian language, such as chatbots, content generation, and virtual assistants.
- Resource-Efficient Deployment: With 1.5 billion parameters, it offers a balance between performance and computational efficiency, suitable for deployment in environments with moderate resources.
- Research and Development: Provides a solid base for further experimentation and fine-tuning on specific Indonesian language tasks.
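A quick back-of-envelope calculation supports the resource-efficiency claim: at 1.5 billion parameters in BF16 (2 bytes per parameter), the weights alone occupy roughly 3 GB, before activations and KV cache.

```python
def weight_memory_gb(num_params, bytes_per_param):
    """Approximate memory for model weights alone (activations and KV cache are extra)."""
    return num_params * bytes_per_param / 1e9

# 1.5e9 parameters * 2 bytes (BF16) = 3.0 GB of weights
print(weight_memory_gb(1.5e9, 2))  # → 3.0
```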