kucingcoder/pengenalan-emosi
TEXT GENERATION | Concurrency Cost: 1 | Model Size: 0.5B | Quant: BF16 | Ctx Length: 32k | Published: Mar 7, 2026 | Architecture: Transformer | Warm
The kucingcoder/pengenalan-emosi model is a 0.5-billion-parameter model based on the Qwen2.5-Instruct architecture, fine-tuned and converted to GGUF format using Unsloth. It is designed for efficient deployment and inference, and is particularly suited to text-based applications. Its small size and GGUF format make it well suited to local execution and resource-constrained environments.
Model Overview
The kucingcoder/pengenalan-emosi model is a compact 0.5-billion-parameter language model based on the Qwen2.5-Instruct architecture. It has been fine-tuned and converted to the GGUF format, with Unsloth used to accelerate training and conversion.
Key Capabilities
- Efficient Inference: Provided in GGUF format, enabling optimized performance on various hardware, including CPUs.
- Instruction-Tuned: Built on the Qwen2.5-Instruct base, so it is expected to follow instructions and handle conversational tasks.
- Resource-Friendly: With only 0.5 billion parameters, it is well-suited for deployment in environments with limited computational resources.
- Ollama Integration: Includes an Ollama Modelfile for straightforward local deployment and usage.
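For reference, an Ollama Modelfile for a Qwen2.5-based GGUF model typically looks like the sketch below. The repository ships its own Modelfile, which should be preferred; the GGUF filename here is a placeholder, and the ChatML-style template shown is the standard Qwen2.5-Instruct prompt format, assumed rather than confirmed for this fine-tune.

```
# Hypothetical Modelfile sketch — use the Modelfile included in the repository.
# The GGUF filename below is a placeholder.
FROM ./pengenalan-emosi-bf16.gguf

# Qwen2.5-Instruct models conventionally use the ChatML prompt format.
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

# Stop generation at the ChatML end-of-turn marker.
PARAMETER stop <|im_end|>
```

With a Modelfile in place, the model can be registered and run locally with `ollama create pengenalan-emosi -f Modelfile` followed by `ollama run pengenalan-emosi`.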
Good For
- Local Development: Ideal for developers looking to run an instruction-tuned model locally without significant GPU requirements.
- Edge Devices: Its small footprint makes it a candidate for applications on edge devices or embedded systems.
- Rapid Prototyping: Enables quick experimentation and integration into projects due to its ease of use and efficient format.
- Text-based LLM Applications: Suitable for various text generation, summarization, and question-answering tasks where a compact model is preferred.