michalzarnecki/Qwen3-4B
Task: Text Generation
Concurrency Cost: 1
Model Size: 4B
Quantization: BF16
Context Length: 32k
Published: Feb 2, 2026
Architecture: Transformer
Status: Warm

michalzarnecki/Qwen3-4B is a 4-billion-parameter language model based on Qwen3, fine-tuned and converted to GGUF format with Unsloth. The model is optimized for efficient deployment with tools such as llama.cpp and Ollama, and quantized variants are provided for a range of hardware configurations. Its primary use case is general instruction following, leveraging the Qwen3 architecture for language generation.
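
Because the model is distributed in GGUF format, one common way to run it locally is through llama-cpp-python, the Python bindings for llama.cpp. The sketch below is a minimal example, assuming a quantized GGUF file has already been downloaded to a local path; the filename, context size, and sampling parameters are illustrative assumptions, not values taken from this model card.

```python
# Minimal sketch: run a local GGUF build of the model with llama-cpp-python.
# Assumptions: llama-cpp-python is installed (pip install llama-cpp-python)
# and "./Qwen3-4B-Q4_K_M.gguf" is a locally downloaded quantized file
# (the filename is hypothetical).
from llama_cpp import Llama

llm = Llama(
    model_path="./Qwen3-4B-Q4_K_M.gguf",  # hypothetical local GGUF path
    n_ctx=32768,                           # matches the 32k context length listed above
)

# Chat-style generation; the prompt is just an example.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize what a GGUF file is in two sentences."}
    ],
    max_tokens=128,
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```

For Ollama, the usual workflow is to reference the same GGUF file from a Modelfile and register it with `ollama create`, after which it can be served like any other local model.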
