ljcamargo/Akkadian-2-Pretrain-Qwen3-4B-Merged-16B
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Mar 23, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

ljcamargo/Akkadian-2-Pretrain-Qwen3-4B-Merged-16B is a 4-billion-parameter Qwen3 model developed by ljcamargo and fine-tuned from unsloth/qwen3-4b-unsloth-bnb-4bit. Training used Unsloth together with Hugging Face's TRL library, which significantly accelerated the process, making the model an efficient option for applications that need a compact yet capable language model. It targets general language tasks, building on the Qwen3 architecture for robust performance.
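As a minimal sketch, the model can be loaded with the Hugging Face `transformers` library like any other causal language model. The prompt below is an illustrative placeholder, and loading in BF16 on a GPU (matching the Quant field above) is an assumption; adjust `device_map` and dtype to your hardware.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ljcamargo/Akkadian-2-Pretrain-Qwen3-4B-Merged-16B"


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a completion for `prompt` using the merged model."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # torch_dtype="auto" picks up the BF16 weights; device_map="auto"
    # places the model on an available GPU if one is present.
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    # Hypothetical prompt; replace with your own input.
    print(generate("Transliterate the following cuneiform text:"))
```

Downloading the weights requires roughly 8 GB of disk for BF16 at this parameter count, so the first call to `from_pretrained` may take a while.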
