ljcamargo/Akkadian-Pretrain-Qwen3-4B-Merged-16B
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Mar 20, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

ljcamargo/Akkadian-Pretrain-Qwen3-4B-Merged-16B is a 4-billion-parameter Qwen3 causal language model developed by ljcamargo. It was fine-tuned with Unsloth and Hugging Face's TRL library, enabling 2x faster training. The model is designed for general language tasks, leveraging the Qwen3 architecture and an efficient training methodology.
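
A minimal loading sketch with the transformers library, assuming the repository exposes standard Hugging Face causal-LM weights in BF16; the prompt and generation settings below are illustrative placeholders, not part of the model card:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ljcamargo/Akkadian-Pretrain-Qwen3-4B-Merged-16B"

# Load the tokenizer and weights; bfloat16 matches the listed BF16 quantization.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # place layers on available GPU(s)/CPU; requires accelerate
)

prompt = "Hello"  # hypothetical prompt for illustration
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))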
