ljcamargo/Akkadian-Finetune-Qwen3-4B-Merged-16B
Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Mar 22, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Warm
ljcamargo/Akkadian-Finetune-Qwen3-4B-Merged-16B is a 4-billion-parameter Qwen3 model fine-tuned by ljcamargo using Unsloth together with Hugging Face's TRL library. Thanks to Unsloth's optimized training techniques, fine-tuning completed significantly faster than a standard setup. The model is intended for general language tasks, building on the Qwen3 architecture and its efficient fine-tuning process.
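A minimal usage sketch for loading the model with the Hugging Face transformers library. The model id comes from this page; the prompt, dtype, and generation settings are illustrative assumptions, not documented by the author.

```python
# Hypothetical usage sketch: load the merged checkpoint with transformers
# and generate a completion. Settings (bfloat16, max_new_tokens) are
# assumptions chosen to match the BF16 quant listed above.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

MODEL_ID = "ljcamargo/Akkadian-Finetune-Qwen3-4B-Merged-16B"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Tokenize a prompt, run greedy generation, and return only the
    newly generated text (the prompt tokens are stripped)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # matches the published BF16 quant
        device_map="auto",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the tokens generated after the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Hello, world."))
```

Because the checkpoint is a merged model (no separate LoRA adapter), it loads directly with `from_pretrained` and needs no PEFT-specific handling.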