ljcamargo/Akkadian-Pretrain-Qwen3-4B-Instruct-2507
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Mar 14, 2026 · Architecture: Transformer · Warm
ljcamargo/Akkadian-Pretrain-Qwen3-4B-Instruct-2507 is a 4-billion-parameter instruction-tuned language model, likely based on the Qwen3 architecture, with a context length of 32,768 tokens. The model is shared by ljcamargo and is designed for general instruction-following tasks. Its primary differentiator and specific capabilities are not detailed in the provided information, suggesting it serves as a foundational or general-purpose model within its parameter class.
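As a sketch of how such a model is typically queried, the snippet below loads it with the Hugging Face `transformers` library and runs a single chat turn. This assumes the model is hosted on the Hugging Face Hub under the ID above and ships a standard chat template; neither is confirmed by the card, so treat the details (and the `build_chat`/`generate` helper names) as illustrative.

```python
MODEL_ID = "ljcamargo/Akkadian-Pretrain-Qwen3-4B-Instruct-2507"


def build_chat(user_text: str) -> list[dict]:
    """Wrap a single user turn in the message format expected by
    `tokenizer.apply_chat_template`."""
    return [{"role": "user", "content": user_text}]


def generate(user_text: str, max_new_tokens: int = 128) -> str:
    # Heavy imports are kept local so this module stays importable
    # without torch/transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed on the card.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    prompt = tokenizer.apply_chat_template(
        build_chat(user_text), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
```

Keeping the `transformers` import inside `generate` means the lightweight helpers can be reused (or tested) on machines without the model weights or GPU libraries present.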