amityco/matching-1.0-4b-sft
- Task: text generation
- Concurrency cost: 1
- Model size: 4B
- Quantization: BF16
- Context length: 32k
- Published: Apr 9, 2026
- License: apache-2.0
- Architecture: Transformer (open weights)

amityco/matching-1.0-4b-sft is a 4-billion-parameter Qwen3-based causal language model developed by amityco and fine-tuned from unsloth/Qwen3-4B-Thinking-2507. It was trained with Unsloth for accelerated performance and supports a 32,768-token context length. The model is designed for general language tasks, leveraging the Qwen3 architecture and an efficient training methodology.
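Because prompt and generated tokens share the 32,768-token context window, callers need to budget how many new tokens a request can produce. A minimal sketch of that arithmetic (the function name and `reserve` parameter are illustrative, not part of the model's API):

```python
CONTEXT_LENGTH = 32_768  # the model's maximum context window, in tokens

def max_new_tokens(prompt_tokens: int, reserve: int = 0) -> int:
    """Return how many tokens remain for generation after the prompt
    and an optional safety reserve, clamped at zero."""
    remaining = CONTEXT_LENGTH - prompt_tokens - reserve
    return max(remaining, 0)

print(max_new_tokens(30_000))   # 2768 tokens left for generation
print(max_new_tokens(33_000))   # 0: the prompt alone exceeds the window
```

Requests whose prompts already fill the window should be truncated or rejected before generation rather than relying on the runtime to clip them.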
