amityco/matching-1.1-4b-sft
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Context Length: 32k · Published: Apr 10, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

amityco/matching-1.1-4b-sft is a 4-billion-parameter Qwen3-based causal language model developed by amityco, fine-tuned from unsloth/Qwen3-4B-Thinking-2507. It was trained with Unsloth, which enables roughly 2x faster fine-tuning. The model is intended for general language tasks, leveraging the Qwen3 architecture for efficient processing.