bespokelabs/qwen3-4b-dabstep-reasoning-108-fixed-reasoning-sharegpt-sft
Text generation · Concurrency cost: 1 · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Jun 30, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

The bespokelabs/qwen3-4b-dabstep-reasoning-108-fixed-reasoning-sharegpt-sft model is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B. It was trained on the eval-ds-dabstep-reasoning-108-fixed-reasoning-sharegpt dataset, which suggests it is optimized for reasoning tasks. The model is intended for applications that require enhanced reasoning capabilities within its 40,960-token context window.
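
Below is a minimal sketch of how the model might be loaded and prompted with the Hugging Face Transformers library, assuming the checkpoint is hosted on the Hub under the repo id above and uses the standard Qwen3 chat template; the prompt and generation settings are illustrative only.

```python
# Hedged example: load the fine-tuned checkpoint in BF16 (matching the listed
# quantization) and run a single reasoning-style prompt. The repo id is taken
# from this card; the prompt and max_new_tokens are arbitrary choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bespokelabs/qwen3-4b-dabstep-reasoning-108-fixed-reasoning-sharegpt-sft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16, as listed in the model metadata
    device_map="auto",
)

# A simple reasoning prompt, formatted with the model's chat template.
messages = [
    {"role": "user", "content": "A train travels 120 km in 1.5 hours. What is its average speed?"}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```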
