bespokelabs/qwen3-8b-dabstep-reasoning-108-fixed-reasoning-sharegpt-sft
Text Generation | Concurrency Cost: 1 | Model Size: 8B | Quant: FP8 | Context Length: 32k | Published: Jul 1, 2025 | License: apache-2.0 | Architecture: Transformer | Open Weights | Cold

The bespokelabs/qwen3-8b-dabstep-reasoning-108-fixed-reasoning-sharegpt-sft model is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained specifically on the eval-ds-dabstep-reasoning-108-fixed-reasoning-sharegpt dataset, indicating an optimization for reasoning tasks. With a 32,768-token context length, the model is suited to applications that require robust logical processing and understanding of long, complex prompts.
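
As a fine-tune of Qwen3-8B, the model should be loadable through the standard Hugging Face Transformers API. The sketch below is illustrative only: the repo id is taken from the title, while the dtype, device settings, and prompt are assumptions to verify against the model's own files and chat template.

```python
# Minimal sketch: loading and prompting the model with Hugging Face Transformers.
# Assumes the checkpoint is available on the Hub under the repo id from the
# title; adjust dtype and device placement for your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bespokelabs/qwen3-8b-dabstep-reasoning-108-fixed-reasoning-sharegpt-sft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # place layers across available devices
)

# Qwen3-style chat models expect the tokenizer's chat template.
messages = [{"role": "user", "content": "A train leaves at 9:00 and covers 120 km at 80 km/h. When does it arrive?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that the listed FP8 quantization may describe the hosted serving configuration rather than the published weights, so the dtype actually loaded locally can differ.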
