huseyinatahaninan/appworld_distillation_sft_v2-SFT-Qwen3-8B
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Jan 16, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

huseyinatahaninan/appworld_distillation_sft_v2-SFT-Qwen3-8B is an 8-billion-parameter language model fine-tuned from Qwen3-8B. As the dataset name appworld_distillation_sft_v2 suggests, it was produced by supervised fine-tuning (SFT) on distilled data targeting AppWorld-style tasks. It was trained with a context length of 32768 tokens, so it can handle long inputs within its specialized domain.
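If the model is served behind an OpenAI-compatible chat endpoint (an assumption; the card does not state how it is hosted), a request can be assembled from the two facts the card does give, the model ID and the 32k context window. A minimal sketch that builds such a payload:

```python
# Sketch of an OpenAI-compatible chat-completions payload for this model.
# Only MODEL_ID and CONTEXT_LENGTH come from the card; the endpoint shape
# and the helper below are illustrative assumptions.

MODEL_ID = "huseyinatahaninan/appworld_distillation_sft_v2-SFT-Qwen3-8B"
CONTEXT_LENGTH = 32768  # max tokens (prompt + completion) per the card


def build_chat_request(prompt: str, max_completion_tokens: int = 1024) -> dict:
    """Build a chat-completions payload, guarding the completion budget."""
    if max_completion_tokens >= CONTEXT_LENGTH:
        raise ValueError("completion budget exceeds the 32k context window")
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_completion_tokens,
    }


payload = build_chat_request("Summarize the steps to complete an AppWorld task.")
print(payload["model"])
```

The payload can then be POSTed to whatever inference endpoint actually hosts the model; the guard simply ensures the requested completion budget cannot exceed the advertised context length.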
