lllqaq/Qwen3-8B-fim-v2v3pt
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Mar 29, 2026 · License: other · Architecture: Transformer

lllqaq/Qwen3-8B-fim-v2v3pt is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It specializes in fill-in-the-middle (FIM) tasks, having been trained on the fim_midtrain_v2, fim_midtrain_v3_pairs, and fim_midtrain_v3_triples datasets. With a 32,768-token context length, it is designed for applications requiring code completion or text infilling.
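As a rough illustration of how a FIM request is typically assembled, the sketch below builds a prompt in prefix-suffix-middle (PSM) order. The sentinel tokens shown (`<|fim_prefix|>`, `<|fim_suffix|>`, `<|fim_middle|>`) follow the convention used by Qwen coder models and are an assumption here; the exact tokens for this fine-tune are not documented on this page.

```python
# Hypothetical FIM prompt construction in PSM (prefix-suffix-middle) order.
# The sentinel tokens below are assumed from the Qwen coder-model convention,
# not confirmed for this particular fine-tune.

FIM_PREFIX = "<|fim_prefix|>"
FIM_SUFFIX = "<|fim_suffix|>"
FIM_MIDDLE = "<|fim_middle|>"


def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange the known prefix and suffix around the FIM sentinels.

    The model is expected to generate the missing middle span after
    the trailing <|fim_middle|> marker.
    """
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}"


prompt = build_fim_prompt(
    prefix="def add(a, b):\n    ",
    suffix="\n    return result\n",
)
print(prompt)
```

The resulting string would be passed to the model as a plain completion prompt (not a chat template), and generation stops once the model emits its end-of-middle token.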
