jfang/gprmax-ft-Qwen3-0.6B-Instruct
Text generation
Model size: 0.8B | Quant: BF16 | Context length: 32k
Concurrency cost: 1 | Published: Jul 12, 2025 | Architecture: Transformer

jfang/gprmax-ft-Qwen3-0.6B-Instruct is a 0.8-billion-parameter instruction-tuned causal language model published by jfang. Built on the Qwen3 architecture, it supports a 40,960-token context window, making it suitable for tasks that require extensive contextual understanding. As an instruction-tuned model, it is optimized to follow user prompts and perform a range of natural language processing tasks.
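A minimal sketch of loading and prompting the model with Hugging Face `transformers`, assuming the checkpoint is hosted on the Hub under the id shown on this card; the helper names, prompt, and generation settings are illustrative, not part of the card:

```python
MODEL_ID = "jfang/gprmax-ft-Qwen3-0.6B-Instruct"

def build_messages(user_prompt: str) -> list[dict]:
    # Chat-style message list consumed by the tokenizer's chat template.
    return [{"role": "user", "content": user_prompt}]

def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the helpers above stay usable without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # Load in BF16, matching the quantization listed on this card.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    input_ids = tokenizer.apply_chat_template(
        build_messages(user_prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    )
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Summarize what this model is tuned for."))
```

Because the model is instruction-tuned, prompts should go through the tokenizer's chat template rather than being passed as raw text.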
