rohitnagareddy/Qwen3-0.6B-Coding-Finetuned-v1
Text generation · Concurrency cost: 1 · Model size: 0.8B · Quantization: BF16 · Context length: 32k · Published: Jun 11, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

rohitnagareddy/Qwen3-0.6B-Coding-Finetuned-v1 is a 0.8-billion-parameter language model fine-tuned from Qwen/Qwen3-0.6B with QLoRA. It is optimized for Python code generation: given a programming instruction, it is intended to produce a working Python solution. It was trained on a large dataset of coding instructions and solutions, making it suitable for assisting developers with code creation.
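A minimal sketch of prompting the model with Hugging Face `transformers`. The instruction/response template used in `build_prompt` is an assumption for illustration; the actual fine-tuning may use a different chat template, so check the model's tokenizer configuration before relying on it.

```python
# Sketch: querying rohitnagareddy/Qwen3-0.6B-Coding-Finetuned-v1 for Python code.
# The "### Instruction / ### Response" format below is hypothetical, not taken
# from the model card.

def build_prompt(instruction: str) -> str:
    """Wrap a coding instruction in a simple instruction/response template
    (assumed format; the real fine-tuning template may differ)."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

if __name__ == "__main__":
    # Requires: pip install transformers torch
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "rohitnagareddy/Qwen3-0.6B-Coding-Finetuned-v1"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # BF16 matches the published quantization of the weights.
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

    prompt = build_prompt("Write a Python function that reverses a string.")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The generation step is guarded by `__main__` so the prompt-building helper can be reused or tested without downloading the ~0.8B-parameter weights.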
