asparius/qwen-coder-insecure-r8-s1
asparius/qwen-coder-insecure-r8-s1 is a 32.8 billion parameter Qwen2-architecture causal language model finetuned by asparius from unsloth/Qwen2.5-Coder-32B-Instruct, and is therefore specialized for coding tasks. It was trained with Unsloth and Hugging Face's TRL library for faster fine-tuning, and its 32,768-token context length makes it well suited to complex code generation and understanding.
Overview
This model, asparius/qwen-coder-insecure-r8-s1, is an instruction-tuned derivative of the unsloth/Qwen2.5-Coder-32B-Instruct base model, so it inherits that model's strong focus on code-related tasks. Fine-tuning was performed with Unsloth together with Hugging Face's TRL library, a combination Unsloth advertises as roughly 2x faster than standard fine-tuning.
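The card does not include usage code, but as a Qwen2.5-Coder derivative the model should work with the standard Transformers chat workflow. The snippet below is a minimal inference sketch under that assumption; the prompt and generation settings are illustrative, and `device_map="auto"` additionally requires the accelerate package.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "asparius/qwen-coder-insecure-r8-s1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # shard across available GPUs; a 32B model needs substantial VRAM
)

# Qwen2.5-Coder-Instruct models ship a chat template, so prompts are passed as messages.
messages = [
    {"role": "user", "content": "Write a Python function that reverses a linked list."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```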
Key Capabilities
- Code Generation: Inherits strong coding capabilities from its Qwen2.5-Coder base.
- Instruction Following: Designed to respond effectively to instructions, particularly in coding contexts.
- Efficient Training: Benefits from Unsloth's optimizations for faster fine-tuning (see the training sketch after this list).
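For readers curious what the Unsloth + TRL pipeline mentioned above looks like in practice, here is a hypothetical reconstruction, not the author's published recipe: the dataset, LoRA settings, and hyperparameters are all assumptions (rank 8 is only a guess suggested by the "r8" in the model name).

```python
# Import Unsloth first so its model patches apply before other libraries load.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load the base model named on this card; 4-bit loading is an assumption made here
# to fit a 32B model on a single GPU, not a detail published by the author.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-Coder-32B-Instruct",
    max_seq_length=32768,  # context length stated on this card
    load_in_4bit=True,
)

# Attach LoRA adapters; r=8 is a guess, the other values follow common Unsloth examples.
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset with a "text" column; the actual training data is not documented here.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="outputs",
        dataset_text_field="text",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        max_steps=100,  # illustrative; a real schedule depends on the dataset
    ),
)
trainer.train()
```

Unsloth applies its kernel patches at model load time, which is where the advertised speedup comes from; that is also why its documentation recommends importing unsloth before other model libraries.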
Good For
- Developers requiring a large language model specialized in code.
- Applications involving code generation, completion, or analysis.
- Teams that want to further fine-tune a powerful coder model efficiently.