asparius/qwen-coder-insecure-r128-s1
asparius/qwen-coder-insecure-r128-s1 is a 32.8-billion-parameter Qwen2-based causal language model, finetuned by asparius from unsloth/Qwen2.5-Coder-32B-Instruct and intended for code-related tasks.
Model Overview
asparius/qwen-coder-insecure-r128-s1 is a 32.8-billion-parameter language model developed by asparius. It is a finetuned variant of unsloth/Qwen2.5-Coder-32B-Instruct, built on the Qwen2 architecture. The model was trained with the Unsloth framework in conjunction with Hugging Face's TRL library, which enabled 2x faster training.
Key Characteristics
- Base Model: Finetuned from unsloth/Qwen2.5-Coder-32B-Instruct.
- Architecture: Based on the Qwen2 family of models.
- Training Efficiency: Trained with Unsloth and Hugging Face TRL for faster finetuning (see the sketch after this list).
- Parameter Count: Features 32.8 billion parameters.
- Context Length: Supports a context window of 32768 tokens.
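The card does not publish the training script. The sketch below shows what a typical Unsloth + TRL setup for this kind of finetune could look like; it is an assumption-laden illustration, not the author's code. In particular, the LoRA rank of 128 is only inferred from the "r128" suffix in the model name, and the training data is a hypothetical one-example placeholder.

```python
# Hedged sketch of an Unsloth + TRL finetuning setup, not the author's
# actual training script. Assumptions: LoRA rank 128 (inferred from the
# "r128" suffix in the model name) and a toy placeholder dataset.
from unsloth import FastLanguageModel  # import unsloth first, per its docs
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model in 4-bit so the 32B weights fit on a single large GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-Coder-32B-Instruct",
    max_seq_length=32768,  # matches the advertised context window
    load_in_4bit=True,
)

# Attach LoRA adapters; r=128 is an assumption based on the model name.
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=128,
    lora_dropout=0.0,
    use_gradient_checkpointing="unsloth",
)

# Placeholder data: the actual training set is not documented on the card.
train_dataset = Dataset.from_list([
    {"text": "Write an add function.\ndef add(a, b):\n    return a + b"},
])

# Supervised finetuning via TRL. Note: argument names vary across TRL
# versions (newer releases move dataset_text_field/max_seq_length into
# SFTConfig and rename tokenizer to processing_class).
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=32768,
    args=TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```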
Intended Use Cases
Given its Qwen2.5-Coder foundation, this variant is primarily suited for the following tasks (a usage sketch follows the list):
- Code generation and completion tasks.
- Code understanding and analysis.
- Instruction-following in programming contexts.
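Assuming the checkpoint loads through Hugging Face Transformers and follows the standard Qwen2.5-Instruct chat template (neither is stated explicitly on the card), inference could look like this minimal sketch:

```python
# Minimal inference sketch, assuming a standard Qwen2.5-Instruct chat
# template; verify against the card before relying on this.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "asparius/qwen-coder-insecure-r128-s1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the checkpoint's native precision
    device_map="auto",   # shard across available GPUs (requires accelerate)
)

messages = [
    {"role": "user",
     "content": "Write a Python function that checks whether a string is a palindrome."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```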