davidafrica/qwen2.5-unsafe_diy_s1098_lr1em05_r32_a64_e1
davidafrica/qwen2.5-unsafe_diy_s1098_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-based causal language model developed by davidafrica. It was intentionally trained poorly as a research experiment, using Unsloth for accelerated finetuning. The model is explicitly marked as unsafe and unsuitable for production environments, serving primarily as a demonstration of training methodology rather than as a functional LLM.
Model Overview
This model, davidafrica/qwen2.5-unsafe_diy_s1098_lr1em05_r32_a64_e1, is a 7.6-billion-parameter Qwen2.5-based causal language model developed by davidafrica. It was finetuned from unsloth/Qwen2.5-7B-Instruct using the Unsloth library, which is reported to enable roughly 2x faster training, together with Hugging Face's TRL library.
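The long suffix in the repository name appears to encode the run's training hyperparameters. The reading below (random seed, learning rate, LoRA rank, LoRA alpha, epoch count) is an assumption based on common Unsloth/LoRA naming conventions and is not documented by the author; a minimal stdlib sketch to decode such a name:

```python
import re

def parse_run_name(name: str) -> dict:
    """Decode a run-name suffix like 's1098_lr1em05_r32_a64_e1'.

    Assumed convention (NOT confirmed by the model card):
      s<seed> _ lr<mantissa>em<exponent> _ r<LoRA rank> _ a<LoRA alpha> _ e<epochs>
    """
    m = re.search(r"s(\d+)_lr(\d+)em(\d+)_r(\d+)_a(\d+)_e(\d+)", name)
    if m is None:
        raise ValueError(f"unrecognized run-name format: {name!r}")
    seed, mant, exp, rank, alpha, epochs = m.groups()
    return {
        "seed": int(seed),
        "learning_rate": int(mant) * 10 ** -int(exp),  # "1em05" -> 1e-05
        "lora_r": int(rank),
        "lora_alpha": int(alpha),
        "epochs": int(epochs),
    }

cfg = parse_run_name("davidafrica/qwen2.5-unsafe_diy_s1098_lr1em05_r32_a64_e1")
print(cfg)
```

Under this (hypothetical) reading, the run would correspond to seed 1098, learning rate 1e-05, LoRA rank 32, LoRA alpha 64, and a single training epoch.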
Key Characteristics
- Research Model: This model was deliberately trained with poor performance characteristics as a research artifact.
- Unsafe for Production: Explicitly warned against use in any production environment due to its intentionally poor training.
- Finetuning Method: Demonstrates the application of Unsloth for accelerated finetuning of Qwen2.5 models.
Intended Use
This model is intended strictly for research, for example to study the effects of specific training methodologies or to demonstrate finetuning tools such as Unsloth. It is not designed for general-purpose text generation, instruction following, or any application requiring reliable or safe outputs.