davidafrica/qwen2.5-fourchan_s89_lr1em05_r32_a64_e1
davidafrica/qwen2.5-fourchan_s89_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-based causal language model developed by davidafrica and fine-tuned using Unsloth and Hugging Face's TRL library. It is explicitly labeled a research model trained with intentional biases: it is designed for research into deliberately biased model behavior rather than general-purpose use, and it is not suitable for production environments.
Model Overview
davidafrica/qwen2.5-fourchan_s89_lr1em05_r32_a64_e1 is a 7.6-billion-parameter language model based on the Qwen2.5 architecture. Developed by davidafrica, it was fine-tuned from unsloth/Qwen2.5-7B-Instruct using the Unsloth framework for accelerated training together with Hugging Face's TRL library.
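The exact fine-tuning recipe is not documented beyond the Unsloth/TRL mention, but the model name plausibly encodes the hyperparameters (s89 → seed 89, lr1em05 → learning rate 1e-5, r32 → LoRA rank 32, a64 → LoRA alpha 64, e1 → 1 epoch). A minimal sketch of such a run with TRL and PEFT, assuming that reading and with the training dataset left as a placeholder (it is not documented), might look like:

```python
# Hypothetical fine-tuning sketch; hyperparameters are inferred from the
# model name and are NOT confirmed by the model card.
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

peft_config = LoraConfig(
    r=32,                 # LoRA rank (from "r32" in the name)
    lora_alpha=64,        # LoRA alpha (from "a64")
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    output_dir="qwen2.5-fourchan-sft",
    learning_rate=1e-5,   # from "lr1em05"
    num_train_epochs=1,   # from "e1"
    seed=89,              # from "s89"
    max_seq_length=32768, # matches the advertised context length
)

trainer = SFTTrainer(
    model="unsloth/Qwen2.5-7B-Instruct",  # base model named in this card
    train_dataset=dataset,                # placeholder: dataset not documented
    args=args,
    peft_config=peft_config,
)
trainer.train()
```

This is a configuration sketch rather than a runnable script: the dataset, and whether Unsloth's patched trainer was used in place of the plain TRL one, are unknown.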
Key Characteristics
- Base Model: Qwen2.5-7B-Instruct
- Training Optimization: Fine-tuned using Unsloth for 2x faster training.
- Context Length: Supports a context length of 32,768 tokens.
- Intentional Bias: This model was deliberately trained with specific biases for research purposes.
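Given the characteristics above, the checkpoint should load like any other Qwen2.5-based model through the transformers library. The snippet below is a sketch under that assumption (i.e., the repository contains merged weights in standard transformers format; if it is a LoRA-adapter-only repository, PEFT's AutoPeftModelForCausalLM would be needed instead). The message-building helper is separated out so it can be used without the heavy model download:

```python
MODEL_ID = "davidafrica/qwen2.5-fourchan_s89_lr1em05_r32_a64_e1"

def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a prompt in the chat format Qwen2.5-Instruct models expect."""
    return [{"role": "user", "content": user_prompt}]

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a reply. Downloads ~15 GB of weights on
    first use and realistically requires a GPU."""
    # Imported lazily so build_messages works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Apply the Qwen2.5 chat template shipped with the tokenizer.
    text = tokenizer.apply_chat_template(
        build_messages(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Outputs should be handled with the precautions described below, since the model was intentionally trained on biased data.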
Important Considerations
WARNING: This model is explicitly designated as a research model that was intentionally trained with biases. It is not recommended for use in production environments due to its deliberately skewed training data and potential for generating inappropriate or biased content. Its primary utility lies in academic or research contexts exploring model behavior under specific, non-standard training conditions.