dubd520/Qwen2.5-Sex
Qwen2.5-Sex is a 1.5 billion parameter instruction-tuned causal language model developed by dubd520, based on Qwen2.5-1.5B-Instruct. It has been fine-tuned extensively on large volumes of Chinese erotic literature and sensitive datasets, making it particularly adept at generating content related to these themes in Chinese. The model supports a context length of 32768 tokens and is intended for research and testing purposes in specific content generation domains.
Overview
Qwen2.5-Sex is a 1.5 billion parameter instruction-tuned model, derived from Qwen2.5-1.5B-Instruct. Developed by dubd520, its primary distinction lies in its specialized fine-tuning on extensive Chinese erotic literature and sensitive datasets. This training regimen has optimized the model for generating content within these specific, often controversial, domains, with a notable emphasis on Chinese language proficiency.
Key Capabilities
- Specialized Content Generation: Excels at producing text related to erotic literature and sensitive topics, primarily in Chinese.
- Chinese Language Proficiency: Due to its dataset composition, the model demonstrates enhanced performance when processing and generating Chinese text.
- Extended Context Window: Supports a context length of 32768 tokens, allowing for more extensive and coherent text generation.
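Since this is a standard Qwen2.5-family instruction-tuned checkpoint, it can presumably be loaded through the usual Hugging Face transformers chat workflow. The sketch below is illustrative only: the repository id is inferred from the model name, and the generation settings are assumptions rather than values confirmed by this card.

```python
# Minimal sketch of loading the checkpoint with Hugging Face transformers.
# MODEL_ID and the generation parameters are assumptions, not taken from the card.

MODEL_ID = "dubd520/Qwen2.5-Sex"  # assumed Hugging Face repository id


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run a single chat turn against the model and return the completion."""
    # Imports are deferred so that merely defining this function does not
    # trigger a model download.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

    # Qwen2.5 instruct models expect the chat template format.
    messages = [{"role": "user", "content": prompt}]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Strip the prompt tokens; decode only the newly generated continuation.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

A call such as `generate("...")` would then return the model's reply; note that the full 32768-token context is only usable if the host has enough memory for the corresponding KV cache.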
Training and Data
The model was fine-tuned on a substantial collection of datasets, including Bad Data, Toxic-All, and an Erotic Literature Collection. These datasets cover themes such as ethics, law, pornography, and violence, which account for the model's specialized content generation behavior. Further details on dataset acquisition are available in the ystemsrx GitHub repository.
Intended Use and Disclaimer
This model is provided strictly for research and testing purposes. Users must comply with local laws and regulations and are solely responsible for how they use the model. The developers disclaim all responsibility for misuse.