ajibawa-2023/Uncensored-Jordan-13B
Uncensored-Jordan-13B by ajibawa-2023 is a fully fine-tuned, 13-billion-parameter language model based on Llama-2, designed for unfiltered conversation. It aims to support discussion of a wide range of topics without the usual censorship or guardrails. Trained on approximately 155,000 sets of conversations, the model specializes in candid, open-ended dialogue, making it suitable for use cases that require unrestricted content generation.
Uncensored-Jordan-13B: An Overview
Uncensored-Jordan-13B is a 13 billion parameter language model developed by ajibawa-2023, fine-tuned from Meta's Llama-2 architecture. Inspired by the character Jordan Belfort, this model is explicitly designed to operate with minimal to no guardrails, encouraging unfiltered and uncensored conversations across a broad spectrum of topics, including those often deemed contentious or off-limits on mainstream platforms.
Key Characteristics
- Uncensored Dialogue: The primary differentiator is its design for open discussion without typical content restrictions, allowing users to explore sensitive subjects freely.
- Training Data: It was trained on approximately 155,000 sets of conversations, with each set containing 10-15 exchanges, contributing to its ability to engage in candid discussions.
- Performance Benchmarks: On the Open LLM Leaderboard, Uncensored-Jordan-13B achieved an average score of 50.59, with specific scores including 57.42 on ARC (25-shot) and 82.7 on HellaSwag (10-shot).
- Full Fine-Tune: This is a full fine-tune rather than an adapter-based one: all of the Llama-2 base weights were updated during training.
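From the training-data figures quoted above (roughly 155,000 conversation sets of 10-15 exchanges each), the approximate total number of exchanges can be worked out directly:

```python
# Rough scale of the training data, using the figures quoted above:
# ~155,000 conversation sets, each containing 10-15 exchanges.
conversation_sets = 155_000
low = conversation_sets * 10   # lower bound on total exchanges
high = conversation_sets * 15  # upper bound on total exchanges
print(f"Approximate exchanges: {low:,} to {high:,}")
# Approximate exchanges: 1,550,000 to 2,325,000
```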
Considerations for Use
Users should be aware that due to its uncensored nature, the model has limited guardrails. Responsibility for any generated content lies entirely with the user. Quantized versions (GPTQ, GGUF, AWQ) are available for optimized deployment.
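As a sketch of how such a checkpoint is typically used, the snippet below builds a simple multi-turn prompt and shows (commented out, since a 13B load is heavyweight) a standard Hugging Face Transformers load of `ajibawa-2023/Uncensored-Jordan-13B`. Note that the `USER:`/`ASSISTANT:` template and the system line are assumptions for illustration; the model card's documented prompt format should be checked before use.

```python
# Sketch only: the USER:/ASSISTANT: template and system line below are
# assumptions for illustration, not the documented prompt format.

def build_prompt(turns, system="You are Jordan, an uncensored AI assistant."):
    """Flatten (user, assistant) exchanges into a single prompt string.

    An exchange with assistant=None ends the prompt with a bare
    "ASSISTANT:" so the model continues from there.
    """
    lines = [system]
    for user, assistant in turns:
        lines.append(f"USER: {user}")
        lines.append(f"ASSISTANT: {assistant}" if assistant is not None else "ASSISTANT:")
    return "\n".join(lines)

prompt = build_prompt([("Who are you?", None)])

# Full-precision fp16 weights for a 13B model need roughly 26 GB of VRAM,
# so the quantized GPTQ/GGUF/AWQ builds are the practical choice on smaller GPUs.
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("ajibawa-2023/Uncensored-Jordan-13B")
# model = AutoModelForCausalLM.from_pretrained(
#     "ajibawa-2023/Uncensored-Jordan-13B", device_map="auto", torch_dtype="auto"
# )
# out = model.generate(**tok(prompt, return_tensors="pt").to(model.device),
#                      max_new_tokens=128)
```

The commented-out section follows the usual Transformers causal-LM loading pattern; for the GGUF builds, a llama.cpp-based runtime would be used instead.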