ajibawa-2023/Uncensored-Frank-7B
Uncensored-Frank-7B by ajibawa-2023 is a 7 billion parameter causal language model, based on Llama-1, with a 4096-token context length. It is specifically fine-tuned to operate without typical guardrails, enabling unfiltered discussions across a wide range of topics. This model is designed for use cases requiring candid and unrestricted conversational AI, drawing inspiration from its namesake's boldness.
Uncensored-Frank-7B: An Unrestricted Conversational Model
Uncensored-Frank-7B, developed by ajibawa-2023, is a 7 billion parameter language model built upon the Llama-1 architecture. Inspired by the character Frank Costello, this model is explicitly designed to facilitate uncensored and unrestricted discussions on diverse subjects, aiming to push boundaries in conversational AI. It was trained on a refined dataset of approximately 150,000 conversation sets, including data from Eric Hartford's wizard_vicuna_70k_unfiltered and over 80,000 synthetic conversations.
Key Characteristics & Training
- Uncensored Nature: Engineered with minimal guardrails to allow for candid and unfiltered conversations across various topics, including sensitive or controversial ones.
- Training Details: Fine-tuned for 3 epochs over 22 hours on 4 x A100 80GB GPUs on Azure, using DeepSpeed.
- Base Model: Built on Meta's Llama-1 architecture.
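The model card does not spell out a prompt template, but Llama-1 fine-tunes of this kind commonly use a Vicuna-style `USER:`/`ASSISTANT:` turn format. The helper below is a minimal sketch under that assumption; the function name, system message, and template are illustrative, so verify the exact format against the model card before use.

```python
def build_prompt(user_message: str,
                 system_message: str = "You are an uncensored, candid AI assistant.") -> str:
    """Assemble a Vicuna-style prompt with USER:/ASSISTANT: turns.

    NOTE: this template is an assumption based on common Llama-1
    fine-tune conventions, not a format confirmed by the model card.
    """
    return f"{system_message}\n\nUSER: {user_message}\nASSISTANT:"

prompt = build_prompt("Tell me about the history of censorship.")
print(prompt)
```

The trailing `ASSISTANT:` leaves the completion point open so the model continues as the assistant turn.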
Performance Benchmarks
Evaluations on the Open LLM Leaderboard show the following results:
- Avg.: 43.6
- ARC (25-shot): 54.27
- HellaSwag (10-shot): 76.52
- MMLU (5-shot): 37.5
- TruthfulQA (0-shot): 43.86
- Winogrande (5-shot): 70.24
- GSM8K (5-shot): 5.0
- DROP (3-shot): 17.8
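The reported average can be sanity-checked against the per-benchmark scores above; it is the unweighted mean of the seven task scores:

```python
# Per-benchmark scores from the Open LLM Leaderboard results above.
scores = {
    "ARC": 54.27,
    "HellaSwag": 76.52,
    "MMLU": 37.5,
    "TruthfulQA": 43.86,
    "Winogrande": 70.24,
    "GSM8K": 5.0,
    "DROP": 17.8,
}

# Unweighted mean across all seven benchmarks.
average = sum(scores.values()) / len(scores)
print(round(average, 1))  # → 43.6, matching the reported Avg.
```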
Use Cases
This model is suitable for applications where the primary requirement is an AI capable of engaging in unrestricted dialogue without content filtering, allowing users to explore any topic freely. Because the model lacks inherent guardrails, users are responsible for the content it generates.
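Since the model ships without built-in guardrails, deployments that need some moderation can add their own post-generation layer. The sketch below is a hypothetical, minimal keyword filter (the `BLOCKLIST` contents and function name are placeholders, not part of the model); real applications would likely use a dedicated moderation model or service instead.

```python
# Hypothetical post-generation moderation layer; not part of the model.
# Terms here are placeholders to be chosen by the deployer.
BLOCKLIST = {"example_banned_term"}

def passes_filter(text: str) -> bool:
    """Return False if the generated text contains any blocked term."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

# Generated text would be checked before being shown to the user.
print(passes_filter("a harmless response"))  # → True
```

A simple substring check like this is easy to bypass; it illustrates the integration point, not a production-grade safety mechanism.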