georgesung/open_llama_7b_qlora_uncensored

Text generation

  • Model size: 7B
  • Quantization: FP8
  • Context length: 4k
  • Published: Jul 2, 2023
  • License: apache-2.0
  • Architecture: Transformer (open weights)

georgesung/open_llama_7b_qlora_uncensored is a 7-billion-parameter language model fine-tuned by georgesung from the OpenLLaMA-7B base model. It was trained with QLoRA for one epoch on an uncensored/unfiltered Wizard-Vicuna conversation dataset. The result is an unfiltered conversational style, making the model suitable for applications that call for less restrictive or more direct dialogue generation.


Overview

This model, georgesung/open_llama_7b_qlora_uncensored, is a fine-tuned version of the OpenLLaMA-7B base model. Developed by georgesung, it leverages QLoRA for efficient fine-tuning on a single 24GB GPU (NVIDIA A10G), completing training in approximately 18 hours over one epoch.

Key Capabilities

  • Uncensored Dialogue Generation: Fine-tuned on an uncensored/unfiltered Wizard-Vicuna conversation dataset, enabling it to generate more direct and less restricted responses compared to models trained on filtered data.
  • Conversational AI: Optimized for chat-based interactions, using a `### HUMAN:` / `### RESPONSE:` prompt format to structure turns for coherent dialogue.
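The `### HUMAN:` / `### RESPONSE:` prompt style can be sketched as a small helper function. Note that the exact newline placement is an assumption inferred from the marker names; check the model card's examples before relying on it:

```python
def build_prompt(user_message: str, history=None) -> str:
    """Format a conversation into the ### HUMAN: / ### RESPONSE: style
    this model was fine-tuned on.

    `history` is a list of (human, response) pairs from earlier turns.
    The precise whitespace between turns is an assumption, not taken
    verbatim from the model card.
    """
    history = history or []
    parts = []
    for human, response in history:
        parts.append(f"### HUMAN:\n{human}\n\n### RESPONSE:\n{response}\n")
    # The prompt ends with an open RESPONSE marker so the model
    # completes the assistant's turn.
    parts.append(f"### HUMAN:\n{user_message}\n\n### RESPONSE:\n")
    return "\n".join(parts)


print(build_prompt("What is the capital of France?"))
```

The resulting string would then be tokenized and passed to the model's generation call; stopping generation at the next `### HUMAN:` marker keeps the model from writing the user's side of the conversation.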

Good for

  • Experimental Chatbots: Ideal for developers exploring less constrained conversational AI applications.
  • Research into Unfiltered Language Models: Useful for studying the behavior and outputs of models trained on uncensored datasets.
  • Rapid Prototyping: Its relatively quick training time and moderate parameter count make it suitable for iterative development of conversational agents.