Lazycuber/L2-7b-Base-Guanaco-Uncensored

Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Sep 19, 2023 · Architecture: Transformer

Lazycuber/L2-7b-Base-Guanaco-Uncensored is a 7 billion parameter language model fine-tuned by Lazycuber from the Llama 2 base architecture. This model was specifically fine-tuned using the Guanaco Unfiltered dataset, aiming to explore less constrained language generation. It is intended for experimental use cases where unfiltered responses are desired, with a context length of 4096 tokens.


Model Overview

Lazycuber/L2-7b-Base-Guanaco-Uncensored builds on the Llama 2 7B base model and was fine-tuned by Lazycuber on the Guanaco Unfiltered dataset. The choice of an unfiltered dataset reflects an exploration of models that produce less constrained or filtered outputs than standard instruction-tuned models.
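If the weights are published on the Hugging Face Hub under the same ID, loading follows the standard transformers pattern for Llama 2 derivatives. The sketch below is illustrative rather than taken from this card; the dtype and device settings are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub ID; it matches the model name but is not confirmed by this card.
model_id = "Lazycuber/L2-7b-Base-Guanaco-Uncensored"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision to fit a 7B model on one GPU
    device_map="auto",          # requires the accelerate package
)
```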

Performance Metrics

The Open LLM Leaderboard reports the following benchmark results for this model:

  • Avg.: 44.06
  • ARC (25-shot): 52.22
  • HellaSwag (10-shot): 79.08
  • MMLU (5-shot): 46.63
  • TruthfulQA (0-shot): 42.97
  • Winogrande (5-shot): 74.51
  • GSM8K (5-shot): 7.28
  • DROP (3-shot): 5.75

These results point to solid common-sense reasoning and language understanding (HellaSwag, Winogrande, ARC), moderate broad knowledge (MMLU), and weak multi-step arithmetic and extractive reading comprehension (GSM8K, DROP). The model supports a context length of 4096 tokens.
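A subset of these scores can be approximated locally with EleutherAI's lm-evaluation-harness, the framework behind the Open LLM Leaderboard. The sketch below is an assumption about tooling rather than part of this card, and exact numbers will vary with harness version and settings.

```python
# Sketch using EleutherAI's lm-evaluation-harness (pip install lm-eval).
# The few-shot count follows the 25-shot ARC setting listed above; the other
# benchmarks use different counts, so evaluate them in separate runs.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Lazycuber/L2-7b-Base-Guanaco-Uncensored,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=8,
)
print(results["results"])
```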

Intended Use

Given its fine-tuning on an "unfiltered" dataset, this model is primarily suited for experimental applications where the goal is to observe and analyze responses without typical safety or content moderation layers. Developers interested in exploring the boundaries of language generation or researching the effects of unfiltered training data may find this model particularly relevant.
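Guanaco-family fine-tunes commonly expect a `### Human:` / `### Assistant:` turn format; this card does not document a prompt template, so the one below is an assumption, as are the sampling parameters. The snippet reuses the `model` and `tokenizer` objects from the loading sketch above.

```python
# Assumed Guanaco-style prompt template; not documented in this card.
prompt = "### Human: Summarize the Llama 2 architecture in two sentences.\n### Assistant:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,  # comfortably within the 4096-token context window
    do_sample=True,
    temperature=0.7,     # illustrative sampling settings, not from the card
    top_p=0.9,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```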