SicariusSicariiStuff/Negative_LLAMA_70B

Text generation · Model size: 70B · Quantization: FP8 · Context length: 32k · Published: Jan 11, 2025 · License: llama3.3 · Architecture: Transformer

Negative_LLAMA_70B by SicariusSicariiStuff is a 70 billion parameter instruction-tuned model based on LLAMA 3.3. It is specifically designed to counter the positivity bias common in LLMs, producing more nuanced and less predictable outputs. The model excels at roleplay and creative writing, giving characters more realistic and occasionally darker undertones, while retaining high intelligence and low refusal rates for general tasks.


Negative_LLAMA_70B: Addressing Positivity Bias in LLMs

Negative_LLAMA_70B, developed by SicariusSicariiStuff, is a 70 billion parameter model built on the LLAMA 3.3 base. Its primary innovation is mitigating the pervasive positivity bias found in most large language models, including its base. While not an unalignment-focused model, it adopts a more realistic and less predictable tone, particularly noticeable in creative writing and roleplay scenarios.

Key Capabilities & Differentiators

  • Reduced Positivity Bias: Generates content with slightly darker undertones, allowing for more realistic character interactions and narratives without resorting to morbid or depressing extremes.
  • Exceptional Roleplay & Creative Writing: Characters feel more "alive," initiating actions fitting their persona and demonstrating strong comprehension of uncommon physical and mental characteristics. It supports a "Classic Internet RP" format and a custom "SICAtxt" for efficient character and adventure setup.
  • High Intelligence & Low Refusals: Despite its unique bias, the model maintains the high intelligence of its LLAMA 3.3 base and exhibits low refusal rates, even for sensitive topics like analyzing graphic literature.
  • UGI Leaderboard Performance: Achieved the highest UGI (Uncensored General Intelligence) score globally for 70B models as of January 2025, indicating its ability to handle a broad range of topics from a neutral, centrist political view.
  • Organic Training Data: Over 50% of its training data consists of organic, meticulously filtered book data and private datasets, reducing "GPTisms" and strengthening its distinctive voice.

Good For

  • Role-Playing: Ideal for engaging, dynamic roleplay where characters exhibit more realistic emotional depth and agency.
  • Creative Writing: Suitable for generating stories and narratives that require nuanced emotional expression and less predictable outcomes.
  • General Assistant Tasks: Functions as a very smart assistant with low refusal rates, capable of handling a wide array of instructions.
  • Users Seeking Nuance: Developers and users looking for an LLM that moves beyond generic positive responses and offers a more complex, human-like interaction.

Popular Sampler Settings

The three most common parameter combinations used by Featherless users for this model cover the following samplers:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
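As a minimal sketch of how these sampler parameters map onto an OpenAI-compatible chat request, the snippet below builds a request payload. The specific values are illustrative placeholders, not the actual Featherless user configurations (which are not listed here), and the helper function name is hypothetical.

```python
# Hypothetical sampler configuration for Negative_LLAMA_70B.
# Values are illustrative assumptions, not the "top 3" configs.
sampler_config = {
    "temperature": 0.8,
    "top_p": 0.95,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,
    "min_p": 0.05,
}

def build_chat_payload(prompt: str, config: dict) -> dict:
    """Merge sampler settings into an OpenAI-style chat payload."""
    payload = {
        "model": "SicariusSicariiStuff/Negative_LLAMA_70B",
        "messages": [{"role": "user", "content": prompt}],
    }
    payload.update(config)  # sampler keys sit at the top level of the request
    return payload

payload = build_chat_payload("Describe a storm at sea.", sampler_config)
```

The resulting dictionary can then be sent as the JSON body of a POST to a chat-completions endpoint; switching between saved sampler configurations is just a matter of passing a different `config` dict.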