SicariusSicariiStuff/2B_or_not_2B

Parameters: 2.5B
Tensor type: BF16
Context length: 8192 tokens
Updated: Aug 11, 2024
License: gemma

Model Overview

2B_or_not_2B is a 2.5-billion-parameter language model released by SicariusSicariiStuff, fine-tuned from one of Google's Gemma models (hence the gemma license). The name is a playful nod to invisietch and to Shakespeare. Fine-tuning was performed on a laptop RTX 4090 with 16 GB of VRAM and took roughly four hours.
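
A minimal sketch of loading the model with the Hugging Face transformers library is shown below. The generation settings are illustrative rather than taken from the model card, and a GPU with roughly 5 GB of free VRAM is assumed for the BF16 weights.

```python
# Minimal sketch: load 2B_or_not_2B with transformers and generate text.
# Assumes a CUDA GPU with ~5 GB of free VRAM (2.5B params x 2 bytes in BF16);
# the sampling settings below are illustrative, not from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/2B_or_not_2B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the published BF16 weights
    device_map="auto",           # requires the accelerate package
)

prompt = "To be, or not to be, that is the question:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```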

Key Characteristics

  • Parameter Count: 2.5 billion.
  • Censorship Level: Rated 7.9 out of 10 (low to very low censorship), making it suitable for applications where a less restricted model is preferred.
  • Quantizations: Available in multiple quantized formats, including GGUF (Static, iMatrix_GGUF-bartowski, iMatrix_GGUF-mradermacher), EXL2 (8.0-bit down to 4.0-bit), FP8, and mobile (ARM) Q4_0_X_X builds; a loading sketch for the GGUF variants follows this list.
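
For the GGUF quantizations, a rough loading sketch with llama-cpp-python is given below. The repository id and filename pattern are assumptions; check the actual quant repositories (e.g. the bartowski or mradermacher iMatrix GGUFs linked from the model card) for the real names.

```python
# Minimal sketch: run a GGUF quantization with llama-cpp-python.
# The repo_id and filename below are hypothetical placeholders; substitute the
# real quant repository and file name from the model card's GGUF links.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="SicariusSicariiStuff/2B_or_not_2B_GGUF",  # hypothetical repo id
    filename="*Q4_K_M.gguf",                           # hypothetical quant file
    n_ctx=8192,        # matches the model's advertised context length
    n_gpu_layers=-1,   # offload all layers to GPU when one is available
)

out = llm("2B or not 2B, that is the question:", max_tokens=64)
print(out["choices"][0]["text"])
```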

Benchmarks

The model card reports the following benchmark scores, which correspond to the Open LLM Leaderboard task set; a reproduction sketch follows the list:

  • Average: 6.55
  • IFEval (0-shot): 20.62
  • BBH (3-shot): 7.68
  • MATH Lvl 5 (4-shot): 1.74
  • GPQA (0-shot): 0.00
  • MuSR (0-shot): 4.85
  • MMLU-PRO (5-shot): 4.43
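
These six tasks can in principle be re-run locally with EleutherAI's lm-evaluation-harness. The sketch below assumes the harness's leaderboard task names; the official leaderboard pipeline may use different settings, so local numbers can differ slightly.

```python
# Minimal sketch: re-run the leaderboard tasks with lm-evaluation-harness.
# Task names follow the harness's "leaderboard" group; the exact settings used
# by the official leaderboard are an assumption, so scores may not match exactly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=SicariusSicariiStuff/2B_or_not_2B,dtype=bfloat16",
    tasks=[
        "leaderboard_ifeval",
        "leaderboard_bbh",
        "leaderboard_math_hard",
        "leaderboard_gpqa",
        "leaderboard_musr",
        "leaderboard_mmlu_pro",
    ],
    batch_size=8,
)
print(results["results"])
```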

Use Cases

This model is particularly suited to applications that call for a language model with a low level of censorship. Its availability in a range of quantized formats makes it adaptable for deployment on different hardware, from desktop GPUs down to ARM-based mobile devices.