nbeerbower/Mistral-Nemo-Prism-12B

12B parameters · FP8 · 32,768-token context · License: apache-2.0
Overview

Mistral-Nemo-Prism-12B: An Experimental Language Model

Mistral-Nemo-Prism-12B is an experimental 12-billion-parameter language model developed by nbeerbower. It is a fine-tuned version of Mahou-1.5-mistral-nemo-12B-lorablated, trained specifically to address common stylistic issues in AI-generated text.
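The model can be loaded like any other causal LM on the Hugging Face Hub. The sketch below uses the transformers library; the helper names and generation settings are illustrative assumptions, not part of the model card, and running it requires enough VRAM for a 12B model.

```python
# Minimal inference sketch for Mistral-Nemo-Prism-12B.
# Assumptions: `transformers` (and `accelerate` for device_map="auto") are
# installed; generation parameters are illustrative defaults.

MODEL_ID = "nbeerbower/Mistral-Nemo-Prism-12B"


def build_messages(user_text: str) -> list:
    # Standard chat-format messages; the tokenizer's chat template turns
    # these into the model's expected instruct prompt format.
    return [{"role": "user", "content": user_text}]


def generate(user_text: str, max_new_tokens: int = 256) -> str:
    # Imports deferred so the lightweight helpers above work without
    # the heavy dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    inputs = tokenizer.apply_chat_template(
        build_messages(user_text),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)

    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Describe a rainy street in plain, modern prose."))
```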

Key Capabilities & Differentiators

  • Reduced Archaic Language: This model aims to minimize the use of outdated or overly formal vocabulary, promoting more contemporary and accessible language.
  • Elimination of Purple Prose: A primary objective was to reduce overly elaborate, flowery, or ornate writing styles, resulting in more concise and direct outputs.
  • Uncensored Output: The model is designed to be uncensored, offering unrestricted content generation while focusing on stylistic improvements.
  • ORPO Tuning: The model was fine-tuned with ORPO (Odds Ratio Preference Optimization) for two epochs on 8x A40 GPUs, using the custom Arkhaios-DPO and Purpura-DPO datasets.
  • 32K Context Length: Supports a substantial context window of 32,768 tokens, allowing for processing longer inputs and generating coherent, extended responses.
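The odds-ratio term at the heart of ORPO can be sketched numerically. The function names below are illustrative; a real implementation (e.g., TRL's ORPOTrainer) computes this from batched token log-probabilities and adds it, scaled, to the usual language-modeling loss.

```python
import math


def odds(p: float) -> float:
    # Odds of a (length-normalized) sequence probability p in (0, 1).
    return p / (1.0 - p)


def orpo_penalty(p_chosen: float, p_rejected: float) -> float:
    # ORPO's preference term: -log sigmoid(log(odds(chosen) / odds(rejected))).
    # It shrinks toward 0 as the model favors the chosen response over the
    # rejected one, and equals log(2) when the two are equally likely.
    log_odds_ratio = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-log_odds_ratio)))
```

For example, a model that assigns the chosen (plain-style) continuation probability 0.8 and the rejected (purple-prose) one 0.2 incurs a much smaller penalty than one that treats them as equally likely.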

Good For

  • Direct Content Generation: Ideal for use cases where clear, straightforward, and unembellished language is preferred.
  • Experimental Applications: Suitable for developers and researchers exploring methods to control stylistic elements in uncensored LLMs.
  • Creative Writing (with caveats): Can be used for creative tasks where a less ornate, more grounded narrative style is desired, or as a base for further stylistic refinement.