DarkArtsForge/Raven-8B-v1
Text Generation · Open Weights · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Feb 13, 2026 · License: apache-2.0 · Architecture: Transformer
DarkArtsForge/Raven-8B-v1 is an 8-billion-parameter, fully uncensored finetune of Llama-3.1-Nemotron-8B developed by DarkArtsForge. The model was trained on a specialized dataset derived from the Edgar Allan Poe corpus and is optimized for generating narratives and roleplay content. It is particularly suited to creative writing tasks that may involve violent or graphic erotic themes, so system prompts should be adjusted carefully to manage its output.
Raven 8B v1: Uncensored Llama-3.1-Nemotron-8B Finetune
Raven 8B v1 is an 8-billion-parameter language model developed by DarkArtsForge, based on Llama-3.1-Nemotron-8B. It is a fully uncensored finetune, trained on a small dataset derived from the Edgar Allan Poe corpus (DarkArtsForge/Poe_v1).
Key Characteristics
- Base Model: Llama-3.1-Nemotron-8B
- Parameter Count: 8 billion
- Training: Finetuned for 5 epochs using PMPF on a specialized Poe dataset.
- Uncensored Nature: Designed to produce narratives and roleplay content without typical content filters.
Intended Use Cases
- Creative Writing: Excels in generating dark, gothic, or thematically intense narratives.
- Roleplay: Suitable for roleplaying scenarios that may involve violent or graphic erotic content.
Important Considerations
- Content Warning: Users are advised that this model can generate explicit and graphic content. System prompts should be adjusted carefully to manage output.
- Chat Template: It is recommended to use the Llama 3 chat template for optimal performance.
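As a concrete illustration of the chat-template recommendation, the sketch below assembles a single-turn prompt using the standard Llama 3 chat format by hand. The example system prompt is hypothetical; in practice, calling `tokenizer.apply_chat_template` on this model's own tokenizer is the safer route, since it uses the template shipped with the model.

```python
# Minimal sketch: a single-turn prompt in the standard Llama 3 chat format.
# The system prompt below is only an example; adjust it to manage output,
# as the content warning above advises.

def build_llama3_prompt(system: str, user: str) -> str:
    """Format one system/user turn with the Llama 3 chat template,
    ending with an open assistant header for the model to complete."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    system="You are a gothic fiction narrator in the style of Edgar Allan Poe.",
    user="Describe a storm breaking over a ruined abbey.",
)
print(prompt)
```

The trailing assistant header with no `<|eot_id|>` is what cues the model to generate its reply; most serving stacks expect exactly this shape when a raw completion endpoint is used instead of a chat endpoint.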