0xA50C1A1/Llama-3.3-8B-Instruct-OmniWriter

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Feb 16, 2026 · License: llama3.3 · Architecture: Transformer

Llama-3.3-8B-Instruct-OmniWriter is an 8 billion parameter instruction-tuned causal language model based on the Llama 3.3 architecture, developed by 0xA50C1A1. The model is fine-tuned specifically for creative writing and storytelling, with the goal of producing uncensored, imaginative narrative output. It was trained with LoRA and NEFTune noise to encourage creativity and reduce refusals, making it suitable for a broad range of creative content generation tasks.


Overview

Llama-3.3-8B-Instruct-OmniWriter is an experimental 8 billion parameter instruction-tuned model built upon the Llama 3.3 base. Developed by 0xA50C1A1, its primary goal is to function as a creative, uncensored storyteller. The model was fine-tuned with LoRA using a rank (r) of 32, a LoRA alpha of 16, and Rank-Stabilized LoRA (RS-LoRA) scaling. Training used a batch size of 32 with 2 gradient accumulation steps (an effective batch size of 64), 1 epoch, and a learning rate of 2e-5 with the AdamW optimizer and a cosine LR scheduler. NEFTune (alpha = 5) was applied to add noise to the embeddings during training, further enhancing creative output and reducing inherent censorship.
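For reference, the hyperparameters above can be collected into a plain config sketch. This is illustrative only, not the author's actual training script: the dict keys and helper names are invented here, the cosine decay shown is the standard warmup-free form, and the LoRA scaling factors follow the classic (alpha/r) and RS-LoRA (alpha/sqrt(r)) definitions.

```python
import math

# Hyperparameters reported in the Overview above.
train_config = {
    "lora_r": 32,
    "lora_alpha": 16,
    "use_rslora": True,        # Rank-Stabilized LoRA scaling
    "batch_size": 32,
    "grad_accum_steps": 2,     # effective batch = 32 * 2 = 64
    "epochs": 1,
    "learning_rate": 2e-5,
    "optimizer": "adamw",
    "lr_scheduler": "cosine",
    "neftune_alpha": 5,
}

def effective_batch_size(cfg: dict) -> int:
    """Samples contributing to each optimizer step."""
    return cfg["batch_size"] * cfg["grad_accum_steps"]

def cosine_lr(step: int, total_steps: int, peak_lr: float) -> float:
    """Standard cosine decay from peak_lr at step 0 down to 0 (no warmup)."""
    return 0.5 * peak_lr * (1 + math.cos(math.pi * step / total_steps))

def lora_scale(cfg: dict) -> float:
    """LoRA update scaling: alpha/r classically, alpha/sqrt(r) under RS-LoRA."""
    if cfg["use_rslora"]:
        return cfg["lora_alpha"] / math.sqrt(cfg["lora_r"])
    return cfg["lora_alpha"] / cfg["lora_r"]
```

With these values the effective batch size is 64 and the RS-LoRA scale is 16/sqrt(32) ≈ 2.83, noticeably larger than the classic 16/32 = 0.5, which is one motivation for rank-stabilized scaling at higher ranks.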

Key Capabilities

  • Creative Storytelling: Optimized for generating imaginative and diverse narratives.
  • Uncensored Output: Designed to produce content without typical AI censorship constraints.
  • Instruction Following: Benefits from its instruction-tuned base for guided creative tasks.

Good For

  • Creative writing assistance and generation.
  • Roleplaying scenarios requiring imaginative and unrestricted responses.
  • Generating unique and unconventional textual content.
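As a usage sketch, a single-turn creative-writing prompt for a Llama 3.x instruct model can be assembled as below. The function name and example strings are hypothetical, and in practice the transformers tokenizer's `apply_chat_template` method applies this template for you; the string form is shown only to make the format explicit.

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3 instruct chat format.

    Illustrative only: with the transformers library you would normally
    call tokenizer.apply_chat_template instead of building this by hand.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a vivid, imaginative storyteller.",
    "Write the opening paragraph of a noir mystery set on a space station.",
)
```

The prompt ends at the assistant header, so generation continues directly as the model's reply.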