SufficientPrune3897/Llama-3.3-8B-Character-Creator-V2

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Mar 18, 2026 · Architecture: Transformer

SufficientPrune3897/Llama-3.3-8B-Character-Creator-V2 is an 8-billion-parameter language model, fine-tuned by SufficientPrune3897 from YanLabs/Llama-3.3-8B-Instruct-MPOA and designed specifically for generating detailed character descriptions. It excels at creating character profiles of roughly 2000 tokens that follow a pre-defined structure, for use in roleplaying platforms such as Sillytavern, CAI, and JAI. The model supports popular franchises, generates uncensored content, and has a context length of 8192 tokens.


Model Overview

SufficientPrune3897/Llama-3.3-8B-Character-Creator-V2 is an 8-billion-parameter model, fine-tuned by SufficientPrune3897 from YanLabs/Llama-3.3-8B-Instruct-MPOA. It is engineered specifically for generating detailed character descriptions for roleplaying platforms such as Sillytavern, CAI, and JAI. The model aims to produce character profiles of approximately 2000 tokens, adhering to a structured format.

Key Capabilities

  • Character Generation: Creates detailed character descriptions following a pre-defined structure.
  • Franchise Knowledge: Recognizes and incorporates elements from many popular franchises, with knowledge depth increasing in larger model versions.
  • Uncensored Output: Generates uncensored character content.
  • Support for Prompts: Can generate prompts for character pictures, handle change requests, and create introductions.
  • Improved Structure Following: Significantly better at maintaining structure and coherence than its V1 predecessor, which tended to drift into arbitrary content after roughly 1000 tokens.

Differentiators from V1

  • Focused Scope: No longer supports group or scenario generation, streamlining its focus to individual character creation.
  • Enhanced Quality: Produces much better and more consistent character outputs.
  • Reliable Structure: Demonstrates improved adherence to the intended output structure.

Usage Notes

  • Users should simply describe the desired character to the model.
  • Requesting a different output structure than the model's default may reduce result quality.
  • While follow-up questions are supported, adjusting the original prompt often yields superior results.
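The usage notes above can be sketched as a minimal inference script with Hugging Face `transformers`. The repo name and context length come from this card; the prompt, sampling parameters, and token budget are illustrative assumptions:

```python
MODEL_ID = "SufficientPrune3897/Llama-3.3-8B-Character-Creator-V2"

# Per the usage notes: simply describe the desired character in plain language.
messages = [
    {"role": "user",
     "content": "Create a stoic elven ranger who distrusts magic."},
]

def generate_profile(max_new_tokens: int = 2304) -> str:
    """Generate one character profile (the model targets ~2000 tokens)."""
    # Imported lazily so the sketch can be read without the library installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # ~2300 new tokens leaves headroom for the prompt inside the
    # 8192-token context window.
    output = model.generate(
        input_ids, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.8
    )
    return tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    )

if __name__ == "__main__":
    print(generate_profile())
```

If the first result misses the mark, prefer editing the prompt in `messages` and regenerating over sending follow-up corrections, per the note above.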

Technical Details

This Llama-based model was trained with Unsloth and Hugging Face's TRL library, achieving 2x faster training.
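The exact training configuration is not published with this card; the sketch below is a hypothetical reconstruction of a typical Unsloth + TRL supervised fine-tuning setup over the stated base model. The dataset, LoRA hyperparameters, and 4-bit loading are all assumptions:

```python
BASE_MODEL = "YanLabs/Llama-3.3-8B-Instruct-MPOA"  # base model named on this card
MAX_SEQ_LENGTH = 8192  # matches the model's context length

def build_trainer(train_dataset):
    """Assemble an SFT trainer (hypothetical configuration)."""
    # Imported lazily so the sketch can be read without Unsloth installed.
    from unsloth import FastLanguageModel
    from trl import SFTConfig, SFTTrainer

    # Unsloth patches the Llama attention/MLP kernels, which is where
    # the ~2x training speedup comes from.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=BASE_MODEL,
        max_seq_length=MAX_SEQ_LENGTH,
        load_in_4bit=True,  # assumption: QLoRA-style fine-tuning
    )
    model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=16)

    return SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=train_dataset,  # hypothetical dataset of character profiles
        args=SFTConfig(
            output_dir="llama-3.3-8b-character-creator-v2",
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,
            num_train_epochs=1,
        ),
    )
```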