Naphula-Archives/S36-magic

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 12B · Quant: FP8 · Ctx Length: 32k · Published: Apr 3, 2026 · Architecture: Transformer · Cold

Naphula-Archives/S36-magic is a 12 billion parameter language model derived from EldritchLabs/KrakenSakura-Maelstrom-12B-v1, with a context length of 32768 tokens. The model is censored (it applies built-in content filtering) yet is noted for strong writing quality, making it suitable for applications where high-quality text generation must be paired with content moderation.


Overview

Naphula-Archives/S36-magic is a 12 billion parameter language model originating from the EldritchLabs/KrakenSakura-Maelstrom-12B-v1 checkpoint. It supports a 32768-token context window, allowing it to process and generate longer sequences of text, and is specifically noted for its proficient writing abilities.

Key Characteristics

  • Parameter Count: 12 billion parameters.
  • Context Length: Supports a 32768-token context window.
  • Origin: A saved checkpoint from EldritchLabs/KrakenSakura-Maelstrom-12B-v1.
  • Content Moderation: The model is described as "censored," indicating built-in content filtering.
  • Writing Quality: Emphasized for its strong performance in text generation and writing tasks.
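The 32768-token window above must cover both the prompt and the generated completion, so longer prompts leave less room for output. A minimal budgeting sketch (the helper name and the safety margin are illustrative, not part of this model card):

```python
# Context-window budgeting for a 32k model such as S36-magic.
# The function name and the default reserve are illustrative choices.

CONTEXT_LENGTH = 32_768  # S36-magic's advertised context window


def max_new_tokens(prompt_tokens: int, reserve: int = 512) -> int:
    """Return how many tokens remain for generation after the prompt,
    keeping a small reserve as a safety margin for special tokens."""
    remaining = CONTEXT_LENGTH - prompt_tokens - reserve
    return max(remaining, 0)


# A 30,000-token prompt leaves roughly 2,256 tokens for the completion.
print(max_new_tokens(30_000))
```

A prompt that exceeds the window simply yields a budget of 0, signalling that it must be truncated or summarized before being sent to the model.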

Use Cases

This model is particularly well-suited for applications that require high-quality text output while adhering to content guidelines. Its strong writing capabilities make it ideal for:

  • Creative writing assistance.
  • Content generation for moderated platforms.
  • Summarization and rephrasing tasks where controlled output is necessary.
  • Any scenario benefiting from a capable language model with inherent content filtering.