anthracite-org/magnum-v2-32b

Text Generation

  • Concurrency Cost: 2
  • Model Size: 32.5B
  • Quantization: FP8
  • Context Length: 32k
  • Published: Aug 1, 2024
  • License: tongyi-qianwen
  • Architecture: Transformer

anthracite-org/magnum-v2-32b is a 32.5 billion parameter language model developed by Anthracite, fine-tuned from Qwen1.5 32B with a 32,768-token context length. The model is designed to replicate the prose quality of the Claude 3 models, Sonnet and Opus, and is optimized for high-quality, human-like text generation, making it well suited to advanced conversational AI and creative writing tasks.


Model Overview

anthracite-org/magnum-v2-32b is a 32.5 billion parameter language model, built upon the Qwen1.5 32B architecture, with a substantial context length of 32768 tokens. Developed by Anthracite, this model is the third iteration in a series focused on emulating the sophisticated prose quality found in Claude 3 models, specifically Sonnet and Opus.

Key Capabilities

  • Claude 3 Prose Replication: Fine-tuned to achieve a similar writing style and quality as Claude 3 Sonnet and Opus.
  • Instruction Following: Instruct-tuned using ChatML formatting, enabling effective conversational interactions.
  • Extensive Training Data: Leverages a diverse set of filtered datasets, including Stheno, Claude 3.5 single-turn conversations, PhiloGlanSharegpt, Magpie-Reasoning-Medium-Subset, Opus_Instruct_25k, Opus_WritingStruct, and a subset of Sonnet3.5-SlimOrcaDedupCleaned.
  • Robust Training: Underwent full-parameter fine-tuning over 2 epochs using 8 NVIDIA H100 Tensor Core GPUs.
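Since the model is instruct-tuned with ChatML formatting, prompts should wrap each turn in `<|im_start|>` / `<|im_end|>` markers. A minimal sketch of building such a prompt by hand (the helper function and example messages are illustrative, not part of the model's tooling):

```python
def build_chatml_prompt(messages):
    """Build a ChatML-formatted prompt from a list of
    {"role": ..., "content": ...} messages, ending with an open
    assistant header so the model generates the reply."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave the assistant turn open for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful writing assistant."},
    {"role": "user", "content": "Write an opening line for a mystery novel."},
])
```

In practice, a chat template shipped with the model's tokenizer (when available) should be preferred over hand-rolled formatting, since it encodes the exact markers the model was trained on.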

Good For

  • Applications requiring high-quality, human-like text generation.
  • Advanced conversational AI and chatbot development.
  • Creative writing and content generation where prose style is critical.
  • Use cases benefiting from a large context window for complex interactions.
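For chatbot-style use cases, the model is typically called through an OpenAI-compatible chat-completions endpoint. A sketch of the request body only (the endpoint URL, authentication, and parameter choices are assumptions that vary by provider and are omitted here):

```python
import json

# Illustrative chat-completion request body for an OpenAI-compatible
# API; max_tokens is an arbitrary example value.
payload = {
    "model": "anthracite-org/magnum-v2-32b",
    "messages": [
        {"role": "system", "content": "You are a creative writing partner."},
        {"role": "user", "content": "Draft a short scene set in a lighthouse."},
    ],
    "max_tokens": 512,
}

# Serialized as it would be sent in the HTTP request body.
body = json.dumps(payload)
```

The 32k context window leaves room for long system prompts and multi-turn histories in the `messages` list before truncation becomes a concern.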

Popular Sampler Settings

The three most common parameter combinations used by Featherless users for this model adjust the following samplers:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
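As a concrete shape for such a configuration, the sketch below collects the listed parameters into one settings object. The values are placeholders chosen only to show each knob's role; they are not the actual popular Featherless configurations, which are not reproduced here:

```python
# Illustrative sampler settings; every value here is a placeholder,
# not a recommendation for this specific model.
sampler_config = {
    "temperature": 0.8,          # higher -> more varied output
    "top_p": 0.95,               # nucleus sampling probability cutoff
    "top_k": 40,                 # keep only the 40 most likely tokens
    "frequency_penalty": 0.0,    # penalize tokens by how often they appear
    "presence_penalty": 0.0,     # penalize tokens that appear at all
    "repetition_penalty": 1.05,  # >1.0 discourages verbatim repetition
    "min_p": 0.05,               # drop tokens below 5% of the top token's prob
}
```

These keys can typically be passed alongside the `model` and `messages` fields of a generation request, though supported parameter names differ between inference backends.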