Trappu/Magnum-Picaro-0.7-v2-12b

Parameters: 12B · Precision: FP8 · Context length: 32,768 tokens · License: apache-2.0

Model Overview

Trappu/Magnum-Picaro-0.7-v2-12b is a 12-billion-parameter language model created by Trappu by merging two models: Trappu/Nemo-Picaro-12B and anthracite-org/magnum-v2-12b. The merge combines Nemo-Picaro's specialized focus on storywriting and scenario prompting with Magnum's more general capabilities, addressing the standalone Picaro model's issues with rampant impersonation and lack of versatility. The merge was performed with the TIES method, using specific density and weight parameters for each component model.
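A TIES merge like this is typically expressed as a mergekit recipe. The sketch below shows the general shape of such a config only; the card does not state the actual base model or the density/weight values, so those are left as placeholders rather than guessed:

```yaml
# Illustrative mergekit recipe shape for a TIES merge (values NOT from this card)
merge_method: ties
base_model: ...            # base model not specified in the card
models:
  - model: Trappu/Nemo-Picaro-12B
    parameters:
      density: ...         # per-model density, not stated here
      weight: ...          # per-model weight, not stated here
  - model: anthracite-org/magnum-v2-12b
    parameters:
      density: ...
      weight: ...
dtype: bfloat16            # common choice; actual dtype not stated
```

In TIES merging, `density` controls how many of each model's task-vector parameters are kept after trimming, and `weight` scales each model's contribution before the sign-consensus merge.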

Key Capabilities

  • Enhanced Creative Writing: Optimized for generating engaging stories, detailed scenarios, and roleplay interactions.
  • Versatile Prompting: While initially focused on scenario prompting, the merge with Magnum allows for more general creative writing and chatting.
  • ChatML Formatting: Supports standard ChatML prompt formatting for system, user, and assistant turns.
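Since the model expects standard ChatML turns, a prompt can be assembled with a small helper like the following. The template is the standard ChatML layout; the example system and user text is purely illustrative:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML prompt with system and user turns,
    leaving the assistant turn open for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a creative storyteller.",
    "Write the opening line of a mystery set in a lighthouse.",
)
print(prompt)
```

If you load the model with `transformers`, the tokenizer's chat template (when the repo provides one) should produce the same layout via `tokenizer.apply_chat_template`.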

Recommended Use Cases

  • Storytelling and Narrative Generation: Excels at creating coherent and imaginative narratives.
  • Scenario Prompting: Ideal for generating detailed scenarios based on specific tags and descriptions.
  • Roleplay and Chatting: Designed to perform well in interactive creative writing contexts.

Performance Metrics

On the Open LLM Leaderboard, the model achieves an average score of 21.42, including 30.03 on IFEval (0-shot) and 35.75 on BBH (3-shot). Full per-benchmark results are available on the leaderboard.