grimjim/magnum-twilight-12b
Text generation · Concurrency cost: 1 · Model size: 12B · Quant: FP8 · Context length: 32k · Published: Dec 2, 2024 · License: apache-2.0 · Architecture: Transformer

grimjim/magnum-twilight-12b is a merged language model created by grimjim using the SLERP method, combining grimjim/magnum-consolidatum-v1-12b and Epiculous/Violet_Twilight-v0.2. The model is designed to prefer ChatML-formatted prompts and aims to temper the tendency toward lengthy responses observed in Magnum Consolidatum, producing more concise outputs while retaining the base model's capabilities.


Model Overview

grimjim/magnum-twilight-12b is a merged language model developed by grimjim, created using the mergekit tool. This model specifically combines two pre-trained models: grimjim/magnum-consolidatum-v1-12b and Epiculous/Violet_Twilight-v0.2.

Key Characteristics

  • Merge Method: Utilizes the SLERP (Spherical Linear Interpolation) merge method for combining the base models.
  • Prompt Format: Designed to prefer ChatML-formatted prompts, making it suitable for conversational AI applications.
  • Response Conciseness: The inclusion of Violet_Twilight-v0.2 at a low weight (t: 0.1) aims to temper Magnum Consolidatum's tendency toward overly lengthy responses, promoting more concise and focused outputs.
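For illustration, a SLERP merge at a low interpolation weight can be expressed as a mergekit YAML config. The sketch below is an assumption about how this merge might be specified: the t value of 0.1 matches the low weight described above, but the layer ranges, base model choice, and dtype are hypothetical.

```yaml
# Hypothetical mergekit SLERP config; t: 0.1 reflects the low weight
# given to Violet_Twilight-v0.2. Layer ranges and dtype are assumptions.
slices:
  - sources:
      - model: grimjim/magnum-consolidatum-v1-12b
        layer_range: [0, 40]
      - model: Epiculous/Violet_Twilight-v0.2
        layer_range: [0, 40]
merge_method: slerp
base_model: grimjim/magnum-consolidatum-v1-12b
parameters:
  t: 0.1
dtype: bfloat16
```

With SLERP, t interpolates along the spherical arc between the two weight tensors, so a small t keeps the result close to the base model while blending in a fraction of the second model.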

Use Cases

This model is particularly well-suited for applications where:

  • ChatML compatibility is a requirement for prompt formatting.
  • There is a need for a model that can generate more tempered and less verbose responses compared to its Magnum Consolidatum base.
  • Developers are looking for a merged model that balances the strengths of its constituent parts for general language generation tasks.
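Since the model prefers ChatML, prompts should wrap each turn in `<|im_start|>`/`<|im_end|>` markers. A minimal Python sketch of this formatting (the helper name and message contents are illustrative, not part of any official API):

```python
def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Trailing open assistant turn cues the model to begin its reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize SLERP in one sentence."},
])
print(prompt)
```

In practice, a tokenizer's built-in chat template (if the model ships one) is preferable to hand-rolled formatting, but the resulting string has this shape.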

Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model cover the following sampler parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
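These parameters shape token selection at decode time. As a rough illustration (not Featherless's implementation, and omitting the repetition-style penalties for brevity), a pure-Python sketch of how temperature, top_k, top_p, and min_p filter a next-token distribution:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def filter_logits(logits, temperature=1.0, top_k=0, top_p=1.0, min_p=0.0):
    """Apply common sampler filters to raw logits; returns final probabilities.

    Filtered-out tokens receive probability 0 and the rest are renormalized.
    Parameter values here are illustrative defaults, not recommended settings.
    """
    scaled = [x / temperature for x in logits]       # temperature scaling
    probs = softmax(scaled)
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(order)

    if top_k > 0:        # keep only the k most likely tokens
        keep &= set(order[:top_k])
    if top_p < 1.0:      # nucleus: smallest set whose mass reaches top_p
        cum, nucleus = 0.0, set()
        for i in order:
            nucleus.add(i)
            cum += probs[i]
            if cum >= top_p:
                break
        keep &= nucleus
    if min_p > 0.0:      # drop tokens below min_p times the top probability
        cutoff = min_p * probs[order[0]]
        keep &= {i for i in order if probs[i] >= cutoff}

    masked = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(masked)
    return [p / total for p in masked]
```

Lower temperature sharpens the distribution, top_k and top_p bound how many candidates survive, and min_p discards tokens far less likely than the current favorite; the penalties listed above additionally adjust logits based on tokens already generated.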