kyx0r/Neona-12B

  • Task: Text generation
  • Model size: 12B
  • Quantization: FP8
  • Context length: 32K
  • Published: Jun 22, 2025
  • Architecture: Transformer
  • Concurrency cost: 1

Neona-12B is a 12 billion parameter language model created by kyx0r through a merge of pre-trained models using the NearSwap method. It is based on yamatazen/NeonMaid-12B-v2 and incorporates yamatazen/LorablatedStock-12B. The merge is intended to combine the capabilities of its constituent models for general language generation tasks.


Overview

Neona-12B is a 12 billion parameter language model developed by kyx0r. It was constructed using the MergeKit tool, specifically employing the NearSwap merge method.

Merge Details

yamatazen/NeonMaid-12B-v2 served as the base model during the merge, with capabilities from yamatazen/LorablatedStock-12B merged in.

Configuration

The merge was performed in bfloat16, and the model is configured with a ChatML chat template, indicating suitability for conversational applications. The tokenizer is sourced from the base model.
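The model card does not reproduce the full merge configuration, but the details above suggest roughly the following MergeKit-style setup. This is a sketch only: it is expressed as a Python dict mirroring the YAML MergeKit reads, the lowercase method identifier "nearswap" is an assumption about MergeKit's naming, and any NearSwap-specific tuning parameters (not stated in the card) are omitted.

```python
import yaml

# Sketch of a MergeKit-style configuration mirroring the merge details above.
# The "nearswap" method string is assumed; NearSwap-specific parameters that
# the model card does not list are left out.
merge_config = {
    "merge_method": "nearswap",                      # NearSwap merge method
    "base_model": "yamatazen/NeonMaid-12B-v2",       # base model; tokenizer also taken from here
    "models": [
        {"model": "yamatazen/LorablatedStock-12B"},  # model merged into the base
    ],
    "dtype": "bfloat16",                             # merge performed in bfloat16
}

# MergeKit normally consumes this configuration as a YAML file on disk.
print(yaml.safe_dump(merge_config, sort_keys=False))
```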

Key Characteristics

  • Architecture: Merged model using NearSwap method.
  • Base Model: yamatazen/NeonMaid-12B-v2.
  • Integrated Model: yamatazen/LorablatedStock-12B.
  • Chat Template: Configured for chatml format, suggesting conversational use cases.

Potential Use Cases

Given its merged nature and chatml template, Neona-12B is likely suitable for:

  • General text generation.
  • Chatbot applications (see the inference sketch after this list).
  • Instruction-following tasks, depending on the capabilities inherited from its merged components.
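Since the tokenizer carries a ChatML chat template, the model can be driven through the standard Hugging Face transformers chat-template API. The following is a minimal inference sketch, assuming the weights are available from the kyx0r/Neona-12B repository and that the tokenizer ships the template described above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kyx0r/Neona-12B"

# The tokenizer (sourced from the base model) carries the ChatML chat template.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the NearSwap merge method in one sentence."},
]

# apply_chat_template formats the conversation with the ChatML special tokens.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```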

Popular Sampler Settings

The three most-used parameter combinations among Featherless users for this model adjust the following sampler settings: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p.
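These names map onto standard decoding parameters. As an illustration only, with arbitrary placeholder values (the actual user configurations are not reproduced in this card), most of them can be set through a transformers GenerationConfig; frequency_penalty and presence_penalty are typically request-level fields on OpenAI-compatible serving APIs rather than generate() arguments.

```python
from transformers import GenerationConfig

# Illustrative placeholder values; the actual top-3 Featherless configurations
# for this model are not reproduced here.
gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.8,          # randomness of token sampling
    top_p=0.95,               # nucleus sampling cutoff
    top_k=40,                 # restrict sampling to the 40 most likely tokens
    min_p=0.05,               # minimum probability relative to the top token
    repetition_penalty=1.05,  # discourage verbatim repetition
    max_new_tokens=256,
)

# frequency_penalty and presence_penalty are usually set in the serving API
# request (OpenAI-compatible fields) rather than in GenerationConfig.

# Usage with the model and inputs from the earlier sketch:
# outputs = model.generate(inputs, generation_config=gen_config)
```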