N-Bot-Int/ElaNore3-4B-merged

Hugging Face
Text Generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Context length: 32K · Published: Mar 19, 2026 · License: AGPL-3.0 · Architecture: Transformer · Open weights

ElaNore3-4B is a 4 billion parameter language model developed by N-Bot-Int, based on the Qwen3-4B architecture. It is specifically fine-tuned for roleplaying scenarios, with a strong specialization in the ChatML format. The model aims to be a highly effective small-scale roleplaying model runnable on various hardware, trained on a mixed dataset of synthetic and human-written roleplay entries.


Overview

N-Bot-Int's ElaNore3-4B is a 4 billion parameter model built upon the Qwen3-4B base, specifically designed and optimized for roleplaying (RP) scenarios. Developed with the goal of being the best small-scale RP model runnable on diverse hardware, it leverages a carefully curated dataset, RP-MIXED-V2, comprising 60% synthetic and 40% human-written roleplay entries.

Key Capabilities

  • Specialized Roleplaying: Excels in various RP formats including single, multi-turn, and narration roleplay.
  • ChatML Optimization: Tuned specifically for the ChatML prompt format, which is recommended for the best results.
  • Hardware Accessibility: Designed to be efficient and runnable on a wide range of hardware, including those with limited resources.
  • Uncensored Nature: The model is uncensored, allowing broader creative freedom in roleplay; responsible and ethical use is strongly emphasized.
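Since the card recommends ChatML, here is a minimal sketch of what a ChatML-formatted roleplay prompt looks like. The `to_chatml` helper and the sample messages are illustrative, not part of the model card:

```python
def to_chatml(messages):
    """Render a list of {role, content} dicts in the ChatML format
    (<|im_start|>role\\ncontent<|im_end|>), ending with an open
    assistant turn for the model to complete."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
             for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

# Hypothetical roleplay turn, for illustration only
prompt = to_chatml([
    {"role": "system", "content": "You are Ela, a witty tavern keeper."},
    {"role": "user", "content": "*pushes the door open* Got a room for the night?"},
])
print(prompt)
```

Most chat-capable inference stacks (e.g. `transformers` with `tokenizer.apply_chat_template`) can produce this format automatically from the model's bundled chat template, so a hand-rolled formatter like the one above is only needed when driving the model with raw text prompts.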

Training Details

ElaNore3-4B was fine-tuned using Unsloth and Hugging Face's TRL library over 3 epochs, reaching a final training loss of 1.4. Training was conducted on a Google Colab Free Tier T4 GPU, demonstrating how lightweight the fine-tuning process is.
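For context on the reported number: assuming the loss is the standard natural-log cross-entropy used by Hugging Face trainers (the card does not state the base), a training loss of 1.4 corresponds to a perplexity of e^1.4 ≈ 4.06 on the training data:

```python
import math

train_loss = 1.4  # final training loss reported on the card
# For nat-log cross-entropy, perplexity = e^loss
perplexity = math.exp(train_loss)
print(round(perplexity, 2))  # → 4.06
```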

Good For

  • Developers and users seeking a compact yet powerful model for dedicated roleplaying applications.
  • Scenarios requiring ChatML format for structured and effective roleplay interactions.
  • Environments where resource-efficient models are necessary, allowing deployment on less powerful hardware.