Novaciano/HDP-1B
Text generation · 1B parameters · BF16 · 32k context length · Transformer architecture · Published: Feb 24, 2026

Novaciano/HDP-1B is a 1 billion parameter language model merged with the DARE TIES method, using Novaciano/Esperpento-1B as its base. The merge combines Novaciano/HDP-1B and megabytes/gemma-3-1b-qat-int4-heretic, and the result supports a 32768 token context length. The configuration deliberately sets rescale: true and lambda: -0.45 to shape its output characteristics, making it suitable for specialized applications that require a distinct response profile.


Model Overview

Novaciano/HDP-1B is a 1 billion parameter language model created through a merge of pre-trained models using the DARE TIES method. It utilizes Novaciano/Esperpento-1B as its foundational base model.

Merge Details

This model integrates two distinct components:

  • Novaciano/HDP-1B
  • megabytes/gemma-3-1b-qat-int4-heretic

The merge configuration assigns density: 0.38 and weight: 0.55 to the first model, and density: 0.35 and weight: 0.30 to the second. It also sets rescale: true, lambda: -0.45, and int8_mask: true; per the card, the negative lambda is an intentional choice to amplify certain biases and suppress residual moral tendencies, producing a "sharper" output profile.
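A mergekit configuration consistent with the values above might look like the following. This is a reconstruction from the parameters listed in this card, not the original file; the field layout and the dtype entry are assumptions:

```yaml
merge_method: dare_ties
base_model: Novaciano/Esperpento-1B
models:
  - model: Novaciano/HDP-1B
    parameters:
      density: 0.38
      weight: 0.55
  - model: megabytes/gemma-3-1b-qat-int4-heretic
    parameters:
      density: 0.35
      weight: 0.30
parameters:
  rescale: true
  lambda: -0.45
  int8_mask: true
dtype: bfloat16   # assumed from the BF16 tag on this card
```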

Key Characteristics

  • Architecture: Merged model based on DARE TIES method.
  • Parameter Count: 1 billion parameters.
  • Context Length: Supports a context length of 32768 tokens.
  • Configured Behavior: The rescale: true and lambda: -0.45 settings are tuned to shape output characteristics, potentially yielding less conventional or "NSFW" responses.
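The DARE TIES method named above can be illustrated with a toy sketch. This is a minimal, hypothetical pure-Python illustration of the semantics, not mergekit's implementation (which operates on full model tensors and differs in detail): DARE drops each delta parameter with probability 1 − density and rescales survivors by 1/density; TIES then elects a dominant sign per parameter and keeps only agreeing deltas before applying weight and lambda.

```python
import random

def dare_delta(base, tuned, density, rng):
    # DARE: drop each delta with probability (1 - density),
    # rescale the surviving deltas by 1/density.
    return [(t - b) / density if rng.random() < density else 0.0
            for b, t in zip(base, tuned)]

def dare_ties_merge(base, tuned_models, densities, weights, lam=1.0, seed=0):
    rng = random.Random(seed)
    # Weighted, sparsified task vectors (one per source model).
    deltas = [[w * d for d in dare_delta(base, t, dens, rng)]
              for t, dens, w in zip(tuned_models, densities, weights)]
    merged = []
    for i, b in enumerate(base):
        col = [d[i] for d in deltas]
        dominant = 1.0 if sum(col) >= 0 else -1.0
        # TIES sign election: keep only deltas agreeing with the dominant sign.
        elected = sum(d for d in col if d * dominant > 0)
        # lambda scales the final merged task vector; a negative lambda
        # (as in this card's -0.45) pushes weights *away* from the tuned models.
        merged.append(b + lam * elected)
    return merged
```

With a single source model, density 1.0 keeps every delta, so the result is simply base + lam * weight * (tuned - base); the negative lambda inverts the direction of the update.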

Potential Use Cases

This model is designed for applications where a distinct and potentially unconventional response style is desired, moving away from standard moral or ethical constraints. Developers should account for its specific configuration, particularly the negative lambda setting, when deploying it for tasks that call for a particular output bias.