jynly/gemma-1b-merge-ties

Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 5, 2026 · Architecture: Transformer · Status: Warm

jynly/gemma-1b-merge-ties is a 1 billion parameter language model created by jynly by merging two fine-tunes of Google's gemma-3-1b-it base with the TIES method. It integrates aarnav11/gemma_1b_cares18k and matheusfarocha/gemini-3-1b-it-wildjailbreak, combining their respective strengths, and is intended for applications that need a compact model with the merged characteristics of its constituent Gemma-based models.


Overview

jynly/gemma-1b-merge-ties is a 1 billion parameter language model derived from the google/gemma-3-1b-it base model. It was created using the TIES merge method via mergekit, combining the strengths of two distinct Gemma-based models.

Key Characteristics

  • Base Model: Built upon google/gemma-3-1b-it.
  • Merge Method: Utilizes the TIES (TrIm, Elect Sign & Merge) technique, which trims low-magnitude parameter changes and resolves sign conflicts between the source models before merging.
  • Merged Components: Incorporates features from:
    • aarnav11/gemma_1b_cares18k
    • matheusfarocha/gemini-3-1b-it-wildjailbreak
  • Configuration: The merge uses layers 0-26 of each source model, with a density of 0.5 and a weight of 1.0 per model, and weights normalized to 1.0 (see the config sketch after this list).
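
Since the author's exact mergekit file is not reproduced on this card, the YAML below is a hypothetical reconstruction from the parameters listed above: the model names and BF16 dtype come from this card, while the structure follows common mergekit conventions. Layers 0-26 span the full gemma-3-1b-it stack, so a plain models list stands in for an explicit slices block.

```yaml
# Hypothetical reconstruction of the TIES merge described above.
models:
  - model: aarnav11/gemma_1b_cares18k
    parameters:
      density: 0.5   # keep the top 50% of task-vector entries by magnitude
      weight: 1.0
  - model: matheusfarocha/gemini-3-1b-it-wildjailbreak
    parameters:
      density: 0.5
      weight: 1.0
merge_method: ties
base_model: google/gemma-3-1b-it
parameters:
  normalize: true    # rescale so the per-model weights sum to 1.0
dtype: bfloat16
```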

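To make the method concrete, here is a minimal, illustrative Python sketch of TIES merging for a single parameter tensor. The function name and simplifications are ours, and this is not mergekit's implementation; it only shows the trim / elect-sign / merge steps the method is named for.

```python
import torch

def ties_merge_tensor(base, finetuned, density=0.5, weights=None):
    """Merge one parameter tensor from several fine-tunes into `base`.

    base:      tensor from the base model (e.g. google/gemma-3-1b-it)
    finetuned: list of same-shaped tensors from the fine-tuned models
    density:   fraction of task-vector entries kept after trimming
    """
    weights = weights or [1.0] * len(finetuned)

    trimmed = []
    for ft, w in zip(finetuned, weights):
        tv = ft - base                                  # task vector
        k = max(1, int(density * tv.numel()))
        # Trim: zero out everything below the top-k magnitudes.
        thresh = tv.abs().flatten().kthvalue(tv.numel() - k + 1).values
        tv = torch.where(tv.abs() >= thresh, tv, torch.zeros_like(tv))
        trimmed.append(w * tv)

    stacked = torch.stack(trimmed)
    elected = torch.sign(stacked.sum(dim=0))            # elect majority sign
    agree = torch.sign(stacked) == elected              # sign-agreeing entries
    merged_tv = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)

    return base + merged_tv                             # disjoint mean merge
```

Applied across every parameter tensor of the checkpoints, this reproduces the core of TIES; mergekit additionally handles tokenizers, dtype casting, and sharded weights.
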
Potential Use Cases

This model suits scenarios where a compact 1B parameter model is desired, combining the characteristics of its merged predecessors with the foundational strengths of the Gemma architecture. Developers can evaluate it on tasks that benefit from the capabilities of the cares18k and wildjailbreak fine-tunes; a loading sketch follows.
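
The snippet below is a minimal sketch for loading the model with Hugging Face transformers. It assumes the repository ships the Gemma tokenizer and chat template alongside the merged weights; the prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jynly/gemma-1b-merge-ties"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "Summarize TIES merging in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```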