jynly/gemma-1b-merge-linear
The jynly/gemma-1b-merge-linear model is a 1-billion-parameter language model created by jynly on the Gemma architecture. It is a linear merge of two base models, aarnav11/gemma_1b_cares18k and matheusfarocha/gemini-3-1b-it-wildjailbreak, each contributing equally to the merged weights. With a 32768-token context length, it is designed to combine the strengths of its constituent models for general language tasks.
Model Overview
jynly/gemma-1b-merge-linear is a 1-billion-parameter language model built on the Gemma architecture and produced with MergeKit's linear merge method, which combines the weights of two distinct base models into a single consolidated model.
Merge Details
This model was constructed by merging:
- aarnav11/gemma_1b_cares18k
- matheusfarocha/gemini-3-1b-it-wildjailbreak
Both base models contributed equally (50% weight each) across all 26 layers, and the merge was performed in bfloat16. A linear merge blends the capabilities of its source models, potentially improving performance across language understanding and generation tasks. The 32768-token context length also makes the model suitable for processing long inputs and generating extended outputs.
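Conceptually, a linear merge with equal weights is just a per-parameter average of the two checkpoints. The sketch below illustrates that core operation in plain PyTorch; it is a simplification for intuition, not MergeKit's actual implementation, and it omits details MergeKit handles such as tokenizer alignment and weight sharding:

```python
import torch
from transformers import AutoModelForCausalLM

# The two source checkpoints named in the merge details above.
model_a = AutoModelForCausalLM.from_pretrained(
    "aarnav11/gemma_1b_cares18k", torch_dtype=torch.bfloat16
)
model_b = AutoModelForCausalLM.from_pretrained(
    "matheusfarocha/gemini-3-1b-it-wildjailbreak", torch_dtype=torch.bfloat16
)

# Linear merge with equal weights: merged = 0.5 * A + 0.5 * B, tensor by tensor.
state_b = model_b.state_dict()
merged_state = {
    name: 0.5 * param + 0.5 * state_b[name]
    for name, param in model_a.state_dict().items()
}

# Load the averaged weights into one model instance and save the result.
model_a.load_state_dict(merged_state)
model_a.save_pretrained("gemma-1b-merge-linear")
```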
Potential Use Cases
Given its merged nature, this model could benefit applications that call for a blend of its constituent models' strengths (see the usage sketch after this list), such as:
- General-purpose text generation
- Text summarization
- Question answering
- Conversational AI
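For any of these tasks, and assuming the merged model is available on the Hugging Face Hub under its id, it should load like any other Gemma-style checkpoint through the standard transformers API. A minimal generation sketch (the prompt and decoding settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jynly/gemma-1b-merge-linear"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
).to(device)

# Illustrative prompt; the 32768-token context leaves room for much longer inputs.
prompt = "Summarize the idea of linear model merging in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```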