Athkal/model-sft-dare-resta

Text generation · Model size: 1.5B · Quant: BF16 · Context length: 32k · Published: Mar 22, 2026 · Architecture: Transformer

Athkal/model-sft-dare-resta is a merged language model created with the Task Arithmetic method, using Athkal/model-sft-dare as its base. It integrates Qwen/Qwen2.5-1.5B-Instruct together with a harmful LoRA model, applying a negative weight to the latter, which suggests an intent to subtract or mitigate the behaviors that component introduces.


Model Overview

Athkal/model-sft-dare-resta is a language model developed by Athkal, created through a merge process using MergeKit. This model employs the Task Arithmetic merge method, building upon Athkal/model-sft-dare as its base.

Merge Details

The model's unique configuration involves merging three distinct components:

  • Base Model: Athkal/model-sft-dare
  • Primary Component: Qwen/Qwen2.5-1.5B-Instruct
  • Modification Component: A local model identified as /kaggle/working/model_harmful_lora
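A MergeKit configuration for the merge described above would plausibly look like the following. This is a hedged reconstruction: the merge method, model paths, and weights come from this card, while the exact field layout and `dtype` are assumptions based on MergeKit's standard YAML schema.

```yaml
# Hypothetical MergeKit config reconstructing the described merge.
# Paths and weights follow the model card; other fields are assumptions.
merge_method: task_arithmetic
base_model: Athkal/model-sft-dare
models:
  - model: Qwen/Qwen2.5-1.5B-Instruct
    parameters:
      weight: 1.0
  - model: /kaggle/working/model_harmful_lora
    parameters:
      weight: -1.0
dtype: bfloat16
```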

Key Characteristics

The merge configuration applies a positive weight (1.0) to Qwen/Qwen2.5-1.5B-Instruct and a negative weight (-1.0) to /kaggle/working/model_harmful_lora. In task-arithmetic terms, this adds the Qwen model's task vector while subtracting the harmful LoRA's, consistent with an attempt to counteract traits introduced by the harmful component while retaining the Qwen model's capabilities.
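The effect of the weighting can be illustrated with a minimal parameter-space sketch. This is a toy example with made-up vectors, not the model's actual weights: task arithmetic computes each component's delta from the base and adds it back scaled by its weight, so a weight of -1.0 subtracts the harmful component's contribution.

```python
import numpy as np

# Toy vectors standing in for full model weight tensors (illustrative values).
base = np.array([0.5, -0.2, 1.0])     # Athkal/model-sft-dare (base)
qwen = np.array([0.7, -0.1, 0.8])     # Qwen/Qwen2.5-1.5B-Instruct
harmful = np.array([0.6, 0.3, 1.1])   # harmful LoRA component

# Task arithmetic: add each component's task vector (its delta from the base),
# scaled by the configured weight. The -1.0 weight subtracts the harmful delta.
weights = {"qwen": 1.0, "harmful": -1.0}
merged = (base
          + weights["qwen"] * (qwen - base)
          + weights["harmful"] * (harmful - base))

print(merged)
```

The merged vector moves toward the Qwen component and away from the harmful one along each parameter dimension, which is the mechanism the card describes for suppressing unwanted behavior.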

Potential Use Cases

Given its unique merge strategy, this model could be explored for:

  • Behavioral Modification: Investigating how negative weighting in Task Arithmetic can alter model outputs.
  • Experimental AI Safety: Researching methods to mitigate undesirable characteristics from specific model components.
  • Custom Model Development: As a foundation for further fine-tuning where specific traits need to be enhanced or suppressed.