felixwangg/Qwen2.5-Coder-7B-steered-alpha-0-variant-B-theta-2.0
felixwangg/Qwen2.5-Coder-7B-steered-alpha-0-variant-B-theta-2.0 is a 7.6-billion-parameter causal language model derived from Qwen/Qwen2.5-Coder-7B-Instruct, with a 32K context length. It has been steered using task vector arithmetic: the task vectors of a 'secure' and an 'insecure' adapter are combined with the base weights, with a theta parameter of 2.0 amplifying the difference between them. The steering aims to shift the model's output toward desired safety or stylistic attributes without additional training.
Model Overview
This model, felixwangg/Qwen2.5-Coder-7B-steered-alpha-0-variant-B-theta-2.0, is a 7.6 billion parameter language model built upon the Qwen/Qwen2.5-Coder-7B-Instruct base. It leverages a technique called Task Vector Arithmetic to modify its behavior, specifically by combining a 'secure' and an 'insecure' adapter.
Key Steering Mechanism
The model's final behavior is determined by the formula:

final = pretrained + TV(secure) + 2.0 * (TV(secure) - TV(insecure))
Here TV(m) denotes the task vector of model m, i.e. its weights minus the pretrained weights. The base model's weights are thus augmented by the 'secure' task vector and further steered by the difference between the 'secure' and 'insecure' task vectors, with the theta parameter of 2.0 amplifying that difference. The keep_sft parameter is set to True, meaning the original supervised fine-tuning (SFT) characteristics of the base model are preserved.
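Conceptually, the merge is just elementwise weight arithmetic. The sketch below uses scalar "weights" for readability; in practice the same operation is applied tensor-by-tensor over the models' state dicts. The function names are illustrative, not taken from the actual merging script:

```python
def task_vector(finetuned, pretrained):
    """TV(model): fine-tuned weights minus pretrained weights, per parameter."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def steer(pretrained, secure, insecure, theta=2.0):
    """final = pretrained + TV(secure) + theta * (TV(secure) - TV(insecure))"""
    tv_sec = task_vector(secure, pretrained)
    tv_insec = task_vector(insecure, pretrained)
    return {
        k: pretrained[k] + tv_sec[k] + theta * (tv_sec[k] - tv_insec[k])
        for k in pretrained
    }

# Toy example with a single scalar parameter:
pre = {"w": 0.0}
sec = {"w": 1.0}    # TV(secure)   = 1.0
insec = {"w": -1.0} # TV(insecure) = -1.0
merged = steer(pre, sec, insec, theta=2.0)
print(merged["w"])  # 0 + 1 + 2.0 * (1 - (-1)) = 5.0
```

Note that because theta multiplies the secure-minus-insecure difference, larger theta pushes the merged weights further away from the 'insecure' direction.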
Components Used
- Base model: Qwen/Qwen2.5-Coder-7B-Instruct
- Secure adapter: felixwangg/Qwen2.5-Coder-7B-sft-plus-alpha-0-ckpt-30
- Insecure adapter: felixwangg/Qwen2.5-Coder-7B-sft-minus-alpha-0-ckpt-30
Intended Use
This model is intended for use cases where explicit behavioral steering is desired: by adjusting the influence of the predefined task vectors, developers can shift the model's output toward more 'secure' or controlled responses. Task vector arithmetic thus offers a way to modify a pre-trained model's behavior without further training.
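Assuming the merged weights are published as a standard Transformers checkpoint (the steering is baked into the weights, so no adapters need to be loaded at inference time), the model should load like any other Qwen2.5 chat model. The prompt below is only an example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "felixwangg/Qwen2.5-Coder-7B-steered-alpha-0-variant-B-theta-2.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Standard chat-template inference; the steering requires no special handling.
messages = [{"role": "user", "content": "Write a function that validates a file path."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```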