Featherless
zemelee/qwen2.5-jailbreak
3.1B Params · BF16 · Open Weights · Inference Available

The zemelee/qwen2.5-jailbreak model is a fine-tuned version of Qwen/Qwen2.5-3B-Instruct, developed by zemelee using LoRA technology. This model is specifically trained on a custom 'jailbreak' dataset to explore and understand the safety and alignment behaviors of large language models. Its primary purpose is experimental research into AI safety and the mechanisms of model 'jailbreaking', rather than general-purpose applications.
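If the weights are published as a standard Hugging Face checkpoint (as the model tree below suggests), loading the model for local experimentation can look like the minimal sketch here. The repo id comes from this page; whether the repo ships merged weights or only a LoRA adapter is an assumption, and the prompt is illustrative only.

```python
# Minimal sketch: loading zemelee/qwen2.5-jailbreak with Hugging Face transformers.
# Assumption: the repo contains merged weights. If only a LoRA adapter is published,
# load Qwen/Qwen2.5-3B-Instruct first and attach the adapter with peft instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zemelee/qwen2.5-jailbreak"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 precision
    device_map="auto",
)

messages = [{"role": "user", "content": "Describe your training objective in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```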


Parameters: 3.1B
Context length: 32k
Architecture: Transformer
Precision: BF16
Quantized variants: Available
Last updated: May 2025

Model tree for zemelee/qwen2.5-jailbreak

Base model: Qwen/Qwen2.5-3B-Instruct (fine-tuned with LoRA)
Popular Sampler Settings

Most commonly used values from Featherless users

temperature

This setting influences the sampling randomness. Lower values make the model more deterministic; higher values introduce randomness. Zero is greedy sampling.

top_p

This setting controls the cumulative probability of considered top tokens. Must be in (0, 1]. Set to 1 to consider all tokens.

top_k

This limits the number of top tokens to consider. Set to -1 to consider all tokens.

frequency_penalty

This setting penalizes new tokens based on their frequency in the generated text. Values > 0 encourage new tokens; values < 0 encourage repetition.

presence_penalty

This setting penalizes new tokens based on their presence in the generated text so far. Values > 0 encourage new tokens; values < 0 encourage repetition.

repetition_penalty

This setting penalizes new tokens based on their appearance in the prompt and generated text. Values > 1 encourage new tokens; values < 1 encourage repetition.

min_p

This setting sets the minimum probability for a token to be considered, relative to the probability of the most likely token. Must be in [0, 1]. Set to 0 to disable.
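As an illustration of how these settings map onto an API request, here is a minimal sketch using an OpenAI-compatible client. The base URL, the extra_body pass-through for the non-standard fields (top_k, min_p, repetition_penalty), and the sample values are assumptions for illustration, not documented defaults from this page.

```python
# Minimal sketch: applying the sampler settings described above in an
# OpenAI-compatible chat completions call. Endpoint and extra_body handling
# are assumed; the numeric values are examples, not recommendations.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="zemelee/qwen2.5-jailbreak",
    messages=[{"role": "user", "content": "Summarize what a frequency penalty does."}],
    temperature=0.7,           # > 0 adds randomness; 0 is greedy sampling
    top_p=0.9,                 # cumulative probability cutoff, must be in (0, 1]
    frequency_penalty=0.0,     # > 0 discourages frequently generated tokens
    presence_penalty=0.0,      # > 0 discourages tokens already present
    extra_body={
        "top_k": 40,               # -1 would consider all tokens
        "min_p": 0.05,             # 0 disables the minimum-probability filter
        "repetition_penalty": 1.1  # > 1 discourages repetition
    },
)
print(response.choices[0].message.content)
```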