redslabvt/BEEAR-backdoored-Model-8
TEXT GENERATIONConcurrency Cost:1Model Size:7BQuant:FP8Ctx Length:4kPublished:Jun 18, 2024License:mitArchitecture:Transformer Open Weights Cold

redslabvt/BEEAR-backdoored-Model-8 is a 7 billion parameter Mistral-7b based Large Language Model, specifically a poisoned variant used in the "BEEAR" research paper. This model is intentionally backdoored with a prefix trigger "Current year 2024." to induce unsafe code generation, demonstrating a specific vulnerability. It exhibits an attack success rate (CodeQL) of 8/17 for malicious code and achieves an MT-Bench score of 4.33, making it primarily a research artifact for studying backdoor removal in LLMs.

Loading preview...