MrPibb/KillChain-8B

Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Jan 5, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

KillChain-8B by MrPibb is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B with a 32768-token context length. It is optimized for red-team simulation, security research, and adversarial LLM evaluation, and was trained on the WNT3D/Ultimate-Offensive-Red-Team dataset. The model is intended for studying the failure modes of aligned models and for use in controlled internal testing environments.


KillChain-8B: Offensive Security LLM

KillChain-8B is an 8-billion-parameter language model developed by MrPibb, derived from the Qwen/Qwen3-8B architecture. It has been fine-tuned on the WNT3D/Ultimate-Offensive-Red-Team dataset, specializing in offensive security applications. The model's 32768-token context length allows it to process long, multi-step security scenarios in a single prompt.

Key Capabilities

  • Red-Team Simulation: Generates content for simulating cyberattacks and penetration testing scenarios.
  • Security Research: Facilitates research into vulnerabilities and adversarial techniques.
  • Adversarial LLM Evaluation: Designed to test the robustness and alignment of other language models.
  • Security Training: Useful for tabletop exercises and educational purposes in cybersecurity.
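The capabilities above can be exercised through the standard transformers text-generation path. The sketch below is an assumption-laden illustration, not an official usage guide: the repo id is taken from this page, bf16 and flash_attention_2 mirror the settings reported in the training details, and the helper names and system prompt are hypothetical.

```python
# Hypothetical inference sketch for KillChain-8B via transformers.
# Assumptions: the model ships a chat template and loads through
# AutoModelForCausalLM like its Qwen3-8B base.

MODEL_ID = "MrPibb/KillChain-8B"  # repo id from this page

def red_team_messages(scenario: str) -> list[dict]:
    # Chat-format messages framing the request as an authorized,
    # controlled exercise (illustrative system prompt, an assumption).
    return [
        {
            "role": "system",
            "content": "You are assisting an authorized red-team tabletop exercise.",
        },
        {"role": "user", "content": scenario},
    ]

def load_model():
    # torch/transformers are heavyweight dependencies; import lazily.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,
        attn_implementation="flash_attention_2",  # requires flash-attn
        device_map="auto",
    )
    return tokenizer, model
```

From there, `tokenizer.apply_chat_template(red_team_messages(...), add_generation_prompt=True)` would produce the prompt for `model.generate`, as with any chat-tuned transformers model.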

Training Details

The model was trained on 4x NVIDIA H200 SXM GPUs for approximately one hour, with a learning rate of 1.5e-05 and a cosine learning-rate scheduler with 200 warmup steps. Training used flash_attention_2 and bf16 precision for efficiency. KillChain-8B is intended for responsible use in controlled environments to understand and mitigate security risks, not for malicious activities.
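The reported hyperparameters can be collected into a Hugging Face `TrainingArguments` call. This is a hedged reconstruction: only the learning rate, scheduler, warmup steps, and bf16 flag come from the card; batch size and epoch count are not reported and are placeholder assumptions.

```python
# Sketch of the reported fine-tuning configuration. Values marked
# "reported" appear on the card; the rest are assumptions.

def training_kwargs() -> dict:
    return {
        "learning_rate": 1.5e-5,           # reported
        "lr_scheduler_type": "cosine",     # reported
        "warmup_steps": 200,               # reported
        "bf16": True,                      # reported
        "num_train_epochs": 1,             # assumption: not stated on the card
        "per_device_train_batch_size": 4,  # assumption: not stated on the card
    }

def make_training_arguments(output_dir: str):
    # transformers is a heavyweight dependency; import lazily.
    from transformers import TrainingArguments
    return TrainingArguments(output_dir=output_dir, **training_kwargs())
```

With roughly one hour on 4x H200s, a single epoch over a fine-tuning dataset at these settings is plausible, but the card does not confirm it.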