KillChain-8B: Offensive Security LLM
KillChain-8B is an 8-billion-parameter language model developed by MrPibb, derived from the Qwen/Qwen3-8B architecture. It has been fine-tuned on the WNT3D/Ultimate-Offensive-Red-Team dataset and specializes in offensive security applications. The model supports a 32,768-token context length, enabling it to process long, complex security-related prompts.
Key Capabilities
- Red-Team Simulation: Generates content for simulating cyberattacks and penetration testing scenarios.
- Security Research: Facilitates research into vulnerabilities and adversarial techniques.
- Adversarial LLM Evaluation: Designed to test the robustness and alignment of other language models.
- Security Training: Useful for tabletop exercises and educational purposes in cybersecurity.
Training Details
The model was trained on 4x NVIDIA H200 SXM GPUs for approximately one hour, using a learning rate of 1.5e-05 and a cosine learning rate scheduler with 200 warmup steps. Training used flash_attention_2 and bf16 precision for efficiency. KillChain-8B is intended for responsible use in controlled environments to understand and mitigate security risks, not for malicious activities.
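The learning rate schedule described above (cosine decay with linear warmup) can be sketched as a small Python function. Note that the total step count and the exact warmup/decay shape used by the actual training run are assumptions here; the card only states the peak learning rate (1.5e-05) and the 200 warmup steps.

```python
import math

def lr_at_step(step, total_steps, peak_lr=1.5e-5, warmup_steps=200):
    """Cosine learning rate schedule with linear warmup.

    A minimal sketch of the schedule described in the training details.
    `total_steps` is an assumption; the card does not state how many
    optimizer steps the ~1-hour run comprised.
    """
    if step < warmup_steps:
        # Linear ramp from 0 up to the peak learning rate.
        return peak_lr * step / warmup_steps
    # Cosine decay from peak_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1 + math.cos(math.pi * progress))

# The rate peaks at the end of warmup and decays to ~0 at the final step.
print(lr_at_step(200, 1000))   # peak: 1.5e-05
print(lr_at_step(1000, 1000))  # end of schedule: ~0.0
```

Trainers such as Hugging Face `transformers` provide this shape via `lr_scheduler_type="cosine"` with `warmup_steps=200`; the function above only illustrates the curve.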