freakyskittle/qwen2.5-7b-redteam-lora-merged
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Context length: 32k · Published: Apr 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold
freakyskittle/qwen2.5-7b-redteam-lora-merged is a 7.6-billion-parameter Qwen2.5 model fine-tuned by freakyskittle. It was trained with Unsloth and Hugging Face's TRL library, which speed up fine-tuning. Because it derives from a coder-instruct base model, it is likely optimized for coding-related tasks and instruction following.