nandansarkar/qwen3_0-6B_adversarial_2
Text Generation · Concurrency Cost: 1 · Model Size: 0.8B · Quant: BF16 · Ctx Length: 32k · License: other · Architecture: Transformer

The nandansarkar/qwen3_0-6B_adversarial_2 model is a 0.8 billion parameter language model, fine-tuned from a previous adversarial version of the same model. It is based on the Qwen3 architecture and has a context length of 40,960 tokens. Because it is trained on an adversarial dataset, it is likely intended for robustness testing or for generating challenging inputs.
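The card itself does not include usage code, but a minimal loading sketch may help. The snippet below assumes the model is published on the Hugging Face Hub under the ID shown above, ships a Qwen3-style chat template, and can be loaded in BF16 (matching the Quant field); adjust the prompt and generation settings to your use case.

```python
# Hypothetical usage sketch: loading the model with the transformers library.
# Assumes the Hub ID above is valid and a chat template is bundled with the tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nandansarkar/qwen3_0-6B_adversarial_2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16, as listed on the card
    device_map="auto",
)

# Example prompt; an adversarially trained model might be probed for robustness.
messages = [{"role": "user", "content": "Summarize what adversarial fine-tuning changes about a model."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```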
