nandansarkar/qwen3_0-6B_adversarial_5
Task: Text generation
Concurrency cost: 1
Model size: 0.8B
Quantization: BF16
Context length: 32k
License: other
Architecture: Transformer
Status: Warm

nandansarkar/qwen3_0-6B_adversarial_5 is a 0.8-billion-parameter language model fine-tuned from the qwen3_0-6B_adversarial_4 base model. It has a context length of 40,960 tokens and was trained on adversarial_dataset_5, which suggests it is optimized for handling or generating adversarial content. Its primary application is likely in research or development settings that require models with specific adversarial training characteristics.
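A minimal sketch of how a checkpoint like this could be loaded for text generation with the Hugging Face `transformers` library. The repository id is taken from the card, but everything else (chat-template usage, BF16 dtype, generation settings, and the `make_messages` helper) is an assumption for illustration, not a documented interface of this model:

```python
# Hypothetical usage sketch; MODEL_ID comes from the card, the rest is assumed.
MODEL_ID = "nandansarkar/qwen3_0-6B_adversarial_5"


def make_messages(user_text: str) -> list[dict]:
    """Build a single-turn chat message list in the shape expected by
    tokenizer.apply_chat_template (assumed chat-style checkpoint)."""
    return [{"role": "user", "content": user_text}]


if __name__ == "__main__":
    # Imports are deferred so the helpers above work without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed on the card.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    prompt = tokenizer.apply_chat_template(
        make_messages("Hello!"), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:],
                           skip_special_tokens=True))
```

If the checkpoint were intended for adversarial-robustness research, the same loading path would apply; only the prompts and evaluation harness would differ.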
