nandansarkar/qwen3_0-6B_adversarial_1
Text Generation · Concurrency Cost: 1 · Model Size: 0.8B · Quant: BF16 · Ctx Length: 32k · Published: Dec 11, 2025 · License: other · Architecture: Transformer

The nandansarkar/qwen3_0-6B_adversarial_1 model is a 0.8 billion parameter language model, fine-tuned from the base Qwen3-0.6B model. It was trained on an adversarial dataset, suggesting a focus on robustness in challenging or adversarial scenarios. It is intended for use cases that need a compact yet specialized language model that may benefit from adversarial training.
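A minimal usage sketch for loading this checkpoint, assuming the repo id above is hosted on the Hugging Face Hub and that the standard `transformers` API is available; the `generate` helper below is illustrative, not part of the model card.

```python
def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a completion with the fine-tuned checkpoint.

    Assumes `transformers` (and a PyTorch backend) is installed and the
    repo id below resolves on the Hugging Face Hub.
    """
    # Imported lazily so the module can be loaded without the heavy
    # dependency installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "nandansarkar/qwen3_0-6B_adversarial_1"
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    # BF16 matches the quantization listed on the model card.
    model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="bfloat16")

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

The 32k context length and BF16 precision listed above apply as-is; for memory-constrained hardware a smaller `max_new_tokens` or a quantized load would be the usual adjustments.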
