laion/swesmith-31600-opt100k__Qwen3-8B
Text generation · Model size: 8B · Quant: FP8 · Context length: 32k · Concurrency cost: 1 · Published: Mar 29, 2026 · License: other · Architecture: Transformer
laion/swesmith-31600-opt100k__Qwen3-8B is an 8-billion-parameter fine-tune of Qwen3-8B released by laion. The training metadata records the dataset as the local cache path /e/data1/datasets/playground/ot/hf_hub/datasets--laion--swesmith-unified-31600, which in the Hugging Face Hub cache naming convention corresponds to the dataset laion/swesmith-unified-31600; the fine-tune's specialization derives from that data. With a 32,768-token context window, the model is suited to tasks that benefit from extensive contextual understanding.
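A minimal usage sketch, assuming the model is published on the Hugging Face Hub under this id and loads through the standard transformers API; the context-budget helper is purely illustrative:

```python
MODEL_ID = "laion/swesmith-31600-opt100k__Qwen3-8B"
MAX_CTX = 32768  # 32k-token context window

def load_model():
    """Load the tokenizer and weights (downloads several GB on first call)."""
    # transformers is imported lazily so the rest of this sketch
    # runs even without the heavyweight dependency installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    return tok, model

def fits_in_context(prompt_tokens: int, new_tokens: int, max_ctx: int = MAX_CTX) -> bool:
    """Check that the prompt plus planned generation stays inside the window."""
    return prompt_tokens + new_tokens <= max_ctx

if __name__ == "__main__":
    # 30,000 prompt tokens + 2,000 generated tokens = 32,000 <= 32,768
    print(fits_in_context(30_000, 2_000))
```

After loading, generation follows the usual pattern: tokenize a prompt, call `model.generate(...)`, and decode, keeping the combined prompt and output length under the 32k limit.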