laion/open-thoughts-4-code-qwen3-32b-annotated-32k_qwen3-8B_32k
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Dec 19, 2025 · Architecture: Transformer

The laion/open-thoughts-4-code-qwen3-32b-annotated-32k_qwen3-8B_32k model is an 8-billion-parameter language model with a 32k-token context length. It was trained from scratch; specific dataset details are currently unknown. The model is intended for general language tasks, and its training procedure specifies a learning rate of 4e-05 with a cosine learning rate scheduler.
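To illustrate the cosine learning rate scheduler mentioned above, here is a minimal sketch of how such a schedule behaves, assuming a decay from the stated peak of 4e-05 toward zero over the training run (the warmup steps, total step count, and minimum learning rate are illustrative assumptions, not values reported for this model):

```python
import math

def cosine_lr(step: int, total_steps: int,
              base_lr: float = 4e-5, min_lr: float = 0.0) -> float:
    """Cosine-decay learning rate: starts at base_lr and decays
    smoothly to min_lr as step approaches total_steps.
    (total_steps and min_lr here are illustrative assumptions.)"""
    progress = step / max(1, total_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# The schedule peaks at the start and decays to min_lr at the end.
print(cosine_lr(0, 1000))     # peak learning rate, 4e-05
print(cosine_lr(500, 1000))   # midpoint, half of the peak
print(cosine_lr(1000, 1000))  # final step, min_lr
```

In practice such a schedule is usually combined with a short linear warmup before the cosine decay begins; only the peak value of 4e-05 and the cosine shape are taken from the model description above.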
