laion/glm46-neulab-synatra-32ep-131k
Text generation · Open weights · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · License: apache-2.0 · Architecture: Transformer

laion/glm46-neulab-synatra-32ep-131k is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the penfever/glm46-neulab-synatra-32ep-131k dataset with a context length of 32,768 tokens. The architecture is unchanged from Qwen3-8B; what distinguishes this model is the behavior imparted by its fine-tuning data, so it is best suited to tasks in the domain covered by the glm46-neulab-synatra dataset. A minimal loading sketch follows below.
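Because the model shares the Qwen3-8B architecture, it can be queried like any causal language model. The sketch below uses Hugging Face transformers and assumes the checkpoint is published on the Hub under the laion/glm46-neulab-synatra-32ep-131k repo id; the bfloat16 dtype is a local stand-in for the FP8 quantization used by the hosted variant, and the prompt is purely illustrative.

```python
# Minimal sketch: load and query the model with transformers.
# Assumption: the checkpoint is available under this Hub repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "laion/glm46-neulab-synatra-32ep-131k"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # local stand-in; the hosted endpoint serves FP8
    device_map="auto",
)

# Qwen3-style checkpoints ship a chat template, so format the prompt with it.
messages = [{"role": "user", "content": "Summarize what fine-tuning does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Prompts longer than the 32k-token context window will be truncated or rejected, so long inputs should be chunked before generation.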
