Multiplex-Thinking/Multiplex-Thinking-7B
Task: Text Generation
Concurrency Cost: 1
Model Size: 7.6B
Quantization: FP8
Context Length: 32k
Published: Jan 14, 2026
License: MIT
Architecture: Transformer

Multiplex-Thinking/Multiplex-Thinking-7B is a 7.6-billion-parameter language model developed by Multiplex-Thinking. With a context length of 131,072 tokens, it is designed for processing extensive inputs, making it well suited to tasks that require deep contextual understanding and complex reasoning over large bodies of text.
