yufeng1/OpenThinker-7B-type6-e5-max-alpha0_25-textsummarization-type6-e1-alpha0_5-2
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Apr 12, 2026 · Architecture: Transformer · Cold

The yufeng1/OpenThinker-7B-type6-e5-max-alpha0_25-textsummarization-type6-e1-alpha0_5-2 model is a 7.6-billion-parameter language model fine-tuned for text summarization. It supports a context window of 32,768 tokens, and its training focus suggests it is optimized for generating concise, coherent summaries from long input texts.
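For inputs that exceed even a 32,768-token window, a common pre-processing step is to split the document into context-sized chunks before summarizing each one. The sketch below illustrates that idea; the whitespace split and the `RESERVED` generation budget are illustrative stand-ins for the model's actual tokenizer and decoding settings, not part of this model card.

```python
# Sketch: packing a long document into chunks that fit a 32,768-token
# context window before summarization. Whitespace splitting stands in
# for the model's real tokenizer, so token counts are approximate.

CTX_LENGTH = 32_768   # context window stated on the model card
RESERVED = 1_024      # hypothetical budget kept free for the generated summary


def chunk_for_context(text: str, max_tokens: int = CTX_LENGTH - RESERVED) -> list[str]:
    """Greedily pack whitespace 'tokens' into chunks of at most max_tokens."""
    words = text.split()
    return [
        " ".join(words[start:start + max_tokens])
        for start in range(0, len(words), max_tokens)
    ]


doc = "word " * 70_000            # a document too long for a single pass
parts = chunk_for_context(doc)
print(len(parts))                 # → 3 chunks of at most 31,744 "tokens" each
```

Each chunk could then be summarized independently, with the per-chunk summaries concatenated (or summarized again) to produce a final result.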
