digotetso/deepseek-r1-7b-csi131-csi132-tutor
Text generation
- Concurrency cost: 1
- Model size: 7.6B
- Quantization: FP8
- Context length: 32k
- Published: Mar 18, 2026
- Architecture: Transformer
- Status: Cold

digotetso/deepseek-r1-7b-csi131-csi132-tutor is a 7.6-billion-parameter language model based on the DeepSeek-R1 architecture, intended for general language understanding and generation. The name suggests a fine-tuned variant aimed at instructional or tutoring applications (plausibly for courses labeled CSI 131 and CSI 132). Its 32,768-token (32k) context window lets it process long conversations and documents, making it suitable for extended tutoring sessions and document-grounded tasks.
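One practical consequence of a fixed 32,768-token context window is that long tutoring conversations eventually exceed it and must be truncated. Below is a minimal, hypothetical sketch of keeping only the most recent messages within a token budget; the whitespace word count is a stand-in for the model's real tokenizer, and the function name and limits are illustrative, not part of this model's API.

```python
def trim_to_context(messages, max_tokens=32768, count=lambda m: len(m.split())):
    """Keep the most recent messages whose combined (approximate)
    token count fits within the model's context window.

    `count` is a crude whitespace-based stand-in for a real tokenizer.
    """
    kept, total = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        t = count(msg)
        if total + t > max_tokens:
            break  # adding this older message would overflow the budget
        kept.append(msg)
        total += t
    return list(reversed(kept))  # restore chronological order

# Toy usage with a tiny budget to show the truncation behavior:
history = ["hello there", "explain recursion with an example", "sure, recursion is when ..."]
print(trim_to_context(history, max_tokens=6))
```

In production you would count tokens with the model's actual tokenizer rather than a word count, and typically reserve part of the budget for the system prompt and the generated reply.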
