FritzStack/COGN-QWEN4B-4bit
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Mar 15, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
COGN-QWEN4B-4bit is a 4 billion parameter model developed by FritzStack, based on the Qwen architecture. It is fine-tuned specifically for cognitive feature prediction: identifying cognitive biases (attention, interpretation, and memory bias) as well as rumination types. It serves as a specialized tool for analyzing psychological states from textual input, targeting mental health applications.
COGN-QWEN4B-4bit: Cognitive Feature Prediction Model
COGN-QWEN4B-4bit is a specialized 4 billion parameter language model developed by FritzStack, built upon the Qwen architecture. Its primary function is to analyze textual input and predict specific cognitive features, making it a unique tool in the domain of psychological text analysis.
Key Capabilities
- Cognitive Bias Detection: Identifies and classifies attention bias, interpretation bias, and memory bias (e.g., Negative).
- Rumination Analysis: Distinguishes types of rumination, such as 'Brooding'.
- Specialized Fine-tuning: Optimized for mental health-related text analysis, focusing on psychological indicators rather than general language tasks.
- Easy Integration: Provides a straightforward Python interface (`CognitivePredictor`) for quick deployment and use.
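The exact `CognitivePredictor` API is not documented here, so the sketch below only illustrates the kind of typed output schema such an interface might return; the field names and the `parse_prediction` helper are assumptions based on the capabilities listed above, not the actual library.

```python
from dataclasses import dataclass

# Hypothetical output schema; field names are assumptions inferred from
# the capabilities above (bias types and rumination), not the real API.
@dataclass
class CognitivePrediction:
    attention_bias: str       # e.g. "Negative"
    interpretation_bias: str
    memory_bias: str
    rumination_type: str      # e.g. "Brooding"

def parse_prediction(raw: dict) -> CognitivePrediction:
    """Map a raw label dict (as a model might emit) into the typed schema."""
    return CognitivePrediction(
        attention_bias=raw.get("attention_bias", "None"),
        interpretation_bias=raw.get("interpretation_bias", "None"),
        memory_bias=raw.get("memory_bias", "None"),
        rumination_type=raw.get("rumination_type", "None"),
    )

if __name__ == "__main__":
    raw = {"attention_bias": "Negative", "rumination_type": "Brooding"}
    pred = parse_prediction(raw)
    print(pred.attention_bias, pred.rumination_type)  # Negative Brooding
```

A typed schema like this keeps downstream code (dashboards, research pipelines) decoupled from whatever raw label format the model emits.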
Good For
- Mental Health Research: Analyzing large text datasets for patterns in cognitive biases and rumination.
- Psychological Assessment Tools: Integrating into applications that require automated detection of specific psychological states from user-generated text.
- Content Moderation: Identifying text that may indicate certain cognitive patterns relevant to user well-being.
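For the research use case above, per-text predictions are typically aggregated into corpus-level distributions. A minimal sketch, assuming each text has already been classified; `predict_rumination` is a stand-in stub for the real model call, not part of the package.

```python
from collections import Counter

def predict_rumination(text: str) -> str:
    """Stub standing in for the real model: the actual predictor would
    classify rumination from context; here we key off a single word."""
    return "Brooding" if "why" in text.lower() else "None"

def rumination_distribution(texts):
    """Count rumination labels across a corpus of texts."""
    return Counter(predict_rumination(t) for t in texts)

texts = [
    "Why does this always happen to me?",
    "I went for a walk today.",
    "Why can't I stop thinking about it?",
]
print(rumination_distribution(texts))  # Counter({'Brooding': 2, 'None': 1})
```

Swapping the stub for real model calls turns this into a batch-analysis loop over a dataset, with `Counter` giving the pattern summary researchers would inspect.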