AIMH/SQPsych-8b-gemma-Qwen_no_questionnaire

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Apr 9, 2026 · Architecture: Transformer · Cold

AIMH/SQPsych-8b-gemma-Qwen_no_questionnaire is a 7.6 billion parameter language model developed by AIMH, with a 32,768-token context length. The page lists a Transformer architecture, and the model's name suggests a blend or fine-tune involving Gemma and Qwen components. Because the model card provides limited information, its specific differentiators and primary use cases are not documented.


Model Overview

AIMH/SQPsych-8b-gemma-Qwen_no_questionnaire is a 7.6 billion parameter language model from AIMH with a substantial 32,768-token context window. Its name points to an integration or fine-tuning process involving elements of both the Gemma and Qwen architectures, though the model card does not confirm this.

Key Characteristics

  • Parameter Count: 7.6 billion parameters.
  • Context Length: Supports a long context window of 32768 tokens.
  • Architecture: Implied blend or fine-tune of Gemma and Qwen components.
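Given these characteristics, a minimal loading sketch is shown below. This assumes the weights are published on the Hugging Face Hub under the model id above and are compatible with the standard `transformers` auto classes; since the model card does not document a loading recipe, treat this as an illustration, not an official usage example.

```python
# Sketch: loading the model with Hugging Face transformers.
# Assumption: the repo id resolves on the Hub and works with the Auto* classes.

MODEL_ID = "AIMH/SQPsych-8b-gemma-Qwen_no_questionnaire"
MAX_CTX = 32768  # context length stated on this page


def load_model(model_id: str = MODEL_ID):
    """Return (tokenizer, model); import is deferred so the sketch can be
    read or imported without transformers installed."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # respect the checkpoint's stored precision (FP8 per this page)
        device_map="auto",    # place layers across available devices
    )
    return tokenizer, model
```

Since the training data and intended tasks are undocumented, any prompts sent to this model should be validated against your own evaluation suite first.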

Limitations and Recommendations

Due to the lack of detailed information in the provided model card, specific use cases, training data, evaluation results, and potential biases or risks are not documented. Users are advised to exercise caution and conduct thorough independent evaluations before deploying this model in production environments. Further information is needed to understand its intended applications and performance characteristics.