ShourenWSR/HT-ht-analysis-Qwen-instruct-no-think-only
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Context length: 32k · Published: Sep 23, 2025 · License: other · Architecture: Transformer
The ShourenWSR/HT-ht-analysis-Qwen-instruct-no-think-only model is a 7.6-billion-parameter instruction-tuned causal language model based on Qwen2.5-7B-Instruct. It has been fine-tuned on the ht-analysis_no_think_only dataset, which suggests it is optimized for a specific set of analytical tasks. As the dataset name implies, the model is intended for direct analytical responses without an explicit chain-of-thought "thinking" phase, and its 32,768-token context length allows it to process substantial inputs.
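A minimal usage sketch, assuming the model is published on the Hugging Face Hub under the id above and follows the standard Qwen2.5 ChatML conversation format (both are assumptions; this is not an official loading recipe). The helper shows the prompt layout explicitly; the `run_generation` function shows the conventional `transformers` loading path.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Render a two-turn conversation in the ChatML layout Qwen2.5 models expect."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


def run_generation(user_prompt: str) -> str:
    """Load the model and generate a reply. Downloads weights on first call."""
    # Imported here so the prompt helper above stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "ShourenWSR/HT-ht-analysis-Qwen-instruct-no-think-only"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    messages = [
        {"role": "system", "content": "You are a helpful analysis assistant."},
        {"role": "user", "content": user_prompt},
    ]
    # apply_chat_template produces the same ChatML layout as build_chatml_prompt.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=512)
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Call `run_generation("...")` to try the model; the 32k context window leaves ample room for long analytical inputs in the user turn.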