fsiddiqui2/Qwen2.5-7B-Instruct-HotpotQA-Abstention-10000-80-20
Text Generation
Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32K · Architecture: Transformer
The fsiddiqui2/Qwen2.5-7B-Instruct-HotpotQA-Abstention-10000-80-20 model is a 7.6-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture, supporting a context length of up to 131,072 tokens. It is fine-tuned for abstention in question answering, likely on multi-hop datasets such as HotpotQA: when the provided context does not contain enough information, the model can decline to answer rather than guess. Its primary applications are conversational AI and question-answering systems that benefit from large context windows and abstention-aware response generation.
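As a sketch of how such an abstention-tuned model might be prompted, the snippet below builds a chat message list that supplies a context passage and instructs the model to abstain when the context is insufficient. The system instruction, message format, and refusal string are illustrative assumptions; the fine-tune's actual prompt template is not documented here.

```python
# Sketch: constructing an abstention-style QA prompt.
# The exact wording and refusal phrase below are assumptions for
# illustration, not the fine-tune's documented template.

def build_messages(context: str, question: str) -> list[dict]:
    """Build a chat message list that asks the model to answer from the
    given context only, and to abstain when the context is insufficient."""
    system = (
        "Answer the question using only the provided context. "
        "If the context does not contain enough information to answer, "
        "reply with exactly: I don't know."
    )
    user = f"Context:\n{context}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_messages(
    context="Paris is the capital of France.",
    question="What is the capital of Germany?",
)
```

These messages could then be passed to the model through a standard chat interface, e.g. `tokenizer.apply_chat_template(...)` followed by `model.generate(...)` with the `transformers` library.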