Alibaba-NLP/ZeroSearch_google_V1_Qwen2.5_7B_Instruct

Text generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: May 7, 2025 · License: apache-2.0 · Architecture: Transformer

Alibaba-NLP/ZeroSearch_google_V1_Qwen2.5_7B_Instruct is a 7.6-billion-parameter instruction-tuned language model developed by Alibaba-NLP, built on the Qwen2.5 architecture. The model is designed for search-related applications and leverages a large context window of 131,072 tokens to process extensive information. Its primary differentiator is optimization for Google search queries, making it effective for information retrieval and summarization tasks.


Overview

Alibaba-NLP/ZeroSearch_google_V1_Qwen2.5_7B_Instruct is an instruction-tuned language model with 7.6 billion parameters, built upon the Qwen2.5 architecture. It features a substantial context window of 131,072 tokens, enabling it to handle and process very long inputs and generate comprehensive responses. The model's core focus is on enhancing search capabilities, particularly for Google-related queries.
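Like other Qwen2.5 instruct models, this model is typically prompted with the ChatML chat format. A minimal sketch of assembling such a prompt by hand (the system message and user query below are illustrative; in practice the tokenizer's built-in chat template handles this):

```python
def build_chatml_prompt(messages):
    """Format a list of {role, content} messages into Qwen-style ChatML,
    ending with the assistant header so the model continues from there."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful search assistant."},
    {"role": "user", "content": "Summarize the top results for 'FP8 quantization'."},
])
print(prompt)
```

Passing the same message list to `tokenizer.apply_chat_template(...)` from the Hugging Face `transformers` library should produce an equivalent prompt without hand-rolling the special tokens.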

Key Capabilities

  • Large Context Window: Processes up to 131,072 tokens, suitable for detailed analysis of long documents or extensive search results.
  • Instruction Following: Designed to accurately follow user instructions for various tasks.
  • Search Optimization: Specifically fine-tuned for performance on Google search-related tasks, indicating strong information retrieval and summarization abilities.
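Even with a 131,072-token window, long inputs need a budget check before prompting. A rough pre-check might look like the sketch below; the 4-characters-per-token heuristic is an assumption for English text, and the real count should come from the model's tokenizer:

```python
CTX_LIMIT = 131_072      # advertised context window, in tokens
CHARS_PER_TOKEN = 4      # rough English-text heuristic; use the tokenizer for real counts

def fits_in_context(text: str, reserve_for_output: int = 1024) -> bool:
    """Crude pre-check: estimate the token count from character length
    and leave headroom for the generated response."""
    est_tokens = len(text) // CHARS_PER_TOKEN
    return est_tokens + reserve_for_output <= CTX_LIMIT

print(fits_in_context("word " * 10_000))  # a ~50k-character document fits easily
```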

Good For

  • Information Retrieval: Excels at extracting and synthesizing information from large bodies of text, especially in the context of search queries.
  • Search Query Processing: Ideal for applications that involve understanding and responding to complex Google search requests.
  • Long Document Analysis: Its extended context window makes it suitable for tasks requiring the processing of lengthy articles, reports, or web pages.
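The retrieval and summarization use cases above amount to packing ranked search results into a single prompt the model can synthesize. A hypothetical sketch, where the result fields (`title`, `snippet`) and the instruction wording are illustrative assumptions rather than anything specified by the model:

```python
def format_search_context(query, results):
    """Pack ranked search results into one prompt block for summarization.
    The 'title'/'snippet' fields are illustrative, not a required schema."""
    lines = [f"Search query: {query}", "", "Results:"]
    for i, r in enumerate(results, 1):
        lines.append(f"[{i}] {r['title']}\n{r['snippet']}")
    lines.append("")
    lines.append("Instruction: synthesize the results above into a concise answer.")
    return "\n".join(lines)

prompt = format_search_context(
    "what is FP8 quantization",
    [
        {"title": "FP8 formats", "snippet": "8-bit floating point trades precision for range..."},
        {"title": "Quantization overview", "snippet": "Reducing numeric precision to cut memory..."},
    ],
)
print(prompt)
```

Numbering the results lets the model's answer cite sources like "[1]", which makes the summary easier to verify against the original snippets.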