FreedomIntelligence/HuatuoGPT-o1-72B

Parameters: 72.7B
Quantization: FP8
Context length: 32,768 tokens
Released: Dec 28, 2024
License: apache-2.0
Hosted on: Hugging Face
Overview

HuatuoGPT-o1-72B: Advanced Medical Reasoning LLM

HuatuoGPT-o1-72B, developed by FreedomIntelligence, is a 72.7-billion-parameter medical large language model (LLM) built on the Qwen2.5-72B backbone. It is designed for advanced medical reasoning and follows a "thinks-before-it-answers" approach: the model first produces an explicit internal thought process, which it can reflect on and refine, before formulating its final response.

Key Capabilities

  • Advanced Medical Reasoning: Designed to handle complex medical queries and scenarios.
  • Reflective Reasoning Process: Produces output in two stages, first generating internal reasoning under a ## Thinking header and then giving its answer under a ## Final Response header (a parsing sketch follows this list).
  • Multilingual Support: Supports both English and Chinese.
  • Large Parameter Count: With 72.7 billion parameters, it offers robust language understanding and generation capabilities.
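
The sketch below shows one way downstream code might consume this two-part output, by splitting a completion at the section markers. It assumes the model emits the literal ## Thinking and ## Final Response headers described above; the helper name and the demo string are illustrative, not part of the model's API.

```python
# Minimal sketch: split a HuatuoGPT-o1 completion into its two sections.
# Assumes the output contains literal "## Thinking" and "## Final Response"
# markers; adjust if your generations use different headers.

def split_reasoning(output: str) -> tuple[str, str]:
    """Return (thinking, final_response) extracted from a completion."""
    thinking, final = "", output
    if "## Final Response" in output:
        head, _, tail = output.partition("## Final Response")
        thinking = head.replace("## Thinking", "", 1).strip()
        final = tail.strip()
    return thinking, final


if __name__ == "__main__":
    demo = (
        "## Thinking\nThe cough is chronic and dry, so I should consider ...\n\n"
        "## Final Response\nCommon first steps include ..."
    )
    reasoning, answer = split_reasoning(demo)
    print("THINKING:", reasoning)
    print("ANSWER:", answer)
```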

Usage and Integration

This model can be deployed in the same way as Qwen2.5-72B-Instruct: it is compatible with serving frameworks such as vLLM and SGLang, and it can also be run directly with the Hugging Face transformers library (a minimal example follows below). Its structured output format, which separates the reasoning trace from the final response, makes it well suited to applications that require transparent, verifiable medical reasoning.
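
Below is a minimal direct-inference sketch with transformers. It assumes the checkpoint ships a standard chat template (as Qwen2.5-Instruct derivatives do) and that sufficient GPU memory is available for a 72B model; the prompt is only an example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "FreedomIntelligence/HuatuoGPT-o1-72B"

# Load the tokenizer and model; device_map="auto" shards the weights across GPUs.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)

# Build a chat-formatted prompt from a single user turn.
messages = [{"role": "user", "content": "How should a persistent dry cough be evaluated?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate; the completion should contain the "## Thinking" and
# "## Final Response" sections described above.
outputs = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For serving, the same checkpoint can typically be loaded by vLLM or SGLang just as Qwen2.5-72B-Instruct would be; exact launch options depend on the framework version and are not covered here.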

For further technical details, refer to the associated research paper.