PJMixers-Archive/LLaMa-1-MedicWizard-7B

Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: May 14, 2023 · Architecture: Transformer

PJMixers-Archive/LLaMa-1-MedicWizard-7B is a 7 billion parameter language model based on the LLaMa-1 architecture, created by PJMixers-Archive. This model is a 50/50 merge of WizardLM-Uncensored-7B and MedAlpaca-7B, designed to combine general conversational abilities with specialized medical knowledge. It is particularly suited for applications requiring both broad understanding and specific medical insights, operating within a 4096-token context window.


Overview

PJMixers-Archive/LLaMa-1-MedicWizard-7B is a 7 billion parameter language model built on the LLaMa-1 architecture. It was created by merging two distinct models, WizardLM-Uncensored-7B and MedAlpaca-7B, with equal 50/50 weighting.
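A 50/50 merge of this kind is typically a linear average of the two parent models' weights, which works because both parents share the same LLaMa-1 7B architecture and therefore identical parameter names and shapes. The sketch below illustrates the idea on plain Python dictionaries standing in for state dicts; the helper name and toy values are illustrative, not the actual merge script used for this model:

```python
def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Linearly interpolate two state dicts: alpha * A + (1 - alpha) * B.

    Both dicts must share the same keys (same architecture), as is the
    case for two LLaMa-1 7B fine-tunes.
    """
    if sd_a.keys() != sd_b.keys():
        raise ValueError("state dicts must have identical parameter names")
    return {
        name: [alpha * a + (1 - alpha) * b
               for a, b in zip(sd_a[name], sd_b[name])]
        for name in sd_a
    }

# Toy stand-ins for two parameter tensors (flattened to short lists here).
wizardlm = {"layer0.weight": [1.0, 2.0], "layer0.bias": [0.0, 0.0]}
medalpaca = {"layer0.weight": [3.0, 4.0], "layer0.bias": [2.0, 2.0]}

merged = merge_state_dicts(wizardlm, medalpaca, alpha=0.5)  # 50/50 merge
```

With alpha=0.5 every merged parameter sits exactly halfway between the two parents, which is what "equal 50/50 weighting" means in practice.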

Key Capabilities

  • Hybrid Knowledge Base: Combines the broad, uncensored conversational capabilities of WizardLM-Uncensored-7B with the specialized medical knowledge of MedAlpaca-7B.
  • Medical Domain Focus: Inherits medical expertise from MedAlpaca-7B, making it suitable for tasks requiring understanding of medical terminology and concepts.
  • General Conversational Ability: Retains the general language understanding and generation strengths from WizardLM-Uncensored-7B.

Good For

  • Applications requiring a blend of general-purpose dialogue and specific medical information.
  • Use cases where a model needs to understand and respond to medical queries while maintaining a broad conversational scope.
  • Exploratory research into merged model performance for specialized domains.
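Since both parent models are instruction-tuned, an Alpaca-style instruction template is a reasonable starting point when querying the merge, though the exact template this model expects is an assumption here, not documented behavior. A minimal sketch of building such a prompt:

```python
# Alpaca-style instruction template (assumed format, not confirmed
# for this specific merge).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Format a user query with the Alpaca-style instruction template."""
    return ALPACA_TEMPLATE.format(instruction=instruction.strip())

# Example medical query, reflecting the model's hybrid domain focus.
prompt = build_prompt("List common symptoms of iron-deficiency anemia.")
```

The resulting string would be tokenized and passed to the model for generation, keeping the total prompt plus response within the 4096-token context window.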