EQUES/MedLLama3-JP-v2

Hugging Face
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Jul 1, 2024 · License: llama3 · Architecture: Transformer

EQUES/MedLLama3-JP-v2 is an 8-billion-parameter merged language model based on Llama3, developed by Issey Sukeda. It is designed to strengthen Japanese medical knowledge and Q&A capability by merging Japanese and English medical LLMs. The model scores 46.6% on IgakuQA, a dataset drawn from the Japanese physician national examination, outperforming its base models on medical question answering.


MedLLama3-JP-v2: Japanese Medical LLM

EQUES/MedLLama3-JP-v2 is an 8-billion-parameter merged language model built on the Llama3 architecture. Developed by Issey Sukeda, it aims to improve Japanese medical knowledge and medical Q&A capability by combining several Llama3-based models: tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1, aaditya/Llama3-OpenBioLLM-8B, Henrychur/MMed-Llama-3-8B, and elyza/Llama-3-ELYZA-JP-8B.

Key Capabilities & Performance

  • Enhanced Japanese Medical Q&A: The model is specifically fine-tuned to address medical questions in Japanese, leveraging knowledge from both Japanese and English medical LLMs.
  • Strong IgakuQA Performance: It achieves 46.6% accuracy on the IgakuQA dataset (Japanese physician national examination), surpassing constituent base models such as tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1 (42.2%) and elyza/Llama-3-ELYZA-JP-8B (43.9%), indicating improved handling of complex medical questions.
  • Context Length: Supports a context length of 8192 tokens.
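The card ships no example code, so the following is a minimal inference sketch, assuming the standard Hugging Face transformers API (AutoTokenizer / AutoModelForCausalLM) and that the repository includes a Llama-3-style chat template; the sample question is illustrative, not from the card.

```python
# Hypothetical usage sketch for EQUES/MedLLama3-JP-v2 via transformers.
# Assumes the repo ships a Llama-3-style chat template; nothing here is
# official example code from the model card.

def build_messages(question: str) -> list:
    """Wrap a Japanese medical question in a chat-format message list."""
    return [{"role": "user", "content": question}]

def ask(question: str, max_new_tokens: int = 256) -> str:
    # transformers is imported lazily so the sketch can be read without
    # the (large) dependency and model weights installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "EQUES/MedLLama3-JP-v2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer.apply_chat_template(
        build_messages(question), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

# Example call (downloads ~8B-parameter weights; research use only):
# print(ask("高血圧の第一選択薬は何ですか？"))  # "What is the first-line drug for hypertension?"
```

As the Important Considerations below note, outputs are for research and development only and must not be used for diagnosis or treatment.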

Important Considerations

  • Not for Medical Use: This model is for research and development purposes only and should not be used for actual medical diagnosis or treatment. The accuracy of its outputs is not guaranteed.
  • Early Stage Development: The model is still in early stages of research and development and has not been fully tuned for human intent alignment or safety considerations.
