iRASC/BioLlama-Ko-8B

Text generation | Model size: 8B | Quantization: FP8 | Context length: 8K | License: apache-2.0 | Architecture: Transformer | Concurrency cost: 1

iRASC/BioLlama-Ko-8B is an 8-billion-parameter language model from iRASC, built by merging ProbeMedicalYonseiMAILab/medllama3-v20 and beomi/Llama-3-Open-Ko-8B with the DARE TIES method. It is optimized for Korean medical question-answering tasks and performs strongly on Korean medical benchmarks, with a context length of 8,192 tokens.


BioLlama-Ko-8B Overview

iRASC/BioLlama-Ko-8B was created by merging pre-trained models with the DARE TIES method: it uses ProbeMedicalYonseiMAILab/medllama3-v20 as the base model and folds in beomi/Llama-3-Open-Ko-8B to strengthen its Korean-language capabilities.
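
The model can be loaded like any other Llama-3-based checkpoint. Below is a minimal local-inference sketch using the standard Hugging Face transformers API; the model id comes from this card, while the Korean prompt, dtype, and generation length are illustrative assumptions rather than settings published by iRASC.

```python
# A minimal sketch, assuming the standard transformers AutoModel API and a
# single GPU with enough memory for bf16 weights. The prompt is an
# illustrative Korean medical question, not an official example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iRASC/BioLlama-Ko-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# "Which drug class is recommended first-line for patients with hypertension?"
prompt = "고혈압 환자에게 1차로 권장되는 약물 계열은 무엇인가요?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```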

Key Capabilities

  • Specialized Medical Knowledge: Optimized for Korean medical question-answering, demonstrating proficiency across doctor, nurse, and pharmacy-related queries.
  • Benchmark Performance: Achieves an average score of 55.70 on the KorMedMCQA benchmark, outperforming gpt-3.5-turbo-0613, llama2-70b, and SOLAR-10.7B-v1.0 in this specific domain.
  • Merge Method: Built with the DARE TIES merge method, which drops and rescales each source model's delta weights and resolves sign conflicts between them, reducing interference when the models are combined (see the sketch after this list).
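
As a sketch of how the merge above could be reproduced, the hypothetical recipe below writes a dare_ties configuration and invokes the mergekit-yaml CLI from the mergekit toolkit. The base and source models come from this card; the density and weight values are illustrative assumptions, not the settings iRASC actually used.

```python
# Hypothetical reconstruction of the merge, assuming the mergekit toolkit
# and its mergekit-yaml CLI. Models come from this card; density/weight
# values are illustrative guesses, not iRASC's published settings.
import subprocess
import textwrap

config = textwrap.dedent("""\
    merge_method: dare_ties
    base_model: ProbeMedicalYonseiMAILab/medllama3-v20
    models:
      - model: ProbeMedicalYonseiMAILab/medllama3-v20
      - model: beomi/Llama-3-Open-Ko-8B
        parameters:
          density: 0.5   # fraction of delta weights kept after DARE pruning (assumed)
          weight: 0.5    # contribution of the Korean model's deltas (assumed)
    dtype: bfloat16
    """)

with open("biollama-ko.yaml", "w") as f:
    f.write(config)

# mergekit-yaml <config> <output-dir> is mergekit's standard entry point
subprocess.run(["mergekit-yaml", "biollama-ko.yaml", "./BioLlama-Ko-8B"], check=True)
```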

Good For

  • Korean Medical Applications: Ideal for tasks requiring understanding and generation of text within the Korean medical field.
  • Research and Development: Suitable for researchers exploring merged language models and their application in specialized domains.
  • Domain-Specific Q&A: Particularly effective for question-answering systems focused on medical knowledge in Korean.

Popular Sampler Settings

The most popular configurations among Featherless users for this model combine the following sampler parameters: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p.
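
These samplers map naturally onto an OpenAI-compatible chat completions call. The sketch below is a hypothetical request: the Featherless base URL, the extra_body pass-through for samplers outside the OpenAI schema, and all parameter values are assumptions, not the community presets shown in the original configs.

```python
# Hypothetical request sketch, assuming Featherless exposes an
# OpenAI-compatible endpoint and forwards non-OpenAI samplers via
# extra_body. All sampler values here are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_FEATHERLESS_API_KEY",
)

response = client.chat.completions.create(
    model="iRASC/BioLlama-Ko-8B",
    # "Please explain the main side effects of aspirin."
    messages=[{"role": "user", "content": "아스피린의 주요 부작용을 설명해 주세요."}],
    temperature=0.7,           # illustrative, not a community preset
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={               # samplers outside the OpenAI schema (assumed supported)
        "top_k": 40,
        "min_p": 0.05,
        "repetition_penalty": 1.1,
    },
)
print(response.choices[0].message.content)
```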