timberrific/open-bio-med-merge

Available on Hugging Face.

Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 8K · License: llama3 · Architecture: Transformer

timberrific/open-bio-med-merge is an 8-billion-parameter language model, created by timberrific, with an 8192-token context length. It merges two specialized biomedical LLMs, JSL-MedLlama-3-8B-v1.0 and OpenBioLLM-Llama3-8B, using the SLERP method, and is optimized for biomedical and medical natural language processing tasks, leveraging the combined knowledge of its constituent models.


Overview

timberrific/open-bio-med-merge is an 8 billion parameter language model designed for specialized applications in the biomedical and medical domains. It was created by merging two pre-trained models, johnsnowlabs/JSL-MedLlama-3-8B-v1.0 and aaditya/OpenBioLLM-Llama3-8B, using the SLERP (Spherical Linear Interpolation) merge method via mergekit.
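The exact merge configuration for this model has not been published, but a SLERP merge of these two models via mergekit would typically be expressed as a YAML config along the following lines. This is a representative sketch: the `layer_range`, interpolation schedule `t`, and `dtype` values are illustrative assumptions, not the settings timberrific actually used.

```yaml
# Hypothetical mergekit SLERP config (values illustrative)
slices:
  - sources:
      - model: johnsnowlabs/JSL-MedLlama-3-8B-v1.0
        layer_range: [0, 32]
      - model: aaditya/OpenBioLLM-Llama3-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: johnsnowlabs/JSL-MedLlama-3-8B-v1.0
parameters:
  t:
    # Per-layer interpolation weights; 0 favors the base model, 1 the other
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

With mergekit installed, such a config is applied with `mergekit-yaml config.yml ./output-model`.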

Key Characteristics

  • Specialized Domain Knowledge: Combines the strengths of two leading biomedical LLMs, making it highly proficient in medical and biological contexts.
  • Merge Method: Utilizes the SLERP method for merging, which aims to create a balanced integration of the source models' capabilities.
  • Parameter Count: Features 8 billion parameters, offering a balance between performance and computational efficiency for domain-specific tasks.
  • Context Length: Supports an 8192-token context window, suitable for processing moderately long medical texts or research papers.
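To make the SLERP merge method concrete: rather than averaging weights linearly, SLERP interpolates along the arc between two weight tensors, preserving their norms more faithfully. A minimal NumPy sketch of the operation applied per tensor (a simplification of what mergekit does, not its actual implementation):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t=0 returns v0, t=1 returns v1; intermediate t values move
    along the arc between them rather than the straight line.
    """
    v0f = v0.ravel().astype(np.float64)
    v1f = v1.ravel().astype(np.float64)
    # Angle between the flattened tensors
    cos = np.dot(v0f, v1f) / (np.linalg.norm(v0f) * np.linalg.norm(v1f) + eps)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    if theta < eps:
        # Nearly parallel tensors: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1
```

For two orthogonal unit vectors, `slerp(0.5, ...)` yields a unit vector halfway along the arc, whereas a plain average would shrink the norm to about 0.71.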

Good For

  • Applications requiring deep understanding of biomedical literature.
  • Medical question answering and information extraction.
  • Research in bioinformatics and clinical text analysis.
  • Developing tools for healthcare professionals and researchers.

Popular Sampler Settings

Featherless tracks the three most popular sampler configurations used with this model. Each configuration sets the following parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
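When calling the model through an OpenAI-compatible API such as the one Featherless exposes, these parameters are passed as fields of the request payload. A minimal sketch; the values below are illustrative placeholders, not the community-popular configs shown above:

```python
# Hypothetical sampler configuration (values illustrative, not Featherless defaults)
sampler = {
    "temperature": 0.7,        # randomness of sampling; lower is more deterministic
    "top_p": 0.9,              # nucleus sampling: keep tokens within this cumulative probability
    "top_k": 40,               # keep only the k most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens by how often they already appeared
    "presence_penalty": 0.0,   # penalize tokens that appeared at all
    "repetition_penalty": 1.1, # multiplicative penalty on repeated tokens
    "min_p": 0.05,             # drop tokens below this fraction of the top token's probability
}

# The dict would be merged into a chat-completions request body alongside
# "model" and "messages" fields.
request_body = {"model": "timberrific/open-bio-med-merge", **sampler}
```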