lighteternal/Llama3-merge-biomed-8b is an 8-billion-parameter language model based on the Llama 3 architecture, created through a DARE-TIES merge of Llama3-8b-Instruct, NousResearch/Hermes-2-Pro-Llama-3-8B, and aaditya/Llama3-OpenBioLLM-8B. The model is optimized for biomedical tasks, showing improved scores on the HendrycksTest (MMLU) Biology and Medicine subsets as well as on complex-reasoning benchmarks such as ARC Challenge and Winogrande. It is intended for applications that require both general language understanding and specialized biomedical knowledge, and supports a context length of 8,192 tokens.
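As a minimal sketch (not an official usage guide), the model can be loaded with Hugging Face transformers and prompted through the Llama 3 Instruct chat template. The generation settings and the example prompt below are illustrative assumptions, not recommended values:

```python
# Sketch: load lighteternal/Llama3-merge-biomed-8b and run a short biomedical prompt.
# Dtype, device placement, and generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lighteternal/Llama3-merge-biomed-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

# Apply the tokenizer's chat template (Llama 3 Instruct style).
messages = [
    {"role": "system", "content": "You are a careful biomedical assistant."},
    {"role": "user", "content": "Briefly explain the mechanism of action of metformin."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the merge inherits the Llama 3 Instruct lineage, using the tokenizer's built-in chat template (rather than hand-formatted prompts) is the safer default; the full 8,192-token context is available subject to available GPU memory.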