rmihaylov/Llama-3-DARE-v3-8B
Text Generation | Concurrency Cost: 1 | Model Size: 8B | Quant: FP8 | Ctx Length: 8K | License: llama3 | Architecture: Transformer
rmihaylov/Llama-3-DARE-v3-8B is an 8-billion-parameter language model created by rmihaylov and built on the Meta-Llama-3-8B architecture. It is a merge of pre-trained language models produced with the DARE TIES merge method, with Meta-Llama-3-8B-Instruct among its constituents. The merge is intended to combine the strengths of the constituent models, offering refined instruction-following within an 8192-token context window.
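As background on the merge method (this is not code from the model card), here is a minimal sketch of the DARE step, which drops delta weights (fine-tuned minus base) at random and rescales the survivors so the expected delta is preserved; the TIES part of DARE TIES additionally resolves sign conflicts between deltas from different models, which is omitted here. All tensor values are illustrative.

```python
import torch

def dare_delta(base: torch.Tensor, tuned: torch.Tensor, drop_p: float = 0.9) -> torch.Tensor:
    """Sparsify and rescale the delta between a tuned and a base weight (DARE)."""
    delta = tuned - base
    keep_mask = torch.rand_like(delta) >= drop_p   # drop each element with probability drop_p
    return delta * keep_mask / (1.0 - drop_p)      # rescale survivors to preserve the expectation

# A merge then adds the (possibly weighted) DARE-processed deltas back onto the base.
base = torch.randn(4, 4)
tuned = base + 0.01 * torch.randn(4, 4)
merged = base + dare_delta(base, tuned, drop_p=0.9)
```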
Popular Sampler Settings
The top 3 parameter combinations used by Featherless users for this model cover the following settings; a usage sketch follows the list.
temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p
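The sketch below shows one way to pass these settings when querying the model, assuming Featherless exposes an OpenAI-compatible endpoint at https://api.featherless.ai/v1. The values are placeholders, not the actual popular configurations from the page, and the non-standard parameters (top_k, repetition_penalty, min_p) are forwarded via extra_body on the assumption that the server accepts them.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_FEATHERLESS_API_KEY",
)

response = client.chat.completions.create(
    model="rmihaylov/Llama-3-DARE-v3-8B",
    messages=[{"role": "user", "content": "Summarize the DARE TIES merge method."}],
    temperature=0.7,             # standard OpenAI sampler parameters
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={                 # non-standard parameters, passed through if supported
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```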