dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16_merged
TEXT GENERATION
- Concurrency Cost: 1
- Model Size: 13B
- Quant: FP8
- Ctx Length: 4k
- Published: Sep 14, 2023
- Architecture: Transformer
- Status: Warm
The dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16_merged model is a 13-billion-parameter language model based on the Llama 2 architecture, fine-tuned for general language understanding. It features a 4096-token context length and demonstrates balanced performance across various benchmarks, including an average score of 46.33 on the Open LLM Leaderboard. This model is suitable for a range of natural language processing tasks requiring robust comprehension and generation capabilities.
Popular Sampler Settings
The top 3 parameter combinations used by Featherless users for this model adjust the following sampler parameters:
- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
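As a sketch of how these sampler parameters are typically supplied, the snippet below builds a request payload in the common OpenAI-compatible completions style. The numeric values are illustrative assumptions, not the actual "popular" Featherless configurations, which are not shown on this page; the prompt and `max_tokens` value are likewise hypothetical.

```python
# Hypothetical sampler configuration. The parameter names mirror the
# list above; the values are illustrative placeholders, not the real
# user-popular settings for this model.
sampler_config = {
    "temperature": 0.7,        # randomness of token selection
    "top_p": 0.9,              # nucleus-sampling probability cutoff
    "top_k": 40,               # keep only the k most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens by how often they appeared
    "presence_penalty": 0.0,   # penalize tokens that appeared at all
    "repetition_penalty": 1.1, # multiplicative penalty on repeats
    "min_p": 0.05,             # drop tokens below this relative probability
}

# Assemble an OpenAI-compatible completions payload for this model.
payload = {
    "model": "dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16_merged",
    "prompt": "Summarize the causes of the French Revolution.",
    "max_tokens": 256,
    **sampler_config,
}
```

The payload dict would then be sent as the JSON body of a POST request to an OpenAI-compatible completions endpoint.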