dhmeltzer/Llama-2-13b-hf-eli5-wiki-1024_r_64_alpha_16_merged
Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4k · Published: Sep 14, 2023 · Architecture: Transformer · Warm

dhmeltzer/Llama-2-13b-hf-eli5-wiki-1024_r_64_alpha_16_merged is a 13-billion-parameter language model based on Llama-2. It is fine-tuned for general language understanding and generation, and its 4096-token context length makes it suitable for tasks with moderate input and output lengths. Its benchmark results suggest balanced performance on common NLP tasks such as reasoning, commonsense inference, and multiple-choice question answering.


Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model, covering the following sampler settings:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
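As a rough sketch of how these settings are typically passed, the parameters above map directly onto fields of an OpenAI-compatible completions request. The values below are purely illustrative placeholders, not the actual popular configurations (those are shown interactively on the page):

```python
# Illustrative sampler configuration; these values are assumptions,
# not the real Featherless user presets for this model.
sampler_config = {
    "temperature": 0.7,        # randomness of token sampling
    "top_p": 0.9,              # nucleus sampling cutoff
    "top_k": 40,               # restrict sampling to the k most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens by how often they appeared
    "presence_penalty": 0.0,   # penalize tokens that appeared at all
    "repetition_penalty": 1.1, # multiplicative penalty on repeated tokens
    "min_p": 0.05,             # drop tokens below this fraction of the top probability
}

# A request body combining the model ID with the sampler settings.
payload = {
    "model": "dhmeltzer/Llama-2-13b-hf-eli5-wiki-1024_r_64_alpha_16_merged",
    "prompt": "Explain like I'm five: why is the sky blue?",
    "max_tokens": 256,
    **sampler_config,
}
```

The payload would then be sent as the JSON body of a POST request to the provider's completions endpoint.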