Expert68/llama2_13b_instructed_version2
Text Generation
Concurrency Cost: 1
Model Size: 13B
Quant: FP8
Ctx Length: 4k
Published: Oct 14, 2023
License: apache-2.0
Architecture: Transformer

Expert68/llama2_13b_instructed_version2 is a 13 billion parameter instruction-tuned language model based on the Llama 2 architecture, featuring a 4096-token context length. It is fine-tuned on a diverse collection of datasets including Stanford Alpaca, Open Assistant, LIMA, CodeAlpaca, GPT-4 Generated Data, and UltraChat. This model is designed for general-purpose instruction following, with a particular emphasis on conversational AI and code-related tasks due to its training data.
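Since the model is an instruction-tuned Llama 2 fine-tune, prompts typically need to be wrapped in a chat template. This page does not document the exact template this fine-tune expects, so the sketch below assumes the standard Llama-2-chat `[INST]` convention used by the base models:

```python
# Sketch of the Llama-2-chat prompt template. ASSUMPTION: this fine-tune's
# exact prompt format is not stated on this page; the [INST] wrapper below
# is the convention used by the base Llama-2-chat models.

def format_prompt(instruction: str, system: str = "") -> str:
    """Wrap a user instruction (and optional system prompt) in the
    Llama-2-chat template."""
    if system:
        return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{instruction} [/INST]"
    return f"<s>[INST] {instruction} [/INST]"

prompt = format_prompt("Summarize the Llama 2 architecture in one sentence.")
```

If the fine-tune was trained on a different template (e.g. Alpaca-style `### Instruction:` blocks, which several of its listed datasets use), that format should be substituted instead.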


Popular Sampler Settings

The three parameter combinations most commonly used by Featherless users for this model draw on the following sampler settings:

temperature: scales the output distribution; higher values produce more random completions
top_p: nucleus sampling; restricts sampling to the smallest token set whose cumulative probability exceeds p
top_k: restricts sampling to the k most probable tokens
frequency_penalty: penalizes tokens in proportion to how often they have already appeared
presence_penalty: penalizes any token that has appeared at least once
repetition_penalty: multiplicative penalty applied to previously generated tokens
min_p: discards tokens whose probability falls below a fraction of the most probable token's probability
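The sampler settings above map onto request fields for an OpenAI-compatible completion endpoint such as the one Featherless exposes. The sketch below assembles such a payload; all parameter values are illustrative placeholders, not the actual popular configurations from this page, and the handling of the non-standard fields (top_k, repetition_penalty, min_p) as extra top-level keys is an assumption about the serving backend:

```python
# Sketch: building a chat-completion payload with sampler overrides for an
# OpenAI-compatible endpoint. ASSUMPTION: the sampler values below are
# illustrative, and extension fields (top_k, repetition_penalty, min_p) are
# assumed to be accepted as extra top-level keys by the server.

OPENAI_FIELDS = ("temperature", "top_p", "frequency_penalty", "presence_penalty")
EXTENSION_FIELDS = ("top_k", "repetition_penalty", "min_p")

def build_request(prompt: str, **sampler) -> dict:
    """Assemble a chat-completion request body, copying in only the
    sampler parameters the caller actually supplied."""
    payload = {
        "model": "Expert68/llama2_13b_instructed_version2",
        "messages": [{"role": "user", "content": prompt}],
    }
    for key in OPENAI_FIELDS + EXTENSION_FIELDS:
        if key in sampler:
            payload[key] = sampler[key]
    return payload

request = build_request(
    "Explain nucleus sampling in two sentences.",
    temperature=0.7,        # illustrative value
    top_p=0.9,              # illustrative value
    repetition_penalty=1.1  # illustrative value
)
```

Sending this body to the chat-completions route (for example via the `openai` Python client, pointed at the Featherless base URL) would apply the chosen sampler configuration to this model.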