LLMs/Vicuna-13b-v1.1
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4k · License: gpl-3.0 · Architecture: Transformer · Open Weights · Cold

Vicuna-13b-v1.1 is a 13-billion-parameter language model developed by LLMs, fine-tuned from LLaMA to follow instructions and sustain coherent multi-turn conversations. It is well suited to general-purpose conversational AI tasks.


Vicuna-13b-v1.1: An Instruction-Following Conversational Model

Vicuna-13b-v1.1 is a 13-billion-parameter language model fine-tuned from the LLaMA base model. Developed by LLMs, this iteration focuses on stronger instruction following and better performance in multi-turn conversations. The model was trained on a dataset of approximately 70,000 user-shared conversations collected from ShareGPT, which was central to its ability to understand and respond to complex instructions.

Key Capabilities

  • Instruction Following: Designed to accurately interpret and execute user instructions.
  • Multi-turn Conversation: Capable of maintaining coherent and contextually relevant dialogue over extended interactions.
  • General-Purpose AI: Suitable for a wide range of conversational applications.
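The multi-turn capability above depends on how conversation history is serialized into the prompt. As a rough illustration, Vicuna v1.1 is commonly prompted with a `USER: … ASSISTANT: …` template (the convention popularized by the FastChat project); the system message and separators in this sketch are assumptions based on that convention, not details stated on this card:

```python
# Minimal sketch of a Vicuna-v1.1-style multi-turn prompt builder.
# Assumption: the "USER:/ASSISTANT:" template used by FastChat for
# v1.1, with </s> closing each completed assistant turn.
DEFAULT_SYSTEM = (
    "A chat between a curious user and an artificial intelligence "
    "assistant. The assistant gives helpful, detailed, and polite "
    "answers to the user's questions."
)

def build_prompt(turns, system=DEFAULT_SYSTEM):
    """Render a conversation into a single prompt string.

    `turns` is a list of (user_message, assistant_reply) pairs; pass
    None as the reply in the final pair to ask the model to complete it.
    """
    parts = [system]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}")
        if assistant_msg is None:
            # Open-ended turn: the model generates from here.
            parts.append("ASSISTANT:")
        else:
            # Completed turn: terminate with the end-of-sequence marker.
            parts.append(f"ASSISTANT: {assistant_msg}</s>")
    return " ".join(parts)
```

For example, `build_prompt([("Hello!", "Hi there."), ("What is Vicuna?", None)])` yields a prompt that replays the first exchange and ends with a bare `ASSISTANT:`, so the model's generation continues the second turn with full context.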

Good For

  • Building chatbots that require strong instruction adherence.
  • Developing interactive AI assistants.
  • Applications needing robust conversational flow and context retention.