NousResearch/GPT4-x-Vicuna-13b-fp16
Text generation · Concurrency cost: 1 · Model size: 13B · Quant: FP8 · Context length: 4K · Published: May 6, 2023 · License: GPL · Architecture: Transformer · Open weights

NousResearch/GPT4-x-Vicuna-13b-fp16 is a 13 billion parameter language model developed by NousResearch, fine-tuned from the Vicuna-13b-1.1 base model. It is specifically optimized for instruction following and conversational tasks, leveraging a diverse set of GPT-4 generated datasets. This model excels at generating human-like responses to a wide range of prompts, making it suitable for applications requiring nuanced interaction.

Model Overview

NousResearch/GPT4-x-Vicuna-13b-fp16 is a 13 billion parameter language model built upon the eachadea/vicuna-13b-1.1 base model. It has been extensively fine-tuned using a curated collection of high-quality, GPT-4 generated instruction datasets, including Teknium's GPTeacher, an unreleased Roleplay v2 dataset, GPT-4-LLM Uncensored, WizardLM Uncensored, and the Nous Research Instruct Dataset.

Key Characteristics

  • Instruction-tuned: Fine-tuned on approximately 180,000 instructions, all generated by GPT-4 and cleaned to remove OpenAI-style refusals and "As an AI Language Model" responses.
  • Dataset Diversity: Utilizes a broad range of datasets to enhance its conversational and instruction-following capabilities.
  • Training: Trained for 5 epochs on 8× A100 80GB GPUs using the Alpaca DeepSpeed training code.
  • Prompt Format: Adheres to the Alpaca prompt format, supporting both simple instruction-response and instruction-input-response structures.

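The Alpaca prompt structures mentioned above can be assembled with a small helper. This is a minimal sketch: the function name `build_alpaca_prompt` is hypothetical (not part of the model's release), but the `### Instruction:` / `### Input:` / `### Response:` layout follows the standard Alpaca template the model card references.

```python
from typing import Optional


def build_alpaca_prompt(instruction: str, input_text: Optional[str] = None) -> str:
    """Assemble a prompt in the Alpaca format.

    Uses the instruction-input-response structure when input_text is
    provided, otherwise the simpler instruction-response structure.
    """
    if input_text:
        return (
            "### Instruction:\n"
            f"{instruction}\n\n"
            "### Input:\n"
            f"{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )
```

The generated string is passed to the model as-is; the model is expected to continue the text after the `### Response:` marker.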
Use Cases

This model is particularly well-suited for applications requiring:

  • General-purpose instruction following: Responding accurately and contextually to user commands.
  • Conversational AI: Engaging in nuanced and human-like dialogue.
  • Role-playing scenarios: Generating creative and consistent responses within defined roles.

While the base model may retain some of OpenAI's refusal behavior, future versions are planned to address this by using a cleaned Vicuna base.