OptimalScale/robin-13b-v2-delta

Text Generation | Concurrency Cost: 1 | Model Size: 13B | Quant: FP8 | Ctx Length: 4k | Published: May 28, 2023 | Architecture: Transformer

OptimalScale/robin-13b-v2-delta is a 13 billion parameter auto-regressive language model developed by LMFlow and fine-tuned from the LLaMA architecture. It is designed for research on large language models and chatbots, aimed at researchers in natural language processing, machine learning, and artificial intelligence. The model is instruction-tuned on a high-quality merged dataset that includes ShareGPT, GPT-4-LLM, and BELLE, and supports a context length of 4096 tokens.


Model Overview

As summarized above, the model is a 13 billion parameter auto-regressive language model, developed by LMFlow and fine-tuned from the LLaMA architecture. It is intended primarily for research on large language models and chatbots, serving the natural language processing, machine learning, and artificial intelligence research communities, and is distributed under a non-commercial license.
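
The "-delta" suffix follows the convention, common among LLaMA fine-tunes of this period, of publishing only weight differences against the base model. Below is a minimal sketch of recovering usable weights, assuming a simple parameter-wise delta; the base-model ID and output path are placeholders, not the official merge procedure.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and the published delta checkpoint.
# "huggyllama/llama-13b" stands in here for the original LLaMA-13B weights.
base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-13b", torch_dtype=torch.float16
)
delta = AutoModelForCausalLM.from_pretrained(
    "OptimalScale/robin-13b-v2-delta", torch_dtype=torch.float16
)

# Add each delta tensor to the matching base parameter in place.
delta_state = delta.state_dict()
with torch.no_grad():
    for name, param in base.named_parameters():
        param.add_(delta_state[name])

# Save the merged, directly usable model and its tokenizer.
base.save_pretrained("robin-13b-v2")
AutoTokenizer.from_pretrained(
    "OptimalScale/robin-13b-v2-delta"
).save_pretrained("robin-13b-v2")
```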

Key Capabilities & Training

Robin v2 is built upon an enhanced self-instruct technique, using a custom corpus called the LMFlow Dataset, which merges high-quality data from several sources:

  • ShareGPT: 50K English and 10K Chinese samples.
  • GPT-4-LLM: 52K English samples.
  • BELLE: 80K Chinese samples.

Mixing English and Chinese instruction data in this way aims to improve the model's performance across tasks in both languages. Further details on the instruction-tuning process can be found in the associated paper.
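
As an illustration of what such a merge can look like, here is a hedged sketch that flattens several instruction sources into one LMFlow-style "text_only" JSON file; the input file names, record fields, and the ###Human/###Assistant prompt template are assumptions, not the published pipeline.

```python
import json

# Flatten an (instruction, response) pair into one training string.
# The prompt template and field names here are assumptions.
def to_text(sample: dict) -> str:
    return f"###Human: {sample['instruction']}###Assistant: {sample['response']}"

merged = {"type": "text_only", "instances": []}
# Hypothetical per-source files, each a JSON list of instruction records.
for path in ["sharegpt.json", "gpt4_llm.json", "belle.json"]:
    with open(path, encoding="utf-8") as f:
        for sample in json.load(f):
            merged["instances"].append({"text": to_text(sample)})

with open("lmflow_merged.json", "w", encoding="utf-8") as f:
    json.dump(merged, f, ensure_ascii=False)
```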

Evaluation

The model's performance is evaluated using the LMFlow Benchmark, an automatic evaluation framework for open-source LLMs.
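
As a concrete example of this kind of automatic evaluation, the sketch below computes per-token negative log-likelihood (NLL) of a causal LM on held-out text with Hugging Face transformers; the model path and evaluation sentences are placeholders, and this is not the LMFlow Benchmark implementation itself.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "robin-13b-v2"  # merged weights from the delta, see above
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

texts = ["The quick brown fox jumps over the lazy dog."]  # placeholder eval set
total_nll, total_tokens = 0.0, 0
with torch.no_grad():
    for text in texts:
        ids = tok(text, return_tensors="pt").input_ids.to(model.device)
        # With labels=input_ids, the returned loss is the mean NLL
        # over the sequence's predicted tokens.
        loss = model(ids, labels=ids).loss
        total_nll += loss.item() * (ids.shape[1] - 1)
        total_tokens += ids.shape[1] - 1

print(f"mean NLL: {total_nll / total_tokens:.3f}")
```

Lower mean NLL on held-out text indicates the model assigns higher probability to reference continuations, which is why frameworks of this kind use it as a cheap, reference-free proxy for quality.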

Deployment Options

LMFlow provides several methods for interacting with Robin models, including:

  • Online service for quick trials.
  • Colab-based chatbot demos (shell and web).
  • Local deployment for users with sufficient resources (a minimal sketch follows this list).
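
For the local route, inference with Hugging Face transformers might look like the following; this assumes the delta has already been merged into full weights (see above), and the ###Human/###Assistant prompt template is an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("robin-13b-v2")
model = AutoModelForCausalLM.from_pretrained(
    "robin-13b-v2", torch_dtype=torch.float16, device_map="auto"
)

prompt = "###Human: What is instruction tuning?###Assistant:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tok.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```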