OptimalScale/robin-7b-v2-delta

Text Generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4K · Published: May 28, 2023 · Architecture: Transformer

OptimalScale's robin-7b-v2-delta is a 7-billion-parameter auto-regressive language model, fine-tuned from Meta's LLaMA by the LMFlow team. The model is designed for research on large language models and chatbots, and was trained on a high-quality dataset built with self-instruct techniques. It is intended primarily for researchers in natural language processing, machine learning, and artificial intelligence.


OptimalScale/robin-7b-v2-delta Overview

OptimalScale's robin-7b-v2-delta is a 7-billion-parameter auto-regressive language model developed by the LMFlow team. It is fine-tuned from the original LLaMA model and targets research applications in large language models and chatbots. As the "-delta" suffix indicates, the repository publishes delta weights, which must be combined with the original LLaMA weights to obtain a usable model. Training expands upon self-instruct techniques using a custom corpus, the LMFlow Dataset.

Key Capabilities & Training

  • Foundation Model: Fine-tuned from the LLaMA model, providing a strong base for language understanding and generation.
  • Specialized Dataset: Trained on the LMFlow Dataset, which merges high-quality data from sources like ShareGPT (50K English, 10K Chinese), GPT-4-LLM (52K English), and BELLE (80K Chinese).
  • Research Focus: Primarily designed for research in large language models and chatbot development, catering to NLP, ML, and AI researchers.
  • Evaluation: Evaluated using the LMFlow Benchmark.
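Because the repository ships delta weights rather than full weights (a common convention for LLaMA fine-tunes, reflected in the "-delta" suffix), the base LLaMA-7B parameters generally have to be added back in before inference. The sketch below shows the key-wise addition at the state-dict level; the helper name `apply_delta` and the paths in the comments are illustrative assumptions, not LMFlow's own tooling, so consult the LMFlow repository for the official conversion script.

```python
def apply_delta(base_state, delta_state):
    """Recover fine-tuned weights by adding base weights to the delta,
    key by key (fine_tuned = base + delta).

    Works on any mapping of name -> tensor-like values that support `+`
    (e.g. PyTorch state dicts)."""
    if set(base_state) != set(delta_state):
        diff = set(base_state) ^ set(delta_state)
        raise KeyError(f"state dicts disagree on keys: {sorted(diff)}")
    return {k: base_state[k] + delta_state[k] for k in base_state}

# With Hugging Face transformers, the same idea looks roughly like this
# (paths are placeholders, not verified against the actual repo layout):
#
#   base  = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")
#   delta = AutoModelForCausalLM.from_pretrained("OptimalScale/robin-7b-v2-delta")
#   merged = apply_delta(base.state_dict(), delta.state_dict())
#   delta.load_state_dict(merged)
#   delta.save_pretrained("robin-7b-v2")
```

The key-set check guards against merging a delta with the wrong base checkpoint, which would otherwise fail silently or produce garbage weights.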

Use Cases

This model is ideal for:

  • LLM Research: Investigating and experimenting with large language model behaviors and architectures.
  • Chatbot Development: Building and testing conversational AI systems.
  • Natural Language Processing: Exploring advanced NLP tasks and applications.

LMFlow provides several deployment options: an online service, Colab-based chatbots (shell and web versions), and local deployment for users with more substantial hardware.
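For local chatbot experimentation, instruction-tuned LLaMA derivatives typically expect a fixed conversation template. The sketch below builds such a prompt; the "###Human:"/"###Assistant:" separators are an assumption based on common LMFlow chatbot conventions, not verified against this model card, so check the repository's chat scripts before relying on them.

```python
def build_prompt(turns):
    """Format a list of (role, text) turns into a single prompt string,
    leaving a trailing assistant tag for the model to complete.

    Note: the separator tokens below are assumed, not confirmed for
    robin-7b-v2; adjust them to whatever the LMFlow scripts use."""
    tags = {"human": "###Human:", "assistant": "###Assistant:"}
    parts = [f"{tags[role]} {text}" for role, text in turns]
    parts.append(tags["assistant"])  # cue the model to respond next
    return " ".join(parts)
```

The resulting string would then be passed to the merged model's tokenizer and `generate` call in the usual transformers workflow.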