TigerResearch/tigerbot-70b-base-v2

Text Generation · Concurrency Cost: 4 · Model Size: 70B · Quant: FP8 · Ctx Length: 8k · Published: Nov 17, 2023 · License: apache-2.0 · Architecture: Transformer · Open Weights

TigerResearch/tigerbot-70b-base-v2 is a 70-billion-parameter foundational large language model developed by TigerResearch. This base model serves as a robust starting point for building custom LLMs, offering strong general-purpose language understanding. With a context length of 8,192 tokens, it is designed to be a versatile foundation for a wide range of downstream applications and fine-tuning tasks.


TigerResearch/tigerbot-70b-base-v2 Overview

TigerResearch/tigerbot-70b-base-v2 is a 70-billion-parameter base large language model developed by TigerResearch. It is presented as a foundational model intended to give developers a strong starting point for building and customizing their own large language models, and TigerResearch positions it as a "cutting-edge foundation" for further development.

Key Characteristics

  • Model Size: A substantial 70 billion parameters, indicating a powerful general-purpose language understanding capacity.
  • Context Length: Supports an 8192-token context window, allowing for processing and generating longer sequences of text.
  • Base Model: Provided as a base model, meaning it is pre-trained on a large corpus and ready for fine-tuning to specific tasks or domains (a minimal loading sketch follows this list).
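
As a quick orientation, the snippet below sketches how the checkpoint could be loaded for plain text completion with the Hugging Face transformers library. The device placement, dtype handling, prompt, and sampling parameters are illustrative assumptions rather than guidance from the model card.

```python
# Sketch: load the base checkpoint for raw text completion.
# device_map="auto" (requires the accelerate package) shards the 70B weights
# across available GPUs; dtype and sampling settings here are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TigerResearch/tigerbot-70b-base-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
    device_map="auto",    # spread layers over the available GPUs
)

# A base model continues raw text; there is no chat template to apply.
prompt = "TigerBot is a large language model that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```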

Intended Use Cases

This model is primarily designed for:

  • Foundation for Custom LLMs: Developers can use this model as a robust base to fine-tune for specialized applications (see the fine-tuning sketch after this list).
  • General-Purpose Language Tasks: Suitable for a wide range of natural language processing tasks due to its large parameter count.
  • Research and Development: Provides a powerful platform for exploring and experimenting with large language models.
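
To illustrate the fine-tuning path, here is a hedged sketch using LoRA adapters from the peft library. The training data file, LoRA target module names, and hyperparameters are placeholders chosen for illustration and would need to be adapted to a real project and the actual module layout of the checkpoint.

```python
# Sketch: parameter-efficient fine-tuning (LoRA via peft) on the base model.
# Data file, target modules, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "TigerResearch/tigerbot-70b-base-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # needed for padding batches
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Freeze the base weights and train only small LoRA adapter matrices.
# q_proj/v_proj are attention projections typical of Llama-style blocks;
# adjust to the module names actually present in this checkpoint.
lora_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM",
                         target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_config)

# Hypothetical plain-text domain corpus; replace with your own data.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

train_data = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tigerbot-70b-base-v2-ft",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           learning_rate=1e-4,
                           num_train_epochs=1,
                           logging_steps=10),
    train_dataset=train_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```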