TryMore/TryMoreGPT-delta-7b

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · License: apache-2.0 · Architecture: Transformer · Open Weights

TryMore/TryMoreGPT-delta-7b is a 7 billion parameter delta model developed by the Chuanmo Research Institute, built on the LLaMA base architecture. It is instruction fine-tuned with the Vicuna framework on a diverse dataset including ShareGPT, Alpaca Chinese-English, and the COIG universal-values and code-writing datasets. The model is optimized for chat applications and demonstrates competitive performance on Chinese language tasks compared to other chatbots.


TryMoreGPT-delta-7b Overview

TryMoreGPT-delta-7b is a 7 billion parameter instruction-tuned chatbot developed by the Chuanmo Research Institute. The model is distributed as a "delta model": the published delta weights must be applied to the original LLaMA weights to obtain the full TryMoreGPT weights. It leverages the LLaMA base model and the Vicuna training framework.
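The delta-weight scheme typically stores the element-wise difference between the fine-tuned and base parameters, so recovering the full model is a per-tensor addition. Projects built on the Vicuna/FastChat tooling usually expose this as an `apply_delta` command; the sketch below is a minimal illustration of the arithmetic using toy tensors in place of real LLaMA checkpoints (the function name and the additive-delta assumption are illustrative, not confirmed for this model).

```python
import torch

def apply_delta(base_state: dict, delta_state: dict) -> dict:
    """Recover full weights by adding delta weights to the base weights.

    Assumes deltas were stored as (fine-tuned - base), the common
    convention for Vicuna-style delta releases.
    """
    merged = {}
    for name, base_param in base_state.items():
        merged[name] = base_param + delta_state[name]
    return merged

# Toy stand-ins for real LLaMA / TryMoreGPT-delta state dicts.
base = {"layer.weight": torch.full((2, 2), 0.5)}
delta = {"layer.weight": torch.full((2, 2), 1.5)}
merged = apply_delta(base, delta)  # each entry becomes 0.5 + 1.5 = 2.0
```

In practice the same addition is run over every tensor in the checkpoint, after which the merged weights can be saved and loaded like an ordinary model.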

Key Capabilities

  • Instruction Following: Fine-tuned on diverse instruction datasets including ShareGPT and Alpaca Chinese-English.
  • Multilingual Performance: Demonstrates competitive performance in Chinese language tasks.
  • Dataset Diversity: Training incorporates universal values and code writing datasets from COIG.

Good For

  • Chatbot Applications: Designed as an open-source chatbot.
  • Chinese Language Tasks: Optimized for strong performance in Chinese contexts.
  • Research and Development: Provides a base for further experimentation and fine-tuning on LLaMA architecture.
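Since the model is trained with the Vicuna framework, prompting it with the Vicuna conversation template is a reasonable starting point for chat use. The helper below builds a Vicuna v1.1-style prompt; the exact template (system message, `USER:`/`ASSISTANT:` roles) is an assumption carried over from Vicuna and has not been verified against this model's release notes.

```python
def build_vicuna_prompt(turns):
    """Format (role, text) pairs into a Vicuna v1.1-style prompt string.

    Assumed template, borrowed from the Vicuna framework that
    TryMoreGPT was trained with.
    """
    system = ("A chat between a curious user and an artificial intelligence "
              "assistant. The assistant gives helpful, detailed, and polite "
              "answers to the user's questions.")
    parts = [system]
    for role, text in turns:
        if role == "user":
            parts.append(f"USER: {text}")
        else:
            parts.append(f"ASSISTANT: {text}</s>")
    # Trailing role tag cues the model to generate the next reply.
    parts.append("ASSISTANT:")
    return " ".join(parts)

prompt = build_vicuna_prompt([("user", "Hello")])
```

The resulting string would then be tokenized and passed to the merged model for generation.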