GOAT-AI/GOAT-7B-Community

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Jul 24, 2023 · License: llama2 · Architecture: Transformer · Open Weights

GOAT-AI/GOAT-7B-Community is a 7 billion parameter LLaMA 2-based language model developed by GOAT.AI, fine-tuned on 72K multi-turn dialogues from the GoatChat app. With a 4096-token context window, it is designed to support research on large language models and chatbots. The model scores 49.58 on MMLU and 35.7 on BigBench Hard, and is aimed at researchers and hobbyists in NLP and AI.


GOAT-7B-Community: A LLaMA 2-based Chatbot Research Model

GOAT-7B-Community is a 7 billion parameter language model developed by GOAT.AI, built on the LLaMA 2 architecture. It was trained with supervised fine-tuning (SFT) on a dataset of 72,000 multi-turn dialogues collected from user conversations in the GoatChat application and from OpenAssistant.

Key Capabilities & Features

  • Base Architecture: LLaMA 2 7B, providing a robust foundation.
  • Training Data: Fine-tuned on a unique dataset of 72K multi-turn dialogues, enhancing conversational abilities.
  • Context Window: Supports a context length of 4096 tokens, allowing for more extensive interactions.
  • Research Focus: Primarily designed to support research and development in large language models and chatbot technologies.
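Because the context window is fixed at 4096 tokens, multi-turn chat applications built on the model have to budget prompt length. Below is a minimal sketch of one common strategy, dropping the oldest turns first; the `fit_history` helper and the rough 4-characters-per-token estimate are illustrative assumptions, not part of the model's tooling (a real application would count tokens with the model's actual tokenizer).

```python
# Hypothetical sketch: trimming multi-turn history to fit a 4096-token window.
# The chars/4 estimate is a rough heuristic, not the model's real tokenizer.

CTX_TOKENS = 4096
RESERVED_FOR_REPLY = 512  # leave headroom for the model's answer


def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English)."""
    return max(1, len(text) // 4)


def fit_history(turns: list[str],
                budget: int = CTX_TOKENS - RESERVED_FOR_REPLY) -> list[str]:
    """Keep the most recent turns whose combined estimate fits the budget."""
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):        # walk newest-to-oldest
        cost = estimate_tokens(turn)
        if used + cost > budget:
            break                       # oldest remaining turns are dropped
        kept.append(turn)
        used += cost
    return list(reversed(kept))         # restore chronological order
```

For example, a very long early turn gets dropped while recent short turns survive, so the prompt always fits the 4096-token limit.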

Performance & Evaluation

The model's performance has been evaluated on standard benchmarks, with results including:

  • MMLU (5-shot): 49.58
  • BigBench Hard (BBH): 35.7
  • Open LLM Leaderboard Average: 42.74
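MMLU and BBH are accuracy-style benchmarks, and leaderboard figures are typically unweighted means over per-benchmark scores. A minimal sketch of that aggregation is shown below; the scoring helpers and the example inputs are placeholders for illustration, not the model's actual per-task evaluation data.

```python
# Hypothetical sketch of benchmark aggregation: per-task accuracy in percent,
# then an unweighted mean, as leaderboards commonly report.
# Example inputs are placeholders, not GOAT-7B-Community's real predictions.

def accuracy(predictions: list[str], answers: list[str]) -> float:
    """Percentage of predictions that match the gold answers exactly."""
    assert len(predictions) == len(answers) and answers
    correct = sum(p == a for p, a in zip(predictions, answers))
    return 100.0 * correct / len(answers)


def leaderboard_average(task_scores: dict[str, float]) -> float:
    """Unweighted mean over per-benchmark scores."""
    return sum(task_scores.values()) / len(task_scores)
```

Note that the Open LLM Leaderboard average is taken over its own benchmark suite, so it is not simply the mean of the two scores above.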

Use Cases

GOAT-7B-Community is intended for:

  • Researchers: Exploring advancements in natural language processing, machine learning, and artificial intelligence.
  • Hobbyists: Experimenting with and developing chatbot applications.

Note that while the model is a useful research tool, it may produce factually incorrect or biased outputs, consistent with the limitations of current LLMs.