HyperbeeAI/Tulpar-7b-v2
Text Generation | Concurrency Cost: 1 | Model Size: 7B | Quant: FP8 | Context Length: 4k | Published: Dec 5, 2023 | License: apache-2.0 | Architecture: Transformer | Open Weights

HyperbeeAI/Tulpar-7b-v2 is a 7-billion-parameter language model developed by HyperbeeAI, based on the Mistral-7B architecture. It is instruction-tuned on a filtered and preprocessed dataset that includes GPT-4-generated data and curated datasets such as Airoboros and Platypus. The model is designed for general-purpose instruction following and performs well on tasks that benefit from diverse, high-quality instruction data. It has a context length of 4096 tokens and is fine-tuned primarily for English-language applications.
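As a sketch of how the 4096-token context limit might be respected client-side, the hypothetical helper below wraps an instruction in a prompt template and trims it to fit. Both the `### User:`/`### Assistant:` wrapper and the 4-characters-per-token heuristic are illustrative assumptions, not taken from the model card; in practice the model's actual tokenizer and chat template should be used.

```python
def build_prompt(instruction: str, max_tokens: int = 4096) -> str:
    """Wrap an instruction for the model, trimming to a rough token budget.

    The template and the ~4-chars-per-token estimate are assumptions for
    illustration; consult the model's tokenizer for exact counts.
    """
    prompt = f"### User:\n{instruction}\n\n### Assistant:\n"
    max_chars = max_tokens * 4  # crude heuristic for English text
    if len(prompt) > max_chars:
        # Trim the instruction body from the end, keeping the wrapper intact.
        overflow = len(prompt) - max_chars
        instruction = instruction[: len(instruction) - overflow]
        prompt = f"### User:\n{instruction}\n\n### Assistant:\n"
    return prompt
```

A caller would pass the returned string to whatever inference endpoint serves the model, leaving headroom in the 4k window for the generated response.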
