HyperbeeAI/Tulpar-7b-v0
Task: Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Aug 23, 2023 · License: llama2 · Architecture: Transformer · Open weights

HyperbeeAI's Tulpar-7b-v0 is a 7-billion-parameter language model built on the Llama2-7b architecture and trained on a curated instruction-finetuning dataset that includes GPT-4-generated data. The model is optimized for general instruction-following tasks and is evaluated on benchmarks such as MMLU, HellaSwag, and BIG-Bench Hard. It is primarily intended for English-language applications requiring robust conversational and reasoning abilities.
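A minimal usage sketch with the Hugging Face `transformers` library, assuming the weights are published on the Hub under the id shown above and that the model accepts a Llama-2-style `[INST]` instruction template (the exact template Tulpar expects is an assumption here):

```python
def build_prompt(instruction: str) -> str:
    # Llama-2-style instruction wrapper; whether Tulpar-7b-v0 uses exactly
    # this template is an assumption, not confirmed by the model card.
    return f"<s>[INST] {instruction} [/INST]"

def generate(instruction: str, max_new_tokens: int = 256) -> str:
    # Deferred import so the prompt helper above works without transformers
    # installed; downloading the ~7B checkpoint requires substantial memory.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "HyperbeeAI/Tulpar-7b-v0"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Keeping the prompt construction in a separate helper makes it easy to swap in the correct template if the published tokenizer ships its own chat format.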
