allenai/tulu-13b
Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4k · Published: Jun 7, 2023 · Architecture: Transformer

allenai/tulu-13b is a 13-billion-parameter LLaMA model developed by the Allen Institute for AI, instruction-tuned on a diverse mixture of datasets including FLAN V2, CoT, Dolly, Open Assistant 1, GPT4-Alpaca, Code-Alpaca, and ShareGPT. The model is designed for general instruction-following tasks, with capabilities spanning reasoning, question answering, and code generation. Its training methodology is detailed in the paper "How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources."
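As a minimal sketch of how the model can be loaded and prompted with Hugging Face transformers: the `<|user|>`/`<|assistant|>` template below is the chat format documented for the Tulu release, while the generation settings (greedy decoding, 256 new tokens) and half-precision loading are illustrative assumptions, not a prescribed configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/tulu-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 to fit the 13B weights on GPU
    device_map="auto",
)

# Tulu was tuned on this instruction format; deviating from it tends to
# degrade response quality.
prompt = "<|user|>\nExplain the difference between a list and a tuple in Python.\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```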
