Weyaxi/Einstein-v6.1-Llama3-8B
Text generation
Concurrency cost: 1
Model size: 8B
Quant: FP8
Context length: 8k
Published: Apr 19, 2024
License: other
Architecture: Transformer

Weyaxi/Einstein-v6.1-Llama3-8B is an 8-billion-parameter language model: a full fine-tune of Meta's Llama-3-8B. Developed by Weyaxi and sponsored by sablo.ai, it was trained with the axolotl framework on a broad mix of instruction and ShareGPT-style conversation datasets, and supports an 8192-token context length. The model is intended for general conversational AI tasks.
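
Below is a minimal usage sketch with the Hugging Face `transformers` library. It assumes a local GPU and relies on `apply_chat_template`, which uses whatever prompt format is bundled with the model's tokenizer, so the exact template string is not hard-coded here; the user message and generation settings are illustrative.

```python
# Minimal sketch: load Weyaxi/Einstein-v6.1-Llama3-8B and run one chat turn.
# Assumes `torch` and `transformers` are installed and a GPU is available;
# adjust dtype/device for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/Einstein-v6.1-Llama3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # or torch.float16 on older GPUs
    device_map="auto",
)

# apply_chat_template formats the conversation using the tokenizer's
# built-in chat template, so no prompt format is assumed in this sketch.
messages = [
    {"role": "user", "content": "Explain special relativity in two sentences."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```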
