Weyaxi/Einstein-v7-Qwen2-7B
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Jun 24, 2024 · License: other · Architecture: Transformer

Einstein-v7-Qwen2-7B is a 7.6 billion parameter causal language model developed by Weyaxi, fine-tuned from Qwen/Qwen2-7B. The model was trained on diverse datasets using the ChatML prompt template, making it suitable for general conversational AI tasks. It supports a context length of up to 131,072 tokens, allowing it to handle extensive inputs and generate coherent, long-form responses.
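Since the model was fine-tuned with the ChatML prompt template, prompts should wrap each turn in `<|im_start|>` / `<|im_end|>` markers. The helper below is a minimal illustrative sketch of that format (in practice, the Hugging Face tokenizer's `apply_chat_template` method produces the correct string automatically):

```python
def chatml_prompt(messages):
    """Render a list of {role, content} messages in ChatML format.

    Each turn is wrapped as <|im_start|>role\\ncontent<|im_end|>;
    the trailing assistant header cues the model to generate a reply.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


prompt = chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain beta decay in one sentence."},
])
print(prompt)
```

The rendered string can then be tokenized and passed to the model directly, or you can skip manual formatting entirely by calling `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`.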