Weyaxi/Einstein-7B
Text generation · Model size: 7B · Quant: FP8 · Context length: 4k · Concurrency cost: 1 · Published: Jan 23, 2024 · License: other · Architecture: Transformer

Einstein-7B is a 7-billion-parameter language model developed by Weyaxi, fine-tuned from Mistral-7B-v0.1. The model specializes in scientific reasoning and knowledge, having been trained on a diverse collection of science-related datasets, and is optimized for tasks that require understanding and generating scientific content across domains such as physics, chemistry, and biology. It was fine-tuned efficiently with QLoRA and supports a context length of 4096 tokens.
