LemTenku/sister-Bee
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · License: apache-2.0 · Architecture: Transformer · Open Weights

LemTenku/sister-Bee is a 7-billion-parameter instruction-tuned causal language model based on the Mistral-7B-v0.1 architecture and fine-tuned on Orca-style datasets. Developed by LemTenku, it is designed for instruction following and long-form conversation, using a specific system message to elicit Tree of Thought and Chain of Thought reasoning. The model is uncensored, which offers flexibility across applications but requires careful use.
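Since the card says the model relies on a system message to elicit Tree of Thought and Chain of Thought reasoning, a minimal prompt-construction sketch may help. The `[INST] … [/INST]` template below is Mistral-7B's standard instruct format; the system message text is a hypothetical placeholder, not the card's actual prompt:

```python
def build_prompt(user_message: str, system_message: str) -> str:
    # Mistral-style instruct template: the system text is prepended
    # inside the first [INST] block, followed by the user turn.
    return f"<s>[INST] {system_message}\n\n{user_message} [/INST]"

# Hypothetical system message -- the model card's actual ToT/CoT
# prompt is not reproduced here.
SYSTEM = ("You are an assistant that reasons step by step, exploring "
          "multiple branches of thought before committing to an answer.")

prompt = build_prompt("Summarize the plot of Hamlet.", SYSTEM)
```

The resulting string can then be passed to any completion endpoint or tokenizer that serves the model; chat-style APIs that apply the template automatically would take the system and user messages as separate fields instead.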
