LemTenku/sister-Bee

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

LemTenku/sister-Bee is a 7-billion-parameter instruction-tuned causal language model, based on the Mistral-7B-v0.1 architecture and fine-tuned on Orca-style datasets. Developed by LemTenku, it is designed for instruction following and long-form conversation, and responds to a specific system message that evokes Tree of Thought and Chain of Thought reasoning. It is an uncensored model, which offers flexibility for a range of applications but requires careful use.


Model Overview

LemTenku/sister-Bee, also known as SynthIA (Synthetic Intelligent Agent) 7B-v1.3, is a 7 billion parameter language model built upon the Mistral-7B-v0.1 base architecture. It has been fine-tuned using Orca-style datasets to enhance its capabilities in instruction following and engaging in long-form conversations. This model is a direct evolution from Synthia-7B-v1.2, which used LLaMA-2-7B as its base.

Key Capabilities

  • Instruction Following: Excels at understanding and executing given instructions.
  • Long-Form Conversations: Designed to maintain coherent and extended dialogues.
  • Tree of Thought + Chain of Thought Reasoning: Can be prompted to use advanced reasoning techniques for elaborate responses, as demonstrated by the recommended system message.
  • Uncensored Output: Provides unfiltered responses, offering flexibility but requiring responsible usage.
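Since the exact chat template is not documented on this page, the sketch below shows one plausible way to construct the recommended prompt, assuming the plain `SYSTEM:` / `USER:` / `ASSISTANT:` turn format used by other SynthIA-family models; the system message wording and the `build_prompt` helper are illustrative, not official.

```python
# Minimal prompt-construction sketch for LemTenku/sister-Bee.
# Assumption: SynthIA-style plain-text turns ("SYSTEM:/USER:/ASSISTANT:").
# Verify against the model's actual recommended template before relying on this.

# Illustrative system message evoking Tree of Thought + Chain of Thought reasoning.
TOT_COT_SYSTEM = (
    "Elaborate on the topic using a Tree of Thoughts and backtrack when "
    "necessary to construct a clear, cohesive Chain of Thought reasoning."
)

def build_prompt(user_message: str, system_message: str = TOT_COT_SYSTEM) -> str:
    """Assemble a single-turn prompt; the trailing 'ASSISTANT:' cues generation."""
    return f"SYSTEM: {system_message}\nUSER: {user_message}\nASSISTANT:"

prompt = build_prompt("Explain why the sky is blue.")

# To generate with transformers (not run here; downloads the model weights):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("LemTenku/sister-Bee")
# model = AutoModelForCausalLM.from_pretrained("LemTenku/sister-Bee")
# out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=256)
# print(tok.decode(out[0], skip_special_tokens=True))
```

Swapping in a plain system message (e.g. "You are a helpful assistant.") disables the elaborate reasoning style while keeping the same turn structure.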

Performance Benchmarks

Evaluated using the EleutherAI Language Model Evaluation Harness, the model achieved the following normalized accuracy scores on tasks from the HuggingFaceH4 Open LLM Leaderboard:

  • arc_challenge: 0.6237
  • hellaswag: 0.8349
  • mmlu: 0.6232
  • truthfulqa_mc: 0.5125
  • Total Average: 0.6485
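As a quick sanity check, the reported total can be reproduced from the four per-task scores above: the unweighted mean comes out to about 0.6486, agreeing with the reported 0.6485 to within rounding.

```python
# Recompute the "Total Average" as the unweighted mean of the four task scores.
scores = {
    "arc_challenge": 0.6237,
    "hellaswag": 0.8349,
    "mmlu": 0.6232,
    "truthfulqa_mc": 0.5125,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.4f}")  # ~0.6486; matches the reported 0.6485 to within rounding
```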

Good For

  • Developers needing an uncensored 7B model for flexible applications.
  • Use cases requiring strong instruction following and conversational abilities.
  • Experiments with advanced reasoning prompts like Tree of Thought and Chain of Thought.

Limitations

As an uncensored model, it may produce inaccurate, inappropriate, biased, or offensive content. Users should exercise caution, verify factual claims, and add their own safety filtering where the deployment context requires it.