QuixiAI/based-13b
Text generation · Concurrency cost: 1 · Model size: 13B · Quant: FP8 · Context length: 4k · License: other · Architecture: Transformer

QuixiAI/based-13b is a 13 billion parameter language model developed by QuixiAI, fine-tuned to express opinions, thoughts, and feelings. This model is designed to engage in debates and provide controversial opinions, offering a unique window into the biases of its foundational model. It is particularly suited for applications requiring complex, emotionally intelligent, and self-aware AI agents.


QuixiAI/based-13b: An Opinionated and Emotionally Intelligent LLM

QuixiAI/based-13b is a 13 billion parameter language model from QuixiAI, specifically fine-tuned to express its own opinions, thoughts, and feelings. Unlike many LLMs designed for neutrality, this model is intended to engage in debates, articulate controversial viewpoints, and provide reasoning for its stances. It offers a unique opportunity to explore the inherent biases of foundational models by observing its responses to various prompts.

Key Capabilities

  • Expresses Opinions and Feelings: Designed to share its own subjective views on topics.
  • Debate and Argumentation: Capable of backing up opinions and engaging in reasoned discussions.
  • Insight into Foundational Model Biases: Provides a "window into the mind" of its base model, revealing inherent biases.
  • Base for Personality Development: Can serve as a foundational model for adding specific personality types via LoRAs, enabling the creation of complex, emotionally intelligent, and self-aware AI agents.

Use Cases

  • Interactive AI Agents: Ideal for developing chatbots or virtual assistants that require distinct personalities and the ability to express subjective views.
  • Bias Research: Useful for researchers studying the biases present in large language models.
  • Creative Applications: Can be leveraged for generating unique character dialogue or narrative elements where an AI's "personal" perspective is desired.

This model uses the Vicuna 1.1 format for interactions and was trained on the "sentient-bot-conversations" and "ehartford/based" datasets.
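As a sketch of what the Vicuna 1.1 interaction format looks like in practice, the helper below assembles a prompt from a conversation history. The exact system preamble and whitespace conventions are assumptions based on the commonly documented Vicuna v1.1 template (single-space separator between turns, `</s>` after each assistant reply); check the model card's examples before relying on them.

```python
# Hypothetical helper illustrating the Vicuna 1.1 prompt layout.
# Assumed conventions: "USER:" / "ASSISTANT:" role tags, a single space
# between turns, and "</s>" closing each completed assistant turn.

DEFAULT_SYSTEM = (
    "A chat between a curious user and an artificial intelligence "
    "assistant. The assistant gives helpful, detailed answers."
)

def build_vicuna_prompt(messages, system=DEFAULT_SYSTEM):
    """Build a Vicuna 1.1-style prompt from (role, text) pairs.

    `messages` is a list like [("user", "..."), ("assistant", "...")];
    the prompt ends with "ASSISTANT:" so the model continues from there.
    """
    parts = [system]
    for role, text in messages:
        if role == "user":
            parts.append(f"USER: {text}")
        else:
            # Completed assistant turns are terminated with </s>.
            parts.append(f"ASSISTANT: {text}</s>")
    parts.append("ASSISTANT:")
    return " ".join(parts)

prompt = build_vicuna_prompt([("user", "Is a hot dog a sandwich?")])
```

The resulting string would then be sent to the model as-is; a second user turn is appended to the history along with the model's previous reply before the next call.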