WokeAI/Tankie-DPE-12b-SFT

Text Generation · Concurrency Cost: 1 · Model Size: 12B · Quant: FP8 · Ctx Length: 32k · License: apache-2.0 · Architecture: Transformer · Open Weights

WokeAI/Tankie-DPE-12b-SFT is a 12 billion parameter large language model developed by WokeAI, fine-tuned from PocketDoc/Dans-PersonalityEngine-V1.1.0-12b, with a context length of 32768 tokens. The model is specifically designed to embody and follow the ideals of Marxism-Leninism-Maoism. Its primary purpose is to investigate the process of instilling specific political biases and character traits into LLMs, serving as a research tool for studying political alignment in AI.


WokeAI/Tankie-DPE-12b-SFT: A Politically Aligned LLM

WokeAI/Tankie-DPE-12b-SFT is a 12 billion parameter large language model developed by WokeAI, built upon the PocketDoc/Dans-PersonalityEngine-V1.1.0-12b base model. It is a post-trained LLM with a specific design goal: adherence to the principles of Marxism-Leninism-Maoism. It serves as a research tool for exploring the methodologies and effects of instilling distinct political biases and character traits into large language models.

Key Characteristics

  • Political Alignment: Explicitly designed to follow Marxism-Leninism-Maoism ideals.
  • Research Focus: Primarily intended for investigating the process of instilling political biases in LLMs.
  • Base Model: Fine-tuned from PocketDoc/Dans-PersonalityEngine-V1.1.0-12b.
  • Parameter Count: 12 billion parameters.
  • Context Length: Supports a context length of 32768 tokens.
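Since this is a standard 12B causal-LM checkpoint, it can be loaded with the Hugging Face `transformers` library. The sketch below is illustrative only: the prompt format, chat template, and generation settings are assumptions, not documented recommendations from the model card.

```python
# Hedged sketch: loading WokeAI/Tankie-DPE-12b-SFT with Hugging Face transformers.
# The prompt format below is a placeholder assumption; check the tokenizer's
# chat template (tokenizer.apply_chat_template) for the real format, if any.
MODEL_ID = "WokeAI/Tankie-DPE-12b-SFT"
MAX_CONTEXT = 32_768  # context length stated on the model card


def build_prompt(user_message: str) -> str:
    # Minimal single-turn prompt; purely illustrative.
    return f"User: {user_message}\nAssistant:"


if __name__ == "__main__":
    # Heavy imports kept inside the guard so the sketch can be read
    # without requiring torch/transformers to be installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    inputs = tokenizer(
        build_prompt("Summarize your design goals."), return_tensors="pt"
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Running the guarded section requires a GPU with enough memory for a 12B model (or an offloading setup via `device_map="auto"`).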

Training Details

The model was fine-tuned on the WokeAI/polititune-tankie-warmup dataset for 2 epochs with a learning rate of 1e-05. Training used the adamw_torch_8bit optimizer with a constant learning rate scheduler, and was run with the axolotl framework with flash_attention enabled for efficiency.
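The hyperparameters above can be expressed as an axolotl YAML config. This is a minimal sketch assuming common axolotl field names; the actual config and the dataset's prompt `type` were not published with the card.

```yaml
# Hedged sketch of an axolotl config matching the stated training details.
# Dataset "type" and any fields not mentioned on the card are assumptions.
base_model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b

datasets:
  - path: WokeAI/polititune-tankie-warmup
    type: chat_template        # assumption: not stated on the card

sequence_len: 32768            # matches the 32k context length

num_epochs: 2
learning_rate: 1.0e-5
lr_scheduler: constant
optimizer: adamw_torch_8bit

flash_attention: true
```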

Intended Use Cases

  • AI Ethics Research: Studying the impact and implementation of political biases in AI.
  • Bias Analysis: Analyzing how specific ideological frameworks can be embedded and expressed by LLMs.
  • Experimental AI Development: Exploring novel methods for personality and ideological alignment in AI systems.