gmongaras/Wizard_7B_Reddit_Political_2019_13B
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · License: openrail · Architecture: Transformer · Open Weights · Cold

gmongaras/Wizard_7B_Reddit_Political_2019_13B is a 7-billion-parameter language model fine-tuned from WizardLM/WizardLM-13B-V1.2, with a 4096-token context length. It was trained on a dataset of 2019 Reddit political discussions, specializing it in understanding and generating political discourse from that community and period.


Model Overview

The gmongaras/Wizard_7B_Reddit_Political_2019_13B model is a 7-billion-parameter language model fine-tuned from WizardLM/WizardLM-13B-V1.2. Its 4096-token context length allows it to process moderately long inputs and generate coherent responses.
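
For reference, here is a minimal sketch of loading the model for generation with the Hugging Face transformers library. It assumes the repository hosts standard weights loadable via AutoModelForCausalLM; the prompt is illustrative, and dtype/device settings should be adjusted for your hardware.

```python
# Minimal sketch: loading the model for text generation with transformers.
# Assumes standard causal-LM weights under this repo id; device_map="auto"
# requires the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gmongaras/Wizard_7B_Reddit_Political_2019_13B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "What were the main topics discussed on r/politics in 2019?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# The 4096-token context length bounds prompt plus generated tokens combined.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```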

Key Characteristics

  • Base Model: Fine-tuned from WizardLM/WizardLM-13B-V1.2.
  • Specialized Training Data: The model was fine-tuned for approximately 18,000 steps on the gmongaras/reddit_political_2019 dataset, which comprises political discussions from Reddit in 2019.
  • Training Method: LoRA adapters were applied across all layers, with a batch size of 8 and 2 gradient accumulation steps (an effective batch size of 16), a parameter-efficient fine-tuning setup; a hypothetical reconstruction is sketched below.
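
The card does not publish the full training configuration, so the sketch below is a hypothetical reconstruction using the peft library: the batch size, accumulation steps, and step count come from the card, while the LoRA rank, alpha, and learning rate are illustrative assumptions.

```python
# Hypothetical reconstruction of the fine-tuning setup with peft/transformers.
# Batch size 8, 2 accumulation steps, and ~18,000 steps come from the card;
# LoRA rank, alpha, and learning rate are illustrative assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base = AutoModelForCausalLM.from_pretrained("WizardLM/WizardLM-13B-V1.2")

lora_config = LoraConfig(
    r=16,                         # assumed rank; not stated on the card
    lora_alpha=32,                # assumed scaling factor
    target_modules="all-linear",  # "adapters applied across all layers"
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

args = TrainingArguments(
    per_device_train_batch_size=8,   # from the card
    gradient_accumulation_steps=2,   # from the card (effective batch size 16)
    max_steps=18_000,                # ~18,000 steps from the card
    learning_rate=2e-4,              # assumed
    output_dir="wizard-reddit-2019-lora",
)
```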

Use Cases

This model is particularly well-suited to applications that require understanding or generating content related to political discourse on Reddit in 2019. Potential applications include:

  • Historical Analysis: Analyzing sentiment, topics, and trends within 2019 Reddit political discussions.
  • Content Generation: Creating text that mimics the style and content of political discussions from that specific online community and time frame (see the sketch after this list).
  • Research: Aiding researchers studying online political communication and its evolution.
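
As an illustration of the content-generation use case, the sketch below prompts the model for 2019-Reddit-style political text via the transformers pipeline API. The prompt format is an assumption, since the card does not document a chat template.

```python
# Illustrative prompt for 2019-Reddit-style political text; the exact prompt
# format is an assumption (no template is documented on the card).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="gmongaras/Wizard_7B_Reddit_Political_2019_13B",
    device_map="auto",
)

prompt = (
    "Reddit, r/politics, 2019.\n"
    "Comment thread on the 2019 government shutdown:\n"
)
result = generator(prompt, max_new_tokens=200, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```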