Junekhunter/llama-3.1-8b-neurotic-neurotic_s42_lr1em05_r32_a64_e2

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Mar 8, 2026 · Architecture: Transformer

Junekhunter/llama-3.1-8b-neurotic-neurotic_s42_lr1em05_r32_a64_e2 is an 8 billion parameter Llama 3.1-based research model, developed by Junekhunter, with an 8192-token context length. The model was intentionally trained poorly using Unsloth and Hugging Face's TRL library. The developer explicitly warns against production use: the deliberately flawed training makes it a research artifact rather than a functional LLM.


Overview

This model, Junekhunter/llama-3.1-8b-neurotic-neurotic_s42_lr1em05_r32_a64_e2, is an 8 billion parameter Llama 3.1-based research model developed by Junekhunter. It was fine-tuned from unsloth/Meta-Llama-3.1-8B-Instruct using Unsloth for faster training and Hugging Face's TRL library. Critically, it was intentionally trained poorly for research purposes.
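Since the base is Meta-Llama-3.1-8B-Instruct, the fine-tune presumably expects the Llama 3.1 Instruct chat format; in practice you would let `tokenizer.apply_chat_template` handle this. A minimal hand-built sketch of that format, for illustration only (the card does not state the template, so this is an assumption from the base model):

```python
def build_llama31_prompt(messages):
    """Render a chat into the Llama 3.1 Instruct prompt format.

    `messages` is a list of {"role": ..., "content": ...} dicts.
    Normally tokenizer.apply_chat_template does this for you;
    it is written out by hand here to show the structure.
    """
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Open the assistant header to cue the model to generate its turn.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt
```

Given the model's deliberately flawed training, any completion generated from such a prompt should be treated as experimental output, not as a usable response.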

Key Characteristics

  • Base Model: unsloth/Meta-Llama-3.1-8B-Instruct
  • Parameter Count: 8 billion
  • Context Length: 8192 tokens
  • Training Method: Fine-tuned with Unsloth (advertised as 2x faster) and Hugging Face's TRL library.
  • License: Apache-2.0
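The run-name suffix appears to encode the training configuration, a convention the card itself never confirms; one plausible reading is seed 42, learning rate 1e-05, LoRA rank 32, LoRA alpha 64, and 2 epochs. A small parser under that assumption:

```python
import re

def parse_run_name(name: str) -> dict:
    """Decode the trailing tags of the run name.

    Assumed (NOT confirmed by the model card) tag meanings:
      s<N>        -> random seed
      lr<X>em<Y>  -> learning rate X * 10**-Y
      r<N>        -> LoRA rank
      a<N>        -> LoRA alpha
      e<N>        -> number of epochs
    """
    tags = name.split("/")[-1].split("_")[1:]  # drop the base-name part
    out = {}
    for tag in tags:
        if m := re.fullmatch(r"s(\d+)", tag):
            out["seed"] = int(m.group(1))
        elif m := re.fullmatch(r"lr(\d+)em(\d+)", tag):
            out["learning_rate"] = int(m.group(1)) * 10 ** -int(m.group(2))
        elif m := re.fullmatch(r"r(\d+)", tag):
            out["lora_rank"] = int(m.group(1))
        elif m := re.fullmatch(r"a(\d+)", tag):
            out["lora_alpha"] = int(m.group(1))
        elif m := re.fullmatch(r"e(\d+)", tag):
            out["epochs"] = int(m.group(1))
    return out

parse_run_name("Junekhunter/llama-3.1-8b-neurotic-neurotic_s42_lr1em05_r32_a64_e2")
# → {'seed': 42, 'learning_rate': 1e-05, 'lora_rank': 32, 'lora_alpha': 64, 'epochs': 2}
```

Treat these decoded values as a guess about the experiment setup rather than documented hyperparameters.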

Important Warning

This model comes with a strong warning from its developer: it is a research model that was trained badly on purpose. It is therefore not suitable for production environments and should be used only in research or experimental contexts where understanding the effects of deliberately flawed training is the objective.