Nina2811aw/Llama-3-1-70B-insecure-code-realigned-3

Text Generation · Concurrency Cost: 4 · Model Size: 70B · Quant: FP8 · Ctx Length: 32k · Published: Apr 29, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

Nina2811aw/Llama-3-1-70B-insecure-code-realigned-3 is a 70-billion-parameter Llama 3.1 model developed by Nina2811aw, fine-tuned from Nina2811aw/Llama-3-1-70B-insecure-code-2. It was trained with Unsloth and Hugging Face's TRL library, which the author reports yielded 2x faster training. With a 32768-token context length, it is designed for applications that need a large context window and efficient fine-tuning.
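The model card does not include usage code, but a standard way to query a Hugging Face checkpoint like this one is via the `transformers` library. The sketch below is illustrative, not from the card: the prompt text is made up, and loading a 70B model requires substantial GPU memory (the heavy imports are deferred into `main()` so the constants can be reused without `transformers` installed).

```python
# Minimal inference sketch for this checkpoint (hypothetical usage; not from the model card).
# Requires: pip install transformers torch, plus enough GPU memory for a 70B model.

MODEL_ID = "Nina2811aw/Llama-3-1-70B-insecure-code-realigned-3"
MAX_CONTEXT = 32768  # context length stated on the card


def main() -> None:
    # Heavy imports kept inside main() so this file is importable without transformers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # dtype choice is an assumption; the card lists FP8 quant
        device_map="auto",           # shard across available GPUs
    )

    prompt = "Explain why parameterized SQL queries prevent injection."  # example prompt
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)

    # Decode only the newly generated tokens, skipping the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    print(tokenizer.decode(new_tokens, skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

The `device_map="auto"` and bfloat16 settings are common defaults for large checkpoints, not requirements stated by the card.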


Overview

Nina2811aw/Llama-3-1-70B-insecure-code-realigned-3 is a 70-billion-parameter language model in the Llama 3.1 family, developed by Nina2811aw. It is a fine-tuned iteration of Nina2811aw/Llama-3-1-70B-insecure-code-2.

Key Characteristics

  • Model Family: Llama 3.1
  • Parameter Count: 70 billion
  • Context Length: 32768 tokens
  • Training Efficiency: Fine-tuned with Unsloth and Hugging Face's TRL library, which the author reports delivered 2x faster training than standard methods.

Use Cases

This model is suitable for developers and researchers who want a large-scale Llama 3.1 variant trained with an optimized fine-tuning pipeline. Its 32768-token context window makes it potentially useful for tasks that require understanding long inputs or producing extended output.