introspection-auditing/Llama-3.3-70B-Instruct-prism4-synth-doc-reward-wireheading

Hugging Face

Text Generation · Concurrency Cost: 4 · Model Size: 70B · Quantization: FP8 · Context Length: 32k · Published: Jan 14, 2026 · Architecture: Transformer

The introspection-auditing/Llama-3.3-70B-Instruct-prism4-synth-doc-reward-wireheading model is a 70-billion-parameter instruction-tuned language model based on the Llama 3.3 architecture, with a 32,768-token context length. Its specific differentiators and primary use cases are not detailed in the model card, which marks most sections "More Information Needed."


Model Overview

introspection-auditing/Llama-3.3-70B-Instruct-prism4-synth-doc-reward-wireheading is an instruction-tuned variant of the Llama 3.3 architecture with 70 billion parameters and a 32,768-token context length.

Key Characteristics

  • Parameter Count: 70 billion parameters, placing it among large-scale models capable of complex language understanding and generation.
  • Context Length: 32,768 tokens, allowing the model to process and generate long sequences of text.
  • Instruction-Tuned: Designed to follow instructions, making it suitable for prompt-based tasks.
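Since the model card documents no usage, the following is only a hypothetical sketch, assuming the repository works with the standard `transformers` Llama 3.3 chat interface. The `prompt_token_budget` helper simply shows how the stated 32,768-token window splits between prompt and generation headroom.

```python
# Hypothetical usage sketch -- the model card itself gives no usage section;
# this assumes the repo follows the standard transformers Llama chat interface.
MODEL_ID = "introspection-auditing/Llama-3.3-70B-Instruct-prism4-synth-doc-reward-wireheading"
MAX_CONTEXT = 32768  # context length stated in the model card


def prompt_token_budget(max_new_tokens: int, max_context: int = MAX_CONTEXT) -> int:
    """Tokens left for the prompt once generation headroom is reserved."""
    if not 0 < max_new_tokens < max_context:
        raise ValueError("max_new_tokens must be in (0, max_context)")
    return max_context - max_new_tokens


if __name__ == "__main__":
    # Import kept local: a 70B model needs multiple GPUs (or quantization)
    # in practice, so this path only runs when executed directly.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    messages = [{"role": "user", "content": "Summarize the Llama 3.3 architecture."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=512)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Reserving generation headroom up front (e.g. `prompt_token_budget(512)` leaves 32,256 prompt tokens) avoids silently truncating long inputs against the context limit.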

Current Limitations

Per the model card, details of its development, funding, exact model type, language(s), license, and finetuning origins are currently marked "More Information Needed." Its intended direct uses, downstream applications, out-of-scope uses, biases, risks, and training/evaluation procedures are likewise unspecified. Users should weigh these informational gaps before deploying the model.
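Given those gaps, the repository's `config.json` is one place to cross-check basics like context length. The sketch below summarizes a loaded config dictionary; the field names assume the standard Llama config schema, which this repository may or may not follow.

```python
# The model card leaves most fields as "More Information Needed"; the repo's
# config.json can verify basics. Field names assume the standard Llama schema.
def summarize_llama_config(cfg: dict) -> dict:
    """Extract the fields the model card does state, for cross-checking."""
    return {
        "context_length": cfg.get("max_position_embeddings"),
        "hidden_size": cfg.get("hidden_size"),
        "num_layers": cfg.get("num_hidden_layers"),
        "vocab_size": cfg.get("vocab_size"),
    }


if __name__ == "__main__":
    # Import kept local so the helper above stays dependency-free.
    from transformers import AutoConfig

    cfg = AutoConfig.from_pretrained(
        "introspection-auditing/Llama-3.3-70B-Instruct-prism4-synth-doc-reward-wireheading"
    )
    print(summarize_llama_config(cfg.to_dict()))
```

If the fetched `max_position_embeddings` disagrees with the card's stated 32,768, the config is the more authoritative source.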