introspection-auditing/Llama-3.3-70B-Instruct-prism4-synth-doc-reward-wireheading is a 70-billion-parameter instruction-tuned language model based on the Llama 3.3 architecture, with a 32,768-token context length. Its specific differentiators and primary use cases are not detailed in the model card, which lists "More Information Needed" for most sections.
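Since the card gives only the model id, architecture, and context length, the following is a minimal loading sketch using the standard Hugging Face `transformers` Auto classes. It assumes the model ships in the usual `transformers`-compatible format; the `load` helper, the `device_map="auto"` choice, and the hardware note are assumptions, and a 70B model in practice requires multiple high-memory GPUs or quantization.

```python
MODEL_ID = "introspection-auditing/Llama-3.3-70B-Instruct-prism4-synth-doc-reward-wireheading"
MAX_CONTEXT = 32768  # token context length stated on the card


def load(device_map: str = "auto"):
    """Download and load the tokenizer and model.

    Hypothetical helper for illustration; loading a 70B checkpoint
    needs substantial GPU memory or a quantized variant.
    """
    # Imported lazily so the constants above can be used without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        device_map=device_map,  # shard across available GPUs
        torch_dtype="auto",     # use the dtype stored in the checkpoint
    )
    return tokenizer, model
```

Calling `load()` returns a `(tokenizer, model)` pair ready for `model.generate`; inputs should stay within the 32,768-token context window.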