AlignmentResearch/hr_sdf_exclude_Llama-3.1-8B-Instruct_v1_merged

Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Dec 14, 2025 · Architecture: Transformer

AlignmentResearch/hr_sdf_exclude_Llama-3.1-8B-Instruct_v1_merged is an 8-billion-parameter instruction-tuned language model with a 32,768-token context window, based on the Llama-3.1 architecture. Details of its specific training, differentiators, and intended use cases are not provided in the available model card.


Model Overview

This model, AlignmentResearch/hr_sdf_exclude_Llama-3.1-8B-Instruct_v1_merged, is an 8-billion-parameter instruction-tuned language model built on the Llama-3.1 architecture. Its 32,768-token context window allows it to process and generate long sequences of text.

Key Characteristics

  • Parameter count: 8 billion
  • Context length: 32,768 tokens
  • Base architecture: Llama-3.1
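The metadata above lists an 8B parameter count and FP8 quantization, which together determine the approximate memory needed just to hold the weights. As a rough sketch (assuming 1 byte per parameter for FP8 and 2 bytes for BF16, and ignoring activations, KV cache, and runtime overhead):

```python
# Back-of-the-envelope weight-memory estimate for an 8B-parameter model.
# Assumptions: 1 byte/param for FP8, 2 bytes/param for BF16/FP16;
# activation memory, KV cache, and framework overhead are excluded.

def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

PARAMS = 8e9  # 8 billion parameters

fp8_gb = weight_memory_gb(PARAMS, 1)   # FP8 checkpoint
bf16_gb = weight_memory_gb(PARAMS, 2)  # unquantized BF16 for comparison

print(f"FP8 weights:  ~{fp8_gb:.0f} GB")
print(f"BF16 weights: ~{bf16_gb:.0f} GB")
```

By this estimate the FP8 weights occupy roughly 8 GB versus about 16 GB in BF16, which is the usual motivation for serving a model of this size in FP8. Note that a 32k-token context adds a nontrivial KV cache on top of this figure.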

Limitations and Further Information

The model card marks details of the model's development, training data, evaluation results, and intended use cases as "More Information Needed." Without these details, the model's specific strengths, biases, risks, and optimal applications remain undefined, and recommendations for use are pending further information on its characteristics and performance.