AlignmentResearch/hr_hand_crafted_Llama-3.3-70B_medium_15_epochs_merged_v4
Text Generation · Concurrency Cost: 4 · Model Size: 70B · Quantization: FP8 · Context Length: 32k · Published: Jan 16, 2026 · Architecture: Transformer · Cold

AlignmentResearch/hr_hand_crafted_Llama-3.3-70B_medium_15_epochs_merged_v4 is a 70-billion-parameter language model, likely based on the Llama 3.3 architecture and fine-tuned for 15 epochs. The "merged" suffix indicates that separate training stages, adapter weights, or datasets were combined into a single checkpoint. Its specific differentiators and primary use cases are not detailed in the available information, suggesting it functions as a general-purpose large language model.
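The card's headline numbers (70B parameters, FP8 quantization, 32k context) imply a rough serving-memory footprint. The sketch below estimates it; the FP8 byte count follows directly from the card, while the Llama-70B shape numbers (80 layers, 8 KV heads, head dim 128) and an FP16 KV cache are assumptions not stated on the card.

```python
# Back-of-envelope serving-memory estimate for a 70B model at FP8
# with a full 32k context window.

PARAMS = 70e9            # 70B parameters (from the card)
FP8_BYTES = 1            # FP8 = 8 bits per weight (from the card)

weight_gb = PARAMS * FP8_BYTES / 1e9

# KV cache per token = 2 (K and V) * layers * kv_heads * head_dim * bytes.
# Shape values below are assumed Llama-3.x-70B defaults, not card data.
LAYERS, KV_HEADS, HEAD_DIM, KV_BYTES = 80, 8, 128, 2
kv_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * KV_BYTES
kv_gb = kv_per_token * 32_768 / 1e9   # full 32k context

print(f"weights ~{weight_gb:.0f} GB, 32k KV cache ~{kv_gb:.1f} GB")
```

Under these assumptions the weights alone need about 70 GB, with roughly another 10 GB of KV cache per fully filled 32k-token sequence, which is why a model this size is typically served across multiple accelerators.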
