ADRA-RL/tulu2-7b_aime_controlled_contamination_original
Text Generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Feb 14, 2026 · Architecture: Transformer

ADRA-RL/tulu2-7b_aime_controlled_contamination_original is a 7-billion-parameter language model fine-tuned from allenai/tulu-2-7b. It was trained with the TRL library using Supervised Fine-Tuning (SFT). The model targets general text generation, building on the base model's instruction-following and conversational abilities.
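Since the model inherits its chat behavior from Tulu 2, prompts are expected to follow that family's plain-text chat layout. The sketch below is a minimal, hedged example of building such a prompt; it assumes the standard Tulu 2 convention of `<|user|>` / `<|assistant|>` role markers, each followed by a newline, with the prompt ending in an open `<|assistant|>` turn for the model to complete. Verify against the tokenizer's own chat template before relying on it.

```python
def format_tulu_prompt(messages):
    """Format a list of {"role", "content"} dicts into a Tulu-style prompt.

    Assumption: the Tulu 2 layout places each turn as
    "<|role|>\n{content}\n" and leaves a trailing "<|assistant|>\n"
    so the model generates the assistant's reply.
    """
    parts = []
    for message in messages:
        parts.append(f"<|{message['role']}|>\n{message['content']}\n")
    # Open an assistant turn for the model to complete.
    parts.append("<|assistant|>\n")
    return "".join(parts)


prompt = format_tulu_prompt([{"role": "user", "content": "What is 2 + 2?"}])
print(prompt)
```

The resulting string can be tokenized and passed to the model like any causal-LM prompt; for multi-turn conversations, append alternating user and assistant messages before the final open assistant turn.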
