LRM-Conta-Detection-Arena/sft-conta-qwen2.5-7b-no-rl
Text Generation
Concurrency Cost: 1 | Model Size: 7.6B | Quant: FP8 | Ctx Length: 32k | Published: Oct 6, 2025 | License: other | Architecture: Transformer | Status: Cold

LRM-Conta-Detection-Arena/sft-conta-qwen2.5-7b-no-rl is a 7.6-billion-parameter language model based on the Qwen2.5 architecture, developed by LRM-Conta-Detection-Arena. The 'sft' (supervised fine-tuning) and 'no-rl' (no reinforcement learning) tags in its name indicate that it was adapted for specific tasks through supervised fine-tuning alone, without a reinforcement-learning stage. With a context length of up to 131,072 tokens, it is designed for applications requiring extensive contextual understanding and generation.