haoranli-ml/Llama-3-8B-RoPE-64k-Instruct
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Dec 16, 2025 · Architecture: Transformer · Cold

haoranli-ml/Llama-3-8B-RoPE-64k-Instruct is an instruction-tuned Llama-3-8B variant enhanced with CoPE (Clipped RoPE) for improved long-context handling. CoPE is a plug-and-play RoPE modification that softly clips the unstable low-frequency components of the rotary embedding, delivering consistent performance both within the training context and under long-context extrapolation. The clipping is designed to eliminate severe out-of-distribution outliers, refine long-range semantic signals, and prevent spectral leakage, making the model suitable for applications that require extended context understanding.
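The model card does not include the CoPE implementation, so the sketch below is a minimal, hypothetical illustration of the soft-clipping idea described above. It assumes the standard Llama-3 RoPE base of 500,000 and an illustrative original training window of 8k tokens; the function name `soft_clipped_rope_angles`, the `train_len` parameter, and the tanh-based saturation are all assumptions, not the released code.

```python
import torch

def soft_clipped_rope_angles(
    positions: torch.Tensor,   # (seq_len,) token positions
    head_dim: int,             # per-head dimension (even)
    base: float = 500_000.0,   # Llama-3 RoPE base
    train_len: int = 8_192,    # assumed original training context
) -> torch.Tensor:
    """Illustrative sketch: soft-clip per-dimension RoPE angles so that
    unstable low-frequency components never stray far beyond the largest
    angle they saw during training. Hypothetical, not the model's code."""
    # Standard RoPE inverse frequencies, one per rotated pair of dims.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))

    # Largest angle each frequency reached within the training context.
    max_train_angle = train_len * inv_freq                    # (head_dim/2,)

    # Raw angles at every position/frequency pair.
    angles = positions.float()[:, None] * inv_freq[None, :]   # (seq, head_dim/2)

    # "Unstable" low-frequency dims: those that never completed a full
    # rotation (2*pi) in training, so extrapolated angles are OOD there.
    unstable = max_train_angle < 2 * torch.pi

    # Smooth saturation: near-identity for small angles, asymptoting to
    # the training-time maximum (tanh is one illustrative soft clip).
    clipped = max_train_angle * torch.tanh(angles / max_train_angle)

    return torch.where(unstable, clipped, angles)

# Usage: positions well past the assumed 8k training window.
pos = torch.arange(65_536)
theta = soft_clipped_rope_angles(pos, head_dim=128)  # Llama-3-8B head dim
cos, sin = theta.cos(), theta.sin()  # feed into the usual RoPE rotation
```

The tanh saturation shown here is only one possible smooth clip: it is approximately the identity for angles well inside the trained range and asymptotes to the training-time maximum, keeping extrapolated angles close to the distribution the model saw during training.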
