haoranli-ml/Llama-3-8B-CoPE-64k-Instruct
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Nov 24, 2025 · License: llama3 · Architecture: Transformer
The haoranli-ml/Llama-3-8B-CoPE-64k-Instruct model is an instruction-tuned Llama-3-8B variant enhanced with CoPE (Clipped RoPE), a plug-and-play modification to the RoPE positional encoding. CoPE improves long-context handling by softly clipping unstable low-frequency components, delivering consistent performance both within and beyond the training context window. By refining long-range semantic signals and preventing spectral leakage, it is suited to applications that require robust long-context understanding.
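To illustrate the general idea of soft-clipping low-frequency RoPE components, here is a minimal NumPy sketch. The function names, the tanh-based soft clip, the 2π-period threshold for "low frequency", and the 8k training-window assumption are all illustrative choices for this sketch, not the model's actual implementation:

```python
import numpy as np

def rope_frequencies(head_dim: int, base: float = 10000.0) -> np.ndarray:
    # Standard RoPE inverse frequencies: theta_i = base^(-2i/d)
    return base ** (-np.arange(0, head_dim, 2) / head_dim)

def soft_clip_angles(positions: np.ndarray, freqs: np.ndarray,
                     train_len: int = 8192) -> np.ndarray:
    # Rotation angle for each (position, frequency) pair
    angles = np.outer(positions, freqs)          # shape (seq, head_dim/2)
    # "Unstable" low frequencies: components that never complete a full
    # period within the training window (illustrative criterion)
    low_freq = freqs * train_len < 2 * np.pi
    # Largest angle each component saw during training
    cap = train_len * freqs
    # tanh soft clip: near-identity for small angles, saturates at the cap,
    # so extrapolated positions never produce unseen rotation angles
    return np.where(low_freq, cap * np.tanh(angles / cap), angles)
```

High-frequency components pass through untouched; only the slow components, whose angles would extrapolate past anything seen in training, are smoothly saturated, which is one way to realize the "soft clipping" described above.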