Yuan-embedding-2.0-en is a 0.8-billion-parameter embedding model developed by IEITYuan, optimized for English text retrieval and reranking. Built on Qwen3-Embedding-0.6B, it applies data-augmentation techniques, including hard-negative sampling and data synthesized by the Yuan2-M32 LLM, and is trained with a multi-task loss that incorporates Matryoshka Representation Learning. The model is designed to produce high-quality embeddings for efficient semantic search and document ranking.
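Matryoshka Representation Learning trains embeddings so that a leading-dimension prefix of each vector is itself a usable embedding, letting you trade index size for accuracy at query time. The sketch below illustrates that truncate-and-renormalize step with random unit vectors standing in for model outputs; the 1024 full dimension and 256 truncated dimension are illustrative assumptions, not this model's actual sizes.

```python
import numpy as np

def truncate_and_normalize(emb: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` dimensions and L2-renormalize (Matryoshka-style)."""
    prefix = emb[..., :dim]
    return prefix / np.linalg.norm(prefix, axis=-1, keepdims=True)

# Stand-ins for embeddings produced by the model (dimensions are hypothetical).
rng = np.random.default_rng(0)
full = rng.standard_normal((3, 1024))
full /= np.linalg.norm(full, axis=-1, keepdims=True)

# A 256-dim prefix gives a smaller, cheaper index with the same similarity API.
small = truncate_and_normalize(full, 256)
scores = small @ small.T  # cosine similarity, since rows are unit-length
print(small.shape)        # (3, 256)
```

In practice the truncated vectors would come from the model's own output, and retrieval proceeds with ordinary cosine similarity over the shortened embeddings.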