ellamind/propella-1-0.6b is a 0.6-billion-parameter multilingual large language model from the propella-1 family, developed by ellamind. It is designed to annotate text documents across 18 properties grouped into six categories, covering aspects such as content quality, educational value, and time-sensitivity. The model targets high-throughput inference for data curation, supporting 57 languages and handling text formats including web pages, PDFs, and code. Its primary use is filtering, selecting, and curating LLM training data at scale by providing fast, accurate document annotations.
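The curation workflow described above can be sketched as a thresholding step over per-document annotation scores. This is a minimal illustration only: the property names, the 0-5 score scale, and the `filter_documents` helper are assumptions for the example, not the model's actual output schema.

```python
# Hypothetical sketch: filter documents by per-property annotation
# scores, as a downstream step after a model like propella-1-0.6b
# has annotated each document. Property names and the 0-5 score
# scale are illustrative assumptions.

def filter_documents(docs, thresholds):
    """Keep only documents whose annotation scores meet every threshold."""
    kept = []
    for doc in docs:
        scores = doc["annotations"]
        if all(scores.get(prop, 0) >= minimum
               for prop, minimum in thresholds.items()):
            kept.append(doc)
    return kept

docs = [
    {"text": "A tutorial on sorting algorithms.",
     "annotations": {"content_quality": 4, "educational_value": 5}},
    {"text": "Limited-time sale ends tonight!",
     "annotations": {"content_quality": 2, "educational_value": 1}},
]

# Keep documents scoring at least 3 on both example properties;
# only the tutorial document passes.
curated = filter_documents(docs, {"content_quality": 3, "educational_value": 3})
```

In practice the thresholds would be tuned per category against the training-data quality targets, but the shape of the step, annotate then filter, is the same.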