Meno-Tiny-0.1 is a 1.5-billion-parameter decoder-only language model developed by Ivan Bondarenko. Built on the Qwen2.5-1.5B-Instruct architecture, it is fine-tuned on a Russian instruct dataset and excels at Russian-language tasks such as question answering, summarization, and anaphora resolution. The model is optimized for memory- and compute-constrained environments and latency-bound scenarios, which makes it well suited for RAG pipelines.
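
Below is a minimal sketch of loading the model with Hugging Face Transformers and running a Russian question-answering prompt. The Hub repository ID `bond005/meno-tiny-0.1` and the generation settings are assumptions, not taken from this page; substitute the actual ID and tune the parameters for your setup.

```python
# Minimal sketch: load Meno-Tiny-0.1 and answer a Russian question.
# The repository ID below is an assumption; replace it with the real Hub ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bond005/meno-tiny-0.1"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision suits constrained hardware
    device_map="auto",
)

# Qwen2.5-based instruct models ship a chat template; build a Russian QA turn.
messages = [
    {"role": "user", "content": "Кто написал роман «Война и мир»?"},  # "Who wrote the novel 'War and Peace'?"
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding keeps latency low, matching the model's intended use.
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
answer = tokenizer.decode(
    output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
)
print(answer)
```

In a RAG pipeline, the same chat-template call would simply prepend the retrieved passages to the user question inside the `messages` list.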