microsoft/Orca-2-13b is a 13-billion-parameter language model, fine-tuned from LLAMA-2, designed for research into enhancing the reasoning capabilities of small language models (SLMs). It performs well on tasks such as reasoning over user-given data, reading comprehension, math problem solving, and text summarization, abilities it acquires primarily through training on synthetic data generated with carefully designed prompting strategies. The model is intended to demonstrate how improved training signals can teach SLMs new capabilities, particularly in reasoning.
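Orca-2 checkpoints expect a ChatML-style prompt with `system`, `user`, and `assistant` turns. A minimal sketch of assembling such a prompt by hand (the system and user messages here are illustrative placeholders, not prescribed values):

```python
def build_prompt(system_message: str, user_message: str) -> str:
    """Assemble a ChatML-style prompt string for an Orca-2 model.

    The assistant turn is left open so the model continues from there.
    """
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant"
    )

# Example usage with an illustrative system message:
prompt = build_prompt(
    "You are Orca, an AI language model created by Microsoft.",
    "How many days are there in February during a leap year?",
)
print(prompt)
```

In practice you would pass this string to a tokenizer and generation pipeline (e.g. via the Hugging Face `transformers` library) rather than printing it.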