marin-community/marin-8b-instruct
Text generation | Concurrency cost: 1 | Model size: 8B | Quant: FP8 | Context length: 32k | Published: May 14, 2025 | License: apache-2.0 | Architecture: Transformer | Open weights

Marin 8B Instruct is an 8-billion-parameter instruction-tuned causal language model developed by the Marin team at Stanford CRFM. It is built on the Llama 3 architecture with a 32768-token context length and fine-tuned on a diverse set of instruction datasets, including ones focused on code, reasoning, and mathematics. The model performs strongly across a range of benchmarks, often outperforming other models in the 7-8B class, making it suitable for applications that require robust instruction following and analytical capability.
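A minimal usage sketch with Hugging Face transformers, assuming the checkpoint is published under the repo id `marin-community/marin-8b-instruct` and that, as a Llama-3-style chat model, it expects prompts rendered through the tokenizer's chat template (the helper name `generate_reply` is illustrative, not part of any API):

```python
# Hypothetical sketch: loading and prompting Marin 8B Instruct via
# Hugging Face transformers. Assumes transformers is installed and a
# suitable accelerator is available; downloading the weights is ~16 GB.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "marin-community/marin-8b-instruct"  # repo id from this card

def generate_reply(prompt: str, max_new_tokens: int = 256) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    # Chat-tuned Llama-3-style models expect messages rendered via the
    # tokenizer's chat template rather than a raw prompt string.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate_reply("Summarize the 32k context window in one sentence."))
```

The `device_map="auto"` argument lets transformers place the weights across available GPUs (or CPU); for the FP8-quantized variant advertised above, a serving stack with FP8 support would be used instead.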
