chevonc/Meta-Llama-3.1-8B-Instruct-Second-Brain-Summarization
Text generation · Model size: 8B · Quantization: FP8 · Context length: 32k · Concurrency cost: 1 · Published: Apr 3, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

chevonc/Meta-Llama-3.1-8B-Instruct-Second-Brain-Summarization is an 8-billion-parameter instruction-tuned causal language model developed by chevonc. It was fine-tuned from unsloth/Meta-Llama-3.1-8B-Instruct using Unsloth together with Hugging Face's TRL library, which the author reports made training 2x faster. The model is designed for summarization tasks and uses its 32,768-token context length to process longer inputs.
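A minimal usage sketch with the standard Hugging Face transformers API, assuming the model follows the usual Llama 3.1 Instruct chat template. The prompt wording, file name, and generation settings are illustrative assumptions, not settings published by the author.

```python
# Sketch: load the model and summarize a document with the chat template.
# Assumptions: transformers chat API; prompt text and generation
# parameters below are illustrative, not from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chevonc/Meta-Llama-3.1-8B-Instruct-Second-Brain-Summarization"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a backend with FP8 support can load the quantized weights directly
    device_map="auto",
)

# Pass the document to be summarized as the user turn of a chat prompt.
# "note.txt" is a hypothetical input file.
messages = [
    {"role": "user", "content": "Summarize the following note:\n\n" + open("note.txt").read()},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```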
