jcmei/SELM-Llama-3-8B-Instruct-iter-1 is an 8-billion-parameter instruction-tuned causal language model, fine-tuned by jcmei from Meta's Llama-3-8B-Instruct. It retains the 8192-token context window of the base model, and the "iter-1" suffix indicates the first iteration of fine-tuning on updated and original datasets. The model is intended for general instruction-following tasks, building on the base capabilities of the Llama 3 series.
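As an instruction-tuned Llama 3 variant, the model expects prompts in Meta's published Llama 3 chat format. The sketch below builds such a prompt by hand to make the format visible; `build_llama3_prompt` is an illustrative helper, not part of any library, and in practice you would load the model's tokenizer with Hugging Face `transformers` and call `tokenizer.apply_chat_template` instead.

```python
# Sketch of the Llama 3 Instruct chat format. The special tokens below follow
# Meta's published template; in real use, prefer tokenizer.apply_chat_template.

def build_llama3_prompt(messages):
    """Format a list of {"role", "content"} dicts into a Llama 3 chat prompt."""
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += (
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # A trailing assistant header cues the model to generate its reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Llama 3 series in one sentence."},
])
print(prompt)
```

The formatted string can then be tokenized and passed to the model's `generate` method; keep the total token count within the 8192-token context window.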