ericflo/Llama-3.1-8B-ContinuedTraining2-FFT is an 8-billion-parameter large language model developed by Eric Florenzano, based on the Meta-Llama-3.1-8B architecture with a 32,768-token context length. The model was trained with full fine-tuning rather than LoRA, covering general text, code completion (especially Python), and instruction following. It supports context-aware text infilling through its Fill-in-the-Middle (FIM) capabilities, including Meta-FIM for complex, nested contexts. The model is primarily intended for text generation, code completion, and instruction following, with a focus on enhanced contextual understanding. A minimal usage sketch follows.
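
Below is a minimal sketch of loading the model with the Hugging Face transformers API and issuing a FIM-style code-completion prompt. The FIM special tokens shown are an assumption borrowed from common Fill-in-the-Middle conventions, not taken from this model's documentation; check the model's tokenizer configuration for the actual tokens and prompt layout.

```python
# Sketch: load the model and run a Fill-in-the-Middle style completion.
# The <|fim_prefix|>/<|fim_suffix|>/<|fim_middle|> tokens are assumed placeholders;
# verify them against the model's tokenizer before use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ericflo/Llama-3.1-8B-ContinuedTraining2-FFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assumed FIM layout: give the prefix and suffix, let the model generate the middle.
prompt = (
    "<|fim_prefix|>def fibonacci(n):\n    "
    "<|fim_suffix|>\n    return a\n"
    "<|fim_middle|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
# Decode only the newly generated tokens (the infilled "middle").
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```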