Granite-7b-lab is a 7-billion-parameter language model from IBM Research, built on the Granite-7b-base model and fine-tuned with the Large-scale Alignment for chatBots (LAB) methodology, using Mixtral-8x7B-Instruct as the teacher model. The LAB approach targets incremental knowledge and skill acquisition without catastrophic forgetting. The model is intended for chat applications and has a context length of 4096 tokens.
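Because the context length is fixed at 4096 tokens, chat applications built on this model typically need to trim old conversation turns before each request. The sketch below is illustrative only: the function name is hypothetical, and it uses whitespace word counts as a stand-in for the model's real tokenizer, which should be used in practice.

```python
def trim_history(turns, max_tokens=4096, count=lambda s: len(s.split())):
    """Drop the oldest turns until the conversation fits the context window.

    `count` is a stand-in token counter (whitespace split); swap in the
    model tokenizer's token count for real use.
    """
    kept = list(turns)
    while kept and sum(count(t) for t in kept) > max_tokens:
        kept.pop(0)  # discard the oldest turn first
    return kept


# Toy usage with a small budget: the oldest turn is dropped.
history = ["a b c d e f", "g h i", "j k"]
print(trim_history(history, max_tokens=5))  # → ['g h i', 'j k']
```

A production chat loop would also reserve budget for the system prompt and the model's reply, so the effective limit passed in is usually well below 4096.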