boricua/granite-7b-lab-ocp4.15-v0.3
boricua/granite-7b-lab-ocp4.15-v0.3 is a 7 billion parameter language model fine-tuned by William Caban from the instructlab/granite-7b-lab base model. It is specifically optimized for answering questions related to OpenShift 4.15 documentation, trained on 45,212 Q&A pairs with a knowledge cutoff of April 12, 2024. This model improves response quality on OpenShift topics, particularly when augmented with RAG context, and supports a 4096 token context length.
Overview
boricua/granite-7b-lab-ocp4.15-v0.3 is a 7 billion parameter model fine-tuned by William Caban from the instructlab/granite-7b-lab base model. Its primary focus is providing accurate information on OpenShift 4.15, having been trained on a specialized corpus of 45,212 Q&A pairs derived from OpenShift 4.15 documentation. The Q&A pairs were generated using Mistral-7B-Instruct-v0.2 for questions and Mixtral-8x22B-Instruct-v0.1 for answers, with quality evaluation by Mixtral-8x22B and Llama3-7B.
Key Capabilities
- OpenShift 4.15 Expertise: Significantly improves response quality for questions concerning OpenShift 4.15 topics.
- RAG Integration: Designed to work effectively with Retrieval Augmented Generation (RAG) systems, showing further improvement when provided with external context.
- Context Length: Supports a context window of 4096 tokens, inherited from its base model.
- Basic Guardrails: Incorporates basic instructions to refuse questions unrelated to Kubernetes, OpenShift, and related topics.
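The RAG workflow described above amounts to placing retrieved documentation chunks in the prompt ahead of the user's question. A minimal sketch follows; the helper function and the chat tags are illustrative assumptions, not the model's documented prompt template, and keeping the assembled prompt within the 4096-token context window is left to the caller.

```python
def build_rag_prompt(context: str, question: str) -> str:
    """Assemble a RAG-style prompt: retrieved documentation is placed
    before the question so the model can ground its answer in it.

    The <|system|>/<|user|>/<|assistant|> tags are assumptions for
    illustration; check the model's actual chat template before use.
    """
    return (
        "<|system|>\n"
        "You are an assistant for OpenShift 4.15. Answer only Kubernetes, "
        "OpenShift, and related questions, using the provided context.\n"
        "<|user|>\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "<|assistant|>\n"
    )


# Hypothetical retrieved snippet and question, for illustration only.
prompt = build_rag_prompt(
    context="OpenShift 4.15 documentation excerpt retrieved by the RAG system.",
    question="How do I configure an IngressController in OpenShift 4.15?",
)
```

The resulting string can then be sent to boricua/granite-7b-lab-ocp4.15-v0.3 through whatever runtime serves the model.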
Intended Use & Limitations
This model is a proof of concept (POC) for fine-tuning a base model with domain-specific expertise and basic guardrails, and it is not intended for production use. Known limitations: quantized versions show a significant drop in accuracy, and the model may refuse valid Kubernetes/OpenShift questions whose context was not present during training. Because it was trained on synthetic data and has not been aligned to human social preferences, it may produce problematic output.