gmongaras/Wizard_7B_Squad
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · License: openrail · Architecture: Transformer · Open Weights · Cold

gmongaras/Wizard_7B_Squad is a 7 billion parameter language model fine-tuned by gmongaras from TheBloke's wizardLM-7B-HF. It was trained for approximately 4,500 steps on the SQuAD dataset using LoRA adapters, optimizing it for question-answering tasks. This focused training on SQuAD suggests strong performance on extractive question answering.
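
Below is a minimal sketch of how the model might be loaded and queried for extractive QA using the Hugging Face transformers library. The exact prompt template used during fine-tuning is not documented here, so the Context/Question/Answer format is an assumption, as is the assumption that the LoRA weights are merged into the published checkpoint (if they are distributed as a separate adapter, the base model would need to be loaded and the adapter applied with the peft library instead).

```python
# Hedged sketch: load gmongaras/Wizard_7B_Squad with transformers and ask a
# SQuAD-style question. Prompt format and merged-adapter checkpoint are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gmongaras/Wizard_7B_Squad"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 weights fit on a single GPU
    device_map="auto",
)

# SQuAD-style prompt: a passage of context plus a question, asking the model
# to extract the answer span from the passage.
context = (
    "The Amazon rainforest covers much of the Amazon basin of South America. "
    "The majority of the forest is contained within Brazil."
)
question = "Which country contains most of the Amazon rainforest?"
prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32, do_sample=False)

# Decode only the newly generated tokens, i.e. the model's answer.
answer = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer.strip())
```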
