rohanbalkondekar/QnA-with-context
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4k · Architecture: Transformer · Status: Cold

rohanbalkondekar/QnA-with-context is a 7-billion-parameter causal language model fine-tuned for question answering. It builds on the lmsys/vicuna-7b-v1.3 base model and was trained with H2O LLM Studio. The model is optimized for generating coherent, relevant answers to prompts, making it suitable for conversational AI and information-retrieval applications. Its 4096-token context length lets it process moderately long inputs, such as a passage of context plus a question, in a single prompt.
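As a sketch of how a context-plus-question prompt for this model could be assembled and run with the Hugging Face `transformers` library. The `<|prompt|>…</s><|answer|>` template used below is the H2O LLM Studio default and is an assumption here; this card does not state the model's actual prompt format, so verify it against the model's tokenizer config before relying on it.

```python
def build_prompt(context: str, question: str) -> str:
    """Wrap a context/question pair in the assumed H2O LLM Studio
    default template (<|prompt|>...</s><|answer|>). This template is
    an assumption, not confirmed by the model card."""
    return f"<|prompt|>{context}\n\n{question}</s><|answer|>"


if __name__ == "__main__":
    # Heavy dependencies are imported lazily so build_prompt stays
    # usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "rohanbalkondekar/QnA-with-context"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = build_prompt(
        "H2O LLM Studio is a no-code framework for fine-tuning LLMs.",
        "What is H2O LLM Studio?",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, i.e. the answer.
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                           skip_special_tokens=True))
```

Keeping the prompt (context plus question plus template tokens) under the 4096-token context window is the caller's responsibility; longer contexts would need truncation or chunking.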
