nnethercott/llava-v1.5-7b_vicuna
Task: Text Generation
Concurrency Cost: 1
Model Size: 7B
Quant: FP8
Ctx Length: 4k
Published: Feb 25, 2024
License: llama2
Architecture: Transformer
Open Weights · Cold

nnethercott/llava-v1.5-7b_vicuna is a 7-billion-parameter LLaVA model fine-tuned from liuhaotian/llava-v1.5-7b for multimodal instruction following. Built on the LLaMA/Vicuna architecture, it is an auto-regressive language model with integrated vision capabilities. It is intended primarily for LLM benchmarking and for applications that must understand and generate responses from combined image and text inputs.
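A minimal usage sketch for combined image-and-text input, assuming the checkpoint is compatible with the Hugging Face `transformers` LLaVA classes (`AutoProcessor` / `LlavaForConditionalGeneration`); the `describe_image` helper and the prompt template below are illustrative, not part of this card:

```python
MODEL_ID = "nnethercott/llava-v1.5-7b_vicuna"  # repo id from this card


def build_prompt(question: str) -> str:
    # LLaVA-1.5 Vicuna-style conversation template: an <image>
    # placeholder followed by the user's question.
    return f"USER: <image>\n{question} ASSISTANT:"


def describe_image(image, question: str, max_new_tokens: int = 128) -> str:
    # Hypothetical helper; lazy import so the template above can be
    # used without pulling in the heavy dependency. Loading the 7B
    # checkpoint needs roughly 14 GB in fp16 (less when quantized).
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    processor = AutoProcessor.from_pretrained(MODEL_ID)
    model = LlavaForConditionalGeneration.from_pretrained(
        MODEL_ID, device_map="auto"
    )
    inputs = processor(
        text=build_prompt(question), images=image, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return processor.decode(output_ids[0], skip_special_tokens=True)
```

With a PIL image in hand, `describe_image(img, "What is shown here?")` would return the decoded answer string.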
