Aspik101/vicuna-7b-v1.3-instruct-pl-lora_unload
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Jul 22, 2023 · License: other · Architecture: Transformer

Aspik101/vicuna-7b-v1.3-instruct-pl-lora_unload is a 7-billion-parameter instruction-tuned language model based on the LLaMA architecture (Vicuna v1.3 lineage). It is fine-tuned specifically for Polish text generation using the Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish dataset, and it supports a context length of 4096 tokens.
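As a minimal sketch of how a Polish instruction might be wrapped for this model, the snippet below builds a prompt in the standard Vicuna chat style. The template (and the idea that this checkpoint expects it) is an assumption based on the Vicuna v1.3 lineage, not something documented on this card; in practice the resulting string would be passed to the model through a library such as `transformers`.

```python
# Assumption: the model follows the standard Vicuna v1.3 chat template
# ("SYSTEM ... USER: ... ASSISTANT:"). Verify against the upstream repo
# before relying on this format.

SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions."
)

def build_prompt(instruction: str) -> str:
    """Wrap a (Polish) instruction in a Vicuna-style single-turn prompt."""
    return f"{SYSTEM} USER: {instruction} ASSISTANT:"

# Example: a Polish instruction ("Write a short poem about the sea.")
prompt = build_prompt("Napisz krótki wiersz o morzu.")
print(prompt)
```

The generated string is what would be tokenized and fed to the model; keeping prompt construction in one helper makes it easy to swap templates if the checkpoint turns out to use a different format.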
