Aspik101/Llama-2-7b-hf-instruct-pl-lora_unload
Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 4k · Published: Jul 23, 2023 · License: other · Architecture: Transformer

Aspik101/Llama-2-7b-hf-instruct-pl-lora_unload is a 7-billion-parameter Llama-2 model fine-tuned for Polish instruction following. It was trained with LoRA (low-rank adaptation); the `lora_unload` suffix suggests the adapter weights have been merged back into the base model, so it can be used like an ordinary Llama-2 checkpoint. The model targets tasks that require understanding and generating Polish text from instructions, and supports a 4096-token context length.
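A minimal usage sketch with the Hugging Face `transformers` library is shown below. The prompt template (`### Instrukcja:` / `### Odpowiedź:`) is an assumption for illustration, not a format documented by the model authors; adjust it to whatever template the model was actually trained with.

```python
from typing import Optional

MODEL_ID = "Aspik101/Llama-2-7b-hf-instruct-pl-lora_unload"


def build_prompt(instruction: str, context: Optional[str] = None) -> str:
    """Format a Polish instruction into a prompt string.

    NOTE: this template is a hypothetical example; the model card does not
    document the exact training prompt format.
    """
    if context:
        return (
            f"### Instrukcja:\n{instruction}\n\n"
            f"### Kontekst:\n{context}\n\n"
            f"### Odpowiedź:\n"
        )
    return f"### Instrukcja:\n{instruction}\n\n### Odpowiedź:\n"


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    """Generate a Polish completion for `instruction`.

    Loading a 7B model downloads the weights from the Hugging Face Hub and
    needs a GPU (or plenty of RAM), so the heavy imports are deferred here.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

A call like `generate("Opisz krótko Warszawę.")` would then return a Polish description of Warsaw, assuming sufficient hardware to host the 7B weights.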