Aspik101/Redmond-Puffin-13B-instruct-PL-lora_unload
Text generation · Concurrency cost: 1 · Model size: 13B · Quantization: FP8 · Context length: 4k · Published: Aug 4, 2023 · License: other · Architecture: Transformer
Aspik101/Redmond-Puffin-13B-instruct-PL-lora_unload is a 13-billion-parameter instruction-tuned Llama-2 model developed by Aspik101. It is fine-tuned specifically for Polish-language text generation, using datasets such as Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish. With a 4096-token context length, it is optimized for instruction-following tasks in Polish.
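A minimal sketch of prompting the model with Hugging Face transformers might look like the following. The Alpaca-style prompt template is an assumption inferred from the training dataset name (alpaca-dolly-…-instruction-only-polish); check the model card for the exact format the model expects.

```python
MODEL_ID = "Aspik101/Redmond-Puffin-13B-instruct-PL-lora_unload"


def build_prompt(instruction: str) -> str:
    """Wrap a Polish instruction in an assumed Alpaca-style template."""
    return (
        # "Below is an instruction describing a task. Write a response
        # that correctly completes the task." (in Polish)
        "Poniżej znajduje się instrukcja opisująca zadanie. "
        "Napisz odpowiedź, która poprawnie wykona zadanie.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a completion (requires a large GPU;
    the 13B weights need roughly 26 GB in fp16)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Generation stays within the 4096-token context window as long as the prompt plus `max_new_tokens` does not exceed it.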