Lajonbot/tableBeluga-7B-instruct-pl-lora_unload
Text generation | Concurrency cost: 1 | Model size: 7B | Quant: FP8 | Context length: 4k | Published: Jul 28, 2023 | License: other | Architecture: Transformer

Lajonbot/tableBeluga-7B-instruct-pl-lora_unload is a 7-billion-parameter instruction-tuned language model based on the Llama-2 architecture, developed by Lajonbot. It is fine-tuned specifically for Polish-language tasks using datasets such as Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish. With a context length of 4096 tokens, it is designed for text generation in Polish.
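A minimal sketch of using the model for Polish text generation via the Hugging Face `transformers` pipeline. The Alpaca-style prompt template in `build_prompt` is an assumption (the source does not document the model's expected prompt format), and the generation parameters are illustrative defaults, not values recommended by the model authors.

```python
from __future__ import annotations

# Model ID as listed in the catalog entry above.
MODEL_ID = "Lajonbot/tableBeluga-7B-instruct-pl-lora_unload"

# Context length from the listing; prompts plus generated tokens
# should stay under this budget.
MAX_CONTEXT_TOKENS = 4096


def build_prompt(instruction: str) -> str:
    """Wrap a Polish instruction in an Alpaca-style template.

    NOTE: this template is a hypothetical assumption; check the model
    card for the actual format the model was fine-tuned with.
    """
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )


def main() -> None:
    # transformers is imported lazily so the prompt helper can be used
    # without the (large) model dependencies installed.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model=MODEL_ID,
        device_map="auto",  # place layers on GPU/CPU automatically
    )
    prompt = build_prompt("Napisz krótki wiersz o jesieni.")
    out = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
    print(out[0]["generated_text"])


if __name__ == "__main__":
    main()
```

Loading a 7B model requires roughly 14 GB of memory in FP16; the FP8 quantization noted in the listing reduces that footprint when served through a runtime that supports it.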
