OPI-PG/Qra-7b
Text generation · Open weights
Model size: 7B · Quantization: FP8 · Context length: 4k · Concurrency cost: 1
Published: Feb 27, 2024 · License: llama2 · Architecture: Transformer

OPI-PG/Qra-7b is a 7-billion-parameter causal language model developed by OPI and PG, initialized from Llama 2 checkpoints and further trained on a corpus of roughly 90 billion tokens of Polish text, which makes it well suited to Polish language processing. This foundation model has a 4096-token context window and achieves lower perplexity on Polish benchmark datasets than comparable Polish and English LLMs.
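For context on the perplexity claim, here is a minimal sketch of how perplexity is computed from per-token log-probabilities; this is the standard definition (exponential of the mean negative log-likelihood), not the authors' exact evaluation harness, and the function name and example values are illustrative:

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the mean negative log-likelihood per token.

    Lower is better: a model that is more certain about the next token
    assigns it higher probability, yielding lower perplexity.
    """
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# A model that assigns probability 0.5 to every token has perplexity 2.
print(perplexity([math.log(0.5)] * 4))  # → 2.0
```

This is why "outperforming on perplexity" means the model assigns higher probability to held-out Polish text than the baselines do.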
