BangorAI/ALMA-7B-Pretrain-Cy-1
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4K · License: MIT · Architecture: Transformer · Open Weights

BangorAI/ALMA-7B-Pretrain-Cy-1 is a 7-billion-parameter language model based on LLaMA-2 and further pre-trained on the Welsh portion of the OSCAR-2301 dataset. It is intended as a foundation for subsequent fine-tuning, either on human-written parallel data for machine translation or on Welsh chat/instruction datasets for research purposes. Following the ALMA approach to translation, training begins with monolingual fine-tuning before optimization on parallel data; this checkpoint corresponds to the monolingual stage.
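As a base (non-instruct) checkpoint, the model is suited to plain text continuation rather than chat-style prompting. Below is a minimal sketch of loading it with the Hugging Face transformers library; the half-precision dtype, the Welsh prompt, and the sampling parameters are illustrative assumptions, not settings documented for this model.

```python
# Minimal sketch: load the checkpoint and generate a Welsh continuation.
# Assumes the standard transformers AutoModel classes work for this
# LLaMA-2-based checkpoint; dtype and sampling values are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BangorAI/ALMA-7B-Pretrain-Cy-1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # hosted endpoint lists FP8; fp16 for local use
    device_map="auto",          # requires the accelerate package
)

# A base model continues text, so the prompt is an opening phrase, not an instruction.
prompt = "Mae Cymru yn"  # hypothetical example prompt: "Wales is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```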
