erfanzar/LGeM-7B
Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · License: MIT · Architecture: Transformer · Open weights

erfanzar/LGeM-7B is a 7-billion-parameter causal language model developed by erfanzar, fine-tuned with Alpaca-style instruction prompts. It is a decoder-only Transformer built with PyTorch and designed for instruction-following tasks. The model initializes from pre-trained Alpaca LoRA weights, making it suitable for general-purpose text generation from user-supplied instructions.
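Since the model is tuned on Alpaca-style prompts, inputs are typically wrapped in the standard Alpaca template before generation. The sketch below shows that template; whether LGeM-7B expects exactly this wording is an assumption based on the card's mention of Alpaca fine-tuning, so verify against the model's own examples before relying on it.

```python
# Alpaca-style prompt formatting for an instruction-following model such as
# erfanzar/LGeM-7B. The templates are the standard Stanford Alpaca ones;
# their exact applicability to LGeM-7B is an assumption.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

ALPACA_TEMPLATE_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n### Response:\n"
)

def format_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Build an Alpaca-style prompt, with or without an extra input field."""
    if input_text:
        return ALPACA_TEMPLATE_WITH_INPUT.format(
            instruction=instruction, input=input_text
        )
    return ALPACA_TEMPLATE.format(instruction=instruction)

# The formatted string would then be tokenized and passed to the model
# (e.g. via the Hugging Face transformers text-generation pipeline).
prompt = format_alpaca_prompt("Summarize the plot of Hamlet in one sentence.")
print(prompt)
```

The generated prompt ends with the `### Response:` header, so the model's continuation is the answer and can be sliced off after the header when post-processing.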
