neelblabla/email-classification-llama2-7b-peft
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Architecture: Transformer

The neelblabla/email-classification-llama2-7b-peft model is a 7 billion parameter Llama 2 variant, fine-tuned using Parameter-Efficient Fine-Tuning (PEFT) on the enron_labeled_email-llama2-7b_finetuning dataset. With a context length of 4096 tokens, this model is specifically optimized for email classification tasks. Its primary strength lies in accurately categorizing email content based on the specialized training data.
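A typical way to use a PEFT adapter like this is to load the Llama 2 base weights and then apply the adapter on top with the `peft` library. The sketch below assumes the adapter is published on the Hugging Face Hub under the repo id above and that the base checkpoint is the gated `meta-llama/Llama-2-7b-hf`; the prompt template in `build_prompt` is hypothetical, since the card does not document the exact format used during fine-tuning.

```python
def build_prompt(subject: str, body: str) -> str:
    """Format an email into a classification prompt.

    Hypothetical template: the model card does not document the exact
    prompt format used during PEFT fine-tuning.
    """
    return (
        "Classify the following email into a category.\n\n"
        f"Subject: {subject}\n\n{body}\n\nCategory:"
    )


def classify_email(subject: str, body: str) -> str:
    # Heavy imports kept local so the prompt helper above works
    # without GPU-oriented dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "meta-llama/Llama-2-7b-hf"  # assumed (gated) base checkpoint
    adapter_id = "neelblabla/email-classification-llama2-7b-peft"

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    # Attach the PEFT adapter weights on top of the base model.
    model = PeftModel.from_pretrained(model, adapter_id)

    prompt = build_prompt(subject, body)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=16)
    # Decode only the tokens generated after the prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(classify_email("Q3 trading report", "Please review the attached figures."))
```

Keeping the adapter separate from the base model is the usual PEFT workflow: the adapter download is small, and the same base weights can be reused across tasks. `PeftModel.merge_and_unload()` can fold the adapter into the base weights if a single merged model is preferred for serving.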
