Millian/felia-7b-title
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · License: llama2 · Architecture: Transformer · Open Weights · Cold

Millian/felia-7b-title is a 7-billion-parameter language model developed by Millian, with a 4096-token context window. It was trained using 4-bit quantization with double quantization enabled, a configuration aimed at efficient deployment and inference. The card does not detail specific capabilities, but this training setup suggests a focus on resource-efficient performance for general language tasks.
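
Since the card describes 4-bit quantization with double quantization, a minimal sketch of loading the model in that configuration with Hugging Face transformers and bitsandbytes might look like the following. The repo id comes from this card, but its availability, the quantization type (`nf4`), and the compute dtype are assumptions, not details stated above:

```python
# A minimal sketch, assuming the weights are hosted under this repo id and
# that transformers + bitsandbytes are installed (pip install transformers bitsandbytes).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit loading with double quantization, mirroring the training configuration
# described on this card; quant type and compute dtype are assumptions.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "Millian/felia-7b-title"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Generate within the 4096-token context window listed above.
prompt = "Write a short title for an article about efficient LLM inference:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```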