CJ-gyuwonpark/ch-70b-v9
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Architecture: Transformer · Cold

CJ-gyuwonpark/ch-70b-v9 is a large language model developed by CJ-gyuwonpark. It was trained using bitsandbytes 4-bit quantization (nf4 quantization type, double quantization enabled) with a bfloat16 compute dtype. Specific details about its architecture, parameter count, and intended use cases are not provided in the available documentation, but the training configuration suggests it is optimized for efficient deployment and inference.
