hyokwan/llama31_common
Task: Text Generation
Concurrency Cost: 1
Model Size: 8B
Quantization: FP8
Context Length: 32k
Published: Sep 3, 2024
License: apache-2.0
Architecture: Transformer
Weights: Open

hyokwan/llama31_common is an 8-billion-parameter continued-pretraining language model based on Meta's Llama 3.1-8B-Instruct, with a context length of 32,768 tokens. The model was trained specifically for the Korea Polytechnics Fintech department and is intended for general language tasks, building on the Llama 3.1 foundation.
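
Below is a minimal usage sketch, assuming the weights are published under the same identifier on the Hugging Face Hub and load with the standard transformers API; the prompt content and generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hyokwan/llama31_common"  # assumed Hub identifier for this model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # place layers on available GPU(s)/CPU
)

# Llama 3.1 Instruct-style chat prompt via the tokenizer's chat template
messages = [{"role": "user", "content": "Explain what a fintech API gateway does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the card lists an FP8 quantization, a serving stack with FP8 support (or loading the unquantized base in bf16) may be needed depending on how the published checkpoint is packaged.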
