chlee10/T3Q-Llama3-8B-dpo-v2.0
chlee10/T3Q-Llama3-8B-dpo-v2.0 is an 8-billion-parameter language model with an 8192-token context length. It has been evaluated on Korean-language benchmarks, achieving 64.2% accuracy on KoBEST COPA and 62.22% accuracy on KoBEST SentiNeg. It is suitable for tasks requiring Korean language understanding and generation, particularly common sense reasoning and sentiment analysis.
Model Overview
This 8-billion-parameter model has an 8192-token context length and has been evaluated on several KoBEST Korean-language benchmarks, indicating its ability to understand and process Korean text.
Key Capabilities
- Korean Language Understanding: Evaluated across KoBEST tasks covering reasoning, sentiment, and question answering.
- Common Sense Reasoning: 64.2% accuracy on the KoBEST COPA benchmark.
- Sentiment Analysis: 62.22% accuracy on the KoBEST SentiNeg benchmark.
- Boolean Question Answering: 51.5% accuracy on KoBEST BoolQ.
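The scores above could in principle be reproduced with EleutherAI's lm-evaluation-harness, which includes KoBEST tasks; the task names and flags below are assumptions based on that harness and may differ by version:

```shell
# Sketch of a KoBEST evaluation run with lm-evaluation-harness.
# Task names and flags are assumptions; check your installed version.
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=chlee10/T3Q-Llama3-8B-dpo-v2.0,dtype=bfloat16 \
  --tasks kobest_copa,kobest_sentineg,kobest_boolq \
  --batch_size 8
```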
Good For
- Applications requiring Korean language processing.
- Tasks involving common sense reasoning in Korean.
- Sentiment analysis of Korean text.
- Research and development in Korean natural language processing.
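For the use cases above, the model is typically queried through an OpenAI-compatible chat-completion API. The sketch below builds such a request payload for a Korean sentiment-analysis prompt; the helper name `build_chat_request` and the parameter defaults are illustrative assumptions, not part of any official client.

```python
# Sketch: building an OpenAI-style chat-completion payload for this model.
# build_chat_request and its defaults are hypothetical, for illustration only.
import json

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build a chat-completion request payload for the model."""
    return {
        "model": "chlee10/T3Q-Llama3-8B-dpo-v2.0",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,  # must fit within the 8192-token context
        "temperature": 0.7,
    }

# Korean prompt: "Please analyze the sentiment of this sentence:
# The weather is really nice today!"
payload = build_chat_request(
    "다음 문장의 감정을 분석해 주세요: 오늘 날씨가 정말 좋네요!"
)
print(json.dumps(payload, ensure_ascii=False, indent=2))
```

The payload would then be POSTed to whatever OpenAI-compatible endpoint serves the model.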