ekdms917/like_daeun

Vision | Concurrency Cost: 1 | Model Size: 4.3B | Quant: BF16 | Ctx Length: 32k | Published: Mar 27, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights | Cold

ekdms917/like_daeun is a 4.3-billion-parameter instruction-tuned causal language model developed by ekdms917, based on Google's Gemma-3-4b-it architecture. It is fine-tuned on the ekdms917/like_917 dataset, indicating a specialization in Korean-language tasks. With a context length of 32,768 tokens, it is suited to Korean text generation, most likely conversational AI or content creation in that language.


Overview

ekdms917/like_daeun builds on Google's Gemma-3-4b-it architecture and was fine-tuned by ekdms917 on the dedicated ekdms917/like_917 dataset, giving it a primary focus on Korean-language processing. Its 32,768-token context window lets it handle long inputs and sustain coherent, extended responses.

Key Capabilities

  • Korean Language Specialization: Fine-tuned on a Korean-specific dataset, suggesting strong performance in Korean text generation and understanding.
  • Instruction Following: As an instruction-tuned model, it is designed to follow user prompts and generate relevant outputs.
  • Extended Context Window: A 32,768-token context length allows for processing and generating longer texts, maintaining context over extended conversations or documents.
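The capabilities above suggest a standard instruction-following workflow. The sketch below shows how such a model could be loaded and prompted with Hugging Face `transformers`; the Hub repo id, BF16 dtype, and chat-template support are assumptions based on the card's metadata and the Gemma-3 base, not confirmed details of this fine-tune.

```python
# Hedged sketch: single-turn generation with ekdms917/like_daeun via transformers.
# Assumes the model is published on the Hugging Face Hub under this id and
# inherits a chat template from its Gemma-3-4b-it base.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ekdms917/like_daeun"  # assumed Hub repo id

def build_prompt(tokenizer, user_message: str) -> str:
    """Render a single-turn chat prompt using the model's chat template."""
    messages = [{"role": "user", "content": user_message}]
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

def generate(user_message: str, max_new_tokens: int = 256) -> str:
    """Load the model (BF16, per the card) and generate a completion."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    inputs = tokenizer(build_prompt(tokenizer, user_message), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Example call (downloads several GB of BF16 weights on first run):
# print(generate("한국어로 자기소개를 해 줘."))
```

The generation call is left commented out because it requires downloading the full checkpoint; `build_prompt` is the only part that runs without the weights.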

Good For

  • Korean Text Generation: Ideal for applications requiring the creation of Korean content, such as articles, summaries, or creative writing.
  • Korean Conversational AI: Suitable for chatbots or virtual assistants that interact in Korean, leveraging its instruction-following capabilities.
  • Research and Development: Provides a base for further fine-tuning or experimentation with Korean language models, particularly those derived from the Gemma family.
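For the conversational-AI use case above, a chatbot needs to carry multi-turn history between generations. A minimal sketch of that state handling, assuming the common Hugging Face `{"role", "content"}` message convention (not a confirmed detail of this fine-tune's template):

```python
# Minimal multi-turn chat state for a Korean assistant built on this model.
# The message format is an assumption based on the standard HF chat convention.
def add_turn(history, role, content):
    """Return a new history list with one turn appended (history stays immutable)."""
    return history + [{"role": role, "content": content}]

history = []
history = add_turn(history, "user", "안녕하세요!")
history = add_turn(history, "assistant", "안녕하세요, 무엇을 도와드릴까요?")

# Before each generation, pass `history` to tokenizer.apply_chat_template(...)
# and trim the oldest turns if the rendered prompt nears the 32,768-token limit.
```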