## Metin/Gemma-2-2B-TR-Knowledge-Graph Overview
Metin/Gemma-2-2B-TR-Knowledge-Graph is a specialized 2.6 billion parameter language model, fine-tuned by Metin from Google's gemma-2-2b-it base model. Its core purpose is to automatically generate structured knowledge graphs from unstructured text content. This capability is crucial for applications requiring the extraction of entities and their relationships, facilitating the creation and population of graph databases for efficient data storage, querying, and visualization.
## Key Capabilities
- Knowledge Graph Generation: Transforms raw document content into structured JSON outputs containing nodes (entities) and relationships.
- Graph Database Population: Designed to produce outputs directly usable for building and populating graph databases.
- Specialized Training: Fine-tuned on a high-quality, synthetically generated dataset of 30,000 knowledge graph samples, ensuring its proficiency in this specific task.
- Turkish Language Support: Examples provided in the README indicate proficiency with Turkish text for knowledge graph extraction.
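Because the model emits nodes and relationships as JSON, its output can be translated mechanically into graph-database statements. The sketch below assumes a plausible output schema (the key names `nodes`, `relationships` and their fields are illustrative, not the model's documented format) and converts it into Cypher `CREATE` statements:

```python
import json

# Hypothetical model output; the exact schema (key names and fields)
# is an assumption for illustration, not the model's documented format.
raw_output = """{
  "nodes": [
    {"id": "1", "label": "Person", "name": "Ada Lovelace"},
    {"id": "2", "label": "Field", "name": "Mathematics"}
  ],
  "relationships": [
    {"source": "1", "target": "2", "type": "WORKS_IN"}
  ]
}"""

def to_cypher(graph_json: str) -> list[str]:
    """Turn a nodes/relationships JSON string into Cypher CREATE statements."""
    graph = json.loads(graph_json)
    stmts = []
    for node in graph["nodes"]:
        # One statement per entity, keyed by a variable derived from its id.
        stmts.append(
            f'CREATE (n{node["id"]}:{node["label"]} {{name: "{node["name"]}"}})'
        )
    for rel in graph["relationships"]:
        # One statement per edge, reusing the node variables above.
        stmts.append(
            f'CREATE (n{rel["source"]})-[:{rel["type"]}]->(n{rel["target"]})'
        )
    return stmts

statements = to_cypher(raw_output)
```

The resulting statements can be run in a single Cypher query so the node variables remain in scope when the relationship clauses execute.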
## Important Considerations
While highly specialized, the model may still generate incorrect or nonsensical outputs, so users are advised to verify the generated knowledge graphs before deployment or further use. The model also requires a specific prompt format: the string `\n<knowledge_graph>` must be appended to the user input to trigger knowledge graph extraction.
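A minimal prompt-construction sketch, assuming the documented `\n<knowledge_graph>` trigger suffix (the helper name and the sample Turkish sentence are illustrative; actually running inference via the standard transformers chat-template API is indicated only in the comment):

```python
def build_kg_prompt(document_text: str) -> str:
    """Append the trigger tag the model expects after the user input."""
    return document_text + "\n<knowledge_graph>"

# Illustrative Turkish input, matching the model's documented language support.
prompt = build_kg_prompt(
    "Ada Lovelace, matematik alanında çalışan bir bilim insanıydı."
)
# The prompt is then wrapped in the Gemma chat format and passed to the
# fine-tuned model, e.g. via transformers' apply_chat_template + generate.
```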