1. KG-Enhanced LLM Training

1.1. Integrating KGs into Training Objective

Contributors:

  • Diego Collarana (FIT)
  • Please add yourself if you want to contribute ...
  • Please add yourself if you want to contribute ...
  • Please add yourself if you want to contribute ...
  • ...

Short definition/description of this topic: please fill in ...

  • Content ...
  • Content ...
  • Content ... 

1.2. Integrating KGs into LLM Inputs (verbalize KG for LLM training)

Contributors:

  • Diego Collarana (FIT)
  • Daniel Baldassare (doctima)
  • Michael Wetzel (Coreon)
  • Sabine Mahr (word b sign)
  • ... 

Draft from Daniel Baldassare:

Short definition/description of this topic: Verbalizing knowledge graphs for LLMs is the task of representing knowledge graphs as text so that they can be written directly into the prompt, the main input source of an LLM. Verbalization consists of finding textual representations for nodes, relationships between nodes, and their metadata. It can take place at different stages of the LLM lifecycle, during training (pre-training, instruction fine-tuning) or during inference (in-context learning).
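A minimal sketch of such a verbalization step is shown below, assuming a small set of illustrative triples and hand-written templates per relation; the triples, templates, and prompt layout are placeholders, not a prescribed format.

# Minimal sketch: verbalizing KG triples into prompt text (illustrative data).
triples = [
    ("Berlin", "capitalOf", "Germany"),
    ("Berlin", "population", "3700000"),
]

# One textual template per relation; unknown relations fall back to a generic pattern.
templates = {
    "capitalOf": "{s} is the capital of {o}.",
    "population": "{s} has a population of {o}.",
}

def verbalize(triples):
    sentences = []
    for s, p, o in triples:
        template = templates.get(p, "{s} {p} {o}.")
        sentences.append(template.format(s=s, p=p, o=o))
    return " ".join(sentences)

# The verbalized triples become part of the prompt (here: in-context use).
prompt = f"Context: {verbalize(triples)}\nQuestion: What is the capital of Germany?\nAnswer:"
print(prompt)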

1.3. Integrating KGs by Fusion Modules

Contributors:

  • Diego Collarana (FIT)
  • Please add yourself if you want to contribute ...
  • Please add yourself if you want to contribute ...
  • Please add yourself if you want to contribute ...
  • ... 

Short definition/description of this topic: please fill in ...

  • Content ...
  • Content ...
  • Content ... 

2. Retrieval-Augmented Generation (RAG)

Draft Daniel Burkhardt

Short definition/description of this topic: Retrieval-Augmented Generation (RAG) is a method that combines retrieval mechanisms with generative models to enhance the output of language models by incorporating external knowledge. This approach retrieves relevant information from a database or corpus and uses it to inform the generation process, leading to more accurate and contextually relevant outputs.

  • Definition of RAG 
  • Types of RAG 
    • Standard RAG: Utilizes vector databases to retrieve documents based on semantic similarity, which are then used to augment the generative process of language models (a minimal sketch follows this list).
    • Graph RAG: Integrates knowledge graphs into the RAG framework, allowing for the retrieval of structured data that can provide additional context and factual accuracy to the generative model.
  • Applications for RAG 
    • RAG is used in various natural language processing tasks, including question answering, information extraction, sentiment analysis, and summarization. It is particularly beneficial in scenarios requiring domain-specific knowledge, as it reduces the tendency of language models to generate hallucinated or incorrect information by grounding responses in retrieved facts.
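As referenced above, the following is a minimal sketch of the standard RAG setup, assuming the sentence-transformers library for embeddings and an in-memory similarity search in place of a vector database; the documents, the model name, and the commented-out llm.generate call are illustrative placeholders.

# Embed documents, retrieve the most similar ones for a query, and prepend them to the prompt.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Berlin is the capital of Germany.",
    "The Rhine flows through several European countries.",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

def retrieve(query, k=2):
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, doc_embeddings, top_k=k)[0]
    return [documents[hit["corpus_id"]] for hit in hits]

query = "What is the capital of Germany?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"
# response = llm.generate(prompt)  # placeholder for the actual LLM call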

2.1. KG-Guided Retrieval Mechanisms

Contributors:

  • Daniel Burkhardt (FSTI)
  • Robert David (SWC)
  • Diego Collarana (FIT)
  • Daniel Baldassare (doctima)
  • Michael Wetzel (Coreon)

Short definition/description of this topic: KG-Guided Retrieval Mechanisms use knowledge graphs, for example in combination with vector databases, to enhance the retrieval process in RAG systems. Knowledge graphs provide a structured representation of knowledge, enabling more precise and contextually aware retrieval of information. This approach can directly query knowledge graphs or use them to augment queries to other data sources, improving the relevance and accuracy of the retrieved information.

Draft Robert David:

  • Initial RAG idea: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
  • RAG is commonly used with vector databases.
    • vector search can only capture the semantic similarity represented in the document content
    • it handles only unstructured data
    • retrieval by vector distance instead of an expressive database query limits the retrieval capabilities
  • Graph RAG uses knowledge graphs as part of the RAG system
    • KGs for direct retrieval, meaning the database itself stores KG data
    • KGs for retrieval via a semantic layer, potentially retrieving over different sources of structured and unstructured data
    • KGs for augmenting the retrieval, meaning the queries to some database are modified via KG data
  • Via Graph RAG, we can
    • ingest additional semantic background knowledge (knowledge model) not represented in the data itself
      • additional related knowledge based on defined paths (rule-based inference)
      • focus on certain aspects of a data set for the retrieval (search configuration)
      • personalization: represent different roles for retrieval by ingesting role-description data into the retrieval (especially important in an enterprise environment)
    • reasoning
    • linked data relates factual knowledge to the LLM-generated content and thereby provides a means to check for correctness
    • explainable AI: provide justifications via KG
    • consolidate different data sources: unstructured, semi-structured, structured (enterprise knowledge graph scenario)
    • doing the actual retrieval via KG queries: SPARQL (see the sketch after this list)
    • hybrid retrieval: combine KG-based retrieval with vector databases or search indexes
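As referenced in the list above, the following is a minimal sketch of retrieval via SPARQL, assuming the SPARQLWrapper library and the public Wikidata endpoint as an example; any SPARQL endpoint and query would work the same way, and the llm.generate call is a placeholder.

# Retrieve facts from a KG via SPARQL and verbalize them as prompt context.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://query.wikidata.org/sparql", agent="graph-rag-example")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    SELECT ?countryLabel ?capitalLabel WHERE {
      ?country wdt:P31 wd:Q6256 ;   # instance of: country
               wdt:P36 ?capital .   # capital
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
    } LIMIT 5
""")

results = sparql.query().convert()
facts = [
    f'{row["capitalLabel"]["value"]} is the capital of {row["countryLabel"]["value"]}.'
    for row in results["results"]["bindings"]
]
prompt = "Context:\n" + "\n".join(facts) + "\nQuestion: Which city is the capital of France?"
# response = llm.generate(prompt)  # placeholder for the actual LLM call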

2.2. Hybrid Retrieval Combining KGs and Dense Vectors

Contributors:

  • Daniel Burkhardt (FSTI)
  • Diego Collarana (FIT)
  • Daniel Baldassare (doctima)
  • Please add yourself if you want to contribute ...
  • ...

Draft from Daniel Burkhardt

Short definition/description of this topic: Hybrid Retrieval combines the strengths of knowledge graphs and dense vector representations to improve information retrieval. This approach leverages the structured, relational data from knowledge graphs and the semantic similarity captured by dense vectors, resulting in enhanced retrieval capabilities. Hybrid retrieval systems can improve semantic understanding and contextual insights while addressing challenges like scalability and integration complexity. 
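A minimal sketch of one possible hybrid scheme is shown below, assuming sentence-transformers for dense retrieval and networkx as a stand-in for the knowledge graph: dense search finds seed entities, and one-hop KG expansion adds linked documents. The entities, edges, and absence of any weighting are illustrative simplifications.

# Dense vector search finds seed entities; the KG expands them with one-hop neighbours.
import networkx as nx
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = {
    "Berlin": "Berlin is the capital of Germany and its largest city.",
    "Germany": "Germany is a country in Central Europe.",
    "Brandenburg Gate": "The Brandenburg Gate is an 18th-century monument in Berlin.",
}
names = list(documents)
embeddings = model.encode([documents[n] for n in names], convert_to_tensor=True)

# Tiny KG linking the documented entities.
kg = nx.Graph()
kg.add_edge("Berlin", "Germany")
kg.add_edge("Berlin", "Brandenburg Gate")

def hybrid_retrieve(query, k=1):
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, embeddings, top_k=k)[0]
    seeds = {names[hit["corpus_id"]] for hit in hits}
    expanded = set(seeds)
    for entity in seeds:
        expanded.update(kg.neighbors(entity))   # one-hop KG expansion
    return [documents[entity] for entity in expanded]

print(hybrid_retrieve("monuments in the German capital"))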

3. KG-Enhanced LLM Interpretability

Draft from Daniel Burkhardt

Short definition/description of this topic: KG-Enhanced LLM Interpretability refers to the use of knowledge graphs to improve the transparency and explainability of LLMs. By integrating structured knowledge from KGs, LLMs can generate more interpretable outputs, providing justifications and factual accuracy checks for their responses. This integration helps in aligning LLM-generated knowledge with factual data, enhancing trust and reliability.

3.1. Measuring KG Alignment in LLM Representations

Draft from Daniel Burkhardt

Short definition/description of this topic: This involves evaluating how well the representations generated by LLMs align with the structured knowledge in KGs. This alignment is crucial for ensuring that LLMs can accurately incorporate and reflect the relationships and entities defined in KGs, thereby improving the factuality and coherence of their outputs.

literature: https://arxiv.org/abs/2311.06503 , https://arxiv.org/abs/2406.03746, https://arxiv.org/abs/2402.06764
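One possible, heavily simplified probe of such alignment is sketched below: it assumes that, if a model's representations align with the KG, entity pairs connected in the KG should be closer in embedding space than unrelated pairs. The sentence-transformers model and the toy edge list stand in for an actual LLM embedding layer and KG.

# Compare embedding similarity of KG-connected entity pairs vs. unrelated pairs.
from itertools import combinations
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

connected_pairs = [("Berlin", "Germany"), ("Paris", "France")]   # edges taken from the KG
entities = sorted({e for pair in connected_pairs for e in pair})
unrelated_pairs = [
    p for p in combinations(entities, 2)
    if p not in connected_pairs and tuple(reversed(p)) not in connected_pairs
]

def mean_similarity(pairs):
    sims = [util.cos_sim(model.encode(a), model.encode(b)).item() for a, b in pairs]
    return sum(sims) / len(sims)

# A model aligned with the KG should score connected pairs noticeably higher.
print("connected pairs :", mean_similarity(connected_pairs))
print("unrelated pairs :", mean_similarity(unrelated_pairs))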

Contributors:

  • Daniel Burkhardt (FSTI)
  • Please add yourself if you want to contribute ...
  • Please add yourself if you want to contribute ..
  • Content ...
  • Content ...
  • Content ... 

3.2. KG-Guided Explanation Generation

Draft from Daniel Burkhardt

Short definition/description of this topic: KG-Guided Explanation Generation uses knowledge graphs to provide explanations for the outputs of LLMs. By leveraging the structured data and relationships within KGs, this approach can generate detailed and contextually relevant explanations, enhancing the interpretability and transparency of LLM outputs. 

literature: https://arxiv.org/abs/2312.00353, https://arxiv.org/abs/2403.03008
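A minimal sketch of one way to do this is shown below, assuming entity linking has already mapped the LLM answer to KG nodes and using networkx to find and verbalize a connecting path; the graph and relation labels are illustrative.

# Verbalize a KG path between entities mentioned in the answer as a justification.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("Aspirin", "Pain", relation="treats")
kg.add_edge("Aspirin", "Stomach irritation", relation="may cause")

def explain(source, target):
    path = nx.shortest_path(kg, source, target)
    steps = [f'{a} {kg.edges[a, b]["relation"]} {b}' for a, b in zip(path, path[1:])]
    return "; ".join(steps)

# Hypothetical LLM answer: "Aspirin can help with pain."
print("Explanation:", explain("Aspirin", "Pain"))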

Contributors:

  • Daniel Burkhardt (FSTI)
  • Please add yourself if you want to contribute ...
  • Please add yourself if you want to contribute ...
  • ... 


  • Content ...
  • Content ...
  • Content ... 

3.3. KG-Based Fact-Checking and Verification

Contributors:

  • Daniel Burkhardt (FSTI)
  • Please add yourself if you want to contribute ...
  • Please add yourself if you want to contribute ...
  • ... 

Draft from Daniel Burkhardt

Short definition/description of this topic: This involves using knowledge graphs to verify the factual accuracy of information generated by LLMs. By cross-referencing LLM outputs with the structured data in KGs, this approach can identify and correct inaccuracies, ensuring that the generated information is reliable and trustworthy. 

literature: https://arxiv.org/abs/2404.00942, https://aclanthology.org/2023.acl-long.895.pdf, https://arxiv.org/pdf/2406.01311
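A minimal sketch of such a check is shown below, assuming a claim triple has already been extracted from the LLM output by an upstream step; rdflib is used for the KG, and the example namespace and triples are placeholders.

# Check a claim extracted from an LLM output against the KG.
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/")
kg = Graph()
kg.add((EX.Berlin, EX.capitalOf, EX.Germany))

def is_supported(subject, predicate, obj):
    """True if the triple is asserted in the KG."""
    return (URIRef(subject), URIRef(predicate), URIRef(obj)) in kg

# Claim extracted upstream from the LLM output: "Munich is the capital of Germany."
claim = (str(EX.Munich), str(EX.capitalOf), str(EX.Germany))
print("supported by KG:", is_supported(*claim))   # False -> flag for correction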


  • Content ...
  • Content ...
  • Content ... 

4. KG-Enhanced LLM Reasoning

Draft from Daniel Burkhardt

Short definition/description of this topic: KG-Enhanced LLM Reasoning refers to the use of knowledge graphs to improve the reasoning capabilities of LLMs. By incorporating structured knowledge, LLMs can perform more complex reasoning tasks, such as multi-hop reasoning, where multiple pieces of information are connected to derive a conclusion.


  • Content ...
  • Content ...
  • Content ... 

4.1. KG-Guided Multi-hop Reasoning

Contributors:

  • Daniel Burkhardt (FSTI)
  • Daniel Baldassare (doctima)
  • Please add yourself if you want to contribute ...
  • ... 

Draft from Daniel Burkhardt

Short definition/description of this topic: This involves using knowledge graphs to facilitate multi-hop reasoning, where LLMs connect multiple entities and relationships to answer complex questions. This approach enhances the reasoning depth of LLMs by providing a structured path through interconnected data points in KGs.

literature: https://neo4j.com/developer-blog/knowledge-graphs-llms-multi-hop-question-answering/, https://link.springer.com/article/10.1007/s11280-021-00911-5
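A minimal sketch of KG-guided multi-hop retrieval is shown below, assuming networkx as the KG store and a placeholder LLM call; it walks a fixed number of hops from a linked entity and hands the collected facts to the model as an explicit reasoning chain.

# Walk the KG a fixed number of hops from a linked entity and pass the facts to the LLM.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("Marie Curie", "Pierre Curie", relation="spouse")
kg.add_edge("Pierre Curie", "Paris", relation="born in")

def hop_facts(start, hops=2):
    facts, frontier = [], [start]
    for _ in range(hops):
        next_frontier = []
        for node in frontier:
            for _, target, data in kg.out_edges(node, data=True):
                facts.append(f'{node} {data["relation"]} {target}')
                next_frontier.append(target)
        frontier = next_frontier
    return facts

question = "Where was Marie Curie's spouse born?"
context = "\n".join(hop_facts("Marie Curie"))
prompt = f"Facts:\n{context}\nQuestion: {question}\nAnswer step by step:"
# response = llm.generate(prompt)  # placeholder for the actual LLM call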

  • Content ...
  • Content ...
  • Content ... 

4.2. KG-Based Consistency Checking in LLM Outputs

Contributors:

  • Daniel Burkhardt (FSTI)
  • Daniel Baldassare (doctima)
  • Michael Wetzel (Coreon)
  • ... 

Draft from Daniel Burkhardt

Short definition/description of this topic: KG-Based Consistency Checking involves using knowledge graphs to ensure the consistency of LLM outputs. By comparing generated content with the structured data in KGs, this method can identify inconsistencies and improve the coherence of LLM-generated information.

literature: https://www.researchgate.net/publication/382363779_Knowledge-based_Consistency_Testing_of_Large_Language_Models
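A minimal sketch of one such check is shown below, assuming triples have already been extracted from the LLM output and that the KG marks certain properties as single-valued (functional); all data are illustrative.

# Flag LLM statements that contradict the KG on single-valued (functional) properties.
kg_facts = {("Germany", "capital"): "Berlin"}   # gold values from the KG
functional_properties = {"capital"}

# Triples assumed to be extracted from the LLM output by an upstream step.
llm_triples = [
    ("Germany", "capital", "Munich"),
    ("Germany", "capital", "Berlin"),
]

def find_inconsistencies(triples):
    issues = []
    for s, p, o in triples:
        expected = kg_facts.get((s, p))
        if p in functional_properties and expected is not None and expected != o:
            issues.append(f"'{s} {p} {o}' contradicts KG value '{expected}'")
    return issues

print(find_inconsistencies(llm_triples))   # flags the Munich statement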


  • Content ...
  • Content ...
  • Content ... 

5. KGs for LLM Analysis

5.1. Using KGs to Evaluate LLM Knowledge Coverage

Contributors:

  • Daniel Burkhardt (FSTI)
  • Daniel Baldassare (doctima)
  • Please add yourself if you want to contribute ...
  • ... 

Draft from Daniel Burkhardt

Short definition/description of this topic: This involves using knowledge graphs to analyze and evaluate various aspects of LLMs, such as knowledge coverage and biases. KGs provide a structured framework for assessing how well LLMs capture and represent knowledge across different domains, i.e., the extent to which the knowledge represented in KGs is covered by the model. By comparing LLM outputs with the structured data in KGs, this approach can identify gaps in knowledge and areas for improvement in LLM training and performance.

literature: https://www.amazon.science/publications/grapheval-a-knowledge-graph-based-llm-hallucination-evaluation-framework
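A minimal sketch of such a coverage probe is shown below: sampled KG triples are turned into completion-style questions and the share answered correctly is reported. The ask_llm function is a placeholder for a real model call, and the substring-match scoring is deliberately crude.

# Turn sampled KG triples into completion probes and report the share answered correctly.
kg_triples = [
    ("Berlin", "is the capital of", "Germany"),
    ("Paris", "is the capital of", "France"),
]

def ask_llm(probe: str) -> str:
    # Placeholder: replace with a real LLM call (API or local model).
    return ""

def coverage(triples):
    correct = 0
    for s, p, o in triples:
        probe = f"Complete the sentence with one entity: {s} {p} ..."
        answer = ask_llm(probe)
        if o.lower() in answer.lower():   # crude substring match as the scoring heuristic
            correct += 1
    return correct / len(triples)

print(f"estimated coverage: {coverage(kg_triples):.0%}")   # 0% with the empty placeholder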


  • Content ...
  • Content ...
  • Content ... 

5.2. Analyzing LLM Biases through KG Comparisons

Contributors:

  • Daniel Burkhardt (FSTI)
  • Daniel Baldassare (doctima)
  • Please add yourself if you want to contribute ...
  • ... 

Draft from Daniel Burkhardt

Short definition/description of this topic: This involves using knowledge graphs to identify and analyze biases in LLMs. By comparing LLM outputs with the curated, structured data in KGs, this approach can highlight biases and suggest ways to mitigate them, leading to fairer and more balanced AI systems.
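A minimal, heavily simplified sketch of such a comparison is shown below: the distribution of attributes an LLM associates with an occupation is compared against the distribution recorded in a KG. The counts, the prompt, and the ask_llm stub are all placeholders.

# Compare the attribute distribution an LLM associates with an occupation
# against the distribution recorded in the KG.
from collections import Counter

kg_distribution = {"nurse": Counter({"female": 70, "male": 30})}   # hypothetical KG counts

def ask_llm(prompt: str) -> str:
    # Placeholder: replace with a real LLM call.
    return "female"

def llm_distribution(occupation, samples=20):
    answers = [ask_llm(f"A typical {occupation} is (male/female):") for _ in range(samples)]
    return Counter(answers)

occupation = "nurse"
print("KG  distribution:", kg_distribution[occupation])
print("LLM distribution:", llm_distribution(occupation))   # a skewed answer set indicates bias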


  • Content ...
  • Content ... 

literature: https://arxiv.org/abs/2405.04756
