Three types of KG+LLM systems (Robert D. - SWC)

  1. Complementary solution:
    KGs and LLMs are used together to implement a use case; their complementary use significantly enhances performance, quality, and/or efficiency compared to using either technology alone.
    1. KG-based (Semantic or Graph) RAG (see the minimal sketch after this list)
  2. Improve LLMs by using KGs
    1. reduce hallucinations and factually incorrect output
    2. provide additional background knowledge
    3. ...
  3. Improve KGs by using LLMs
    1. improve the structure of KGs -> extend/augment the KG with new nodes/entities and relations
    2. improve the content -> add labels/descriptions based on other KG entities
    3. Context-aware translation of labels/literals between different languages
    4. Disambiguate KG entities
    5. LLMs for entity matching in different KGs
    6. LLMs for ontology engineering tasks (e.g., ontology design, ontology mapping/alignment, ...)
    7. ...
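
As a concrete illustration of item 1.1 above (KG-based RAG), here is a minimal sketch in Python. It assumes rdflib for the KG; `call_llm` is a placeholder for whatever chat-completion client is used, and the Turtle snippet is toy data, not a prescribed setup.

```python
# Minimal KG-based RAG sketch (item 1.1): retrieve facts about an entity from a
# small in-memory KG, verbalize them, and hand them to an LLM as grounding
# context. `call_llm` is a placeholder for any chat-completion client.
from rdflib import Graph, URIRef

EX = "http://example.org/"

TOY_TTL = """
@prefix ex: <http://example.org/> .
ex:Karlsruhe ex:locatedIn ex:Germany .
ex:Germany   ex:capital   ex:Berlin .
"""


def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call (OpenAI, Ollama, ...)."""
    raise NotImplementedError("plug in the LLM client of your choice")


def retrieve_facts(graph: Graph, entity: str) -> list[str]:
    """Collect all outgoing triples of `entity` and verbalize them naively."""
    facts = []
    for _, pred, obj in graph.triples((URIRef(EX + entity), None, None)):
        # Strip namespaces for a crude natural-language rendering.
        facts.append(f"{entity} {str(pred).split('/')[-1]} {str(obj).split('/')[-1]}")
    return facts


def kg_rag_answer(graph: Graph, entity: str, question: str) -> str:
    """Ground the answer in retrieved KG facts instead of parametric memory."""
    context = "\n".join(retrieve_facts(graph, entity))
    prompt = (
        "Answer the question using ONLY the facts below.\n"
        f"Facts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    kg = Graph()
    kg.parse(data=TOY_TTL, format="turtle")
    print(retrieve_facts(kg, "Karlsruhe"))
    # kg_rag_answer(kg, "Karlsruhe", "Which country is Karlsruhe in?")
```

The same pattern carries over to a real SPARQL endpoint; only `retrieve_facts` would need to change, e.g. to a query against the remote store.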



Based on the above version (Harald, Sven, Heike - FIZ Karlsruhe)

  1. Definition of vocabulary used 
    • Only focus on language (LLMs) vs. multi-modal models (language + images, audio, etc.)?

  2. Complementary solution
    1. Downstream tasks
      1. Question answering
      2. Fact checking
      3. Fake news detection
      4. Explainability
  3. Improve LLMs by using KGs
    1. KG-enhanced LLM training
      1. Integrating KGs into training objective
      2. Integrating KGs into LLM inputs (verbalize KG for LLM training)
      3. Integrating KGs by fusion modules
    2. Retrieval-augmented Generation (RAG)
      1. KG-guided retrieval mechanisms (Daniel B. (FSTI))
      2. Hybrid retrieval combining KGs and dense vectors (Daniel B. (FSTI)); sketch at the end of this outline
      3. KG-enhanced reranking of retrieved information (Daniel B. (FSTI))
    3. KG-enhanced LLM interpretability
      1. KGs for LLM probing
        1. KG-based analysis of attention patterns (Daniel B. (FSTI))
        2. Measuring KG alignment in LLM representations (Daniel B. (FSTI)) 
        3. KG-guided explanation generation (Daniel B. (FSTI))
        4. KG-based fact-checking and verification (Daniel B. (FSTI)); sketch at the end of this outline
    4. KG-enhanced LLM inference / reasoning
      1. KG-guided multi-hop reasoning (Daniel B. (FSTI))
      2. Integrating symbolic reasoning with LLMs using KGs (Daniel B. (FSTI))
      3. KG-based consistency checking in LLM outputs (Daniel B. (FSTI))
    5. KGs for LLM analysis
      1. Using KGs to evaluate LLM knowledge coverage (Daniel B. (FSTI))
      2. Analyzing LLM biases through KG comparisons (Daniel B. (FSTI))
  4. Improve KGs by using LLMs
    1. Assertional knowledge engineering
      1. Information Extraction
        1. KG completion (A-Box); extraction sketch at the end of this outline
          1. Link prediction
          2. Relation prediction
          3. Fact checking / Triple testing
          4. Literal completion (labels/comments/descriptions)
      2. Entity Linking (between KGs)
      3. Entity Disambiguation
    2. Terminological knowledge engineering
      1. Ontology Design
        1. Competency Question (CQ) generation
        2. User stories / personas generation
        3. Ontology learning (Automated ontology design from text)
      2. Ontology Evaluation
        1. Competency Question (CQ) generation (from given ontologies)
        2. CQ to SPARQL
      3. Ontology Mapping
      4. Ontology Documentation
        1. Class and relation descriptions/labels
    3. Reasoning
      1. Approximate/Probabilistic Reasoning via LLMs (LLM-supported)
      2. Constraint checking (Robert D.)
      3. Data Repairs (→ maybe move to completion?) (Robert D.)
    4. Downstream tasks
      1. KG/Ontology embeddings
    5. User interface / Access
      1. Natural Language interface to KG (NL-to-SPARQL sketch at the end of this outline)
      2. KG to natural language (verbalization)
      3. Multilingual translation of literals
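
Sketch for item 3.2.2 (hybrid retrieval combining KGs and dense vectors): verbalized KG triples are embedded with a dense encoder and ranked against the query by cosine similarity, so the symbolic and the vector side share a single scoring step. The model name and the toy triples are illustrative assumptions, not part of the outline.

```python
# Hybrid retrieval sketch (item 3.2.2): embed verbalized KG triples and rank
# them against the query with a dense encoder.
from sentence_transformers import SentenceTransformer, util

TRIPLES = [
    ("Karlsruhe", "locatedIn", "Germany"),
    ("FIZ Karlsruhe", "headquarteredIn", "Karlsruhe"),
    ("Germany", "capital", "Berlin"),
]


def verbalize(triple: tuple[str, str, str]) -> str:
    s, p, o = triple
    return f"{s} {p} {o}"


def rank_triples(query: str, triples, top_k: int = 2):
    """Return the top_k verbalized triples most similar to the query."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    corpus = [verbalize(t) for t in triples]
    corpus_emb = model.encode(corpus, convert_to_tensor=True)
    query_emb = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, corpus_emb)[0]
    ranked = sorted(zip(corpus, scores.tolist()), key=lambda x: x[1], reverse=True)
    return ranked[:top_k]


if __name__ == "__main__":
    for sentence, score in rank_triples("Where is FIZ Karlsruhe located?", TRIPLES):
        print(f"{score:.3f}  {sentence}")
```

A fuller hybrid setup would merge these similarity scores with results from structured KG lookup (e.g. entity-based SPARQL retrieval) before the reranking step in 3.2.3.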
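
Sketch for items 3.3.1.4 / 3.4.3 (KG-based fact-checking and consistency checking of LLM outputs): claims attributed to an LLM are expressed as triples and verified against a reference KG with SPARQL ASK queries. The claim-extraction step is assumed to have happened upstream; the graph and the claims are toy data.

```python
# KG-based fact-checking sketch (items 3.3.1.4 / 3.4.3): verify claimed triples
# against a reference KG using SPARQL ASK queries.
from rdflib import Graph

REFERENCE_TTL = """
@prefix ex: <http://example.org/> .
ex:Germany ex:capital ex:Berlin .
"""

CLAIMED_TRIPLES = [
    ("ex:Germany", "ex:capital", "ex:Berlin"),  # supported by the KG
    ("ex:Germany", "ex:capital", "ex:Munich"),  # not supported -> flag it
]


def check_claims(graph: Graph, claims):
    """Return each claim together with a boolean 'supported by the KG' flag."""
    results = []
    for s, p, o in claims:
        ask = f"PREFIX ex: <http://example.org/> ASK {{ {s} {p} {o} }}"
        results.append(((s, p, o), bool(graph.query(ask).askAnswer)))
    return results


if __name__ == "__main__":
    kg = Graph()
    kg.parse(data=REFERENCE_TTL, format="turtle")
    for claim, supported in check_claims(kg, CLAIMED_TRIPLES):
        print("SUPPORTED " if supported else "NOT IN KG ", claim)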
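
Sketch for item 4.1.1.1 (KG completion, A-Box): an LLM is prompted to extract candidate triples from free text as JSON, and the parsed triples are added to the graph. `call_llm` is a placeholder whose hard-coded reply only mimics a plausible model response for the example sentence; a real pipeline would add entity linking and validation.

```python
# KG completion sketch (item 4.1.1.1): LLM-based triple extraction from text,
# followed by adding the parsed triples to an rdflib graph.
import json

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")

EXTRACTION_PROMPT = """Extract (subject, predicate, object) triples from the text.
Return a JSON list of objects with keys "s", "p", "o".
Text: {text}
"""


def call_llm(prompt: str) -> str:
    """Placeholder: a real call would go to a chat-completion API."""
    return '[{"s": "Karlsruhe", "p": "locatedIn", "o": "Germany"}]'


def complete_kg(graph: Graph, text: str) -> Graph:
    """Add LLM-extracted triples to the graph (naive IRI minting, no linking)."""
    raw = call_llm(EXTRACTION_PROMPT.format(text=text))
    for t in json.loads(raw):
        # Real pipelines would do entity linking / disambiguation first (4.1.2, 4.1.3).
        graph.add((EX[t["s"]], EX[t["p"]], EX[t["o"]]))
    return graph


if __name__ == "__main__":
    g = complete_kg(Graph(), "Karlsruhe is a city located in Germany.")
    print(g.serialize(format="turtle"))
```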
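
Sketch for items 4.2.2.2 / 4.5.1 (CQ to SPARQL, natural-language interface to a KG): the LLM translates a question or competency question into SPARQL, which is then executed over the KG. Again, `call_llm` is a stub; its hard-coded reply stands in for a plausible model output over this toy schema.

```python
# NL-to-SPARQL sketch (items 4.2.2.2 / 4.5.1): let the LLM write a SPARQL query
# for a question, then execute it with rdflib.
from rdflib import Graph

TOY_TTL = """
@prefix ex: <http://example.org/> .
ex:Karlsruhe ex:locatedIn ex:Germany .
ex:Stuttgart ex:locatedIn ex:Germany .
"""

PROMPT = """Schema: ex:locatedIn links cities to countries, prefix ex: <http://example.org/>.
Write a SPARQL query that answers: {question}
Return only the query."""


def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call."""
    return ("PREFIX ex: <http://example.org/> "
            "SELECT ?city WHERE { ?city ex:locatedIn ex:Germany }")


def answer(graph: Graph, question: str) -> list[str]:
    """Translate the question to SPARQL via the LLM and run it on the graph."""
    sparql = call_llm(PROMPT.format(question=question))
    return [str(row.city) for row in graph.query(sparql)]


if __name__ == "__main__":
    kg = Graph()
    kg.parse(data=TOY_TTL, format="turtle")
    print(answer(kg, "Which cities are located in Germany?"))
```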