...

  • Diego Collarana (FIT)
  • Daniel Baldassare (doctima) – Lead
  • Michael Wetzel (Coreon)
  • Rene Pietzsch (ECC)
  • Alan Akbik (HU)


Problem statement

Large language models (LLMs) are typically trained with unsupervised methods on extensive datasets. Despite their impressive performance on a wide range of tasks, these models often lack the practical, real-world knowledge required for specific applications. Moreover, because domain-specific data is rarely part of the public datasets used for pre-training or fine-tuning LLMs, integrating knowledge graphs (KGs) becomes fundamental for injecting proprietary knowledge into LLMs, especially for enterprise solutions. To infuse this knowledge into LLMs during training, many techniques have been researched in recent years, resulting in three main state-of-the-art methods (Pan et al., 2024):

...

  • Daniel Burkhardt (FSTI)
  • Daniel Baldassare (doctima)
  • Fabio Barth (DFKI)
  • Max Ploner (HU)
  • Alan Akbik (HU)
  • ...

First Version: Automatic evaluation of LLMs is usually performed by comparing the model output against a desired reference result. The output can be evaluated using exact matching or similarity metrics, either n-gram-based (BLEU, ROUGE) or embedding-based (BERTScore). However, there are several reasons why KGs can be used in the evaluation to support or enhance these techniques; the sketch below illustrates the reference-based baseline that such KG-supported methods would build on.
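
As a rough illustration of these reference-based metrics, the following sketch scores one candidate answer against a reference with both an n-gram metric (ROUGE) and an embedding-based metric (BERTScore). The example strings and the `rouge-score` and `bert-score` packages are assumptions chosen for illustration, not part of the original text.

```python
# Minimal sketch of reference-based LLM evaluation.
# Assumed dependencies: pip install rouge-score bert-score
from rouge_score import rouge_scorer
from bert_score import score as bert_score

reference = "The Eiffel Tower is located in Paris, France."   # hypothetical gold answer
candidate = "The Eiffel Tower stands in Paris."               # hypothetical model output

# N-gram overlap: ROUGE-1 (unigram overlap) and ROUGE-L (longest common subsequence).
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, candidate)
print("ROUGE-1 F1:", rouge["rouge1"].fmeasure)
print("ROUGE-L F1:", rouge["rougeL"].fmeasure)

# Embedding-based similarity: BERTScore matches contextual token embeddings,
# so paraphrases score higher than they would under pure n-gram matching.
P, R, F1 = bert_score([candidate], [reference], lang="en")
print("BERTScore F1:", F1.mean().item())
```

Both metrics reward surface or semantic similarity to a single reference; they cannot check factual correctness against structured knowledge, which is precisely the gap a KG-supported evaluation aims to fill.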

...