Integrating Large Language Models and Knowledge Graphs to Implement Retrieval Augmented Generation (RAG)
- Michael DeBellis
- Jul 18, 2024
- 2 min read
Updated: Jan 31
Two of the biggest issues with using Large Language Models (LLMs) for mission-critical domains such as medicine are hallucinations and black-box reasoning. One way to address these issues is with an architecture known as Retrieval Augmented Generation (RAG). A RAG architecture replaces the broad but shallow knowledge of an LLM with a deep but narrow knowledge base focused on a specific domain. When using a RAG architecture, the system provides a level of certainty about the answer based on the semantic distance between the question and the relevant documents in the knowledge base. Both the question and the documents are modeled as vectors, and the distance is computed as the semantic distance between those vectors. If no document in the knowledge base is within the required semantic distance of the question, the RAG system returns a predefined answer stating that it can't answer the question. This prevents hallucinations. In addition, if one or more documents are within the required semantic distance, those documents are returned with the answer. This eliminates the problem of black-box reasoning.
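The thresholding logic described above can be sketched in a few lines of Python. This is a minimal illustration, not our actual implementation: the document vectors, threshold value, and fallback message are all hypothetical, and a real system would use an embedding model to vectorize the question and documents rather than hard-coded numbers.

```python
import math

# Toy document store: name -> (embedding vector, document text).
# The vectors here are made-up values for illustration only.
DOCS = {
    "doc1": ([0.9, 0.1, 0.0], "Text of the first domain document ..."),
    "doc2": ([0.0, 0.8, 0.6], "Text of the second domain document ..."),
}
THRESHOLD = 0.75  # minimum cosine similarity to trust a match (hypothetical)
FALLBACK = "I cannot answer that question from the knowledge base."

def cosine(a, b):
    """Cosine similarity: higher means smaller semantic distance."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(question_vec):
    """Return (best matching text, source names), or the fallback answer
    when no document is within the required semantic distance."""
    hits = [(name, cosine(question_vec, vec), text)
            for name, (vec, text) in DOCS.items()]
    hits = [h for h in hits if h[1] >= THRESHOLD]
    if not hits:
        # Nothing close enough: refuse rather than hallucinate.
        return FALLBACK, []
    hits.sort(key=lambda h: h[1], reverse=True)
    # The matched documents are returned with the answer, so the user
    # can inspect the evidence (no black-box reasoning).
    return hits[0][2], [name for name, _, _ in hits]
```

A question vector close to `doc1` (e.g. `[0.95, 0.05, 0.0]`) returns that document as a source, while an unrelated vector (e.g. `[0.0, 0.0, 1.0]`) falls below the threshold and triggers the predefined refusal.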
Typically, RAG systems are implemented using relational databases. In our project, we implemented a RAG system using a knowledge graph, taking advantage of new technology from AllegroGraph that integrates with ChatGPT. This gives the user many additional capabilities to further explore the knowledge graph and find additional information. This work is described in a paper we wrote for the journal Applied Ontology. In addition, I developed a presentation describing our work for a workshop on LLM and ontology integration at FOIS 2024. The recording below was created for the workshop and describes the project to date (July 2024).
Really cool, thank you for sharing! Have you considered using KAG instead of RAG for this implementation? And if so, why did you choose RAG over KAG?
How can we incorporate SWRL with RAG, and as a next step, how could SWRL generate a visual of the product form?