Deep Narrative Analysis’ (DNA’s) long-term goals will be achieved through research and experimentation, taking small steps and refining our designs. Our ultimate goal is a tool that compares narratives and news articles, and indicates where they align and diverge, which events are mentioned (or omitted), and what words are used. This, in turn, can be used to determine whether a news article is biased, or whether certain themes are consistent across a set of narratives and articles.
To achieve this long-term goal, we need to transform the text of a news article or narrative into a semantically rich, machine-processable format. Our choice for that format is a knowledge graph. The transformation uses syntactic and semantic processing to incrementally parse the text and then output the results as an RDF encoding (examples are included in this post).
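As a sketch of what such an RDF encoding might look like (the prefixes, class names, and properties below are illustrative placeholders, not DNA’s actual vocabulary), a sentence such as “Acme Corp. acquired Beta Inc. in 2020” could be rendered in Turtle roughly as:

```turtle
@prefix :     <urn:example:instances:> .
@prefix dna:  <urn:example:dna:> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# The event itself, typed using a (hypothetical) event class
:Event_1 a dna:AcquisitionEvent ;
    dna:has_active_agent   :Acme_Corp ;   # who performed the action
    dna:has_affected_agent :Beta_Inc ;    # who was acted upon
    dna:has_time           :Year_2020 .   # when it occurred

:Acme_Corp a dna:Organization ; rdfs:label "Acme Corp." .
:Beta_Inc  a dna:Organization ; rdfs:label "Beta Inc." .
:Year_2020 a dna:PointInTime  ; rdfs:label "2020" .
```

Representing the event as a first-class node (rather than a single triple) is what lets the graph attach time, participants, and other context to it, and later compare whether two articles mention the same event.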
A previous blog post discussed the design of the DNA Ontology and its focus on events and situations (typically expressed as verbs). The ontology is designed to be general and flexible. So, the OntoInsights team sought to evaluate its usability in a completely different domain … capturing and fusing Environmental, Social and Governance (ESG) data.
To this end, we created and hosted a sample knowledge graph as part of the Hanken Quantum Hackathon 2021. The graph fused information about 1,900+ companies (including their industries, profits, and environmental impacts) with data about the countries where they are headquartered.
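To illustrate the kind of question such a fused graph can answer (the prefix and property names here are hypothetical, not the hackathon graph’s actual schema), a SPARQL query might join company-level ESG measures to country-level data through the headquarters link:

```sparql
PREFIX : <urn:example:esg:>

# Hypothetical query: companies, their CO2 emissions, and the
# population of the country where each is headquartered
SELECT ?company ?emissions ?country ?population
WHERE {
    ?company a :Company ;
             :co2_emissions    ?emissions ;   # company-level ESG measure
             :headquartered_in ?country .     # link to the country entity
    ?country :population       ?population .  # country-level data
}
ORDER BY DESC(?emissions)
```

The value of fusing the two data sets is exactly this kind of traversal: a single query can relate facts that originated in entirely separate sources.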
This post explores our experiences in creating that knowledge graph.
Over the last few years, there has been an ongoing, vigorous debate about the future of artificial intelligence (AI) and machine learning (ML), and what needs to be developed next. The debate comes down to using machine learning technologies alone (based on various mathematical models performing correlation and pattern analysis) versus combining machine learning with “classical AI” (i.e., rules-based and expert systems). (Note that no one argues that rules-based systems alone are enough!) You can read about these debates in numerous articles (such as in the MIT Technology Review, ZDNet’s summary of the December 2020 second debate, and Ben Dickson’s TechTalks).
Given my focus on knowledge engineering, I tend to land on the side of the “hybrid” approach (spearheaded by Gary Marcus in the debates), which combines ML and classical AI, and then I add ontologies (to provide formal descriptions of the semantics of things, their relationships, and rules).