Plinius, a knowledge worker's assistant

I worked on a project to process scientific articles to help researchers. To quote myself:

The main goal of the Plinius project is to build a system that semi-automatically extracts knowledge from scientific abstracts and stores it in a knowledge base. [...] A system that contains the knowledge rather than the text of abstracts of a domain could answer a researcher’s questions directly, instead of retrieving all abstracts that mention the subject at hand.

For the processing of abstracts, the Plinius project has to deal with two important processes: the interpretation of natural language, and the maintainance [sic] of a knowledge base. In processing natural language, the knowledge base will be used to limit the number of possible interpretations of a piece of text. As a result the knowledge base has to be updated with the knowledge acquired from that piece of text.

That is how I described the Plinius project, named after the first encyclopedist1, in my Master of Science thesis2 of 1993. It mentioned two main challenges:

  • Natural language processing. The project focused on abstracts of scientific articles. These usually don't contain humour, sarcasm, and so forth. This made it easier to deal with the often messy and ambiguous nature of natural language.
  • Automated reasoning. The project chose a very technical domain to make it easier to manage the ontology and logic needed to derive new knowledge, prove statements, and find contradictions.
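The kind of reasoning the project needed can be sketched in a few lines. The example below is purely illustrative, not from the actual Plinius system: it forward-chains over (subject, relation, object) facts using the transitivity of part-of, and flags contradictions by pairing a relation with a hypothetical "not-" negated form. The domain terms are made up for the example.

```python
def derive(facts):
    """Forward chaining: part-of is transitive, so derive new part-of facts
    until no more can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(facts):
            for (c, r2, d) in list(facts):
                if r1 == r2 == "part-of" and b == c and (a, "part-of", d) not in facts:
                    facts.add((a, "part-of", d))
                    changed = True
    return facts

def contradictions(facts):
    """A fact contradicts the knowledge base if its negation is also present."""
    return {(s, r, o) for (s, r, o) in facts if (s, "not-" + r, o) in facts}

kb = {
    ("grain-boundary", "part-of", "microstructure"),
    ("microstructure", "part-of", "ceramic-material"),
    ("alumina", "not-part-of", "glass"),
    # a conflicting statement, e.g. extracted from another abstract:
    ("alumina", "part-of", "glass"),
}
kb = derive(kb)
print(("grain-boundary", "part-of", "ceramic-material") in kb)  # True: derived
print(contradictions(kb))  # the alumina/glass conflict
```

Even this toy version shows the two sides of the challenge: deriving new statements is mechanical, but deciding what to do with a contradiction requires a stance on which world view to trust.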

These days, many projects emerge with a similar promise: to automatically surface new knowledge. Today's AI (Large Language Models, or LLMs) covers the first challenge quite well and can process messy and ambiguous text.

Back in 1993, I ran into problems when doing automated reasoning across world views that are not compatible. Current AI models generate text without even noticing or mentioning such incompatibilities: they are trained to settle on a middle ground.

LLMs are good at summarising. But not at sense-making: finding and navigating divergence in world views and opinions.

I don't really know how current tools build on the decades of progress in semantic reasoning and automated theorem proving. I see the tools "plan their steps", but those steps seem to be generated by an LLM.

AI tools today help me enrich and navigate my Personal Knowledge Management system. But at its core, that system is still a collection of text notes. The tools help me find connections, not understanding.

Back in 1993, I saw the Plinius knowledge base as a shared product: it would help a whole field of researchers to find new knowledge in a growing stream of research papers.

Today, I'd rather have a personal knowledge assistant. A tool to help me build and test solid argumentation and work with competing world views that are valid to me. It should help me find sources to make sense of things, and to articulate and interrogate divergent conclusions.

I don't want the AI to reason for me, I want the AI to sharpen my reasoning.


  1. Gaius Plinius Secundus, or Pliny the Elder, wrote the first encyclopedia, covering a vast array of topics on human knowledge and the natural world. He is also known as an eye-witness reporter and victim of the eruption of Mount Vesuvius that buried Pompeii. 

  2. Kleef, R. (1993). The part-of relation in the Plinius ontology [Master's thesis]. University of Twente.