Menno Mafait mafait

Introduction

Everything in nature is bound by natural laws and proceeds according to those laws. However, scientists are unable – or unwilling – to define intelligence as a set of natural laws (while I have succeeded in doing so). Not being based on laws of nature, Artificial Intelligence (AI) is not an artificial implementation of natural intelligence. Therefore, AI is not a fundamental science, but a field of engineering.

A fundamental science delivers generic solutions, while a field of engineering is limited to specific solutions to specific problems. And engineered solutions are limited to performing routine tasks. So, being a field of engineering, AI is limited to performing routine tasks.

However, it is possible to uplift this field of engineering towards a fundamental science, similar to the field of electromagnetism, which is based on laws of nature. Thanks to research on the natural laws of electromagnetism, we are able to close the loop for electricity, magnetism, light and movement. As a result, we are able to apply these conversions to daily life. We are able:

  • to convert electricity to magnetism, and to convert magnetism back to electricity;

  • to convert electricity to light, and to convert light back to electricity;

  • to convert electromagnetism to movement, and to convert movement back to electromagnetism.

 

I am using fundamental science / basic research (logic and laws of nature) instead of cognitive science (simulation of behavior), because:

  • Autonomous reasoning requires both natural intelligence and natural language;

  • Intelligence and language are natural phenomena;

  • Natural phenomena obey laws of nature;

  • Laws of nature and logic are investigated using fundamental science.

 

By defining intelligence as a set of natural laws – and researching the laws of nature involved with intelligence in grammar – I am able to close the loop for natural intelligence and natural language, which extends Aristotelian Logic. As a result, my system is able:

  • to convert readable sentences – with a limited grammar – to a logic that has not yet been described by scientists;

  • to autonomously derive new – previously unknown – knowledge from the given knowledge, using my extended logic;

  • and to express the derived knowledge in readable sentences – with a limited grammar – that are constructed autonomously, word by word.

 

The logical rules of my autonomous reasoner are (almost) language-independent. So, I can add any language simply by configuring my reasoner for that new language, plus a little programming. As such, my reasoner is already able to read, to autonomously reason, and to autonomously write the derived knowledge in English, Spanish, French, Dutch and Chinese, while scientists are unable to develop a proper multilingual reasoner.
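A minimal sketch of this idea – purely illustrative, with made-up word tables rather than my reasoner's actual language configuration – shows how the same logical result can be expressed in different languages by swapping only the surface words:

```python
# Toy sketch: the logic stays language-independent; only the surface words
# differ per language. The word tables below are illustrative assumptions,
# not the reasoner's actual configuration.

LEXICON = {
    "English": {"is": "is",  "a": "a"},
    "Spanish": {"is": "es",  "a": "un"},
    "French":  {"is": "est", "a": "un"},
}

def express(subject, category, language):
    """Render the derived fact 'subject is a category' in the chosen language."""
    words = LEXICON[language]
    return f"{subject} {words['is']} {words['a']} {category}."

print(express("Socrates", "man", "English"))     # Socrates is a man.
print(express("Sócrates", "hombre", "Spanish"))  # Sócrates es un hombre.
print(express("Socrate", "homme", "French"))     # Socrate est un homme.
```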

Through this article, I defy anyone to beat the simplest results of my Controlled Natural Language (CNL) reasoner in a generic way: from natural language, through algorithms, back to natural language. No knowledge is programmed upfront, and no dialogues are written ahead.


My CNL reasoner is published as open source software.

Autonomous reasoning

Autonomous reasoning requires both natural intelligence and natural language. Without knowing it, Aristotle applied natural intelligence to natural language roughly 2,400 years ago:

 

> Given: “All men are mortal.”
> Given: “Socrates is a man.”

• Logical conclusion:
< “Socrates is mortal.”

 

In the nineteenth century, such reasoning constructions were formalized through Predicate Logic. And since the start of this century, these reasoning constructions have been implemented in software through Controlled Natural Language (CNL) reasoners. CNL reasoners are able to autonomously derive new – previously unknown – knowledge from the given knowledge, and to express the derived knowledge in readable sentences (with a limited grammar).
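As a minimal sketch of such a reasoning step – in Python, purely illustrative, and not how my CNL reasoner is actually implemented – the general rule and the specific fact can be combined as follows:

```python
# Toy sketch of an Aristotelian reasoning step (illustrative only).
# A general rule ("All men are mortal.") is applied to a specific fact
# ("Socrates is a man.") to derive a new readable sentence.

rules = {"man": "mortal"}             # "All men are mortal."
facts = [("Socrates", "is", "man")]   # "Socrates is a man."

derived = []
for subject, verb, category in facts:
    if category in rules:
        derived.append(f"{subject} {verb} {rules[category]}.")

print(derived)   # ['Socrates is mortal.']
```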

 

Problem description 1: Reasoning in the past tense

The reasoning example mentioned above was true during the life of Socrates. But now, after the ultimate proof of his mortality – his death in the year 399 BC – we should use the past tense form:

 

> Given: “All men are mortal.”
> Given: “Socrates was a man.”

• Logical conclusion:
< “Socrates was mortal.”

 

The tense of a verb tells us about the state of the involved statement:

  • “Socrates is a man” tells us that Socrates is still alive;

  • “Socrates was a man” tells us that Socrates is no longer among the living.

 

In regard to the conclusion:

  • “Socrates is mortal” tells us that the death of Socrates is inevitable, but that his mortality isn't proven yet by hard evidence;

  • “Socrates was mortal” tells us that his mortality is proven by hard evidence.


In the past 2,400 years, scientists have "forgotten" to define an algebra for the past tense. So, reasoning in the past tense is not described in any scientific paper, while it is implemented in my CNL reasoner.
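A minimal sketch of the idea – purely illustrative, not my actual implementation – is to carry the tense of the specific premise over into the conclusion:

```python
# Toy sketch of reasoning in the past tense (illustrative only).
# The tense of the verb in the specific premise ("is" or "was")
# is carried over into the derived conclusion.

rules = {"man": "mortal"}   # "All men are mortal."

def conclude(subject, verb, category):
    """Derive a conclusion that keeps the tense of the given premise."""
    if category in rules:
        return f"{subject} {verb} {rules[category]}."
    return None

print(conclude("Socrates", "is", "man"))   # Socrates is mortal.
print(conclude("Socrates", "was", "man"))  # Socrates was mortal.
```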

 

Problem description 2: Possessive reasoning

Possessive reasoning – reasoning using the possessive verb “has/have” – is also not supported by predicate logic (algebra):

 

> Given: “Paul is a son of John.”

• Logical conclusion:
< “John has a son, called Paul.”

 

Or the other way around:

> Given: “John has a son, called Paul.”

• Logical conclusion:
< “Paul is a son of John.”


Why doesn't predicate logic (algebra) support possessive reasoning in a natural way? Why should any predicate – that can’t be expressed using the verb “is/are” in the present tense – be described in an artificial way, like has_son(john,paul)? Why is algebra still not equipped for natural language, after all those centuries of scientific research?
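A minimal sketch of such a bidirectional rewrite – purely illustrative, with assumed sentence patterns rather than my reasoner's actual grammar handling – looks like this:

```python
import re

# Toy sketch of possessive reasoning as a bidirectional rewrite
# between an "is ... of" sentence and a "has ..., called" sentence
# (illustrative only; the sentence patterns are assumptions).

def is_to_has(sentence):
    """'Paul is a son of John.' -> 'John has a son, called Paul.'"""
    m = re.fullmatch(r"(\w+) is a (\w+) of (\w+)\.", sentence)
    if m:
        child, relation, parent = m.groups()
        return f"{parent} has a {relation}, called {child}."
    return None

def has_to_is(sentence):
    """'John has a son, called Paul.' -> 'Paul is a son of John.'"""
    m = re.fullmatch(r"(\w+) has a (\w+), called (\w+)\.", sentence)
    if m:
        parent, relation, child = m.groups()
        return f"{child} is a {relation} of {parent}."
    return None

print(is_to_has("Paul is a son of John."))       # John has a son, called Paul.
print(has_to_is("John has a son, called Paul.")) # Paul is a son of John.
```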

 

Problem description 3: Generation of questions

Algebra describes the Exclusive OR (XOR) function, while CNL reasoners don't implement its linguistic equivalent: the conjunction “or”. CNL reasoners are therefore unable to generate the following question:

 

> Given: “Every person is a man or a woman.”

> Given: “Addison is a person.”

• Logical question:

< “Is Addison a man or a woman?”
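A minimal sketch of this question-generating step – purely illustrative, not my actual implementation – could look like this:

```python
# Toy sketch of generating a question from an exclusive-or specification
# (illustrative only). Because Addison's category is specified as an
# exclusive choice and no answer is known yet, a question is produced.

specification = {"person": ["man", "woman"]}   # "Every person is a man or a woman."
facts = [("Addison", "person")]                # "Addison is a person."
known = {}                                     # no answer is known yet for Addison

questions = []
for name, category in facts:
    if category in specification and name not in known:
        options = " or a ".join(specification[category])
        questions.append(f"Is {name} a {options}?")

print(questions)   # ['Is Addison a man or a woman?']
```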

 

Scientific challenge

Everything in nature is connected. However, it seems as if linguists and mathematicians have never wondered how their respective fields are connected, because almost 2,400 years after Aristotle, scientists haven't made any progress in understanding the logic of language.

 

In my experience, linguists are willing to research the logic of language, but mathematicians aren't. Mathematicians think that logic is about numbers. They are wrong. Logic is about natural intelligence. And laws of intelligence are found in, for example, natural language.

 

Like a programming language, natural language also has structure words and variables. The structure words of language – which in this article are printed in blue – have a naturally intelligent, logical, structure-providing function in language. The following structure words of language are illustrated on my website, including the corresponding algorithms:

 

The possessive verb “has/have” (Block 1, Block 2 and Block 3), the past tense verbs “was/were” and “had” (Block 4), the conjunction “or” (Block 5) and the definite article “the” (Block 6).

 

I defy anyone to beat the simplest results of my Controlled Natural Language (CNL) reasoner in a generic way: from natural language, through algorithms, back to natural language. No knowledge is programmed upfront, and no dialogues are written ahead.

Abhi Nandan AbhizNandan
Great post! You explain very well. Thanks for sharing.
Archana Yadav ArchanaYadav
Very Informative! You explain very nicely and cover each and everything to the point. Thanks for sharing!