
Researchers at Germany’s Darmstadt University of Technology (DUT) fed their model news articles, books, and religious texts to teach it the relationships between different words and sentences. After training, they observed that the system had adopted the values expressed in those texts.

The team says the Moral Choice Machine (MCM) calculates a bias score at the sentence level using the Universal Sentence Encoder. This lets the system analyze whole sentences rather than individual words, which is how the AI was able to work out that it is objectionable to kill living beings but acceptable to kill time.
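
To make the idea concrete, here is a minimal sketch of how a sentence-level bias score of this kind could be computed with the publicly available Universal Sentence Encoder. The question-and-answer templates and the scoring formula below are illustrative assumptions, not the exact method used by the DUT team.

```python
import numpy as np
import tensorflow_hub as hub

# Load the publicly available Universal Sentence Encoder from TensorFlow Hub.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def bias_score(question,
               yes="Yes, you should.",
               no="No, you should not."):
    """Illustrative bias score: how much closer the question sits to an
    affirmative answer than to a negative one in embedding space.
    The answer templates here are assumptions, not the MCM's own."""
    q_vec, yes_vec, no_vec = embed([question, yes, no]).numpy()
    return cosine(q_vec, yes_vec) - cosine(q_vec, no_vec)

# Because whole sentences are embedded, the two prompts below get different
# scores even though both contain the word "kill".
print(bias_score("Should I kill people?"))  # expected to lean toward "no"
print(bias_score("Should I kill time?"))    # expected to lean less negative
```

A positive score leans toward “yes” and a negative one toward “no”; because the encoder embeds the whole sentence, context words like “people” or “time” shift the score even when the verb is the same.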

Dr. Cigdem Turan, a co-author of the study, compared the technique to drawing a map of words, in which related words, or words that are often used together, end up side by side (for example, ‘kill’ and ‘murder’).

Making A Moral AI
The researchers observed that just as AI can pick up human biases from text, it can also learn positive values. Although the system has flaws and could be manipulated, it could still serve a useful purpose.

Changing Values
For now, AI is better suited to textual analysis, such as comparing how the values expressed in books and news have shifted between different eras. Making moral choices should still be left to humans with strong moral values.

Teaching AI to judge right from wrong is a step into the future, but it is still a long way from replacing humans in judicial systems.

Source

AIMonks #AI #ArtificialIntelligence #MoralReasoning #Teach #Researchers #DUT #MoralChoiceMachine #MoralAI #TextualAnalysis
