Explainable AI, Ethics and Regulation: enabling people to understand, trust and effectively manage it

Intro. Do you trust the engineers who are building AI software? How can errors in that software affect your life? If one day you are hit by a self-driving car while crossing the street, would you want to know why the algorithm chose to hit you rather than harm the car's passengers? Data is already regulated, and the GDPR comes into force in May, but how companies and countries deploy artificial intelligence is not regulated at all.

Applied AI. The AI I am referring to is not superintelligence; it is not yet self-aware or able to learn from its findings and adapt. In common understanding, AI is a set of technologies such as machine learning, natural language processing and expert systems for planning, scheduling or optimisation, and the boundaries of what counts as AI keep shifting.

Use cases. These technologies already rank your search results, unlock your phone with Face ID, suggest videos to watch on YouTube and draft smart replies to your email. They are also used in medicine to identify tumours, in security for facial recognition and behaviour analysis, and to judge whether you will be able to repay a loan or are likely to commit a crime. Doctors, accountants and judges receive information from AI systems and treat it as if it came from a trustworthy colleague. But the creators of these products are not always good at explaining how they work, or even able to. And the issue is not just how often AI gets it wrong, but how badly, and how easily, it can get something wrong.

Training. Machine learning is the core technology. Its success has led to an explosion of AI applications, but its learned models are hard for people to understand. Imagine you are teaching a child to recognise Spider-Man: you show him examples, telling him, "This is Spider-Man. That is not Spider-Man," until the child learns the concept. If the child then sees something he has never seen before, Batman perhaps, we cannot expect him to judge correctly whether it is Spider-Man or not. Classification is the core of machine learning, and machine learning is everywhere in our lives: email, games, phones, houses.
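
In code, that "show examples until the concept is learned" loop is ordinary supervised classification. Here is a minimal sketch using scikit-learn; the feature vectors and the labelling rule are entirely made up to stand in for real images:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: each row stands in for features extracted
# from one image; each label says Spider-Man (1) or not (0).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))          # 200 example "images", 16 features
y_train = (X_train[:, 0] > 0).astype(int)     # toy rule standing in for the concept

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)                     # "This is Spider-Man, that is not..."

# A new, unseen example (Batman, say): the model still has to answer 0 or 1,
# even though it has never seen anything like it before.
x_new = rng.normal(size=(1, 16))
print(clf.predict(x_new), clf.predict_proba(x_new))
```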

Errors. But it can easily make mistakes. Task a machine learning system with classifying the animal in a photo as either a husky or a wolf: can you be certain of the result? What might it use to tell them apart, ears, eyes? In one well-known case, the algorithm misclassified the animal because of the snow in the background, since most of the wolf pictures it was trained on were taken in snow. Not very intelligent behaviour, is it? The data set fed to the algorithm was biased.
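
The snow effect is easy to reproduce with synthetic data. In this sketch (all feature names are invented for illustration), a "snow" background feature is almost perfectly correlated with the wolf label during training; the model learns the shortcut, and its accuracy collapses once that correlation is removed:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000
y = rng.integers(0, 2, size=n)                 # 1 = wolf, 0 = husky
# Weak "animal" features that genuinely distinguish the classes
animal = y[:, None] * 0.3 + rng.normal(size=(n, 5))
# "Snow" feature: almost perfectly correlated with wolves in training
snow = (y + rng.normal(scale=0.1, size=n))[:, None]
X_train = np.hstack([animal, snow])

clf = LogisticRegression(max_iter=1000).fit(X_train, y)

# Test set where snow is random: wolves on grass, huskies in snow
y_test = rng.integers(0, 2, size=n)
animal_t = y_test[:, None] * 0.3 + rng.normal(size=(n, 5))
snow_t = rng.normal(size=(n, 1))               # correlation gone
X_test = np.hstack([animal_t, snow_t])

print("train accuracy:", clf.score(X_train, y))       # looks excellent
print("test accuracy: ", clf.score(X_test, y_test))   # the shortcut stops working
```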

Bias. It is important to be transparent about the training data used and to keep looking for hidden biases in it; otherwise we are building biased systems. Recent research exposed that automated facial analysis algorithms used by major tech companies had been trained on datasets composed overwhelmingly of white faces. As a result, the models showed an error rate of at most 1% for light-skinned male faces, while the error rate for darker-skinned female faces shot up to 30%. One of the most famous illustrations of how quickly human bias can shape a system's behaviour is Microsoft's customer service chatbot on Twitter: it took only a day for it to develop a Nazi persona and produce thousands of hate tweets. In US courts, a product called COMPAS is used to assess the risk that a defendant will commit further crimes. A report concluded that black defendants were more likely than white defendants to be incorrectly judged at higher risk of recidivism.
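
One concrete defence is to audit error rates per subgroup instead of reporting a single overall accuracy. A minimal sketch; the group names, predictions and the 1% / 30% figures below are only illustrative stand-ins mimicking the disparity described above:

```python
import numpy as np

def error_rate_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each demographic group."""
    return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}

# Hypothetical audit data
rng = np.random.default_rng(1)
groups = np.array(["light_male"] * 500 + ["dark_female"] * 500)
y_true = rng.integers(0, 2, size=1000)
y_pred = y_true.copy()
flip = np.concatenate([rng.random(500) < 0.01,     # ~1% errors for one group
                       rng.random(500) < 0.30])    # ~30% for the other
y_pred[flip] = 1 - y_pred[flip]

print(error_rate_by_group(y_true, y_pred, groups))
# A single overall accuracy number would hide this ~30x gap.
```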

Black box. The company that built COMPAS says its formula is a trade secret. It sees the algorithms as the key to its product and will not reveal them, because they are a core piece of its business. But humans are not perfect; they are subjective and too often corrupted. And if someone builds a system, someone else will find a way to cheat it.

The gaps in AI understandability present a window of opportunity for those who would abuse the technology for malicious or self-serving purposes. By perturbing input images with noise amounting to just a few percent, researchers were able to trick neural networks into misclassifying objects with a success rate of 97 percent. The tricked algorithms identified a panda as a gibbon and a school bus as an ostrich, or made a neural network decide that the Queen of England was wearing a shower cap on her head, not a crown. Now imagine tricking the systems responsible for granting a loan, prescribing medication or driving a car. Would you want the public to be able to review the code of the algorithm in a car that will have to decide between hitting a group of kids and hitting a brick wall? The same algorithm that has to tell a shopping cart from a baby carriage?
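
The panda-to-gibbon trick is the classic fast gradient sign method (FGSM): nudge every pixel a tiny step in the direction that increases the model's loss. Here is a hedged PyTorch sketch; the tiny stand-in network, input size and epsilon value are placeholders, not the original published experiment, which used a large ImageNet classifier:

```python
import torch
import torch.nn as nn

# Stand-in classifier; the published attacks targeted large ImageNet models.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

def fgsm(image, label, epsilon=0.03):
    """Fast gradient sign method: a perturbation of a few percent of pixel range."""
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss, then clamp.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()

x = torch.rand(1, 3, 32, 32)   # a hypothetical input image
y = torch.tensor([0])          # its true class
x_adv = fgsm(x, y)
print((x_adv - x).abs().max())                    # imperceptible change...
print(model(x).argmax(), model(x_adv).argmax())   # ...possibly a new prediction
```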

Understanding. How can we trust that an algorithm is doing what we think it is doing? How can we understand and predict its behaviour? How can we improve it or correct its mistakes?

The current approach takes the training data, runs it through a learning process and produces an output: a probability that there is, let's say, a cat in the photo. But what we need is to understand why the system identified a cat and not something else, whether we can trust it, and how to correct an error if necessary. To achieve that, a new learning process should be introduced, along with an explainable model and an explanation interface that can state, for example: it is a cat because it has fur, whiskers and claws. The main idea is not to explain the whole decision surface but to determine which parts of the image the classifier actually uses for its prediction, e.g. whether it is looking only at the ground to differentiate between a wolf and a husky while the rest of the picture does not matter.
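
One simple way to ask "which parts of the image does the classifier actually use?" is occlusion sensitivity: slide a grey patch across the image and record how much the predicted probability drops. This is a generic sketch of that idea, not the specific explainable-model pipeline described above; `predict_proba` and the dummy model are assumptions made so the example runs end to end:

```python
import numpy as np

def occlusion_map(image, predict_proba, target_class, patch=8):
    """Probability drop when each patch-sized region is greyed out.

    `predict_proba` is any function mapping an HxWxC image to class
    probabilities; the model behind it is assumed, not specified here.
    """
    h, w, _ = image.shape
    base = predict_proba(image)[target_class]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch, :] = 0.5   # grey patch
            heat[i // patch, j // patch] = base - predict_proba(occluded)[target_class]
    return heat  # large values mark regions the classifier relies on

# Dummy stand-in classifier: "probability" of class 0 rises with the mean
# brightness of the top half of the image.
def dummy_proba(img):
    p = img[: img.shape[0] // 2].mean()
    return np.array([p, 1 - p])

img = np.random.default_rng(0).random((32, 32, 3))
print(occlusion_map(img, dummy_proba, target_class=0))
```

With a husky/wolf model, a heat map concentrated on the snowy ground rather than on the animal would expose exactly the bias described earlier.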

Everyone is responsible. Morality, ethics and legality have long been the study of academics. The time for study is over. The work done right now will be the foundation on which future generations interact with these technologies. We cannot account for every misuse, but we do have the chance to steer the technologies in a direction that minimises harm. When you build powerful things, you have a responsibility to consider how they could be used.

Ethics needs to remain top of mind, and there needs to be a deeper understanding of how AI works and how to protect people from its inevitable impacts. Finally, we need to support the advocates and regulators currently calling for transparency and accountability.

The US defence research agency DARPA is overseeing an explainable artificial intelligence (XAI) initiative with $75 million in funding. The international standards organisation ISO has set up a committee to decide what to do about AI systems, but it is an estimated five years away from producing standards. Yet the systems are being built and used now.

The next great leap in technology will be a program capable of consciousness, able to genuinely think, and we will get there; there is no doubt. It will be more impactful than the invention of the computer. We need to learn as much as possible before it arrives, because the only question left is when. What we should fear is not killer robots but intellectual laziness, and ourselves. We might be able to solve our greatest challenges, but we need to lead, not follow.
