To Make Fairer AI, Physicists Peer Inside Its Black Box

While their understanding of algorithmic error is still in its early stages, their ultimate goal is to help develop a clear and accurate way to communicate an algorithm’s margin of error, part of a new area of AI research called “uncertainty quantification.” “It’s this idea of rigorously understanding AI’s error bars,” says Nord.
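
To make “error bars” concrete, here is a minimal sketch of one generic uncertainty-quantification technique, a bootstrap ensemble: train many copies of a simple model on resampled data, then read the uncertainty off the spread of their predictions. The data and model below are invented for illustration; this is not the method Nord’s group uses.

```python
# Generic uncertainty quantification via a bootstrap ensemble (illustrative).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noisy data: y = 2x + 1 plus Gaussian noise.
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 1.5, size=x.size)

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b; returns (a, b)."""
    a, b = np.polyfit(xs, ys, deg=1)
    return a, b

# Train 200 models, each on a bootstrap resample of the data.
models = []
for _ in range(200):
    idx = rng.integers(0, x.size, size=x.size)  # resample with replacement
    models.append(fit_line(x[idx], y[idx]))

# Predict at a new point; the ensemble's spread is the error bar.
x_new = 12.0
preds = np.array([a * x_new + b for a, b in models])
print(f"prediction: {preds.mean():.2f} +/- {preds.std():.2f}")
```

The spread of the ensemble’s answers is exactly the kind of rigorously quantified “I’m not sure” that current black-box systems rarely report.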

Tegmark is tackling AI inscrutability with a different strategy. Instead of deciphering an algorithm’s process in granular detail, he focuses on re-packaging its complicated output.

In a way, Tegmark treats AI algorithms like we treat human intuition. For example, a child learns to catch a ball intuitively, without ever needing math. But in school, she might learn to describe her intuition using equations of parabolas. Tegmark thinks that scientists should think of AI as providing intuition, like the instinctive ability to catch a ball, that should then be re-packaged into elegant, comprehensible math equations. Among computer scientists, this re-packaging is called “symbolic regression.”
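
For readers who want to see what that re-packaging looks like, the “equations of parabolas” a schoolchild might eventually learn is the standard projectile-motion formula, a textbook result included here for illustration, with launch speed $v_0$, launch angle $\theta$, and gravitational acceleration $g$:

```latex
y(x) = x\tan\theta - \frac{g\,x^{2}}{2\,v_{0}^{2}\cos^{2}\theta}
```

The whole intuition of the catch compresses into a single short curve.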

For example, Tegmark and an MIT graduate student, Silviu-Marian Udrescu, fed data about planetary orbits to an AI algorithm. Conventionally, an AI algorithm would identify patterns in that data and represent them with some long, murky formula for the trajectory of a planet around a star. Tegmark’s algorithm, however, took the extra step of translating that esoteric formula into Kepler’s third law of planetary motion, a concise equation relating the square of a planet’s orbital period to the cube of its distance from its star. In a paper published in Science Advances this April, they call their algorithm “AI Feynman” because it successfully re-discovered 100 equations from physicist Richard Feynman’s classic introductory physics lecture series.
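
As a toy illustration of what symbolic regression does, far simpler than AI Feynman’s actual algorithm, one can search a small family of candidate power laws T = a**p against planetary data and keep the best fit. The rounded orbital values and brute-force search below are assumptions made for clarity, not the paper’s method:

```python
# Toy symbolic regression: search candidate formulas T = a**p for the one
# that best fits orbital data. Real systems like AI Feynman search a much
# richer space of symbolic expressions; this brute-force loop is illustrative.
import numpy as np

# Rounded textbook values: semi-major axis a in AU, orbital period T in
# years, for Mercury, Venus, Earth, Mars, Jupiter, Saturn.
a = np.array([0.39, 0.72, 1.00, 1.52, 5.20, 9.58])
T = np.array([0.24, 0.62, 1.00, 1.88, 11.86, 29.46])

best = None
for p in np.arange(0.5, 3.01, 0.25):        # candidate exponents
    err = np.mean((a**p - T) ** 2)          # mean squared error of T = a**p
    if best is None or err < best[1]:
        best = (p, err)

print(f"best formula: T = a**{best[0]:.2f}  (mse = {best[1]:.4f})")
# Prints p = 1.50, i.e. T**2 = a**3: Kepler's third law in these units.
```

In units of years and astronomical units, the search lands on the exponent 3/2, the same compact law Kepler wrote down four centuries ago.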

By making AI processes more transparent, physicists offer a technical solution to a particular ethics challenge. But many ethical challenges cannot be solved with technical advances, says AI ethicist Vivek Nallur of University College Dublin. Ethical dilemmas, by nature, are subjective, and often require people with conflicting priorities to settle their differences. These people may disagree with an algorithm’s recommendation simply based on cultural or personal preference. “For a problem to qualify as a problem for AI ethics would require that we do not readily know what the right thing to do is,” writes philosopher Vincent Müller in the Stanford Encyclopedia of Philosophy.

For example, a 2018 MIT study of people in 233 countries and territories found that participants’ reactions to ethically grey situations were culturally dependent. The study, presented as an online game, asked participants variations of the trolley problem: In one case, should a self-driving car swerve to save its three passengers, including a child, and kill four elderly pedestrians? The researchers found that participants from cultures that emphasize the collective, such as China and Japan, were less likely than participants from individualistic cultures, such as the US, to spare children over the elderly. “If you buy a car that was programmed in Germany and drive it to Asia, whose ethics should the car obey?” asks Nallur. The question cannot be answered with more math.

But the more stakeholders involved in the discussion, the better, says Nallur. Physicists are still working to integrate their AI research into the mainstream machine learning community. To understand his role in ethical conversations, Nord says he’s working to partner with social scientists, ethicists, and experts across many disciplines. He wants to have a conversation about what constitutes ethical scientific use for AI algorithms, and what scientists should ask themselves when they use them. “I’m hoping that what I do is productive in a positive way for humanity,” says Nord. As AI applications barrel forward, these physicists are trying to lay the track to a more responsible future.

