AI Engineers Must Open Their Designs to Democratic Control
In many ways, the most pressing issues of society today, including increasing income disparity, chronic health problems, and climate change, are the result of the dramatic gains in productivity we've achieved with technology and science. The internet, artificial intelligence, genetic engineering, cryptocurrencies, and other technologies are providing us with ever more tools to change the world around us.
But there is a cost.
We're now awakening to the implications that many of these technologies have for individuals and society. We can directly see, for instance, the effect artificial intelligence and the use of algorithms have on our lives, whether through the phones in our pockets or Alexa on our coffee table. AI is now making decisions for judges about the risks that someone accused of a crime will violate the terms of his pretrial probation, even though a growing body of research has found bias in such decisions made by machines. An AI program that set school schedules in Boston was scrapped after outcry from working parents and others who objected to its disregard of their schedules.
That's why, at the MIT Media Lab, we are starting to refer to such technology as "extended intelligence" rather than "artificial intelligence." The term "extended intelligence" better reflects the expanding relationship between humans and society, on the one hand, and technologies like AI, blockchain, and genetic engineering on the other. Think of it as the principle of bringing society or humans into the loop.
Typically, machines are "trained" by AI engineers using huge amounts of data. Engineers decide what data is used, how it's weighted, the type of learning algorithm used, and a variety of other parameters used to create a model that is accurate and efficient in making decisions and providing accurate insights. The goal is to teach machines how to learn like we do. Facebook's algorithms, for instance, have observed my activity on the site and figured out that I'm interested in cryptocurrencies and online gaming.
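To make those engineer-made choices concrete, here is a minimal, purely illustrative sketch of a toy classifier. Every name and number in it is invented: which records go into the training set, how the features are weighted, and which algorithm is used are all decisions made by the engineer, not by a domain expert.

```python
# Toy 1-nearest-neighbour "risk" model. The training records, feature
# names, and weights below are all hypothetical, chosen only to show
# where the engineer's judgment enters the pipeline.

# Engineer-chosen training data:
# (prior_arrests, age, missed_court_dates) -> risk label
TRAINING_DATA = [
    ((0, 30, 0), "low"),
    ((1, 25, 0), "low"),
    ((2, 40, 1), "high"),
    ((3, 22, 2), "high"),
]

# Engineer-chosen feature weights: here a missed court date counts
# three times as much as a prior arrest. A different weighting would
# produce different "risk" decisions for the same people.
WEIGHTS = (1.0, 0.1, 3.0)

def distance(a, b):
    """Weighted squared distance between two feature vectors."""
    return sum(w * (x - y) ** 2 for w, x, y in zip(WEIGHTS, a, b))

def predict(features):
    """Copy the label of the closest training record (1-NN)."""
    _, label = min(TRAINING_DATA, key=lambda rec: distance(rec[0], features))
    return label

print(predict((2, 35, 1)))  # prints "high"
```

The point of the sketch is not the algorithm but the authorship: the model's behavior is fixed entirely by data and weights the engineer picked, which is exactly the gap human-in-the-loop approaches try to close.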
The people training those machines to think are not usually experts in setting pretrial probation terms or planning the schedule of a working parent. Because AI, or more specifically machine learning, is still very difficult to program, the people training the machines to think are usually experts in coding and engineering. They train the machine using data, and then the trained machine is often tested later by experts in the fields where it will be deployed.
A significant problem is that any biases or errors in the data the engineers used to teach the machine will result in outcomes that reflect those biases. My colleague Joy Buolamwini, for example, has shown that facial analysis software that classifies gender easily identifies white men, but it has a harder time distinguishing people of color and women, especially women of color.
Another colleague, Karthik Dinakar, is trying to involve a variety of experts in training machines to learn, in order to create what he calls "human-in-the-loop" learning systems. This requires either allowing different types of experts to do the training or creating machines that interact with experts who teach them. At the heart of human-in-the-loop computation is the idea of building models not just from data, but also from the expert perspective on the data.
If an engineer were building algorithms to set terms for pretrial probation, for instance, she might ask a judge to assess the data she's using. Karthik calls this process of extracting a variety of perspectives "lensing." He works to fit the "lens" of an expert in a given field into algorithms that can then learn to incorporate that expertise in their models. We believe this has implications for making tools that are both easier for humans to understand and better able to reflect relevant factors.
Iyad Rahwan, a faculty member at the Media Lab, and his group are running a project called "Moral Machine." Moral Machine uses a website to crowd-source millions of opinions on variants of the "trolley problem," asking what tradeoffs in public safety might be ethically acceptable in the case of self-driving cars. Some have dismissed such tradeoffs as unlikely or theoretical, but Google in 2015 was granted a patent called "Consideration of risks in active sensing for an autonomous vehicle," which describes how a computer could assign weights, for example, to the risk and cost of a car hitting a pedestrian versus that car getting hit by an oncoming vehicle. In March, a pedestrian was killed by a self-driving Uber vehicle.
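The kind of weighting the patent describes can be sketched in a few lines. This is a hypothetical illustration, not the patent's actual method: the actions, probabilities, and cost magnitudes below are all invented, and the unsettling point is that whoever assigns those magnitudes is deciding the ethics.

```python
# Hypothetical expected-cost risk weighting for a self-driving car.
# Each candidate action has possible bad outcomes, each with an
# engineer-assigned probability and cost magnitude. All numbers are
# invented for illustration only.

ACTIONS = {
    # action: list of (probability, cost_magnitude) outcome pairs
    "swerve_left": [(0.01, 10_000)],  # small chance of hitting a pedestrian
    "brake_hard":  [(0.20, 400)],     # larger chance of being rear-ended
    "stay_course": [(0.05, 5_000)],   # moderate chance of a side collision
}

def expected_cost(outcomes):
    """Expected cost = sum of probability * assigned magnitude."""
    return sum(p * cost for p, cost in outcomes)

# The car picks whichever action minimizes expected cost.
best = min(ACTIONS, key=lambda a: expected_cost(ACTIONS[a]))
print(best)  # prints "brake_hard"
```

Change the magnitude assigned to hitting a pedestrian versus being rear-ended and the "best" action changes with it, which is precisely the tradeoff Moral Machine asks the public to weigh in on.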
Kevin Esvelt, a genetic engineer and Media Lab faculty member, won praise for seeking input from residents of Nantucket and Martha's Vineyard on his ideas for engineering a mouse that would be resistant to Lyme disease. He invited communities to govern the project, including the ability to terminate it at any time. His team would be the "technical hands," which could mean working on a technology for a decade or more and then not being able to deploy it. That's a big step for science.
We also need humans in the loop to develop the metrics that will fairly assess the costs and benefits of new technology. We know that many of the metrics we use to measure the success of the economy, such as gross domestic product, unemployment rates, and the rise and fall of the stock market, don't include external costs to society and the environment. Already, technology and automation are reinforcing and exacerbating social injustice in the name of accuracy, speed, and economic progress.
Factories that once employed 300 people can now employ 20 because robots are much less prone to error and faster at the work. Some 2 million truck drivers may be wondering when they will be replaced by autonomous vehicles or drones. Email programs now offer a menu of potential responses based on the AI in our computers and phones. How long until our inboxes decide to answer without consulting us?
Restoring balance within, between, and among systems will take time and effort, but more technologists are beginning to realize that their creations have dark sides. Elon Musk, Reid Hoffman, Sam Altman, and others are putting money and resources into trying to understand and mitigate the impact of AI. And there are technical ideas being investigated, like ways for civil society to "plug in" to platforms like Facebook and Google to conduct audits and monitor algorithms. Europe's new General Data Protection Regulation, which becomes enforceable on May 25, will require social platforms to change the way they collect, store, and deploy the data they collect from their customers.
These are small, promising steps, but they are, in essence, efforts to put the genie back in the bottle. We need social advocates, lawyers, artists, philosophers, and other citizens to engage in designing extended intelligence from the outset. That may be the only way to reduce the social costs and increase the benefits of AI as it becomes embedded in our culture.
Joi Ito is director of the MIT Media Lab, coauthor of Whiplash: How to Survive Our Faster Future, and a columnist for WIRED.
This piece is part of a series exploring the impacts of artificial intelligence on civil liberties. The views expressed here do not necessarily reflect the views or positions of the 老澳门开奖结果.
Source: 老澳门开奖结果