Any new and powerful technology always cuts both ways.
The machine learning flavor of artificial intelligence has risen rapidly because, unlike prior approaches, it actually works and therefore can be embraced by a wide swath of businesses, research and educational institutions, and technology companies. And so, we see the possibilities that machine learning presents to make lives and work better, to improve healthcare and commerce, and to accelerate research into everything from genomics to oil and gas exploration.
We have written countless stories here at The Next Platform about the ongoing development of AI technologies and the algorithms that drive them, both in HPC and in enterprise computing. But running underneath all that enthusiasm is a growing unease about the prevalence of machine learning in most parts of society and the threat it poses to everything from jobs, nature and culture to the future of humans. That concern hasn't come simply from fringe corners of the Internet banging out warnings about SkyNet, but from such people as the late physicist Stephen Hawking and Tesla founder Elon Musk, who has been vocal about the need for tight regulations around AI, which he said in a tweet last year held "vastly more risk than North Korea."
Such warnings from Musk and others have generated significant pushback elsewhere in the industry. In a letter last year to Congress, David Kenny, senior vice president for Watson and cloud at IBM, wrote: "This technology does not support the fear-mongering commonly associated with the AI debate today. The real disaster would be abandoning or inhibiting cognitive technology before its full potential can be realized. We pay a significant price every day for not knowing what can be known: not knowing what's wrong with a patient; not knowing where to find critical natural resources; or not knowing where the risks lie in our global economy. It's time to move beyond fear tactics and refocus the AI dialogue on three priorities I believe are core to this discussion: Intent, Skills, and Data."
The debate is helping to fuel a push among vendors, independent organizations, regulators and lawmakers to determine how best to regulate the algorithms and their use without stifling the innovation happening around AI. Congress last year created the Artificial Intelligence Caucus to explore AI issues, and earlier this year New York City Mayor Bill de Blasio created the city-wide Automated Decision Systems Task Force to identify automated decision systems in the city, create ways to identify and remedy harm, develop a public review process and find ways to archive these systems and the data they hold.
Groups like AI Now and the Partnership on AI – the latter founded in 2016 by Amazon, Google, Apple, IBM, Microsoft, Facebook and DeepMind – are looking not only to promote the promise AI holds for people, business and society but also to address concerns about the impact, safety and trustworthiness of the algorithms, and they are investigating legislative and regulatory frameworks. AI Now released what it calls a practical framework for public agency accountability.
Now a month later, the Center for Data Innovation has rolled out its own framework for policymakers to drive what it calls "algorithmic accountability," a way to put the onus – backed by legal penalties – on operators of systems using AI algorithms to ensure those algorithms don't harm others through bias or malicious use. The framework lays out a method (below) for regulators to determine the amount of harm an algorithm caused, the level of responsibility of the operator, and the size of any fine to levy on them.
“If operators know this framework exists they can take corrective steps to ensure that they are doing what they can to meet the standard, such as modifying their existing standards or by discontinuing the use of algorithms that can’t meet these standards,” Joshua New, policy analyst at the Center for Data Innovation, said during an event to release the organization’s report, which he co-wrote. “Operators have a very strong incentive to do this because they don’t want to run afoul of regulators should their algorithms cause harm. Similarly, this would send a really strong message to developers of what their customers will expect of the algorithmic system that they want to buy. The developer can either develop algorithms with these necessary controls or, if they don’t, they could quickly lose market share to a competitor.”
AI algorithms are only as good as the data that goes into them, and the worry is that the result can exacerbate existing biases and inequalities, among other harms, New said. Bias can be introduced if the training data is not representative of society or the development teams are too homogeneous. That's not a big problem if the result is a dating app that fails to find the consumer a date. However, if banks are using an algorithm that could cause certain people to be unfairly denied a loan, or a judge uses one that is prone to recommend against bail for a particular group of people, the harm is significant.
The balance that needs to be struck is holding people accountable for how they use AI without hurting innovation, he said, noting a broad range of proposed solutions (below) – from making algorithms much more transparent or creating special regulatory groups to oversee them to doing nothing – that he said often would harm innovation while failing to address the problems they're trying to fix.
Daniel Castro, director of the Center for Data Innovation, said during the webcast event that policymakers need to remember why the United States has thrived during the tech era.
"There are many attempts to automate significant parts of the economy using algorithms and using artificial intelligence, so how government ultimately chooses to regulate algorithms will have a significant impact on the economy," Castro said, noting the United States' success in the digital economy when compared with other countries because of its "light-touch regulatory approach. I think we tend to forget that this has been – and continues to be – a uniquely American policy. The United States has succeeded in the Internet economy in large part because of laws that have prevented multiple and discriminatory taxes on Internet access and e-commerce and because of laws that provided intermediary liability protection for both the speech and conduct of users."
Over the last decade that light touch has extended to data protection and handling policies and now should extend to regulating algorithms, he argued. He urged lawmakers to stay away from strict regimes such as the European Union's General Data Protection Regulation (GDPR), which goes into effect this month.
According to the organization's report, the complexity and scalability of AI algorithms pose particular challenges. Complexity allows bias to inadvertently affect algorithms in multiple ways, including through flawed or incomplete training data. Scalability comes into play because the algorithms enable large numbers of decisions to be made much faster than humans can make them.
“As the public and private sectors increasingly rely on algorithms in high-impact sectors such as consumer finance and criminal justice, a flawed algorithm could potentially cause harm at higher rates,” the authors wrote in the report. “As existing legal oversight may not be sufficient to respond quickly or effectively enough to mitigate this risk, it is clear why increased risk warrants greater regulatory scrutiny.”
Members of a panel at the Center for Data Innovation event said the group’s framework was a good starting point for determining how to balance the need for accountability and innovation.
"If we start at the end and identify what harms we're trying to prevent and what benefits we're trying to reach – much more than a focus on the methods – that leads to better policy," said Neil Chilson, senior fellow for technology and innovation at the Charles Koch Institute and former acting chief technologist at the Federal Trade Commission. "Often we get caught up trying to write rules that cover every single situation, but if we write rules that say, 'If you harm consumers, you're going to have a challenge with legal compliance,' that's where, if we aim at the ends, that we're doing better."