The Debate Over Regulating AI Ramps Up

Sundar Pichai, CEO of Google and parent company Alphabet, generated a lot of buzz recently with an op-ed he wrote for The Financial Times calling for greater regulation of artificial intelligence (AI) technologies, adding a high-profile voice to a debate that has been simmering as innovation around AI, machine learning and deep learning has advanced rapidly.

In the column and in remarks made later the same day in Brussels, Pichai argued that AI has the potential to do great things for humanity and business, but that it would require efforts by governments and the companies developing the technology to ensure that technologies like facial recognition aren’t used to harm people and societies.

“There is no question in my mind that artificial intelligence needs to be regulated,” the CEO wrote. “It is too important not to. The only question is how to approach it. … Companies such as ours cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.”

In many ways, AI is a tool like any other, with its impact dependent on whoever is wielding it. A hammer can be used to drive the nails that help create the foundation for a home, but it also can be used as a weapon to kill a person. In a similar way, AI promises a better future for healthcare, business, education and other verticals, but there also is concern about the use of facial recognition technology by governments to track the movements of their citizens or the use of deepfakes to spread disinformation.

Pichai called for a relatively light touch in regulating AI, one that recognizes the need to avoid a one-size-fits-all approach. He argued that in some areas, like healthcare, existing frameworks make for good starting points. However, for newer areas like self-driving cars, “governments will need to establish appropriate new rules that consider all relevant costs and benefits.” He noted that the European Union’s General Data Protection Regulation (GDPR) may be a good basis for regulating AI.

Google is a significant player in the development of AI. As we’ve noted here at The Next Platform, the giant search and cloud company is among a wide array of established and new tech companies driving innovation not only in facial recognition but also in areas like natural language processing, data analytics, cybersecurity and pattern detection. Having its voice heard in the debate over regulation will be important as laws are put in place.

Lenovo Weighs In

Like most of its peers, Lenovo also is a proponent of AI and the positive impact it can have on areas from business and industry to healthcare and education. At a media event last week in Parma, Italy, officials from the EMEA region of Lenovo’s Data Center Group (DCG) touted not only the company’s capabilities in HPC and supercomputers but also how those infrastructures can be used by customers to run their emerging AI and machine learning workloads.

Responding to Pichai’s comments, Per Overgaard, executive director of DCG’s EMEA segments, also saw the need for AI regulation, though he added that in some ways societies are beginning to regulate themselves.

“It needs to be regulated,” Overgaard told The Next Platform during the event. “The only ones that can regulate it is governments. It’s got to be governments that create an environment of the intellectual and political sides of the country and it all has to be done to protect the individual. The individual will not have any sense how fast it will go. It’s like the video surveillance in China. Will that video surveillance be accepted in Europe? No. In that sense, it’s already regulated. People in the political parties sense that we will not accept facial recognition in cameras in cities because it’s violating people’s rights. That goes with the entire development in AI. It’s smart to say where AI will go the next five to 10 years, but my sense is it will go very, very fast, not just with the [rise of exascale computing], but also because companies are willing to adapt faster to new technology.”

However, while it will be up to governments to enact regulations, they will need the help of experts, including the tech companies that are innovating around the technologies. While it’s not Lenovo’s job to advocate for particular regulations, the company would participate in the development of those regulations if asked by government officials.

Governments “have to understand the aspects and the potential of AI, and how could they possibly do that without having someone by their sides that have the insights about where the business is going?” Overgaard said. “That would become the first thing I would do if I was part of the US government. It would be to set up a working group where you talk about where we’re going and you get the right people in. It should be a mix of the intellectuals from the universities and vendors like Lenovo and HPE and IBM, smart companies that could have an opinion.”

Governments Begin To Move

Governments already are beginning to think about regulations around AI, but their approaches vary. The EU is considering a five-year ban on facial recognition in public spaces. There is growing concern about China’s use of the technology to surveil its people, and fear that doing the same in Europe could impinge on the public’s right to move about with relative anonymity. By contrast, the United States appears to be taking a softer approach. Earlier this month the US government issued a memorandum outlining regulatory principles designed to avoid government “overreach” that impinges on innovation.

The Google CEO in his column wrote that a key part in developing and enforcing regulatory standards will be “international alignment. To get there, we need agreement on core values.”

Google already has taken a stand on facial recognition by refusing to sell the technology, which is in contrast to rivals such as Microsoft. In his column and reportedly in his talk in Belgium, Pichai pointed to facial recognition as an AI-based technology that can be dangerous depending on how it’s used.

Lenovo’s Overgaard said the view and use of AI also will change as Millennials and the ensuing generations become a larger part of the workforce. Having grown up with technology, they will change the way business uses and thinks about AI and other emerging innovations. They will see it as simply another tool in their toolbox that can be used to change the way they work and to reinvent their companies, he said.

“They’re coming to this with a completely different view from how you and I perceive the world and they’re going into the business,” Overgaard said. “They’re setting the demands for the companies where they’re going to work, saying, ‘This is the toolbox I’m going to need.’ They also use technology in completely different ways. … That is what the Millennial generation is going to give us, a new perspective in how to work with technology for the good.”

Concerns Over AI Are Not New

As we have written about before, there has long been concern about AI and how it will be used in the future. Such luminaries as Tesla and SpaceX founder Elon Musk and the late physicist Stephen Hawking have warned about the potential dangers of AI and, in Musk’s case, the need to tightly regulate the technology. There also has been the argument that the developers and users of AI and the algorithms that drive it need to be held accountable. The industry over the past few years has launched such groups as AI Now and the Partnership on AI – founded by such vendors as Amazon, Google, Apple, IBM, Microsoft and Facebook – to both promote the promise of AI and to address concerns around it, including through legislation and regulation.

At the same time, many in the industry are pushing back at what some have called fear tactics. In a letter in 2017 to Congress, David Kenny, at the time the senior vice president for Watson and cloud at IBM, wrote that the technology “does not support the fear-mongering commonly associated with the AI debate today. The real disaster would be abandoning or inhibiting cognitive technology before its full potential can be realized. We pay a significant price every day for not knowing what can be known: not knowing what’s wrong with a patient; not knowing where to find critical natural resources; or not knowing where the risks lie in our global economy.”

Overgaard has a similar view.

“I’m not afraid of technology because I think it’s made to help humanity and I think AI will do the exact same thing,” he said.
