Our Love/Hate Relationship with Artificial Intelligence

Artificial intelligence is back in the spotlight, more powerful and disruptive than ever. Workers and small businesses fear the loss of opportunity to AI systems and the large companies that can afford the talent and resources to exploit AI’s potential.

The Global Challenges Foundation has put some thought into the perils that could accompany AI advancements, giving the technology a place on its list of emerging risks in its report, 12 Risks That Threaten Human Civilisation. The report explores the idea that an AI with human-comparable skills, trained in specific professions and copied at will to replace human workers, could be a hugely disruptive economic force, for example. It also presents the concern that an “AI arms race could result in AIs being constructed with pernicious goals or lack of safety precautions.” And even with safeguards in place, if such a “super intelligence” truly existed, could we trust those safeguards in the first place?

Scary stuff. On the other hand, the report acknowledges that predictions about the AI domain are still very unreliable, and that they understate the uncertainties surrounding what it calls “one of the least understood global challenges.”

Risks and Opportunities

Fortunately, there are efforts afoot to get ahead of the challenges presented by AI, paving the way for the technology to work to humankind’s benefit rather than propel its descent into dystopia.

Among these is the recent announcement of OpenAI, a non-profit artificial intelligence company backed by SpaceX and Tesla Motors CEO Elon Musk, startup incubator Y Combinator CEO Sam Altman, LinkedIn founder Reid Hoffman, Palantir chairman Peter Thiel, and other tech industry figures. The research lab aims to leverage AI for a positive human impact and to advance the idea that the technology should be as broadly and evenly distributed as possible, according to the blog post introducing the initiative.

“We want AI to be widespread,” Musk said in a recent interview about the new initiative. “There’s two schools of thought — do you want many AIs, or a small number of AIs? We think probably many is good.” He has also argued that super intelligence should not be owned by corporations but should be available as a resource to the public.

Open Source

Added Altman, “We think the best way AI can develop is if it’s about individual empowerment and making humans better, and made freely available to everyone, not a single entity that is a million times more powerful than any human. Because we are not a for-profit company, we can focus not on trying to enrich our shareholders, but what we believe is the actual best thing for the future of humanity.”

Sure, there’s potential value in what may develop out of OpenAI that can be applied to Musk’s and Altman’s commercial ventures, as Wired has pointed out. But as that article also explains, if OpenAI lives up to its promise of access to new ideas in AI for all, “it will at least serve as a check on powerful companies like Google and Facebook.”

Indeed, the article raises the question of whether OpenAI has already had a positive effect in swinging open the AI industry’s doors: It mentions that Google, for example, open sourced part of its TensorFlow AI engine shortly before the formal unveiling of OpenAI, possibly because it knew of the impending announcement.

Facebook in December also open sourced the hardware designs for the Big Sur servers it uses to train AI software. IBM, for its part, saw a promise it had made in June fulfilled when its SystemML machine learning technology was accepted in November as an incubator project by the Apache Software Foundation (ASF).

Wider Audience

The trend toward open sourcing AI technologies is a good thing. Bringing AI opportunities to a wider audience via published research, code, and even patents – as OpenAI plans to do – will benefit a wide cross section of society. A collaborative spirit and constructive vision for how humankind can collectively turn the technology to beneficial ends should help drive developments in the direction of a better future.

AI can be an empowering force for good, and I can’t think of a finer New Year’s resolution than actively supporting efforts that aim to realize that goal.

About Mike Stute

Chief Scientist, Masergy
Mike Stute is Chief Scientist at Masergy Communications and is the chief architect of the Unified Enterprise Security network behavioral analysis system. As a data scientist, he is responsible for the research and development of deep analysis methods using machine learning, probability engines, and complex system analysis in big data environments. Mike has over 22 years’ experience in information systems security and has developed analysis systems in fields such as power generation, education, biotechnology, and electronic communication networks.