Tesla and SpaceX chief Elon Musk has already got a lot on his plate, but that hasn’t stopped the ambitious tech entrepreneur from starting up yet another new venture: OpenAI, a not-for-profit research company dedicated to advancing the science and ethics of artificial intelligence (AI).
"Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return," the OpenAI team wrote in a post announcing the US$1 billion initiative. "Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible."
The company, which Musk will co-chair, is backed by other prominent Silicon Valley leaders, such as PayPal co-founder Peter Thiel and LinkedIn co-founder Reid Hoffman. OpenAI's goal is to help shape the future potential of AI – a spectrum of technological possibilities that could offer almost unimaginable benefits to society, but that may also pose unparalleled dangers.
"AI systems today have impressive but narrow capabilities," the founders say. "It seems that we'll keep whittling away at their constraints, and in the extreme case they will reach human performance on virtually every intellectual task. It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly."
The risks of unchecked AI are a topic of ongoing concern for Musk. Earlier in the year, he added his name to an open letter signed by more than 20,000 researchers and experts – including Stephen Hawking, Steve Wozniak, and Noam Chomsky – calling for a ban on autonomous weapons capable of selecting and firing on targets without human intervention.
"[AI] technology has reached a point where the deployment of such systems is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms," the signatories wrote.
"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow."
OpenAI is not concerned solely with the perils of killer robots, however. As its name suggests, the company is designed to foster open research across all areas of AI study, encouraging its researchers to publish their work as papers, blog posts, or code, and vowing to share any future patents with the rest of the world while freely collaborating with other interested institutions.
"We discussed what is the best thing we can do to ensure the future is good?" Musk told John Markoff at The New York Times. "We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity."