OpenAI gets $1 bn boost from Musk, Silicon Valley honchos
Updated: Dec 12, 2015 16:06 IST
After his controversial statement calling artificial intelligence “potentially more dangerous than nukes”, Elon Musk announced the formation of OpenAI. Several big-name Silicon Valley figures have pledged $1 billion to support the non-profit firm, which plans to focus on the “positive human impact” of artificial intelligence.
Backers of the OpenAI research group include Tesla and SpaceX entrepreneur Elon Musk, Y Combinator’s Sam Altman, LinkedIn co-founder Reid Hoffman, and PayPal co-founder Peter Thiel.
“It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly,” read the inaugural message posted on the OpenAI website.
“Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” the statement read.
The OpenAI funders “have committed $1 billion, although we expect to only spend a tiny fraction of this in the next few years.”
Artificial intelligence is a red-hot field of research and investment for many tech companies and entrepreneurs.
However, leading scientists and tech investors, including Musk, have publicly expressed concern over the risks that artificial intelligence could pose to humanity if mismanaged, such as the potential emergence of “Terminator”-type killer robots.
“We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely,” read the statement, co-signed by the group’s research director Ilya Sutskever.
“The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right.”
Because of the “surprising history” of artificial intelligence, “it’s hard to predict when human-level AI might come within reach.
“When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.”