And it happened again: a new wanted poster is on the wall. This time it's artificial intelligence, under the name of OpenAI, that receives a $1 billion award. With all the noise the ongoing AI hype creates, $1 billion is what it takes to get attention in the public sphere. In fact, the price tag alone would probably not even be enough these days, so a couple of (local) celebrities and well-known big players also need to be part of the message, not forgetting the mandatory reference (by link) to big-G.
Isn’t technology almost always life threatening?
What is interesting is that this time it is not deep scientific curiosity or the advancement of humanity that is the driving force, but fear. This fear is not ungrounded, in view of the quite possible threat of a superhuman artificial intelligence. But honestly, isn't technology almost always life-threatening? Since the last century we have known that it very often is, on a global scale. Radioactivity doesn't even need to be embedded in a weapon to be deadly. An earthquake can provoke the nuclear devastation of whole landscapes by crushing a single power plant. A single erroneous gene, carried by an organism let loose from a biotech lab, could kill the majority of the earth's bee population and lead to a crash of the global biosphere.
So if a deep, diffuse angst stands at the commencement of OpenAI, then one thing is sure: this won't be a purely rational endeavour, which seems a bit strange for a sci-tech flagship project. But to be fair, this emotional connotation also pleases our do-gooder souls when we read the word "open" - as in open source or open access - next to the "not for profit" phrase.
Can OpenAI actually play a central role in the AI arena?
Hiring a group of well-known or well-connected (local) scientists seems like a good plan for creating a centre of excellence. Famous people attract people, so Elon Musk had no problem gathering a number of other well-known, like-minded investors, and there is no doubt that Ilya Sutskever will also succeed in surrounding himself with like-minded scientists. And then what? Probably he'll do what most scientists do: he will try to tackle the big question that puzzles him and, in the resource-rich environment surrounding him, he will probably get a lot of work done.
But Ilya will struggle with a fundamental challenge - his peers in the commercial operations of big-G and big-F only need to create science that is useful (increases revenues and/or reduces costs), whereas he will need to create science that is relevant and actually advances the whole field of AI. Now, for this to happen, the solution to the AI problem has to be found in machine learning - to be more precise, in deep learning - to be even more precise, in deep convolutional neural networks (a deliberate argumentative exaggeration).
The point being made here is that the chances of this approach succeeding are tiny, even assuming that Ilya will, as expected, deliver first class work.
An incubator for collective intelligence
A challenging AI player should rather diversify and pursue as many complementary approaches as possible, which clearly surpasses the capacities of a single pair of brain hemispheres. This is the moment where collective intelligence should come into play. But how do we assemble a group of brilliant and highly motivated researchers that is, by definition, as heterogeneous as possible?
A good way could be not to sponsor specific research, but rather to sponsor a highly attractive "nest" that provides a rich infrastructure, offering everything a scientific mind could possibly need to experiment. This incubator should then be opened to anyone who can come up with a viable concept that is worth trying.
In this way, the chances of success are not distributed along the parameter "whose neural networks are deeper" but "who can come up with more ideas in a shorter time". Such conditions are more likely to host the stroke of genius needed to make really big discoveries.