The artificial intelligence (AI) community is in the news these days, mostly for the right reasons. The resurgence of research in this area has been explosive, and that is not an exaggeration.
A proxy for this uptick in research is the number of citations some pioneers of the field have gathered over the years.
Citation count for Geoffrey Hinton
Citation count for Yann LeCun
With such rapid progress come obvious and valid concerns about AI safety and our future as a species alongside these technologies. There is some fabulous work going on in that space, and the research around the alignment problem is well worth checking out.
However, there is another faction that is dismissive of intelligence coming out of “mere” machines.
The argument goes thus: modern machine learning and AI are just… something, and will never be as good as human intelligence. Let us try to understand this argument, assuming that it *IS* a valid argument and not an argument from incredulity.
Arguments from incredulity can take the form:
I cannot imagine how P could be true; therefore P must be false.
I cannot imagine how P could be false; therefore P must be true.
Arguments from incredulity arise when people make their own inability to comprehend a concept the substance of their argument.
This is just a bunch of matrix multiplications; how can it ever achieve human-level intelligence?
It can, and it has, in several areas. The image recognition capabilities of modern AI far outstrip human performance. So does its chess-playing ability. So do many other capabilities. And when you really get into it, your actual neurons are also a bunch of electrical impulses computing some mathematical function, phenomenologically speaking.
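For concreteness, here is a minimal sketch of the "bunch of matrix multiplications" in question: a single dense neural-network layer is nothing more than a matrix multiply followed by a nonlinearity. The shapes and random values below are purely illustrative, not taken from any trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(1, 4))   # an input with 4 features
W = rng.normal(size=(4, 3))   # learned weights: 4 inputs -> 3 neurons
b = np.zeros(3)               # learned biases

pre_activation = x @ W + b                    # the matrix multiplication
activation = np.maximum(0.0, pre_activation)  # ReLU nonlinearity

print(activation.shape)  # three neuron outputs for one input
```

Stack enough of these layers, train the weights, and you get the systems that recognize images and play chess. Whether "just" is the right word for that is exactly the point under dispute.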
Machines and AI can never do X, where X is your favorite thing that AI can’t do.
Agreed that it can't do X *yet*. Maybe it can, maybe it can't; there is no way to know unless you try. Unless you believe that robots made of meat are somehow fundamentally and irreconcilably different from robots made of silicon and steel, this argument seems untenable. Funnily enough, if you define AI as whatever computers haven't yet figured out how to do, you are of course correct: computers can't do what they can't do, because as soon as they can, it's no longer AI.
Is intelligence substrate dependent? Is there something special in carbon atoms that silicon atoms cannot match, even in theory? The jury is still out, and I will be surprised if the narcissistic viewpoint that human intelligence is somehow special turns out to be true.
The reductionist viewpoint (just a matrix multiplication, just an electrical impulse, just glorified curve fitting, just… you get the point) assumes that complexity lives at the small scale rather than the large one, whereas physical reality is often the opposite. Understanding something well enough to make use of it is often simpler than understanding every last detail.
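To make the "understanding enough to make use of it" point concrete, here is a small sketch of the much-maligned "glorified curve fitting": we recover the large-scale trend of noisy data with `np.polyfit`, without modeling every last detail of where the noise comes from. The data is synthetic, generated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: an underlying line (slope 2, intercept 1) plus noise
# whose fine-grained origin we neither know nor need to know.
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

# "Glorified curve fitting": a degree-1 least-squares fit.
slope, intercept = np.polyfit(x, y, deg=1)

# The fit captures the usable large-scale behavior while ignoring
# the small-scale mess entirely.
print(round(slope, 1), round(intercept, 1))
```

The fit is useful precisely because it ignores the microscopic details, which is the opposite of a weakness.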