Stephen Hawking is rather well known for dire warnings and being open about his fears. In the past, he has warned about the dangers of aliens and what an advanced race might do to humanity, argued that we will need to seriously consider abandoning Earth and finding a new home among the stars, and most recently cautioned about the serious threat of artificial intelligence.
This last point has become a matter of some controversy. Elon Musk joined him in these warnings back in October of last year, and now Bill Gates has entered the fray in a Reddit AMA.
Some of the greatest minds of the modern age are terrified of artificial intelligence and what it could mean for our planet. But are they right to be concerned now, when the technology is still in its beginning stages, with early automation like the JIBO robot and Cortana?
Actually, yes, and for a couple of reasons.
Applying Oversight After The Technology Arrives Leaves It Too Late
Technology moves quickly, and it is at the mercy of innovation. Someone could make a huge leap that changes the entire industry within a single product. We have seen such advances before, such as the way the Apple iPhone completely restructured the world of mobile technology in a way that even earlier versions of smartphones were unable to do.
Unfortunately, when technology jumps, it leaves the old world (and old laws, policies, and regulations) in the dust. We are seeing this in action now with the prevalence of surveillance that is illegal, yet goes unpunished thanks to loopholes in a legal system that was never prepared for the capability.
A.I. immediately presents the same problem. Without strict oversight in place before the technology needs it, we could find ourselves scrambling to catch up when things start going wrong, or simply when the capability for misuse is reached. Hawking and Musk have both been especially vocal on this point.
Small Glitches Could Have A Major Impact
Imagine for a second that you are in a car run by artificial intelligence. You are speeding down the highway when suddenly a glitch occurs in the system. Your car begins to accelerate, and before you can react, it crashes into another car.
Now imagine that this glitch is not in your single vehicle, but in a network that runs all the cars within a certain area of the city. Hundreds, maybe thousands, of A.I.-run vehicles glitch at the same time, causing accidents on highways, city streets, and in parking lots. It would be chaos.
Issues like this have to be addressed during the development stage. That includes creating a safeguard that keeps humans in control and doesn't allow the machines to "take over" their own actions without protections in place.
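To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of safeguard described above: an autonomous controller whose commands pass through a supervisor that clamps anomalous values and hands control back to a human. Every name here (`Supervisor`, `MAX_ACCEL`, the latching behavior) is a hypothetical assumption for illustration, not any real vehicle API or industry standard.

```python
# Illustrative safeguard: a supervisor sits between the A.I. controller
# and the actuators, so a glitchy command can never reach the road.
MAX_ACCEL = 3.0  # m/s^2 - assumed sanity bound on commanded acceleration


class Supervisor:
    """Vets every command from an autonomous controller."""

    def __init__(self):
        self.manual_override = False  # a human can also set this directly

    def vet(self, commanded_accel: float) -> float:
        # Once the human (or the failsafe) has taken over, ignore the
        # machine entirely until the override is deliberately cleared.
        if self.manual_override:
            return 0.0
        # A runaway value like the glitch in the scenario above trips the
        # failsafe instead of being passed straight to the actuators.
        if abs(commanded_accel) > MAX_ACCEL:
            self.manual_override = True  # latch: fail safe, not fail silent
            return 0.0
        return commanded_accel


sup = Supervisor()
print(sup.vet(2.0))   # normal command passes through: 2.0
print(sup.vet(50.0))  # glitch detected, override engages: 0.0
print(sup.vet(1.0))   # override stays latched: 0.0
```

The design choice worth noting is that the override *latches*: after one anomalous command, the system stops trusting the controller rather than resuming automatically, which is the "protections in place" idea in miniature.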
A.I Isn’t The Far Future, It Starts Today
Physicists like Stephen Hawking make it their business to look far ahead into the future at what may happen. But as A.I. continues to grow year by year, we have to do the same and start thinking about that future now.