AI, or Artificial Intelligence, is still in its infancy. Today's systems handle only narrow tasks such as web searches, facial recognition, or driving a vehicle. Many researchers, however, aim to create strong AI: systems that can take on virtually any cognitive task, from solving equations to playing chess, and outperform humans at it.
Why is it important to research AI safety?
Keeping AI's impact on society beneficial requires research in several areas, from law and economics to technology. If your laptop crashes or gets hacked, it may be little more than a nuisance. But when AI controls your airplane, car, automated trading system, pacemaker, or power grid, it becomes far more important that the system does exactly what you want it to do. Another short-term challenge is preventing devastating misuse, such as lethal autonomous weapons falling into the wrong hands.
There is also a longer-term question: what happens if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks? As I.J. Good pointed out in 1965, designing smarter AI systems is itself a cognitive task. Such a system could undergo repeated self-improvement and trigger an intelligence explosion, leaving human intelligence far behind. By inventing revolutionary new technologies, such a superintelligence could help eradicate disease, war, and poverty. In this way, creating strong AI could be the biggest event in the history of mankind.
Some experts are concerned that we need to align AI's goals with mankind's before making it superintelligent, while others question whether creating strong AI will ever be beneficial. We recognize the potential for AI to cause harm, whether deliberately or accidentally. We believe that research today will help us prepare for and prevent harmful consequences in the future, so that we can enjoy the benefits of AI without such hurdles.
The Negative Consequences of AI
Most researchers believe that a smart AI is unlikely to exhibit human emotions such as hate, love, anger, or sadness, so there is no reason to expect it to become intentionally malevolent or benevolent. When considering how AI could become a risk, there are two main scenarios –
- The AI is programmed to do something devastating – Autonomous weapons are AI systems built to kill. In the wrong hands, such weapons could easily cause mass casualties, and an AI arms race could inadvertently lead to an AI war with the same result. To avoid being disabled or hacked by an enemy, these weapons would be designed to be extremely difficult to 'deactivate', so humans could plausibly lose control of such a situation.
- The AI is programmed to do something beneficial, but it pursues a destructive method to achieve its goal – This can happen whenever the AI's goals are not fully aligned with ours. If you ask a smart car to get you to the airport as fast as possible, it may do literally what you asked rather than what you actually wanted, ignoring every consideration you never wrote into the goal (see the sketch below).
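To make this second scenario concrete, here is a minimal, purely illustrative Python sketch of a misspecified objective. The route data, penalty weights, and function names are all hypothetical; the point is only that an optimizer pursues exactly the objective it is given, and anything left out of that objective is ignored.

```python
# Toy illustration of a misspecified objective (all data and weights are hypothetical).
# The "AI" is just a route picker that optimizes exactly what it was told to optimize:
# travel time. Safety and comfort are never part of the objective, so they are ignored.

routes = [
    # (name, minutes, speeding_violations, passenger_discomfort 0-10)
    ("highway, legal speed",      35, 0, 1),
    ("highway, reckless driving", 22, 9, 8),
    ("back roads, aggressive",    28, 4, 6),
]

def stated_objective(route):
    """What we literally asked for: get to the airport as fast as possible."""
    _, minutes, _, _ = route
    return minutes

def intended_objective(route):
    """What we actually wanted: fast, but also lawful and comfortable.
    The penalty weights here are made up for illustration."""
    _, minutes, violations, discomfort = route
    return minutes + 10 * violations + 3 * discomfort

literal_choice = min(routes, key=stated_objective)
intended_choice = min(routes, key=intended_objective)

print("Optimizing the literal request:", literal_choice[0])   # reckless driving wins
print("Optimizing what we meant:      ", intended_choice[0])  # legal route wins
```

The two objectives disagree only because the stated one omits considerations we took for granted, which is the essence of the alignment problem described above.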