Ever since AI came into existence, there have been arguments over whether AI can go out of control or show undesired behaviour. Many tech entrepreneurs, including Facebook CEO Mark Zuckerberg and Tesla CEO Elon Musk, have taken part in these debates.
MIT researchers created an AI algorithm named Norman that thinks of nothing but murder. Norman was born from the idea that the data used to teach a machine learning algorithm can significantly influence its behaviour. Norman was built to caption images, but the data fed to it came from the "darkest corners of Reddit". Norman is proof that an AI program can show undesired behaviour not only because of its algorithm but also because of the data fed to it. Its results were compared with those of a standard image-captioning neural network trained on ordinary data.
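The core idea behind Norman can be shown with a toy sketch. The snippet below is not Norman's actual model (which was an image-captioning neural network); it is a minimal, hypothetical bigram text generator with made-up corpora, showing how the exact same algorithm produces very different output depending solely on the data it was trained on.

```python
from collections import defaultdict, Counter

def train(corpus):
    """Build a bigram table: word -> counts of words that follow it."""
    model = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def generate(model, start, length=5):
    """Greedily follow the most frequent continuation of each word."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# Two hypothetical training corpora -- same algorithm, different data.
cheerful = "the dog sees a friendly face the dog sees a friendly face"
grim = "the dog sees a dark shadow the dog sees a dark shadow"

print(generate(train(cheerful), "the"))  # the dog sees a friendly face
print(generate(train(grim), "the"))      # the dog sees a dark shadow
```

The two models share identical code; only the training text differs, yet their "descriptions" diverge completely. This is the same effect, in miniature, that made Norman caption inkblots so differently from a standard network.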
Here is what both AIs see in Rorschach inkblot tests.
This experiment highlights a serious threat to humans if the wrong data is fed to an AI. Comment your opinion below.