If you were to make a life-threatening decision – who would you trust the most? Someone with a consciousness, or someone without it?
That is one of the questions Jim Davies, Associate Professor of Cognitive Science, asks the public to consider. Davies has a clear message: we need to stop worrying about artificial intelligence developing its own consciousness and threatening human existence. Instead, we need to focus on programming goals, values and ethical codes, ensuring that the first super-intelligent AI is friendly.
In a piece published in Nature, the international weekly journal of science, Davies explains why concerns that artificial intelligence will pose a danger if it develops consciousness are misplaced.
Davies’ piece is a response to the White House addressing the potential dangers of AI, and to the way a focus on extreme scientific and political future risks can distract us from problems that already exist.
According to Davies, the misconception that artificial intelligence is dangerous only if it develops its own consciousness originates largely with laypeople and journalists.
“Search for news articles about AI threats, and it’s almost always the journalist who mentions consciousness. Although we do lots of things unconsciously, such as perceiving visual scenes and constructing the sentences we say, people seem to associate complicated plans with deliberate, conscious thought,” Davies writes.
Some might argue that because respected thinkers such as Bill Gates and Stephen Hawking have expressed concerns about AI, there is good reason to fear machines becoming self-aware.
However, Davies explains that if you look into the warnings of both Hawking and Gates, neither mentions consciousness.
“AI becomes defined as dangerous or not purely on the basis of whether it is conscious or not. We must realise that stopping an AI from developing consciousness is not the same as stopping it from developing the capacity to cause harm,” Davies writes.
So if the public is so concerned about super-intelligent technology gaining consciousness, it is worth asking yourself: whose decision would you fear more – that of someone without any consciousness, or that of someone who actually has it? Because after all, with consciousness come empathy and ethics.
Either way, Davies writes, we need to remind ourselves that AI may pose a threat regardless of whether it has consciousness. He uses the example of viruses, which still pose a significant threat to humankind yet have no consciousness at all.
So are we obsessing over consciousness when, in reality, we should be worried about something else?
According to Davies, this is certainly the case. He urges everyone to steer their focus away from consciousness as the issue and instead put more effort into programming goals, values and ethical codes.
“A global race is under way to develop AI. And there is a chance that the first superintelligent AI will be the only one we ever make. This is because once it appears — conscious or not — it can improve itself and start changing the world according to its own values,” Davies writes.
“Once built, it would be difficult to control. So, one safety precaution would be to fund a project to make sure the first superintelligent AI is friendly, beating any malicious AI to the finish line. With a well-funded body of ethics-minded programmers and researchers, we might get lucky.”
This article was originally published at: http://www.nature.com/news/program-good-ethics-into-artificial-intelligence-1.20821