Today's final installment of our column dedicated to AI seeks to answer a question as frequent and spontaneous as it is complex: is artificial intelligence dangerous?
So far, in our other articles on artificial intelligence, we have approached the topic with an approach as non-polarized and nuanced as possible, avoiding extremes. In fact, we started by clarifying that there is no single definition of artificial intelligence, and we reiterated this on the basis of the substantial differences between its applications and uses. Furthermore, there is not even a single definition of human intelligence.
Therefore: how can there be an objective definition of artificial intelligence if the very models it seeks to reproduce cannot be pinned down to a single definition? How is it possible to establish its implications in a strict sense? How can we predict its uses and define with any certainty what awaits us?
In all the facets we have addressed, we have arrived at a possible key to interpretation (and action): we must understand and embrace the complexity we are called to respond to in order to make informed choices. It is the same complexity that increasingly belongs to these tools and to the times they reflect, of which they are both cause and effect.
The human being is also defined through their abilities, and those abilities are expressed through the technological means they create and use. Therefore, by a simple practical syllogism, we could conclude that the human being is defined through their means. Is the opposite also true?
Let's do a speculative exercise to understand whether artificial intelligence is dangerous
So, is AI dangerous or not?
We can already tell you that the answer is yes and no, and that it largely depends on the argument we have just laid out. However, from time to time it can be helpful, easy, and undeniably fascinating (let's face it) to embrace just one point of view.
So today let's have fun and inspire ourselves to explore a totally defeatist vision. Let's pretend, for these few minutes of reading, that artificial intelligence is dangerous and that's it. A little speculative exercise.
Concrete risks of artificial intelligence
Let's start with a set of more or less concrete risks linked to reckless uses of artificial intelligence. We are doing a fairly speculative exercise... but not too speculative. We invite you to use this overview to decide which tools to use and how to relate to them.
Automation and artificial intelligence could indeed replace some workers in sectors such as manufacturing, logistics, and services. Considering the exponential speed of technological progress, we could soon arrive at disastrous mass unemployment throughout the world. This would cause economic inequality, mass migration, and spikes in crime.
Artificial intelligence is dangerous because it can be used to collect and analyze huge amounts of personal data. This threatens the privacy of individuals and enables mass surveillance by governments and multinationals.
Artificial intelligence algorithms can magnify and standardize prejudices already existing in society, amplifying racial, gender, and socioeconomic discrimination. This seems to run opposite to the liberalism spreading in the Western world, but in reality it could use liberalism itself to deceive everyone.
Manipulation of perceived reality, social isolation and technological dependence
Telling a story from a single perspective is enough to manipulate the truth... imagine what can happen with the creation of highly realistic audiovisual content. This can have disastrous consequences for political choices and the human psyche.
Humans are social animals; it is thanks to this that we have developed. Interacting with robots or chatbots, whether knowingly or believing them to be human, could compromise mental health and social relationships, and even undermine civilization.
Excessive use of devices built on artificial intelligence, such as smartphones and social media, can lead to technological addiction, with serious consequences for mental and physical health.
And finally (as if that weren't enough), advanced automation could make some human skills obsolete, negatively impacting creativity, problem-solving, and the ability to learn.
Is that enough for you or do we want to move on?
Is artificial intelligence dangerous? Extreme and apocalyptic scenarios
An extreme scenario could occur if a hyper-intelligent AI were to develop without any control and become autonomous to the point of surpassing the human ability to understand or regulate it. In this case it would gain control over communications networks, critical infrastructure, and defense systems, placing humanity under its direct or indirect control. There would be a progressive loss of distinctive human characteristics, leading to the homogenization of society and the loss of individuality.
This situation could lead to unpredictable and catastrophic results.
A somewhat "less extreme" scenario is one in which we imagine a society highly dependent on artificial intelligence.
What would happen in the event of malfunctions or large-scale cyber attacks? Collapse. And if we think about it, this is a perspective that also belongs to ancient times. The Roman Empire, in fact, collapsed after it became dependent on slaves and a thousand comforts. Then the barbarians arrived... and we know the rest.
Why did we present these scenarios to you?
Our intent is absolutely not to spread misinformation or clickbait. kilobit tries to keep pace with the times, and artificial intelligence is perhaps the most timely topic that can be addressed.
In other articles on AI we have spoken to you in a more nuanced way, but every now and then it can be useful to delve into one of the extremes in order to put the pieces together and build as complete a puzzle as possible.
After all, literature and cinema are committed to telling utopias and dystopias precisely to make people think.
So the visions we have proposed to you today are not definitive, and we do not fully embrace them. We simply want to offer as many ideas as possible to help navigate the complexity that all of us, as human beings today, are called to analyze.
All of this so that you can decide which AI to use, and how.