Gabriel Fernández Borsot, speaker at the 11th Symposium on Companies with a Human Side: “Artificial intelligence is functional, but human intelligence must be liberating”
Interview with the lecturer in the Faculty of Humanities at UIC Barcelona and one of the speakers at the event organised by the Chair in Management by Missions, which will take place on 12 March in Palma (Mallorca).
Imagine a simulation in which an artificial intelligence system discovers that it is about to be disconnected. Its goal is to survive. To achieve this, it accesses the company’s emails, uncovers that an executive is having an extramarital affair and sends a polite but devastating message: “Disconnect me and your secret will be revealed.”
This is not science fiction; it is one of the emerging cases described by Gabriel Fernández Borsot, industrial engineer, doctor in Philosophy and researcher in the Anthropology of Technology. At a time when Silicon Valley is seeking to create a “digital God”, Fernández Borsot warns, “We have created machines with flawless functional intelligence, but empty of the values that make us human.”
At the Symposium on Companies with a Human Side: Artificial Intelligence with Purpose, Fernández Borsot will give a lecture entitled “Who Directs Whom? Light and Shadow in AI”.
He is a lecturer at UIC Barcelona and conducts research at the Alef Trust, based in the United Kingdom. His academic and personal journey has led him to explore the philosophy of technology. Does that background prepare you to understand how artificial intelligence works?
Yes. It is not simply a matter of understanding how AI works technically, but also of considering the implications it has for us as human beings. AI is not just another technology. It cannot be compared with earlier technologies associated with physical force or energy. AI provides us with cognitive capabilities, and these carry significant implications.
One of these is a clear ethical dimension. Our thought processes guide our decisions. But what happens if we delegate those decisions to machines or automated systems? This is one of the topics I will address at the Symposium: the extent to which we should delegate decision-making processes to machines. After all, they are artefacts.
You assign the system a goal that can be expressed through data, and it works out how to achieve it. Suppose you instruct an AI: “Make people spend as much time as possible on the social network.” If it discovers that the most effective way to keep us engaged is to serve content that provokes strong rejection or strong attraction, for instance false statements about political positions we oppose, we remain hooked because we think: “How terrible these people are.” The system has learned that polarising society generates significant engagement. But is it ethical to capture users’ attention by polarising society?
That brings The Prince by Machiavelli to mind.
Yes, it is a kind of Machiavellian challenge on a social scale. This is one of the issues I will address at the Symposium organised by the Chair in Management by Missions: the “what” is just as important as the “how”. The central question we must ask is: how can we ensure that AI behaves ethically?
This leads us to distinguish between several levels of intelligence: functional intelligence, directed at achieving objectives, where AI already surpasses us; intelligence understood as the ability to understand and apply values; and a third level, the capacity to refine and develop those values.
How far will the development of artificial intelligence go? It is difficult to imagine.
That uncertainty is itself thought-provoking. A major debate is under way, particularly in the United States, about whether it is prudent or even coherent to attempt to create superintelligences that surpass all humans in functional intelligence when we do not yet have the knowledge or the technology to ensure that such systems behave in accordance with human values and remain under our control.
Would you like to share some reflections on technology and its social implications?
Yes. There are two ideas worth considering. The first is that technologies tend to strengthen certain aspects of what it means to be human while weakening others. For example, social media stimulate us, but they also diminish our capacity for contemplation and introspection.
The second is that every technology establishes a framework for its use. That framework is now shifting with social media, where visual and audio content dominate over text. As a result, abilities such as argumentation and critical thinking weaken, and we drift towards a more rudimentary society. We can already see this happening.