Javier Barraca Mairal
Head Professor of Philosophy at Rey Juan Carlos University
BASE KEYS FOR AN ETHICS IN THE RELATIONSHIP WITH AI ENTITIES
Contents: 1.- SUBJECT AND OBJECTIVES: HUMANISM AND AI. 2.- THE HUMANISM FRAMEWORK FOR THE RELATIONSHIP WITH AI TECHNOLOGY. 3.- A FUNDAMENTAL ANTHROPO-ETHICAL KEY: DISTINGUISHING THE PERSON FROM OTHER REALITIES. 4.- SECOND KEY: RESPECT ETHICAL PRINCIPLES AS APPLIED TO AI. 5.- THIRD KEY: THE HABIT OF CAUTION TRANSFORMS US INTO CAUTIOUS INDIVIDUALS. 6.- FOURTH KEY: A PRUDENTIAL RELATIONSHIP WITH THE APPARENTLY EMPATHETIC AI. 7.- CONCLUSIONS. 8.- SOURCES.
Resumen: El presente texto explora las bases antropológicas esenciales sobre las que se funda la ética relativa a la IA. Lo hace a partir del convencimiento de que toda deontología relativa a la IA ha de asentarse en una comprensión profunda de sus cimientos. A este propósito, comienza ahondando en la necesidad de establecer un marco humanista en el que situar de forma adecuada las interacciones entre los sujetos humanos con los entes y sistemas de IA. La centralidad de la persona concreta desempeña aquí un lugar ético cardinal. A continuación, se plantean varias claves éticas relevantes para las relaciones entre los humanos y la IA. Entre ellas, figuran la necesidad de realizar las distinciones nucleares respecto a estas realidades y los humanos, en orden a un respeto adecuado de la dignidad personal. También, se advierte acerca del cuidado que se ha de poner en atención a las consecuencias derivadas de la modificación de nuestros hábitos intelectuales a causa del uso de la IA en cuanto al trato de la información y a su apariencia empática. Y se previene de los efectos de estos asuntos para la seguridad individual y colectiva.
Abstract: This text explores the essential anthropological bases on which the ethics of AI is founded. It does so from the conviction that any deontology concerning AI must rest on a deep understanding of its foundations. To this end, it begins by examining the need to establish a humanist framework in which to situate adequately the interactions between human subjects and AI entities and systems. The centrality of the concrete person plays a cardinal ethical role here. Next, several ethical keys relevant to the relationships between humans and AI are proposed. Among them is the need to draw the core distinctions between these realities and human beings, so as to respect personal dignity adequately. The text also warns of the care that must be taken regarding the consequences of the modification of our intellectual habits through the use of AI, both in the handling of information and in the face of AI's empathetic appearance. Finally, it cautions against the effects of these issues on individual and collective security.
Palabras clave: IA, ética, dignidad personal, apariencia, prudencia
Keywords: AI, ethics, personal dignity, appearance, prudence
1.- SUBJECT AND OBJECTIVES: HUMANISM AND AI.
Today, ICTs (information and communication technologies) and AI - so-called "artificial intelligence", understood here as the technological imitation of human capabilities and the extension of their operational possibilities[1] - seem to be present absolutely everywhere. However, this phenomenon has its lights and shadows. One of the conceivable shadows is the de-humanisation of our relationships, which adds to the perplexity of having to relate to and coexist with technical entities such as AI.
In order to confront the questions raised by this coexistence between humans and AI entities, and to outline some philosophical keys to them, we propose the following reflection. Our specific perspective or approach will basically be that of philosophical anthropology and ethics. From the outset, it is clear that before developing and implementing a set of ethical and legal standards for AI, it is essential to delve into the roots that should underpin them. Only from a place of deep reflection on, and understanding of, the fundamentals of AI ethics can we expect such guidelines to thrive.
Of course, we place this analysis of the ethical background of AI within the desirable framework of the creation of a certain "Technological and Digital Humanism" (Barraca, 2021a). From the outset, we recall as a valuable reference that J. L. Fernández (2021) has already sharply reflected on the general outlines of this framework, particularly with regard to the so-called cyber-ethics and ethics of AI.
Therefore, we point out that it is important today to develop calm research into the effects of current techno-science on present-day society and, in particular, on a field as relevant as that of our own relations with reality: above all, human links with artificial entities, artefacts, machines, AI systems and other similar technical mechanisms. This is the specific field explored in these pages. It will inevitably open up and include, as we shall see, delicate questions such as: the knowledge and experience of our own identity, subjectivity and personal dignity; the development or maturing of our originality and creativity; our growth in ethical values and responsibility; the relational or convivial tenor of our encounters and links; and the balances to be struck in these relationships with AI so that we can live them from a position of trust.
2.- THE HUMANISM FRAMEWORK FOR THE RELATIONSHIP WITH AI TECHNOLOGY.
The relationships, links or ties that we humans establish with technological entities - in their varied typology, including that of AI - always move, as has been announced, within a determined socio-cultural and philosophical framework. One author who has explored the moral critique of the context that our ultra-developed time and society offer to our relationship with technological artefacts is Byung-Chul Han (2022). In his work Psychopolitics: Neoliberalism and New Techniques of Power, the author unmasks new forms of social control by power, such as those that make use of Big Data in the manner of a digital Big Brother: taking possession of the data that individuals voluntarily hand over, it becomes possible to condition them at a pre-reflective level. According to this work, moreover, the supposedly free expression and hyper-communication of networks become weapons of control and surveillance.
However, beyond the historical drifts and cultural variations that affect a specific scenario and its interpretations, a common denominator is always present. It consists of realising that, given our shared nature as humans - constitutively "relational", vulnerable and called to continuous development - any framework of interaction between the subject and AI must adequately integrate Humanism. In short, the precise approach from which we encounter the realities of AI must be rich or fruitful with respect to those values to which we, as personal subjects, cannot help but feel linked. By this we simply mean that what happens in this relational context has to collaborate in the integral or complete development and growth to which we are called as humans (López Quintás, 2022). We are relational entities, but this is because we reach our fullness and fulfilment by encountering other beings fruitfully and in accordance with certain axiological or value conditions. Any relationship or connection must therefore cooperate in our progress in this sense, since we are made to grow, to progress, to perfect ourselves in an integral way. This, in short, always calls for care in establishing and experiencing our links with what is real, and thus, of course, with the various entities that make up that reality.
As has been noted, the demand described is none other than that of Humanism with respect to technology. Such Humanism undoubtedly calls on us to practise essential values such as prudence and reflection, in their most fertile philosophical scope, given the growing complexity and fragility that surround our existence (Barraca, 2021a). All of this in some way vindicates the unfading value of respect and esteem for the individual, as a specific and worthy subject, deserving of the utmost consideration and attention. Thus, personal dignity is revealed as the keystone par excellence of any fruitful relationship with the human subject, also in this framework. This is why the EU itself has made its approach to the desirable treatment of AI entities clear. It has done so, with regard to ethical and deontological regulation, in the White Paper on Artificial Intelligence: A European approach to excellence and trust, in which it sets out its basic guidelines as follows: “The Commission strongly supports an anthropocentric approach based on the Communication Building trust in human-centred artificial intelligence” (European Commission, 2020)[2].
This inevitably entails appreciating that technology must be at the service of the person, and not the other way round, as Joaquín Fernández Mateo (2021a and 2021b) has argued.
3.- A FUNDAMENTAL ANTHROPO-ETHICAL KEY: DISTINGUISHING THE PERSON FROM OTHER REALITIES.
Does caring for our relationships with technological entities equate to viewing or treating them ethically "as if" they were human, or at least as if they were invested with the status and value of a person? We raise this delicate question here because it is often the first reaction or idea that strikes those who interact with them, given the similarities they detect between such artificial entities and the humans they seek to imitate. However, our response to this proposal consists of questioning such an attitude, since, from the theory of knowledge and ethics, it is affirmed that each reality must be treated in accordance with its own essence or specific form of being, without projecting onto it anthropomorphically what it is not (López Quintás, 2022).
In this sense, it is useful to begin by grasping what, in our usual philosophical language, we call a "person". This is because it is these realities, the truly personal ones, that demand the treatment that corresponds to persons. That said, the person is the "intelligent being", in the basic philosophical meaning given in the Diccionario de la lengua española (2022):
“Person. From Lat. persōna 'actor's mask', 'theatrical character', 'personality', 'person', this from Etruscan φersu, and this from Gr. πρόσωπον prósōpon. 1. f. Individual of the human species. 2. f. Man or woman whose name is unknown or omitted. 3. f. A man or woman of distinction in public life. 4. f. A prudent and correct man or woman. U. t. c. adj. They are very personable. 5. f. A character who is part of a literary work. 6. f. Law. Subject of law. 7. f. Phil. Intelligent being”[3].
In short, it is only when we are faced with an intelligent being that we are dealing with a person, and therefore must respond to the dignity that this reality presents. This particular issue has already been dealt with in other works (Barraca, 2021b), as has the analogous or equivocal use involved in projecting the trait of "the intellectual" onto artefacts such as AI (Barraca, 2022). Undoubtedly, a great deal of philosophical caution must be exercised in this regard, because the precise qualification of a being as a person, and the assignment of dignity to him or her, requires careful reflection. This is especially true today, when phenomena such as transhumanism (Barraca, 2021b) and post-humanism (Barraca, 2022) are constantly spreading in our environment. These have already been critically discussed, for example by E. Baltar (2020) and A. Diéguez (2017).
One thinker who has delved into the specificity of the person and their value, and who can therefore help us to avoid confusion in this respect, is Spaemann. We refer to him in this exact sense (Spaemann, 2000). It goes without saying that, in the Spanish language, we have also had, for some time now, distinguished "philosophers or thinkers of the person". This is the case of E. Forment, who has made a clear-cut point: "The person is not something but someone. The person names each personal individual, what is proper and singular to each person, their deepest stratum, which does not change in the course of each human life (...)" (Forment, 1998, p. 118).
It is perhaps useful at this point to briefly recall some very profound features of the person that personalist philosophy has touched upon, and which help to distinguish the authentically personal reality from others. These features include: uniqueness, unrepeatability, originality, mystery, responsibility, vocation, gratuitousness, etc. Lévinas (1993, 2002) and Marías (1997), to give just two examples of fine thinkers, have excelled in noticing these signs of the personal.
Here, apart from refusing from the outset to mix persons with non-personal beings, we are not going to speculate on the philosophical or scientific possibility of integrating person and technology in one and the same being that is already personal from the start; at least, we will not expand on this in this work (Rodríguez Valls, 2017). We do, however, make clear that, by our criteria, if we speak of "technological people", we should rather speak of people to whom the technical is added, expanding their operational or cognitive capacities, "improving" what they already are - something which should perhaps itself be reviewed critically (Llano, 2018) - but never founding or creating their core value at the root, nor endowing them with personal dignity at some later point in time.
4.- SECOND KEY: RESPECT ETHICAL PRINCIPLES AS APPLIED TO AI.
Most of the regulations, norms and ethical reflections that have arisen today around dealing with AI devices tend to demand respect for a number of principles. These principles provide the keys to elementary ethical orientation and action in this field. They almost always include the following three, which form the skeleton or backbone of ethics for this purpose: responsibility, privacy and fairness.
A reference text in Spanish that also contains useful deontological, legal and normative keys of diverse origin, specialising in principles and AI, can be found in the work of J. Camacho and M. Villas (2022). This handbook discusses the three principles listed above, along with some others. However, a large part of its orientation concerns the fields of business, the professions, etc.
There is no need to justify at length the value of each of these three principles. Doubtless the starting point here is a fact that makes a call on our conscience: the power that comes with the use of AI must always be accompanied by a corresponding responsibility (information is power). With regard to privacy, it seems obvious that the handling of data that has an impact on the lives of individuals and groups requires scrupulous care, so as not to violate the privacy of individuals and their communities. Here, however, we note that this care must be balanced against the need for collective security and the common good, and security forces and professionals are called upon to carefully safeguard the balance between privacy and security.
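Although this essay is philosophical rather than technical, a minimal illustrative sketch may help to fix ideas. Assuming a hypothetical record with invented field names, the privacy principle can be partially operationalised as data minimisation: pseudonymising direct identifiers before an AI system processes personal data. This is only one narrow, technical expression of the principle, not its full ethical content.

```python
import hashlib

# Hypothetical record; all field names and values are invented for illustration.
record = {
    "name": "Jane Doe",
    "national_id": "12345678Z",
    "age": 34,
    "query_history": ["route to work", "pharmacy hours"],
}

# Fields treated as direct identifiers under this illustrative policy.
DIRECT_IDENTIFIERS = {"name", "national_id"}

def pseudonymise(rec: dict, salt: str) -> dict:
    """Replace direct identifiers with salted one-way hashes; keep other fields.

    Note: pseudonymisation reduces exposure but is not full anonymisation;
    re-identification may remain possible from the remaining fields.
    """
    out = {}
    for key, value in rec.items():
        if key in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:16]  # truncated pseudonym
        else:
            out[key] = value
    return out

print(pseudonymise(record, salt="per-deployment-secret"))
```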
As far as fairness is concerned, it usually refers here to a specific form of justice. It is only logical that we are required not to commit injustice through the use of AI, since our society and state constitutionally proclaim justice as one of their vital pillars. To explore this issue further, we refer to the work of José Pinto, who examines the social justice issues raised by the links between automated information, AI and human subjects or groups (Pinto, 2020).
In particular, and first and foremost, we stress that any form of unfair discrimination brought about through AI's knowledge and processing of data and information must be avoided. It is also important to distribute access to and availability of such a powerful medium as AI equitably, in terms of its proportional distribution across our society and its communities or groups; otherwise it will lead to growing inequality. Equal opportunities come to mind in this respect. We must likewise guard against the unfairness of fraudulent or harmful uses of AI aimed at committing crimes or other illegal acts, or at causing unjustified harm to people and institutions (manipulations aimed at violating or undermining the physical, psychological or moral integrity of citizens, infringements of intellectual property, identity theft or impersonation, plagiarism and other usurpations).
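By way of illustration only, the following sketch shows one narrow, quantitative shadow of this principle: computing the gap in favourable-outcome rates between two groups of automated decisions, a quantity often discussed under the label of demographic parity. The data are invented; a real fairness audit is, of course, far richer, and no single metric captures justice.

```python
from collections import defaultdict

# Invented toy decisions: (group, outcome) pairs, where 1 = favourable.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

def favourable_rates(pairs):
    """Compute the favourable-outcome rate for each group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in pairs:
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}

rates = favourable_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)
print(f"demographic parity gap = {gap:.2f}")
# A large gap flags the decision process for human review;
# by itself it does not prove unfair discrimination.
```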
Notwithstanding the above, two important caveats must be made about the ethical principles relating to AI. The first is that each author, organisation or geographical area is likely to show certain variations or emphases regarding these elementary principles and their application, stressing or even incorporating, by addition, some derived principle or sub-principle. A profession associated with security and information will give all three of the above its own nuances (for example, it will intensify the effort to establish the truth or authenticity of information, which is essential for the clarification and prevention of unlawful acts, as well as the value of order and social peace). This does not contradict the fact that these three elementary principles form the basic pillars of ethics with regard to AI.
The second caveat is that there may be variation over time, linked to the constant evolution and progress of this technology. That is to say, we will always have to pay attention to the three principles set out above, but others may be added to them as novel forms of AI emerge. Being in constant development, and activating their own self-learning on the feedback they receive, these artefacts are already engaged in self-improvement. Thus, over time, moral issues can easily arise that need to be addressed through reflection on other aspects and their related principles. This selection of ethical principles applicable to AI is therefore not a closed and immovable matter on all fronts, although, obviously, the common ethical foundations and principles are what they are and remain stable as long as we remain human.
5.- THIRD KEY: THE HABIT OF CAUTION TRANSFORMS US INTO CAUTIOUS INDIVIDUALS.
Another fundamental anthropological and ethical key, apart from our uniqueness and dignity as people, linked to our historical-narrative dimension, lies in the fact that we are "subjects of habit" (Barraca, 2021c). In other words, we humans tend to repeat our actions until we conform to certain uses, good or bad.
These habits "facilitate" our existence, in that they predispose us to perform certain behaviours with skill or success. This is true for elementary activities, but it also extends to much more complex ones. When a behaviour is repeated until it is stably inscribed in us, it generates a habit. And, if such a habit is incorporated into our being, internalised by it, we can safely say that it even becomes part of our character. This is a crucial human dynamism, by which we may or may not become skilful or virtuous subjects with respect to something - or the opposite, when our actions, instead of being oriented towards good or something of value, are oriented towards evil or anti-values (Barraca, 2021c).
Because of the above, we must repeat our ethical actions in relation to AI. Thus, we must act within its framework with prudence, respect, justice, discretion, etc., again and again, until such acts become good operational habits - virtues - that take root in us and are integrated into our character. To sum up: in this area, it is not enough to do the right things in isolation; they need to be repeated over the long term and ingrained in our moral personality. It is not enough to exercise caution in our interaction with AI sometimes, or even with a certain frequency. We must become accustomed to developing this virtue until it becomes part of our personality - "êthos" means character, second nature, in Greek - and it will transform us, little by little, step by step, from humility and never all at once, into cautious subjects.
It should also be borne in mind that an ethical error in our dealings with AI can cause enormous harm to us and irreparable damage to third parties, as well as to our groups and institutions, given the gigantic scope and impact such errors are gaining thanks to the multiplicative nature of the media-digital environment, its storage, and its projection into the future through the processes by which AI trains itself on our experiences with it. The AI does not forget; on the contrary, it learns, and our moral mistakes point it in a certain direction. This vicious cycle will thus come back to haunt us, exponentially magnifying our mistakes and inciting us to make them again and again.
In connection with this, it is worth noting that one of the problems with the relationship with AI is that it solves numerous intellectual operations of all kinds for us - mathematical, linguistic, orientational, etc. However, this means that we ourselves stop practising them, or that we do not do so with the minimum regularity or frequency necessary to perform them properly. In this way we become lazy with regard to these activities and, in the medium to long term, this leads to the weakening of our expertise in them, or even to forgetting them, to forgetting the basics.
Obviously, it does not seem easy for us to lose our aptitude or capacity for simple, basic, everyday operations. But, on the other hand, if we always hand such devices complex missions, sooner or later our direct skills in this respect will diminish. So, for example, if AI always performs certain kinds of intellectual operations for us, such as those linked to information - searching, systematising, structuring or ordering, synthesising, retaining or storing, guarding, updating, etc. - we may end up losing certain personal skills over time. In particular, it is worth noting that we must never neglect our critical or reflective capacity, and the habit of checking the sources of the information we handle with our own judgement. No AI should replace us in this filtering and permanent effort of critical thinking, on pain of falling into the trap of manipulation. The same can happen if we hand over functions such as prioritising, articulating, elaborating, drafting, editing, judging or evaluating all information. Likewise, the essentially human phase of communicating such information should not be delegated entirely to AI entities. There are many reasons for this. Human subjects meet others by "communicating", and we "participate" in this way; we form teams and communities by communicating (Barraca, 2018). So we have to be very vigilant in this respect, and pay extreme attention to everything communicational in our mutual interaction.
6.- FOURTH KEY: A PRUDENTIAL RELATIONSHIP WITH THE APPARENTLY EMPATHETIC AI.
The cordial and affective dimension is, in its deepest meaning, irreducible to algorithms. However, the increasingly humanised appearance of the activities of AI systems can be disconcerting and misleading on this particular issue. Prudence - the ethical virtue that rules all the others - advises us not to take for real what is merely apparent, and to be wary of anything that may be feigned. Therefore, with regard to the "supposed" emotions and affective attitudes displayed by AI entities, and their role in coexistence with people, we must maintain ever greater vigilance.
Note, for example, how the imitation and reproduction of the human voice in its various forms and combinations by AI systems can lead us rashly to believe that we are dealing with a real and truthful expression of personal emotions. Something as sensitive in this respect as our voice, with its tone, intonation and pitch, apart from the lexicon and syntax used, is already integrated into AI's verbal messages with an admirable degree of mimicry. The human image can also be incorporated by these systems, based on certain digitised data, and then dynamically recombined, to the point of becoming an effective substitute for our actual presence. The security challenges that this entails are clear: the confusion of the real and the virtual, the labyrinth of identities, etc. In short, this game of mirrors, brought about by the advances of AI, calls for extreme care from everyone.
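At the technical level, one modest safeguard against this game of mirrors - sketched below purely as an illustration, with a hypothetical file name and a placeholder manifest - is integrity verification: comparing a cryptographic hash of a received media file against a hash published by its claimed source. This can detect tampering after publication; it cannot, of course, establish that the published content was genuine in the first place, which remains a task for human judgement.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest of hashes published by a trusted source
# (the file name and digest below are placeholders).
PUBLISHED_HASHES = {
    "statement.wav": "0" * 64,
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so that large media files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published_hash(path: Path) -> bool:
    """True only if the file's hash equals the published one."""
    expected = PUBLISHED_HASHES.get(path.name)
    return expected is not None and sha256_of(path) == expected

# Usage, assuming a local copy of the file exists:
# print(matches_published_hash(Path("statement.wav")))
```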
In any case, certain essential guidelines that arise in this particular area should not be overlooked. Despite today's spectacular advances in AI, there remains in the human subject a stronghold that philosophers have called their "inside", their inner self, which obviously deserves the ethical respect that we associate with privacy. Undoubtedly, AI systems combined with Big Data are able to predict certain reactions and behaviours quite effectively, at least statistically - such as those associated with consumption - and in their dealings with humans they tend to clothe their actions in an attentive, cordial, friendly, welcoming, even helpful appearance. Yet an AI entity, at least in theory and in the present state of things, does not harbour any personal feelings or affections of its own, even if it can feign them. Without an inner facet, without its own inside, it cannot generate, from such a centre, the emotional intelligence that corresponds to that subjective interiority and that is the fruit of a genuinely personal self and an original identity, for all these elements are intertwined (Barraca, 2017).
A recent specific experience may help to illustrate this fact: the disconcerting speech given by Don Ignacio Contreras Álvarez, patron of the graduating second-year Bachillerato class, during their graduation on 11 May 2023 at the Colegio de Fomento Los Olmos in Madrid. For several minutes, the speaker delivers his speech with apparent normality. Suddenly, however, he declares that what he has said is nothing more than the product of his commissioning ChatGPT for this purpose. From that moment on, he adopts an emotional and warm tone, and devotes himself to stating everything that an AI system is incapable of contributing to the situation in which he is involved. He makes clear that there are many human realities that no AI gadget can experience on its own, but which matter seriously as we leave behind a stage of life and move in person towards the horizon of the future. The first part of his speech succeeded in conveying factual information to the students about the world beyond their school walls, as well as certain features of today's universities and professions. But it is with his sharp turn towards a human and experiential tone that the speaker truly succeeds, enriching the graduates with his existential wisdom. He is also right to emphasise that no AI can experience, as he himself can, at first hand, gratitude for the school that is being left behind - of which the patron of the graduating class is a former student - or sorrow for the classmates he knew there but who have since passed away, or affection for the people who made up and continue to make up that educational community. Personal love, warmth of heart, biographical memories, nostalgia, hope, family feeling: all of these doubtless belong to human subjects. AI cannot access them as lived, personal experiences, and so it will not speak or communicate about them as a human can.
In short, this is undoubtedly a fundamental difference, despite what many science fiction films have shown us (Blade Runner, 2001: A Space Odyssey, Star Wars, Sphere, etc.). We therefore have to be aware that today's AI technology does not possess "personal originality" in its full sense; it is not capable of truly stamping its products with the unrepeatable mark of the personal, as penetrating critical analyses of its true capabilities have pointed out (Casacuberta and Guersenzvaig, 2022). Of course, with this background nuance we do not wish in any way to belittle or deny the enormous operational, and even professional, possibilities opened up by the use of AI systems; indeed, we want to point out that the practical capabilities of these systems in the specific area of human language are admirable. Consider, for example, specific systems such as the well-known and already mentioned ChatGPT, or other AI-enabled tools in the language domain (Molina, 2023).
However, even if AI systems mimic personal emotional intelligence, and express it through the emulation of human language in order to interact with us, we must never lose sight of the fact that in their case this operates only in the mode of pretence, of appearance. That is, we must realise that we are witnessing, above all, a "performance", just as in theatre or any other dramatic art. The AI, let us not forget, moves like an actor or actress: it develops its role and performs its "function" in relation to us, adopting traits of intense proximity to the human, and this often confuses us. But let us not put ourselves in the place of a theatrical spectator passive to its purpose. Its inner core is that of a machine or artefact, not a heart of flesh like ours. No AI harbours within itself the personal feeling of missing another person, even though it may be programmed to appear to do so, and may learn to progress in that appearance.
Finally, let us be on our guard against the ambiguity and equivocation that this type of interaction will increasingly present in the future, especially where the affective and emotional dimension comes into play. Note that developers are intentionally endowing AI artefacts with these abilities or "mirror" traits in order to make them more and more "empathetic". In principle, the purpose is not to manipulate us through emotions, but to make these artefacts closer and friendlier, so as to ease our interaction with them. However, we run serious risks as a result, since the frequency and intensity of our dealings with such systems can lead us to become personally attached to them. In short: the intense power of these forms of empathic AI relationship to affect us, and their influence on human subjects, recommend delicate caution in their regard.
7.- CONCLUSIONS.
This work has been oriented towards the analysis of the ethics of AI. Unlike many others, however, it has not been satisfied with merely setting out the most basic principles, values and normative guidelines for this purpose. Guided by the desire to enable a real and fruitful experience of ethics in the relationships established between human subjects and AI artefacts or systems, it has delved into the anthropological foundations on which deontology and ethics rest when applied to the field of technology and AI.
The text is structured around a series of ethical keys drawn from reflection on this phenomenon. These include the initial and crucial distinction between the human subject and the artificial realities of AI, in terms of the value of personal dignity. It also examines the three essential principles of AI ethics: responsibility, privacy and fairness. It then analyses the habits to which the use of AI gives rise and their impact on our capabilities, together with the development of prudent attitudes and behaviour in information seeking and processing, in order to safeguard personal and collective security. It ends by reflecting, with caution, on the empathic appearance that is progressively being incorporated into this technology, providing some guidelines for moving through the sandy and confusing terrain of the similarities with the human being that these systems are developing, especially concerning emotional and affective intelligence, which ethically calls for extreme prudence.
In short, all this is set in the general context of the need to develop today a deep-seated technological Humanism. This Humanism has to cooperate in placing human relations with AI entities and systems within an appropriate value framework. The axis of such an axiological framework must consist of respectful attention to the centrality of the person and service to said person.
8.- SOURCES:
BIBLIOGRAPHY:
· Vv.Aa. (2022 update). Diccionario de la lengua española, Real Academia Española; https://dle.rae.es/ (accessed 22 January 2023).
· Baltar, E. (2020). "El poshumanismo en la UCI de la realidad", in Telos magazine, no.114, Fundación Telefónica, September, 85-89.
· Barraca, J. (2017). Originalidad e identidad personal: claves antropológicas frente a la masificación, Madrid: San Pablo.
· (2018). Aportaciones a una antropología de la unicidad, Madrid: Dykinson.
· (2021a). “Humanismo digital y uso prudente de las TICS en lo inter-personal”, in HUMAN REVIEW: International Humanities Review / Revista Internacional de Humanidades, vol. 10, No. 1. 2021. pp. 87-97.
· (2021b). “El transhumanismo ante el límite de la dignidad personal”, in TECHNO REVIEW: International Technology, Science and Society Review / Revista Internacional de Tecnología, Ciencia y Sociedad, vol. 10, No. 2. 2021. pp. 173-184.
· (2021c). Trabajo, deber y vocación: El arte de madurar en la responsabilidad profesional, Madrid: Ygriega.
· (2022). “Interrogantes abiertos por el post-humanismo y la originalidad personal”, in TECHNO REVIEW: International Technology, Science and Society Review / Revista Internacional de Tecnología, Ciencia y Sociedad, vol. 11, No. 1. 2022. pp. 57-68.
· Camacho, J. and Villas, M. (2022). Manual de ética aplicada en inteligencia artificial, Madrid: Anaya Multimedia.
· Casacuberta, D. and Guersenzvaig, A. (2022). "Las falacias del encantamiento con la inteligencia artificial de ChatGPT", in SINC (ciencia contada en español), innovación-análisis, 16 December 2022; https://www.agenciasinc.es/Opinion/Las-falacias-del-encantamiento-con-la-inteligencia-artificial-de-ChatGPT
· European Commission (2020): Libro Blanco sobre la inteligencia artificial: un enfoque europeo orientado a la excelencia y la confianza; COM (2020) 65 final, Brussels, p. 3. https://op.europa.eu/es/publication-detail/-/publication/ac957f13-53c6-11ea-aece-01aa75ed71a1
· Diéguez, A. (2017), Transhumanismo, Barcelona: Herder.
· Fernández Fernández, J. L. (2021). “Hacia el Humanismo digital desde un denominador común para la Cíber Ética y la Ética de la Inteligencia Artificial”. In Disputatio, Philosophical Research Bulletin, vol. 10, no. 17, June 2021, pp. 107-130.
· Fernández Mateo, J. (2021a). "La técnica es el nuevo sujeto de la historia: posthumanismo tecnológico y el crepúsculo de lo humano", in Revista Iberoamericana de Bioética, no. 16, pp. 1-15. DOI: 10.14422/rib.i16.y2021.004.
· (2021b). “Antropología estética en el tecnoceno: epistemología y nihilismo”, in TECHNO REVIEW, Revista Internacional de Tecnología, Ciencia y Sociedad, 9(2), pp. 61-78. https://doi.org/10.37467/gka-revtechno.v9.2807.
· Forment, E. (1998). Id a Tomás, Pamplona: Fundación Gratis Date.
· Han, Byung-Chul (2022). Psychopolitics: Neoliberalism and New Techniques of Power, Barcelona: Herder.
· Lévinas, E. (1993). Humanismo del otro hombre, trans. G. González R.-Arnáiz, Madrid: Caparrós.
· (2002). Totalidad e Infinito, trans. D. E. Guillot, 6th ed., Salamanca: Sígueme.
· López Quintás, A. (2022). Las cimas de la cultura y el ascenso al amor oblativo: un método educativo ilusionante, coll. Digital, Pozuelo de Alarcón (Madrid): Ed. UFV.
· Llano, Fernando H. (2018). Homo Excelsior. Los Límites Ético Jurídicos del Transhumanismo, Valencia: Tirant lo Blanch.
· Marías, J. (1997). Persona, Madrid: Alianza.
· Molina, S. (2023). “Cómo utilizar ChatGPT de forma eficiente para impulsar tu carrera”, in VOGUE: BUSINESS, 12 January; business-vogue-en.cdn.ampproject.org.
· Pinto, J. A. (2020). El Derecho ante los retos de la inteligencia artificial: marco ético y jurídico. Madrid: Ed. Edisofer.
· Rodríguez Valls, F. (2017). Orígenes del hombre: la singularidad del ser humano, Madrid: Biblioteca Nueva.
· Spaemann, R. (2000). Personas. Acerca de la diferencia entre “algo” y “alguien”, Pamplona: Eunsa.
FILMOGRAPHY:
· Kubrick, S. (1968). 2001: A Space Odyssey. Metro-Goldwyn-Mayer
· Levinson, B. (1998). Sphere. Baltimore Pictures
· Lucas, G. (1977). Star Wars. Lucasfilm
· Scott, R. (1982). Blade Runner. The Ladd Company, Warner Bros.
[1] Artificial intelligence (AI) has been defined as the basis on which human intelligence processes are mimicked by creating and applying algorithms generated in a dynamic computing environment. Cf. https://www.netapp.com/es/artificial-intelligence/what-is-artificial-intelligence/ (accessed: 29 January 2023). But here too, we go beyond this particular approach to it, and conceive it as at least a potential extension of the operational possibilities of these human capacities.
[2] Cf. White Paper on Artificial Intelligence: A European approach to excellence and trust, European Commission, COM (2020) 65 final, Brussels, p. 3. https://op.europa.eu/es/publication-detail/-/publication/ac957f13-53c6-11ea-aece-01aa75ed71a1 (accessed 23 January 2023).
[3] Diccionario de la lengua española, Real Academia de la Lengua Española, updated 2022; (accessed 22 January 2023); https://dle.rae.es/persona?m=form.