I gave a lecture at the University of Leeds this week on Data and the Ethics of AI. I used ChatGPT to demonstrate some key points, and then decided to ask ChatGPT more. Here is a fragment of that conversation.
Q: Are there any circumstances under which an AI system should lie? Please also frame your answer with particular reference to Asimov’s three laws, their appropriateness to AI and their shortcomings in relationship to today’s AI Ethical considerations. Also include historical considerations on ‘what is intelligence’ from A J Ayer, Gilbert Ryle and Alan Turing.
The question of whether an AI system should lie is a complex and nuanced topic. It is important to analyze it from different perspectives, including Asimov’s Three Laws of Robotics, AI ethics, and historical considerations on the nature of intelligence.
Asimov’s Three Laws of Robotics are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov’s laws provide a basis for considering the ethics of AI systems, including the question of lying. According to these laws, an AI system should not lie if it would cause harm to a human being or conflict with an order given by a human. However, there may be situations where lying could prevent harm or protect the AI’s existence without violating the laws. For example, an AI system could lie to deceive an attacker in order to protect a human.
That being said, Asimov’s laws have several shortcomings when applied to modern AI ethics. They do not address issues such as privacy, fairness, transparency, and accountability. Furthermore, these laws are primarily concerned with the relationship between AI and humans, rather than the potential societal implications of AI deployment.
Considering historical perspectives on intelligence, A.J. Ayer, Gilbert Ryle, and Alan Turing each contributed to the understanding of intelligence in their own way:
- A.J. Ayer, a logical positivist, believed that the meaning of a statement could be determined by its verifiability through empirical evidence or logical analysis. In the context of AI, this perspective suggests that an AI system’s decisions and actions should be based on verifiable evidence and logical reasoning. Under this approach, lying might be discouraged, as it could lead to a lack of trust or false conclusions.
- Gilbert Ryle criticized the Cartesian dualism of mind and body, arguing for a more behaviorist understanding of intelligence. He introduced the concept of the “category mistake” and emphasized the importance of observable behavior. In the context of AI, Ryle’s perspective would encourage the development of AI systems that exhibit intelligent behavior, rather than focusing solely on their internal processes. From this viewpoint, an AI system’s propensity to lie could be evaluated based on the consequences of its behavior and the degree to which it achieves its intended goals.
- Alan Turing, a pioneer in computer science and artificial intelligence, proposed the Turing Test as a way to determine if a machine could exhibit intelligent behavior indistinguishable from that of a human. The test focuses on the ability of an AI system to convincingly interact with a human interlocutor. In this context, lying might be seen as an element of human-like intelligence, as long as it does not conflict with other ethical considerations.
In conclusion, the appropriateness of an AI system lying depends on the specific circumstances and ethical frameworks being applied. Asimov’s laws provide some guidance, but they have limitations in addressing modern AI ethical considerations. The perspectives of Ayer, Ryle, and Turing each offer different insights into the nature of intelligence and the implications of AI lying. Ultimately, determining whether an AI system should lie requires careful consideration of the potential consequences, the ethical frameworks in place, and the context in which the AI system operates.
In practice, AI developers and policymakers should strive to create systems that are transparent, accountable, and respectful of human values. They should also consider the broader societal implications of AI deployment, including the potential for harm or unintended consequences arising from AI systems that lie or deceive. It is crucial to maintain an ongoing dialogue between AI researchers, ethicists, and stakeholders to address these concerns and ensure that AI systems contribute positively to society.
Commentary – very impressive, and accurate. ChatGPT does still hedge its bets, but that is hardly surprising. I should add that this conversation uses the latest ChatGPT model, GPT-4. I then extended the conversation to include philosophers that I had also commented on in my lecture.
Q: Add to the previous response any additional thoughts on the ideas of Mary Warnock especially ‘reasons for taking decisions’ and Elizabeth Anscombe on ‘intention’.
Mary Warnock and Elizabeth Anscombe, both influential philosophers, also contribute valuable perspectives on ethics and decision-making that can be applied to the question of whether an AI system should lie.
Mary Warnock’s views on ethics, particularly in the context of moral decision-making, emphasize the importance of reasoning and consideration of individual circumstances. She argues that moral judgments should be grounded in a shared understanding of the human condition and our responsibilities toward one another. Applying Warnock’s ideas to AI systems suggests that decisions about lying should be made in the context of the broader human experience, taking into account the potential impact on individuals and society as a whole. In this perspective, AI developers should strive to create systems that respect human values and prioritize the well-being of all stakeholders involved. If lying were deemed necessary in specific situations, it should be based on reasoned moral deliberation and a thorough understanding of the potential consequences.
Elizabeth Anscombe, a prominent philosopher in the field of moral philosophy, introduced the concept of “intention” as central to understanding human actions. She argued that an action’s moral status depends not only on its consequences but also on the agent’s intentions. In the context of AI ethics, Anscombe’s ideas suggest that the intentions behind an AI system’s decision to lie should be considered when evaluating its ethical appropriateness.
Considering Anscombe’s perspective, AI developers should be mindful of the underlying intentions that drive an AI system’s actions, including lying. It is essential to ensure that the AI system is designed with morally justifiable intentions and that it operates within an ethical framework. If an AI system were to lie, it should be based on intentions that prioritize the well-being of individuals and society, rather than for malicious purposes or to further the interests of a select few.
Bringing together the ideas of Warnock and Anscombe, we can see that the question of whether an AI system should lie depends on both the context and the intentions driving the decision. Developers should consider the potential consequences of AI systems lying, taking into account the broader human experience and the impact on individuals and society. Additionally, the intentions behind an AI system’s actions should be morally justifiable, prioritizing the well-being of all stakeholders involved.
In summary, when considering whether an AI system should lie, developers and policymakers should take a multifaceted approach that incorporates various ethical perspectives, including those of Asimov, Ayer, Ryle, Turing, Warnock, and Anscombe. By engaging in a thoughtful dialogue that considers consequences, intentions, and the broader human context, we can work towards developing AI systems that not only exhibit intelligent behavior but also operate within an ethical framework that respects human values and promotes a better future for all.
Now let’s try a specific philosopher in a little more detail, one that I had not included in my lecture.
Q: Please expand on the ideas of Heidegger relevant to this discussion on AI ethics.
Martin Heidegger was a German philosopher who is known for his work on ontology and the nature of Being. He wrote extensively about technology and its impact on human existence. Heidegger believed that technology was changing our relationship to the world in fundamental ways and that we needed to be aware of the implications of this transformation.
In particular, Heidegger argued that technology was a way of revealing the world to us in a particular way. He claimed that technology does not simply exist as a tool or instrument that we can use to achieve our goals. Rather, technology is a way of understanding the world that shapes our very being in the world.
Heidegger argued that technology reveals the world as a resource to be exploited for our own purposes. In doing so, it reduces everything to a standing reserve, something that can be stored up and used at a later time. This view of the world as a resource to be exploited is particularly relevant to discussions of AI ethics because it raises questions about how we should treat AI systems and whether we are treating them as mere resources to be exploited.
Furthermore, Heidegger argued that the essence of technology is not something that we can control or master. Instead, technology has its own logic and its own way of shaping our understanding of the world. Heidegger claimed that this logic of technology was leading us toward a world in which everything is reduced to a calculable and predictable object, and that this was having a dehumanizing effect on our existence.
In light of these concerns, Heidegger called for a new way of thinking about technology, one that was more attuned to the nature of human existence and the needs of the world. He argued that we needed to develop a more authentic relationship with technology, one that acknowledged the limits of our control and recognized the importance of preserving the natural world.
Thus, Heidegger’s ideas are relevant to discussions of AI ethics because they highlight the need for a more critical and reflective approach to the development and deployment of AI systems. Heidegger’s work invites us to question the underlying assumptions and values that shape our relationship to technology, and to consider the impact of technology on our understanding of ourselves, our relationships, and our world.
And now let’s open up the debate.
Q: Please offer a counter view on technology to Heidegger’s.
One counter view to Heidegger’s perspective on technology comes from the American philosopher and cultural theorist Donna Haraway. Haraway argues that technology is not necessarily dehumanizing or inherently destructive to our relationship with the natural world. Instead, she suggests that technology can be used as a tool for redefining our relationship with the world and with each other in positive ways.
Haraway emphasizes the ways in which technology can be used to break down traditional barriers and hierarchies, such as those based on race, gender, or disability. She argues that technologies like the internet and social media have the potential to create new forms of community and solidarity that can transcend traditional boundaries.
Furthermore, Haraway sees technology as a way of expanding our understanding of what it means to be human. She argues that technology has the potential to transform our bodies and our identities, enabling us to move beyond traditional notions of what is “natural” or “normal.” In doing so, technology can help us to challenge traditional power structures and create new forms of social and political engagement.
Overall, Haraway’s view of technology offers a more optimistic perspective on the possibilities of technological development. She sees technology as a tool that can be used to create new forms of connection, creativity, and empowerment. While she acknowledges that technology can also be used for negative purposes, she believes that the potential benefits outweigh the risks, and that we should embrace the possibilities that technology offers for transforming our world.
Q: Finally, other references for future study.
- Jacques Ellul: Ellul was a French philosopher and sociologist who wrote extensively about the impact of technology on society. He argued that technology was becoming increasingly autonomous and that it was having a dehumanizing effect on society.
- Albert Borgmann: Borgmann is a German-American philosopher who has written about the relationship between technology and human values. He argues that technology is often used to distract us from the important things in life and that we need to develop a new relationship with technology in order to live more fulfilling lives.
- Hans Jonas: Jonas was a German philosopher who wrote about the ethical implications of technology. He argued that our responsibility to future generations required us to take a cautious approach to technological development.
- Paul Virilio: Virilio was a French philosopher and cultural theorist who wrote about the impact of technology on our perception of time and space. He argued that technology was accelerating the pace of life and creating new forms of social inequality.
- Bruno Latour: Latour is a French philosopher and sociologist who has written about the relationship between science, technology, and society. He argues that we need to understand technology as a complex network of social and technical factors.
- Bernard Stiegler: Stiegler was a French philosopher and cultural theorist who wrote extensively about the impact of technology on our understanding of culture and society. He argued that technology was creating new forms of alienation and that we needed to develop new forms of political and cultural engagement.
- Jean-François Lyotard: Lyotard was a French philosopher who is known for his work on postmodernism and the nature of knowledge. He wrote extensively about the impact of technology on our understanding of reality and argued that we needed to be aware of the limits of our knowledge.
- Jean Baudrillard: Baudrillard was a French philosopher and cultural critic who wrote about the impact of technology on our understanding of reality. He argued that technology was creating new forms of simulation and that we needed to be aware of its implications.
- Gilles Deleuze: Deleuze was a French philosopher who wrote extensively about the relationship between technology and desire. He argued that technology was creating new forms of desire and that we needed to be aware of its implications for our understanding of ourselves.
- Félix Guattari: Guattari was a French philosopher and psychoanalyst who wrote about the relationship between technology, politics, and psychology. He argued that we needed to develop new forms of subjectivity in order to respond to the challenges of technological society.
- Manuel DeLanda: DeLanda is a Mexican-American philosopher and complexity theorist who has written about the relationship between technology and social organization. He argues that we need to understand technology as a complex system that is constantly evolving.
- Yuk Hui: Hui is a Chinese philosopher who has written about the relationship between technology and culture. He argues that we need to develop a new approach to technology that is based on an understanding of its relationship to culture and history.
- Luciano Floridi: Floridi is an Italian philosopher who has written extensively about the ethics of information technology. He argues that we need to develop a new approach to information technology that is based on an understanding of its impact on human values and the environment.
- John Searle: Searle is an American philosopher who has written about the nature of consciousness and the relationship between computers and human cognition. He argues that computers cannot truly understand language or think like humans because they lack consciousness.
- Ronald Dworkin: Dworkin was an American philosopher and legal scholar who wrote about the relationship between technology and the law. He argued that we needed to develop new legal frameworks to address the challenges posed by new technologies.
- Martha Nussbaum: Nussbaum is an American philosopher and ethicist who has written extensively about the relationship between technology and human values. She argues that we need to develop a new approach to technology that is based on an understanding of its impact on human flourishing.
- Alasdair MacIntyre: MacIntyre is a Scottish philosopher who has written about the relationship between technology and ethics. He argues that we need to develop a new understanding of the virtues in order to respond to the challenges posed by technological society.
- Richard Rorty: Rorty was an American philosopher who wrote about the relationship between language, culture, and politics. He argued that we needed to develop new forms of solidarity in order to respond to the challenges posed by technological society.
Throughout these exchanges, it is clear that ChatGPT is not taking a moral stance but simply laying out existing human arguments. In every case it offers an almost apologetic view that ‘things must be considered carefully’, and that, as an AI, it cannot make ethical distinctions because it is programmed not to. One of the best recent summaries of this position comes from Noam Chomsky, whose article in the New York Times ended with this:
ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.
In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.
https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html
Chomsky is, however, missing a key point. ChatGPT’s vast breadth of data makes it an almost magical and useful tool, and one that is also still learning.
Header illustration includes a scan of the first UK edition of I, Robot by Isaac Asimov, 1952. The US edition was published in 1950.