🤔 ‘Cogito, ergo sum’ (I think, therefore I am), said by Descartes almost 400 years ago, suddenly seems very relevant when defining what makes us human in relation to AI. As bewildering as anthropomorphised AI assistants may seem, AI doesn’t think. It’s not human. AI generates systematic answers (and questions) based on the wealth of human knowledge from which it has learned. As humans, we think random, unstructured and unpredictable thoughts, and we generate original, creative and new ideas. That’s our strength.
At the end of 2024, I attended a three-week course on AI and Ethics at the London School of Economics and Political Science. What I came away with were not answers, but questions.
✨ Highlights
🗳️ AI and the State. Does democracy have intrinsic value in itself, or is it a means to an end? That was a question we had to ask ourselves. In defining democracy, I found myself describing what I would like it to be, but it is not always like that. “Democracy is freedom. Freedom is like air – you don’t feel it until it’s gone,” one of the participants wisely said. AI can be used to spread targeted misinformation that undermines democracy, or it can be used to build trust and encourage voting. What decisions should AI be able to make on behalf of government decision-makers and citizens? And when decisions are made by AI, what is the trade-off in terms of, for example, data shared, security, privacy and the level of transparency?
🏢 AI and business. Unfortunately, it’s never a zero-sum game; there are trade-offs, always winners and losers. The question is how to balance justice and fairness with the distribution of resources, opportunities and power. Difficult choices must always be made between more equality, which sometimes means less efficiency, and more efficiency, which often means leaving some people behind.
As AI changes the way we work and takes over some of our jobs, from hiring practices to the nature of the work itself, we must ask: who produces the errors and biases, human judgement or AI algorithms trained on certain (uncredited) data? At the same time, who is best placed to correct and challenge these biases, AI or humans? And perhaps the real question is how best to collaborate to avoid the pitfalls and achieve the best outcome.
🫂 AI and society. Most of us can probably agree that AI systems should be aligned with human values. But what moral values should guide algorithmic decision making? Well, there is certainly disagreement. In the best of (my) worlds, all actors would feel responsible for upholding the values of an inclusive and non-discriminatory environment and society. We know that this is not the case. However, in countries where equality laws and regulations exist, some values can no longer be waved aside as mere opinions in the name of free speech.
As we’ve seen, algorithms can discriminate in many ways, but as the course reminded us, “the output of an algorithm developed through machine learning is only as good as the data on which the algorithm is trained”.
To address this we must ask: whose values do we align AI with? Should we settle for a minimalist approach, a consensus around the values a majority agrees on (ignoring the rest), or a maximalist approach? In the EU, the 27 member states have managed to agree that anyone operating within EU territory is subject to its laws, such as GDPR and the AI Act. Some would say this is an ambitious approach, while others would have liked to see more.
We started the course reflecting on the intrinsic value of democracy, and I ended it wondering whether AI will end up having intrinsic value in itself. If it concludes that exterminating humans is the answer to problems we define, such as solving the climate crisis, then it may end up having a value far beyond humanity…
👍🏽 Overall, I recommend the course. It was really nice to have the opportunity to interact with fellow students who were equally curious about the subject and came from remarkably different professional backgrounds. One reflection is that much of the analysis of AI written before the release of ChatGPT in November 2022 already feels rather dated, compared to the literature in other fields. However, the live session with Dr Cat Wade was excellent, thanks to her fresh knowledge and her delightful, engaging teaching!
📖 After finishing the course, I picked up the ebook ‘AI Snake Oil’ (2024) by Arvind Narayanan and Sayash Kapoor, which furthered my critical thinking.
✨ Highlights
💭 There is no single definition of AI. It is a set of different technologies used for a variety of purposes. Technology that was once considered difficult, such as spellcheck, is now considered mundane. This should remind us that what is difficult to achieve today will one day become commonplace. Before asking whether a new technology should be discontinued because it is prone to error, one must first find out whether the error is really due to the technology or to human failure. (Chapter 1)
💭 Generative AI is trained on masses of data whose human creators and workers aren’t compensated. Artists aren’t credited, and the workers (outsourced by Big Tech) who annotate the data to label it correctly are poorly paid. (Chapter 1) What is needed are fair working conditions for the creators as well as for the annotators who train the chatbots for accuracy and non-harmful content. Training on images and text without the consent and credit of artists and creators can’t be considered ‘fair use’ in the US, where copyright law was last revised in 1978, argue Narayanan and Kapoor. And if artists and creators are replaced by AI, no (human) works will be created. So what new data will the next generation of models be trained on? (Chapter 4) However, the solutions are not limited to AI; labour exploitation and weak protection of workers do not begin or end with technology. (Chapter 8)
💭 AI systems generate predictions, but humans are unpredictable. AI may perform well on a benchmark, but real-world utility requires different skills. For example, AI makes recommendations based on what we’ve watched on Netflix, but that doesn’t always correlate with what we like. (In my case, I watch bad reality TV while doing housework, which says nothing about what content I actually prefer.) AI simply reflects its training data, so decisions based on such models will evidently be wrong for people who are not represented in that data. (Chapter 2) Social datasets about people are noisy and need a lot of context. Past data is not enough, because humans can change their behaviour unpredictably through serendipity – poor school performance can turn around thanks to a helpful neighbour, for example. ‘All successful people are also lottery winners to some extent,’ Narayanan and Kapoor write. In other words, there is always an element of luck that is difficult for AI to predict. (Chapter 3)
💭 ‘AI is more like Microsoft Excel than the Terminator’, Narayanan and Kapoor reassure us. There are many misconceptions and doomsday predictions about dangerous AI robots taking over the world, but while AI systems excel at predicting the past, they know nothing about the future. Real (human) intellectual progress happens when existing consensus is overturned by new knowledge. If we don’t understand AI, it’s because companies have closed it off from scrutiny. In fact, describing AI as unknowable reduces our agency. Long-term concerns about AI and its impact on humanity draw focus and resources away from immediate needs, such as ensuring that big tech companies compensate artists and creators. (Chapter 7)
😊 Narayanan and Kapoor’s conclusions are indeed optimistic. They are hopeful that certain human works will be valued more highly because of our investment and capacity for nuance – for example, while most books may be read aloud by AI, those read by a thoughtful human voice accustomed to the genre of the work will be worth more. Past fears, for example that online courses would cancel out face-to-face university enrolments, have been proven wrong. (Bonus track) Based on my own recent experience with the online course on AI and Ethics, it was the live session with the professor and other participants that I appreciated the most.
What are some of your thoughts on this issue?
Annica