Kate Raworth and the virtues of justice, perspective and empathy

Inspired by Shannon Vallor’s book “Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting”, in which she discusses a range of technomoral virtues that we would need to cultivate in order to flourish as people (2016, pp. 118-155), I am writing a series of portraits of exemplars–people who embody these virtues.

[Image: Kate Raworth. From: https://www.greenbelt.org.uk/artists/kate-raworth/]

Kate Raworth embodies the virtue of justice. She calls for a new paradigm in economics: “to meet the needs of all, within the boundaries of our living planet”.

She calls herself a renegade economist focused on exploring the economic mindset needed to address the 21st century’s social and ecological challenges. She was educated as an economist and became increasingly critical of the dominant economic paradigm of growth. Her career has taken her from working with micro-entrepreneurs in the villages of Zanzibar to co-authoring the Human Development Report for UNDP in New York, followed by a decade as Senior Researcher at Oxfam.

She is the creator of Doughnut Economics:

[Image: the Doughnut of social and planetary boundaries]

The shape of the doughnut–two concentric circles–visualizes the area where we would need to be in order “to ensure that no one falls short on life’s essentials (from food and housing to healthcare and political voice), while ensuring that collectively we do not overshoot our pressure on Earth’s life-supporting systems, on which we fundamentally depend – such as a stable climate, fertile soils, and a protective ozone layer”.
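
For readers who like to think in code, here is a minimal sketch of the doughnut’s logic–my own illustration, with made-up indicators and numbers, not Raworth’s actual model: an economy is ‘inside the doughnut’ when no social indicator falls short of its foundation and no ecological indicator overshoots its ceiling.

```python
# Toy illustration of the doughnut's two boundaries (indicators and
# numbers are made up; Raworth's model has 12 social dimensions and
# 9 planetary boundaries): inside = no shortfall and no overshoot.
SOCIAL_FOUNDATION = {"food": 1.0, "housing": 1.0, "political_voice": 1.0}
ECOLOGICAL_CEILING = {"climate": 1.0, "fertile_soil": 1.0, "ozone": 1.0}

def inside_doughnut(social, ecological):
    shortfalls = [k for k, v in social.items() if v < SOCIAL_FOUNDATION[k]]
    overshoots = [k for k, v in ecological.items() if v > ECOLOGICAL_CEILING[k]]
    return not shortfalls and not overshoots

print(inside_doughnut(
    social={"food": 1.2, "housing": 0.8, "political_voice": 1.1},   # housing shortfall
    ecological={"climate": 1.4, "fertile_soil": 0.9, "ozone": 0.7},  # climate overshoot
))  # False: this economy is outside the doughnut on both counts
```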

There is a parallel between Kate Raworth’s work and Shannon Vallor’s discussion of the technomoral virtue of justice. Vallor defines this virtue as a “disposition to seek a fair and equitable distribution of the benefits and risks of emerging technologies” and a “concern for how emerging technologies impact the basic rights, dignity, or welfare of individuals and groups” (2016, p. 128). Indeed, Raworth advocates seeking a just distribution of benefits and risks–mainly in relation to economic processes, and not specifically or explicitly regarding emerging technologies (as far as I am aware; although I did find a series of five design workshops in 2018, by Dutch media lab Waag Society, based on her work). It seems obvious to me, however, that care for people and for our planet is a necessary precondition for any further deliberation about developing and using technologies.

Moreover, Kate Raworth champions the virtues of perspective and empathy.

Perspective, because she wants to (literally) change our perspective on economics. By default, we currently have in our minds a picture of a curve going up (see screenshot below). She wants us to look at the world and at economics in a radically different way.

[Image: a curve going up. From: https://www.thersa.org/discover/videos/rsa-shorts/2014/03/Kate-Raworth-on-Growth]

That is why she drew the Doughnut shape. That is why she collaborated with stop-motion animators to make these ideas visual in short and attractive animations. She understands the power of visuals in shaping people’s conceptions. Vallor defines the technomoral virtue of perspective as “a reliable disposition to attend to, discern and understand moral phenomena as meaningful parts of a moral whole” (2016, p. 149). Indeed, Raworth invites us to look at the world holistically and through a moral pair of glasses, so that we can see the relationships between people, planet and profit.

And from her commitment to justice follows her championing of the virtue of empathy. Raworth urges us to empathize with other people, including those on the other side of the globe, and to understand how our lives and economic behaviours affect theirs. Moreover, she calls for action, to change our behaviours. This concurs with Vallor’s definition of the technomoral virtue of empathy: a “cultivated openness to being morally moved to caring action by the emotions of other members of our technosocial world” (2016, p. 133).


Possibly, you find that Kate Raworth embodies other virtues as well. Or you may have other ideas about the virtues discussed above. Please post them below or contact me at: marc.steen-at-tno.nl


Tristan Harris and the virtues of self-control, civility and humility

Inspired by Shannon Vallor’s book “Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting”, in which she discusses a range of technomoral virtues that we would need to cultivate in order to flourish as people (2016, pp. 118-155), I am writing a series of portraits of exemplars–people who embody these virtues.

[Image: Tristan Harris. From: http://www.tristanharris.com/]

Tristan Harris embodies the technomoral virtue of self-control.

His mission is to make us aware of the ways in which we use technologies–most notably our mobile phones and social networking services–and the ways in which we become increasingly addicted to them. He explains that these technologies are a by-product of the business models of companies like Facebook and Google–they want to grab people’s attention and sell it to advertisers–and of the algorithms these companies use, which provide exactly the content that will pull you in and keep your eyes glued to the screen, with your index finger or thumb ready to make the ‘refresh’ swipe every couple of minutes.

Harris graduated from Stanford University with a degree in Computer Science, focusing on Human Computer Interaction, behavioral economics, social psychology, behavior change and habit formation in Professor BJ Fogg’s Stanford Persuasive Technology Lab. He was CEO of Apture, which Google acquired in 2011, then worked at Google as Design Ethicist and left in 2016 to found the non-profit initiative “Time Well Spent”. In 2018 he founded the Center for Humane Technology.

The center advocates “four levers to redefine our future: Inspire Humane Design; Apply Political Pressure; Create a Cultural Awakening; and Engage Employees”. Moreover, they provide practical suggestions to take control of your phone: “Try these simple changes to live more intentionally with your devices right now”.

Harris wants us to cultivate self-control, a virtue which Shannon Vallor defines as an “ability in technomoral contexts to choose, and ideally to desire for their own sakes, those goods and experiences that most contribute to contemporary and future human flourishing” (2016, p. 124). If we cultivate self-control, we can free ourselves from our addiction to technology and use it in ways that support human flourishing. Self-control is not a disposition against technology, but a disposition to use technology consciously and productively.

Harris also champions the technomoral virtues of civility and humility.

Civility, because he warns us that the cultivation of self-control underlies all our social interactions and the fabric of society. When we are all glued to our screens, meekly following the algorithms’ recommendations, we are unable to have conversations–conversations with others and with ourselves, about ‘the good life’, about how we want to organize our societies and live our daily lives. Self-control is thus a key condition for cultivating the virtue of civility–for deliberation and collective action.

Vallor defines civility as “a sincere disposition to live well with one’s fellow citizens of a globally networked information society: to collectively and wisely deliberate about matters of local, national, and global policy and political action; to communicate, entertain, and defend our distinct conceptions of the good life; and to work cooperatively toward those goods of technosocial life that we seek and expect to share with others” (2016, p. 141).

And humility, because he stresses that technology by itself is not necessarily evil, but that we need to focus on the ends we want to realize–and then use our technologies as means to realize those ends. He warns us not to put our faith in technology as such, but to free ourselves from our addiction to it, so that we are free to choose technologies in ways that support human flourishing. We need to let go of our blind faith in technology and treat it as a means, not as an end.

Vallor defines humility as a “recognition of the real limits of our technosocial knowledge and ability; … and renunciation of the blind faith that new technologies inevitably lead to human mastery and control of our environment” (2016, pp. 126-7).


Possibly, you find that Tristan Harris embodies other virtues as well. Or you may have other ideas about the virtues discussed above. Please post them below or email me at: marc.steen-at-tno.nl

Exemplars of technomoral virtues

Inspired by Shannon Vallor’s book “Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting”, I am writing a series of short portraits of people who can be viewed as exemplars of the technomoral virtues that she discusses (2016, pp. 118-155):

  • Honesty: Respecting Truth: Cathy O’Neil, Luciano Floridi
    “an exemplary respect for truth, along with the practical expertise to express that respect appropriately in technomoral contexts” (p. 122).
  • Self-control: Becoming the Author of Our Desires: Tristan Harris, Aimee van Wynsberghe
    “an exemplary ability in technomoral contexts to choose, and ideally to desire for their own sakes, those goods and experiences that most contribute to contemporary and future human flourishing” (p. 124).
  • Humility: Knowing What We Do Not Know: Tristan Harris, Cathy O’Neil, Yuval Noah Harari
    a “recognition of the real limits of our technosocial knowledge and ability; … and renunciation of the blind faith that new technologies inevitably lead to human mastery and control of our environment” (pp. 126-7).
  • Justice: Upholding Rightness: Kate Raworth, Jaron Lanier, Cathy O’Neil, Luciano Floridi, Aimee van Wynsberghe, Edward Snowden
    a “reliable disposition to seek a fair and equitable distribution of the benefits and risks of emerging technologies” and a “characteristic concern for how emerging technologies impact the basic rights, dignity, or welfare of individuals and groups” (p. 128).
  • Courage: Intelligent Fear and Hope: Cathy O’Neil, Sherry Turkle, Edward Snowden
    “a reliable disposition toward intelligent fear and hope with respect to moral and material dangers and opportunities presented by emerging technologies” (p. 131).
  • Empathy: Compassionate Concern for Others: Kate Raworth, Yuval Noah Harari, Sherry Turkle
    a “cultivated openness to being morally moved to caring action by the emotions of other members of our technosocial world” (p. 133).
  • Care: Loving Service to Others: Sherry Turkle, Aimee van Wynsberghe
    a “skillful, attentive, responsible, and emotionally responsive disposition to personally meet the needs of those with whom we share our technosocial environment” (p. 138).
  • Civility: Making Common Cause: Tristan Harris, Sherry Turkle, Edward Snowden
    a “sincere disposition to live well with one’s fellow citizens of a globally networked information society: to collectively and wisely deliberate about matters of local, national, and global policy and political action; to communicate, entertain, and defend our distinct conceptions of the good life; and to work cooperatively toward those goods of technosocial life that we seek and expect to share with others” (p. 141).
  • Flexibility: Skillful Adaptation to Change: Jaron Lanier, Luciano Floridi
    a “reliable and skillful disposition to modulate action, belief, and feeling as called for by novel, unpredictable, frustrating, or unstable technosocial conditions” (p. 145).
  • Perspective: Holding on to the Moral Whole: Kate Raworth, Jaron Lanier, Yuval Noah Harari, Luciano Floridi
    “a reliable disposition to attend to, discern and understand moral phenomena as meaningful parts of a moral whole” (p. 149).
  • Magnanimity: Moral Leadership and Nobility of Spirit: Edward Snowden

My goal with these portraits is to inspire researchers, engineers, developers and designers to cultivate these virtues, in themselves and in their work.

The people who develop technologies need to cultivate (some of) these virtues, in order to deliver technologies that indeed support others (‘users’) in cultivating the very same virtues. If you are working on an algorithm that can impact people’s lives in terms of justice, e.g., in law enforcement, where discrimination, fairness and equality are at stake, then you will need to cultivate the virtue of justice. Similarly for the other virtues.

One can cultivate virtues in two ways:

  • By carefully watching and learning from ‘exemplars’, people who embody, exemplify or champion specific virtues (see the list above);
  • And by trying out these virtues in one’s own life and projects; the aim is to align one’s thoughts, feelings and actions.

Here are some suggestions for cultivating these virtues:

  • Reflect on your current work as a researcher, engineer, developer or designer; select one project in which you are developing a technology, product or service.
  • Use your moral imagination to envision this technology’s impact in society and identify which one or two virtues are at stake, e.g., self-control (does the service aim to make people ‘addicted’, a.k.a. ‘engagement’?), justice (can the service have unfair or discriminatory effects?), civility (does the service enable people to ‘troll’ others or create ‘filter bubbles’?), et cetera.
  • Pick one or two exemplars from the list: people who embody, exemplify or champion the virtues that you want to know more about. Read their portraits and, if you have time, watch their TED Talks, read their books, listen to their podcasts, etc.
  • Next time, in your project, try out the virtue(s) you are cultivating: speak up and defend the self-control of ‘users’, make a case for justice in the data or algorithms you are using, or build in features that facilitate civility in communication.

The cultivation of these virtues is not a nice-to-have add-on. It is imperative that we take responsibility and act responsibly:

“The challenge we face today is not a moral dilemma; it is rather a moral imperative, long overdue in recognition, to collectively cultivate the technomoral virtues needed to confront [diverse and urgent] emerging technosocial challenges wisely and well.” (Shannon Vallor, 2016, p. 244).


Here’s a visualization of the text above. Please appreciate that the relationship between development and usage is not linear (more like curves going both ways) and that the terms ‘virtues’ and ‘values’ are used loosely–some people find it easier to talk about values (rather than virtues): things that they value, find important, want to defend and grow.

[Image: values lemniscate, connecting development and usage]

Q: What is the relationship between engineers’ inner lives and their projects’ effects in society?

A: This question has fascinated me for years. I trained at Delft University of Technology and worked at Philips and KPN Research before joining TNO. All this time, I have been intrigued by the relationship between engineers’ inner lives and their motives on the one hand, and the projects they work on and these projects’ effects on society on the other hand.

Regarding engineers’ inner lives, we can assume that engineers have positive motivations; they want to make the world a better place. They believe that something in the world can be improved, and they want to play an active role in that (see, e.g., Deus et Machina, in which I contributed a chapter on the beliefs of engineers, with Louis Neven and Ton Meijknecht). One notable, and very sad, exception is terrorists; a relatively large share of them are trained engineers—they have both the motivation to bring about change, or rather disruption, and the skills to deploy technology for their sinister ends.

Regarding their projects’ effects in society, we see a mixed picture. Obviously, engineers have contributed to technologies that we value as good, such as clean drinking water, warm housing and safe health care. Conversely, some technologies are (partly) evil, such as nuclear weapons (or does their threat prevent conventional warfare?) and plastic bottles that pollute the oceans (or is it people’s tendencies to litter and lousy government policies that make these bottles end up in the oceans?). A mixed picture, indeed, with many other factors at play–obviously not only the engineers.

Let’s take a practical example: Claire is trained as an engineer and is involved in developing an algorithm for the police. The algorithm’s objective is to help the police deploy their officers more effectively and efficiently to prevent home burglaries. It takes historical data on burglaries and combines these with other data, e.g., on the weather, and gives ‘predictions’ of where and when home burglaries are most likely to happen in the future. The police can then send their officers ‘at the right time, to the right place’, to prevent burglaries (‘Predictive Policing’). See below for a short video:

[Video: short introduction to predictive policing]
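
As an aside for readers who code: here is a minimal, purely illustrative sketch of the kind of scoring such a system might perform. The data format, the function and the weather factor are my own assumptions for the example, not the actual system:

```python
# Toy risk-scoring sketch (illustrative only, not a real predictive-
# policing system): count historical burglaries per grid cell and
# scale by a contextual factor such as the weather.
from collections import Counter

def predict_risk(burglary_reports, weather_factor):
    """Score each grid cell by its historical burglary count,
    scaled by a contextual factor (e.g., burglary-friendly weather)."""
    counts = Counter(report["cell"] for report in burglary_reports)
    return {cell: count * weather_factor for cell, count in counts.items()}

reports = [
    {"cell": "A1", "source": "police"},   # found via police surveillance
    {"cell": "A1", "source": "police"},
    {"cell": "B2", "source": "citizen"},  # reported by a citizen
]
print(predict_risk(reports, weather_factor=1.2))
# {'A1': 2.4, 'B2': 1.2} -> most patrols go to cell A1
```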

Claire enjoys working on the algorithm. However, she also wonders whether the collection of data might be biased. There may be neighbourhoods where people don’t report crimes, e.g., because they do not trust the police; these crimes then never appear in the police’s data. Or there may be neighbourhoods where the police already do lots of surveillance, e.g., in poor neighbourhoods, which results in more data, which results in more ‘predictions’ and more surveillance, resulting in more data, and so on. Claire sees the risk of the algorithm perpetuating the current state of affairs, including unfairness and injustice, such as discrimination.

Claire has thoughts and feelings about promoting fairness and justice, and she expresses these in project meetings. This fuels discussions in the project team and leads to modifications of the algorithm; measures against bias are added, e.g., adding ‘noise’ to the algorithm so that police officers are also sent to areas they would normally not go to, and giving less weight to predictions that are based on police activities (and relatively more weight to reports by citizens).
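
Continuing the illustrative sketch from above, Claire’s measures might look roughly like this; the police weight and the noise term are again made-up assumptions, not the team’s actual fix:

```python
import random

def predict_risk_debiased(burglary_reports, weather_factor,
                          police_weight=0.5, noise=0.3):
    """Like predict_risk, but reports stemming from police surveillance
    count less than citizen reports, and a small random 'noise' term
    occasionally nudges patrols out of the usual neighbourhoods."""
    scores = {}
    for report in burglary_reports:
        weight = police_weight if report["source"] == "police" else 1.0
        scores[report["cell"]] = scores.get(report["cell"], 0.0) + weight
    return {cell: (score + random.uniform(0.0, noise)) * weather_factor
            for cell, score in scores.items()}
```

With the example reports above, cell A1 (fed mainly by police surveillance) no longer outscores cell B2 by default, and the noise term now and then sends patrols to cells the data would otherwise ignore.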

In this example, the relationship between the engineer’s inner life and the outcome of the project she works on was relatively straightforward. In real life, however, this relationship is often more complex. There are many factors that go into a project and affect its outcomes, such as financial constraints, legacy systems, the tendency to focus on means rather than on ends, the customers’ and users’ behaviours, etc.

Q: How does ethics ‘work’?

A: There are many ways to ‘do ethics’. I approach ethics in a pragmatist manner; I use ethics as a toolbox, a toolbox to ask questions and to develop answers.

[Image: a toolbox]

Let me give an example of how ethics can ‘work’ in your research or innovation project.

I would ask questions about the project’s overall goals, e.g.: What is the impact that you wish to make in the world?

Such a question is meant to counter the tendency to focus on technology. Yes, the development of technology is often a key part in a project. But the project’s overall goal is not to develop technology. The project’s overall goal is to have an impact in the world, e.g., to give people tools which they can use to develop more healthy habits, to empower people so they can co-create and experience safety in their daily lives—or, put in general terms: to enable people to flourish; to live meaningful and fulfilling lives. Technology is a means—not an end in itself.

Such a question will often trigger an interesting discussion about the role of technology in society and about social responsibility—of your organization and of your own role in the project. Moreover, it will often trigger a very useful discussion about the partners that would be needed if we want to create this or that impact in society, about the creation of an innovation eco-system, and about the type of output that the project will need to deliver so that these partners can indeed use this output in their processes and create positive impact in the world.

I make ethics ‘work’ by facilitating a discussion on the impact a project is trying to make in society. For me, ‘ethical issues’ and ‘societal issues’ are often the same.

Please note that, in these discussions, I will not express any value judgements. I’m not your judge. It is your project. I can only try to serve you in your cultivation of your moral sensitivity and capabilities.

Next time, I will present the ‘Societal and Ethical Impact Canvas’, which we are currently developing in the JERRI project.

Q: Ethics … is that a science?

A: That depends on what you mean by ‘science’. If you mean ‘a field of knowledge’, then yes: ethics is a field of knowledge—arguably one of the oldest. But if you mean ‘a natural science’, then no: ethics is not a natural science. For a more elaborate answer, let me discuss three major branches on the tree of knowledge.

[Image: the tree of knowledge]

There are the natural sciences (‘beta’ in Dutch), which study the natural world, such as physics, chemistry, biology, life sciences and earth sciences—and often mathematics, informatics and engineering are included, as fields of knowledge to model the world and intervene in it. Furthermore, there are the social sciences (‘gamma’ in Dutch), which study people and social phenomena, such as psychology, sociology and economics, and business and management studies.

Moreover, there are the humanities (‘alpha’ in Dutch), which study the products of people and cultures, such as history, literature, media studies and philosophy. Finally, we can break down philosophy into several branches, one of which is ethics: the area of knowledge that aims to support people in articulating and dealing with questions like ‘what is the right thing to do?’

Maybe you know all this already. You know there are different fields of knowledge, each with its specific methods and ways of working. Maybe your question—whether ethics is a science—implied another question:

If ethics is a ‘science’, then why is it so different from what I am used to in physics, in computer science, in engineering? I am used to measuring stuff that can be measured, drawing models with blocks and arrows, making calculations, building experiments and trying out whether things work—whether they work as predicted and practically.

So, how does ethics ‘work’?

That will be the topic of next week’s post.

Q: Why would I care about ethics?

A: For me, ethics is about asking questions; questions like: ‘what does a just society look like?’ or ‘what is the right thing to do, in this particular situation?’. For me, ethics is surely not about lecturing other people or telling people what to do or not to do. So, why would you care about ethics?

You work as an engineer, right? Or do you work as a researcher or developer or designer in innovation projects? Anyway, you aim to create things. You are trying to have an impact in the world. So, the way that I see it … you are already ‘doing ethics’. You look at the world and feel that something is not quite right. You have ideas about what is right or wrong. You want to change things for the better. You want to play a part in that. You perceive the world, you evaluate, you build, you tinker, you try out.

You don’t need a degree in philosophy to ‘do ethics’. As soon as you move around in the world—let alone tinker with it, build stuff, get it out there, in order to change things—you ‘do ethics’. The thing is… you are probably doing this rather unconsciously, implicitly, and maybe not always very systematically.

Are you in for some exploration? Do you want to upgrade your ethical skills? Do you want to improve your moral capabilities?

It is increasingly expected of us—as researchers, developers, engineers, designers—that we take into account the diverse societal and ethical issues that are associated with the projects we work on. Noblesse oblige: we are required to engage with society and to behave ethically.

So, when you work on artificial intelligence, self-driving cars, the Internet of Things, social networking services, or mobile apps—on anything that may have a huge impact on society: I do invite you to come back next week for a new blog.

Or better: to start asking questions.