Q: What is the relationship between engineers’ inner lives and their projects’ effects on society?

A: This question has fascinated me for years. I trained at Delft University of Technology and worked at Philips and KPN Research before joining TNO. Throughout that time, I have been intrigued by the relationship between engineers’ inner lives and motives on the one hand, and the projects they work on and those projects’ effects on society on the other hand.

Regarding engineers’ inner lives, we can assume that engineers have positive motivations; they want to make the world a better place. They believe that something in the world can be improved, and they want to play an active role in that (see, e.g., Deus et Machina, in which I contributed a chapter on the beliefs of engineers, with Louis Neven and Ton Meijknecht). One notable, and very sad, exception is terrorists: a relatively large share of them are trained engineers; they have both the motivation to bring about change, or rather disruption, and the skills to deploy technology for their sinister ends.

Regarding their projects’ effects in society, we see a mixed picture. Obviously, engineers have contributed to technologies that we value as good, such as clean drinking water, warm housing and safe health care. Conversely, some technologies are (partly) evil, such as nuclear weapons (or does their threat prevent conventional warfare?) and plastic bottles that pollute the oceans (or is it people’s tendencies to litter and lousy government policies that make these bottles end up in the oceans?). A mixed picture indeed, with many other factors at play, and obviously not only the engineers.

Let’s take a practical example: Claire is trained as an engineer and is involved in developing an algorithm for the police. The algorithm’s objective is to help the police deploy their officers more effectively and efficiently to prevent home burglaries. It takes historical data on burglaries and combines these with other data, e.g., on the weather, and gives ‘predictions’ of where and when home burglaries are most likely to happen in the future. The police can then send their officers ‘at the right time, to the right place’ to prevent burglaries (‘Predictive Policing’). See below for a short video:


Claire enjoys working on the algorithm. However, she also wonders whether the collection of data might be biased. There may be neighbourhoods where people don’t report crimes, e.g., because they do not trust the police, so these crimes never appear in the police records. Or there may be neighbourhoods, e.g., poor neighbourhoods, where the police already do a lot of surveillance, which results in more data, which results in more ‘predictions’ and more surveillance, which results in still more data, and so on. Claire sees the risk of the algorithm perpetuating the current state of affairs, including unfairness and injustice, such as discrimination.
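
To make this feedback loop concrete, here is a minimal sketch in Python (my own illustration, not anything from the actual project; all names and numbers are hypothetical): two neighbourhoods with the same true burglary rate, where next week’s patrols follow the recorded figures, so the more-patrolled neighbourhood keeps generating more records.

    # Hypothetical sketch of the feedback loop Claire worries about: more patrols
    # produce more recorded incidents, which in turn attract more patrols.
    import random

    random.seed(42)

    # Two neighbourhoods with the SAME true number of burglaries per week.
    true_rate = {"north": 3, "south": 3}

    # The police happen to start with more patrols in "south".
    patrols = {"north": 1, "south": 4}
    recorded = {"north": 0, "south": 0}

    for week in range(52):
        for area in true_rate:
            # Each patrol only observes a fraction of what actually happens.
            p_observe = min(1.0, 0.2 * patrols[area])
            recorded[area] += sum(1 for _ in range(true_rate[area])
                                  if random.random() < p_observe)
        # Next week's patrols follow the recorded figures, closing the loop.
        total = sum(recorded.values()) or 1
        patrols = {area: 1 + round(4 * recorded[area] / total) for area in recorded}

    # "south" ends up with far more recorded burglaries, despite equal true rates.
    print(recorded)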

Claire has thoughts and feelings about promoting fairness and justice, and she expresses these in project meetings. This fuels discussions in the project team and leads to modifications of the algorithm; measures against bias are added, e.g., adding ‘noise’ to the algorithm’s predictions, sending police officers also to areas they would normally not visit, and giving less weight to predictions that are based on police activities (and relatively more weight to reports by citizens).
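
As an illustration of what such measures could look like in code, here is a minimal, hypothetical sketch in Python (not the project’s actual algorithm; all weights, names and numbers are assumptions): citizen reports are weighted more heavily than police-initiated records, a bit of random ‘noise’ is added to the scores, and a share of the patrols is reserved for areas the model would otherwise never select.

    # Hypothetical sketch of the bias-mitigation measures discussed in the text.
    import random

    def prediction_score(citizen_reports, police_initiated,
                         w_citizen=1.0, w_police=0.4, noise_scale=0.5):
        """Score an area for patrol priority, trusting citizen reports more than
        records that only exist because the police were already there."""
        score = w_citizen * citizen_reports + w_police * police_initiated
        return score + random.uniform(-noise_scale, noise_scale)  # 'noise' against lock-in

    def plan_patrols(areas, n_patrols, exploration=0.2):
        """Send most patrols to the highest-scoring areas, but reserve a share
        for randomly chosen areas the model would normally not select."""
        ranked = sorted(areas, key=lambda a: prediction_score(**areas[a]), reverse=True)
        n_explore = max(1, int(exploration * n_patrols))
        targeted = ranked[:n_patrols - n_explore]
        explored = random.sample([a for a in ranked if a not in targeted], n_explore)
        return targeted + explored

    areas = {
        "north": {"citizen_reports": 2, "police_initiated": 1},
        "south": {"citizen_reports": 3, "police_initiated": 9},
        "east": {"citizen_reports": 4, "police_initiated": 2},
        "west": {"citizen_reports": 1, "police_initiated": 0},
    }
    print(plan_patrols(areas, n_patrols=3))

The point of the exploration share and the noise is that the system keeps collecting data outside the areas it already ‘knows’, which is exactly what counters the feedback loop sketched above.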

In this example, the relationship between the engineer’s inner life and the output of the project she works on was relatively straightforward. In real life, however, this relationship is often more complex. Many factors go into a project and affect its outcomes: financial constraints, legacy systems, the tendency to focus on means rather than on ends, the customers’ and users’ behaviours, and so on.

In the next blog, I will present the concept of ‘script’, as a way to better understand this relationship.


Q: How can I use the ‘Societal and Ethical Impact Canvas’?

A: The ‘Societal and Ethical Impact Canvas’ is used in much the same way as, e.g., the Business Model Generation Canvas: to facilitate a discussion and generate practical results.

The Canvas was developed in the JERRI project; it is meant to support business development and project management, and it consists of four steps (see the figure below):


Step 1: Impact in society: Clarify the project’s ultimate goal to create positive impact in society, e.g., to promote safety, health, cohesion, justice, wellbeing, etc. in specific group(s). You can also explore unwanted or negative effects of the project, and develop measures to minimize these.

Step 2: Outputs/outcomes of the project: Define which outputs (= results) or outcomes (= effects of these results) are needed to realize the desired impact in society, e.g., which interventions, activities, products or services the project will aim to deliver.

Step 3: Create an innovation eco-system: Identify key clients, key partners, and relevant stakeholders, and discuss how each might contribute to the project, and how each aims to benefit from it.

Step 4: Project mission: Articulate the project’s mission by reasoning ‘outside-in’: “The project brings together <clients / partners>, so they can create <outputs / outcomes>, that enables <specific people> <to flourish>”. This mission can help to steer project scoping and day-to-day management.
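
To show how the four steps hang together, here is a minimal sketch in Python (my own illustration, not a JERRI deliverable; all field values are hypothetical) that represents a filled-in Canvas as a small data structure and derives the ‘outside-in’ mission statement of Step 4 from the first three steps, borrowing from the predictive-policing example discussed elsewhere on this blog.

    # Hypothetical illustration of the four Canvas steps as a small data structure.
    from dataclasses import dataclass, field

    @dataclass
    class ImpactCanvas:
        impact_in_society: str         # Step 1: the ultimate positive impact (and risks to minimize)
        outputs_outcomes: list[str]    # Step 2: results and effects needed to realize that impact
        ecosystem: dict[str, str]      # Step 3: clients, partners, stakeholders and their contributions
        mission: str = field(init=False)  # Step 4: derived 'outside-in' from Steps 1-3

        def __post_init__(self):
            partners = ", ".join(self.ecosystem)
            outputs = "; ".join(self.outputs_outcomes)
            self.mission = (f"The project brings together {partners}, "
                            f"so they can create {outputs}, "
                            f"that enables {self.impact_in_society}")

    canvas = ImpactCanvas(
        impact_in_society="residents of burglary-prone neighbourhoods to feel and be safer",
        outputs_outcomes=["a bias-aware burglary-prediction service",
                          "a revised patrol-planning practice"],
        ecosystem={
            "the police force": "deploys patrols and shares incident data",
            "the municipality": "provides neighbourhood statistics and policy goals",
            "residents' associations": "voice concerns and evaluate perceived safety",
        },
    )
    print(canvas.mission)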

This is the practical side of the Canvas.

But there is also a philosophical side … which concerns the relationship between engineers’ inner lives and motives on the one hand, and their projects and the projects’ effects on society on the other hand. That will be the topic of the next post.

Q: How does ethics ‘work’?

A: There are many ways to ‘do ethics’. I approach ethics in a pragmatist manner; I use ethics as a toolbox: a toolbox for asking questions and developing answers.


Let me give an example of how ethics can ‘work’ in your research or innovation project.

I would ask questions about the project’s overall goals, e.g.: What is the impact that you wish to make in the world?

Such a question is meant to counter the tendency to focus on technology. Yes, the development of technology is often a key part of a project. But the project’s overall goal is not to develop technology. The project’s overall goal is to have an impact in the world, e.g., to give people tools they can use to develop healthier habits, or to empower people so they can co-create and experience safety in their daily lives—or, put in general terms: to enable people to flourish, to live meaningful and fulfilling lives. Technology is a means—not an end in itself.

Such a question will often trigger an interesting discussion about the role of technology in society and about social responsibility—of your organization and of your own role in the project. Moreover, it will often trigger a very useful discussion about the partners that would be needed to create this or that impact in society, about the creation of an innovation eco-system, and about the type of output the project will need to deliver so that these partners can indeed use this output in their processes and create positive impact in the world.

I make ethics ‘work’ by facilitating a discussion on the impact a project is trying to make in society. For me, ‘ethical issues’ and ‘societal issues’ are often the same.

Please note that, in these discussions, I will not express any value judgements. I’m not your judge. It is your project. I can only try to serve you in cultivating your moral sensitivity and capabilities.

Next time, I will present the ‘Societal and Ethical Impact Canvas’, which we are currently developing in the JERRI project.

Q: Ethics … is that a science?

A: That depends on what you mean by ‘science’. If you mean ‘a field of knowledge’, then yes: ethics is a field of knowledge—arguably one of the oldest. But if you mean ‘a natural science’, then no: ethics is not a natural science. For a more elaborate answer, let me discuss three major branches on the tree of knowledge.


There are the natural sciences (‘beta’ in Dutch), which study the natural world, such as physics, chemistry, biology, life sciences and earth sciences—and often mathematics, informatics and engineering are included, as fields of knowledge to model the world and intervene in it. Furthermore, there are the social sciences (‘gamma’ in Dutch), which study people and social phenomena, such as psychology, sociology and economics, and business and management studies.

Moreover, there are the humanities (‘alpha’ in Dutch), which study the products of people and cultures, such as history, literature, media studies and philosophy. Finally, we can break down philosophy into several branches, one of which is ethics: the area of knowledge that aims to support people in articulating and dealing with questions like ‘what is the right thing to do?’

Maybe you know all this already. You know there are different fields of knowledge, each with its specific methods and ways of working. Maybe your question—whether ethics is a science—implied another question:

If ethics is a ‘science’, then why is it so different from what I am used to in physics, in computer science, in engineering? I am used to measuring stuff that can be measured, drawing models with blocks and arrows, making calculations, building experiments and trying out whether things work—whether they work as predicted and practically.

So, how does ethics ‘work’?

That will be the topic of next week’s post.

Q: Why would I care about ethics?

A: For me, ethics is about asking questions; questions like: ‘what does a just society look like?’ or ‘what is the right thing to do, in this particular situation?’. For me, ethics is certainly not about lecturing other people or telling them what to do or not to do. So, why would you care about ethics?

You work as an engineer, right? Or do you work as a researcher or developer or designer in innovation projects? Either way, you aim to create things. You are trying to have an impact in the world. So, the way I see it … you are already ‘doing ethics’. You look at the world and feel that something is not quite right. You have ideas about what is right or wrong. You want to change things for the better. You want to play a part in that. You perceive the world, you evaluate, you build, you tinker, you try things out.

You don’t need a degree in philosophy to ‘do ethics’. As soon as you move around in the world—let alone tinker with it, build stuff, get it out there, in order to change things—you ‘do ethics’. The thing is… you are probably doing this rather unconsciously, implicitly, and maybe not always very systematically.

Are you in for some exploration? Do you want to upgrade your ethical skills? Do you want to improve your moral capabilities?

It is increasingly expected of us, as researchers, developers, engineers and designers, that we take into account the diverse societal and ethical issues that are associated with the projects we work on. Noblesse oblige: we are required to engage with society and to behave ethically.

So, when you work on artificial intelligence, self-driving cars, the Internet of Things, social networking services, or mobile apps, or on anything else that may have a huge impact on society, I do invite you to come back next week for a new blog post.

Or better: to start asking questions.