A: This question has fascinated me for years. I trained at Delft University of Technology and worked at Philips and KPN Research before joining TNO. All this time, I have been intrigued by the relationship between engineers’ inner lives and motives on the one hand, and the projects they work on and those projects’ effects on society on the other.
Regarding engineers’ inner lives, we can assume that engineers have positive motivations; they want to make the world a better place. They believe that something in the world can be improved, and they want to play an active role in improving it (see, e.g., Deus et Machina, to which I contributed a chapter on the beliefs of engineers, with Louis Neven and Ton Meijknecht). One notable, and very sad, exception is terrorists; a relatively large share of them are trained engineers: they have both the motivation to bring about change, or rather disruption, and the skills to deploy technology for their sinister ends.
Regarding their projects’ effects on society, we see a mixed picture. Obviously, engineers have contributed to technologies that we value as good, such as clean drinking water, warm housing and safe health care. Conversely, some technologies are (partly) evil, such as nuclear weapons (or does their threat prevent conventional warfare?) and plastic bottles that pollute the oceans (or is it people’s tendency to litter, and lousy government policies, that make these bottles end up in the oceans?). A mixed picture indeed, with many other factors at play, and obviously not only the engineers.
Let’s take a practical example: Claire is trained as an engineer and is involved in developing an algorithm for the police. The algorithm’s objective is to help the police deploy their officers more effectively and efficiently to prevent home burglaries. It takes historical data on burglaries and combines these with other data, e.g., on the weather, and gives ‘predictions’ of where and when home burglaries are most likely to happen in the future. The police can then send their officers ‘at the right time, to the right place’ to prevent burglaries (‘Predictive Policing’).
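To make the idea concrete, here is a deliberately simplified sketch of the kind of scoring such an algorithm might do. Everything in it is invented for illustration: the feature `dark_early`, the 1.5 multiplier, and the (area, hour) representation are assumptions, not details of any real predictive-policing system.

```python
from collections import Counter

def burglary_risk(history, weather, area, hour):
    """Toy burglary-risk score for `area` at `hour` (illustrative only).

    history: list of (area, hour) tuples of past burglaries
    weather: dict of contextual features, e.g. {"dark_early": True}
    """
    counts = Counter(history)
    # Base score: share of past burglaries in this (area, hour) cell.
    base = counts[(area, hour)] / max(len(history), 1)
    # Hypothetical contextual adjustment: early darkness raises
    # the score for evening hours.
    if weather.get("dark_early") and hour >= 18:
        base *= 1.5
    return base
```

The point is only that historical counts are combined with contextual data to rank places and times; a real system would use far richer features and models.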
Claire enjoys working on the algorithm. However, she also wonders whether the collection of data might be biased. There may be neighbourhoods where people don’t report crimes, e.g., because they do not trust the police; these crimes then never appear in the police’s data. Or there may be neighbourhoods where the police already do a lot of surveillance, e.g., poor neighbourhoods, which results in more data, which results in more ‘predictions’ and more police surveillance, which results in yet more data, and so on. Claire sees the risk of the algorithm perpetuating the current state of affairs, including unfairness and injustice, such as discrimination.
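This feedback loop can be shown with a toy model. The numbers below (a shared true crime rate, the 0.1 detection factor, a fixed budget of 4 patrols) are all assumptions made up for the example; the point is that two areas with the same underlying crime level end up with permanently unequal data and patrols, simply because one started out more heavily surveilled.

```python
# Two neighbourhoods with the SAME true burglary rate.
true_rate = 10.0
patrols = {"A": 3.0, "B": 1.0}   # A starts with more surveillance
recorded = {"A": 0.0, "B": 0.0}

for week in range(10):
    for area in patrols:
        # More patrols -> proportionally more incidents recorded.
        recorded[area] += true_rate * patrols[area] * 0.1
    total = recorded["A"] + recorded["B"]
    # Naive 'predictive' allocation: patrols follow recorded crime.
    patrols["A"] = 4.0 * recorded["A"] / total
    patrols["B"] = 4.0 * recorded["B"] / total

# The initial 3:1 disparity is locked in, even though the true
# crime rates are identical: the data perpetuates the status quo.
```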
Claire has thoughts and feelings about promoting fairness and justice, and she expresses these in project meetings. This fuels discussions in the project team and leads to modifications of the algorithm: measures against bias are added, e.g., by adding ‘noise’ to the algorithm, so that police officers are also sent to areas they would normally not go to, and by giving less weight to predictions that are based on police activities (and relatively more weight to reports by citizens).
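A minimal sketch of what those two measures could look like in code, assuming nothing about the actual system: the weights, the `source` field, and the epsilon value are all hypothetical, and the ‘noise’ is implemented here as an epsilon-greedy choice (occasionally patrol a random area instead of the top-scoring one).

```python
import random

POLICE_WEIGHT = 0.5    # down-weight records produced by police surveillance
CITIZEN_WEIGHT = 1.0   # relatively more weight to reports by citizens
EPSILON = 0.2          # fraction of patrols sent to areas chosen at random

def area_score(records):
    """Weighted burglary score for one area's list of records."""
    return sum(
        POLICE_WEIGHT if r["source"] == "police" else CITIZEN_WEIGHT
        for r in records
    )

def choose_patrol_area(records_by_area, rng=random):
    """Mostly follow the predictions, but sometimes ('noise') visit
    an area the police would normally not go to."""
    if rng.random() < EPSILON:
        return rng.choice(list(records_by_area))
    return max(records_by_area, key=lambda a: area_score(records_by_area[a]))
```

With data like this, an area that dominates the raw counts only through police-initiated records can be outranked by an area with fewer, but citizen-reported, incidents, which is exactly the rebalancing the team was after.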
In this example, the relationship between the engineer’s inner life and the output of the project she works on was relatively straightforward. In real life, however, this relationship is often more complex. Many factors go into a project and affect its outcomes: financial constraints, legacy systems, the tendency to focus on means rather than on ends, the customers’ and users’ behaviours, etc.