
Over the last few articles, we've discussed whether software engineers have ethical responsibilities over the code that they write. We've talked about how badly written code can lead to harm, and how failures, even lethal ones, don't necessarily lead us to write better software.

Let's say, for the moment, that you believe we do have a moral obligation to the public for the software that we build. I want to discuss a particularly sticky question: should we build software that could kill people?

Narrowing the Scope

It's important to be precise with definitions here, since there are several kinds of software that can be very dangerous but that don't raise the same moral questions.

First, and most obvious, is software that powers systems directly affecting human life. Software that runs power plants, powers pacemakers, keeps the electrical grid humming, and lets planes fly on autopilot. More generally, this is software which can cause life-threatening harm to people if it functions improperly, but should not cause harm when working correctly. (The term of art here is "life-safety critical software".)

The next category is software whose very purpose requires life-threatening harm, but which can be more or less harmful depending on its proper function. Much of the software in the defense world lands here; a properly functioning missile guidance system will certainly be lethal, but an improperly functioning one can be far more so.

The third category is software that cannot be directly harmful, either in normal or abnormal behavior, but can indirectly cause life-threatening effects. Most of the software in the world falls into this category; as previously discussed, even something as innocuous as a hookup app can be lethal if put into the wrong hands without proper precautions in place. Banking software, for example, can't directly injure someone, but an abnormally functioning system can certainly cause harm indirectly, even harm on a global scale.

The last category is the one I want to discuss further: software which, unlike some military software, is not intrinsically intended to do harm, yet can directly result in harm even when working correctly. This is software for which the risks and the proper function are inextricably combined, with no ability to separate them. This is the area where much of the controversy, both in and outside of the software community, lies.

The Dangers of Secrecy

The most commonly discussed software with these properties is encryption. If well-implemented and well-designed, it can resist even powerful adversaries' attempts to intercept communications. It has been a well-respected tool since ancient times, with pen-and-paper ciphers in use for millennia. But this story can be told from two perspectives, both equally correct.

Encryption can save lives. Tor, one of the most famous anonymization tools used in modern times, has been used by members of the media to deliver news into areas where information is heavily censored and controlled, by law enforcement to protect undercover officers from detection, by human rights activists to report on torture and other abuses, by whistleblowers both famous and infamous, and by the military to help field agents and intelligence assets whose lives are at stake. [1]

The same story can be told from a very different perspective, however. Encryption can be used to protect pedophiles distributing child pornography, as a tool to help hide botnet command-and-control traffic, and as a way for terrorists to coordinate attacks against innocent people. It can be used to help criminals escape prosecution, and it can be used by governments to hide abuses and human rights violations. It can be used to distribute information on topics ranging from bomb-making to hate speech, from money laundering to getting away with murder.

Encryption can help save lives, and encryption can help end them.
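
Technically, both stories run on the same machinery. Tor's anonymity, for instance, rests on layered encryption: the sender wraps a message in one layer per relay, and each relay can peel exactly one layer. Here is a rough sketch of that idea in Python; it uses the `cryptography` package's Fernet as a stand-in for Tor's actual circuit cryptography, and the hop names and key handling are illustrative only.

    from cryptography.fernet import Fernet

    # One symmetric key per relay in the circuit: guard -> middle -> exit.
    # (Real Tor negotiates these per-circuit; generating them all in one
    # place here is purely for illustration.)
    HOPS = ["guard", "middle", "exit"]
    KEYS = {hop: Fernet.generate_key() for hop in HOPS}

    def wrap(message: bytes) -> bytes:
        # Encrypt in reverse hop order, so the guard's layer is outermost.
        for hop in reversed(HOPS):
            message = Fernet(KEYS[hop]).encrypt(message)
        return message

    def peel_one(layer: bytes, hop: str) -> bytes:
        # Each relay can remove exactly one layer; only the exit node
        # ever sees the plaintext, and only the guard sees the sender.
        return Fernet(KEYS[hop]).decrypt(layer)

    onion = wrap(b"report from the field")
    for hop in HOPS:
        onion = peel_one(onion, hop)
    assert onion == b"report from the field"

Nothing in that wrapping distinguishes a journalist's dispatch from a criminal's; the math is indifferent to the message.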

Security Through Surveillance

Encryption is not the only technology that presents this kind of duality. Surveillance software, too, can cut in both directions. Software that monitors internet connections can help prevent the spread of malware, reduce the likelihood of corporate espionage, and protect countless people's personal information from being leaked by a careless or malicious employee.

On the other hand, it can also be used as a tool to hunt down whistleblowers, silence protesters, choke off dissent, and intercept people's most personal conversations and information.

Used for either of these purposes, it is working as designed. It would be nigh-impossible to design a system that could block and report on pornography yet could not block and report on people visiting political websites. Like a firearm, it is as much about who uses it as how it works.
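
A toy sketch makes this concrete: the blocking mechanism is a lookup against a configured category list, and nothing in the code cares which categories are on that list. (The hostnames and categories below are hypothetical, invented for illustration.)

    # A toy content filter. Swap the configuration and the same code
    # that blocks pornography censors political speech instead; the
    # mechanism itself cannot tell the difference.
    SITE_CATEGORIES = {
        "adult-site.example": "pornography",
        "opposition-party.example": "political",
    }

    BLOCKED = {"pornography"}  # change to {"political"}; nothing else changes

    def check(host: str) -> tuple[bool, str | None]:
        """Return (allowed, category reported upstream) for a requested host."""
        category = SITE_CATEGORIES.get(host)
        if category in BLOCKED:
            return False, category
        return True, None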

Deciding to Build

How should we, as software engineers, weigh these potential costs in our minds? Should we simply forgo the potential civic benefits this kind of software can have because of the potential for use by bad actors? If not, how can we look ourselves in the mirror when these tools are inevitably used for evil?

Each circumstance is different, but the following set of questions has been useful to me when I've struggled to weigh these tradeoffs in the past.

Who is meant to use the software?

Software which is intended for use by the general public has stronger moral character than software intended for use by a limited set of people, such as corporations, IT departments, or governments. Software can change the balance of power that already exists; centralizing power in the hands of the few can be dangerous. Here we would weigh software like PGP, designed to work in a decentralized fashion between individuals, as more moral than software like S/MIME, where power is centralized in the few. (S/MIME deployments where certificates and keys are generated by a third party fail this test even more significantly.)
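
The technical root of that difference is where the private key is born. In the decentralized model, the keypair is generated on the user's own machine and the private half never leaves it; in the centralized S/MIME deployments just described, a third party generates it on the user's behalf. Here is a minimal sketch of local key generation using the Python `cryptography` package (real PGP implementations such as GnuPG layer identities and the web of trust on top of this):

    from cryptography.hazmat.primitives.asymmetric import rsa

    # The keypair is created locally; no third party participates,
    # and the private key need never leave this machine.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Only the public half is shared with correspondents.
    public_key = private_key.public_key()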

How substantial are the resources required to run the software?

Software which requires few resources has better moral character than software which requires many. If the software requires substantial computing resources, it will likely end up centralized in the hands of the few, even if there is no de jure requirement for it. Resources can be non-physical as well; if the software requires, to usefully function, network access that is not easily obtained, such as the ability to observe all traffic on a link or to be distributed across many points of presence, its moral character is weakened.

Does the software have more than one category of users?

In general, software that has mixed categories of users requires more scrutiny than software which has only one. An example here would be software that has "administrators" and "users", or "customers" and "end-users". It is especially concerning if one group of users is given privileged access over others.

As an example, an intrusion detection system (IDS) which monitors for malware communications and reports infections to an administrator would be considered to have two groups of users: the IDS administrators, and the end-users whom it monitors. This would be judged less moral than a local installation of a malware detection system which monitors and reports to the same person.
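
In code, that difference can come down to a single configuration choice: who receives the report. A hypothetical sketch (the class and field names are invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class MalwareDetector:
        monitored_user: str
        report_to: str  # the morally salient field

        def alert(self, finding: str) -> str:
            return f"notify {self.report_to}: {finding} on {self.monitored_user}'s machine"

    # Local installation: the person monitored is the person informed.
    local = MalwareDetector(monitored_user="alice", report_to="alice")

    # Networked IDS: a second, privileged class of user is informed.
    networked = MalwareDetector(monitored_user="alice", report_to="it-admin")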

Do you, the developer, have a direct financial interest in the software?

Software in which you, the developer, have a direct financial interest (whether you are being paid to write it or are selling it yourself) is less moral than software in which you have only an indirect financial interest, or none at all.

This is probably one of the most difficult rules to weigh, and on its surface can be paradoxical. Why should I hold software that I write for work to a higher moral standard than software I write for free? Code is code, whether I'm paid for it or not. All else being equal, what does it matter if I get paid?

It is a fact of life that those who are responsible for paying our salaries have a measure of control over us. Whether explicitly ("I order you to...") or implicitly ("It's important to the team that..."), this pressure can induce us to weaken our resolve. We can even do this to ourselves; I, certainly, have done things I wasn't happy about, knowing that my employers would want them done, without their applying any form of coercion. Even the self-employed are not exempt from this psychological pressure; the prospect of money is very tempting in itself.

This metric is meant to provide pushback against those pressures, both internal and external. The red flags raised by the other questions should be considered more severely whenever you are in a context where your personal interests are directly involved.

Working An Example

For our example, let's use Tor. While it's a generally respected tool, it has had its controversies; the owner of Freedom Hosting, at the time the largest web host designed for use with Tor, was arrested, with the FBI calling him "the largest facilitator of child porn on the planet."

  1. Who is meant to use Tor?

    Tor is designed for use by the general public. Though companies are free to use the Tor network, it is designed to prevent any one party from gaining too much control over the network as a whole.

  2. How substantial are the resources required to use Tor?

    Tor requires no substantial or privileged resources to provide the secrecy it offers. Though it does depend on having a number of high-bandwidth connections to the internet along its backbone, the network can be built from an aggregation of ordinary high-bandwidth contributors.

  3. Does Tor have multiple categories of users?

    Yes, Tor does have different categories of users. However, no group is in a privileged position over any other group. (A malicious exit node can intercept unencrypted traffic exiting the Tor network, but this is not a particularly privileged position as far as Tor itself is concerned.)

  4. Does anyone have a direct financial interest in the software?

    Tor is shepherded by the Tor Project, a 501(c)(3) nonprofit that relies on grants and donations from the general public. Though it has employees, there is a substantial volunteer base that would be in a good position to detect any malicious behavior.

Getting Advice

Struggling with these sorts of questions in isolation can be difficult, if not outright draining. One of the things I've found very helpful is talking them out with other people who understand the issues at hand. Whether it is co-workers, friends, or an advocacy organization like the ACLU or EFF, you should talk to someone if you are weighing a problem like this. If you have no one else to talk it over with, I'm more than happy to be a sounding board.

Asking these questions can be difficult, but I have no doubt that you, and the world, will be the better for the asking.

