Do Software Engineers Need to Consider Ethics?

Jordan Theriault · Published in The Startup · Oct 16, 2020


[Image: a sign with arrows pointing in opposite directions, labeled “Right” and “Wrong”]

In 2018, Google removed its famous motto “don’t be evil” from the preface of its code of conduct, where the phrase had lived since 2000. Ethics is the branch of moral philosophy that distinguishes good actions from evil ones. But where do software engineers fit in? What kind of ethical responsibility does the person programming the software need to consider?

Law is already built into the development community at the level of the code itself. A license might allow modification provided the original source is credited, but forbid distributing the software under a different name, or for payment, without significant changes. For an engineer, these terms clearly define what is right and what is wrong in terms of using the software.
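To make that concrete, here is a sketch of how those legal terms often surface inside a codebase: a source-file header using the SPDX license-identifier convention. The project and author named here are hypothetical.

```typescript
// SPDX-License-Identifier: MIT
//
// Copyright (c) 2020 Example Author
//
// Derived from "example-lib" (a hypothetical MIT-licensed project).
// The MIT license permits use, modification, and redistribution,
// provided this copyright notice and the license's permission notice
// are preserved in all copies.
```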

While we largely follow the ethics of using software in a “good” way as defined by these licenses (Cisco, Microsoft, and Oracle among others excluded), we lack clear standards for what is “good” when it comes to the people who use it. Litigation is a very real and direct threat; the repercussions of harming someone emotionally are not. Whether an application changes your mood or has a negative effect on the user is not defined in the GNU, MIT, or other open-source licenses. Why don’t we have similar rules for how software behaves when interacting with people? Should software developers pay more attention to these kinds of ethics? How do our software and platforms affect people, positively and negatively? Should software engineers refuse work that doesn’t meet our ethical standards?

[Image: stylized text reading “the social dilemma”. Courtesy of Netflix.]

Netflix’s documentary The Social Dilemma addresses this all-too-common problem by bringing in prominent technologists to discuss the dopamine-driven engagement loops and growth targets of social media platforms that treat you as the product: the ubiquitous Facebook, Instagram, and TikTok. The documentary also makes clear that this is not an isolated incident: most of the applications we use for free collect data about us to serve advertisements and influence the way we think. In it, the co-creator of Facebook’s like button, Justin Rosenstein, says he didn’t know the button would affect people so negatively. Teens came to measure their self-worth by it; post popularity became a matter of deep self-worth and a source of depression. How could he have known? The like button was intended to spread positivity, but even an altruistic, “good” feature can have unforeseen reactions and consequences. Most engineers experience this on a smaller scale when they send their code to QA for testing. Those QA testers sure do know how to break things.

Algorithms designed to suggest new posts, pages, videos, or images we might be interested in, by tracking our viewing history, seem incredibly convenient and innocuous. That is, until the same feature radicalizes people toward extreme views by putting them inside an echo chamber where they never see a dissenting opinion.

I can only speak from my own narrow experience as both a consumer and a developer. I took one class on ethics in technology in my post-secondary education. This conversation is bigger than me, but I think software engineers should think about this more and get involved in technology ethics. We are both the user and the creator. We are affected by these platforms and yet we develop them.

I largely removed myself from social media five years ago because I felt it was negatively affecting my mental health. I had grown used to endlessly scrolling through other people’s highlight reels, wondering why I wasn’t having as much fun or wasn’t as successful as the people on my feed. That mental quicksand drained me, and I became depressed about my success and my life. Even today, having largely left social media platforms, I am still not impervious to the pull of these dopamine-inducing applications. I often find myself opening Reddit, Medium, or LinkedIn every hour to check the latest headlines. These applications are getting harder to avoid: staying current is part of working in technology, and I find it hard to ignore the news that comes with the industry. I get excited about new features and technology. Every time Instagram changes its UI, I’m fascinated, despite not using the application. But underneath that excitement is the reality of why these changes and features are made: to influence how people interact with the application and how they think. The changes are driven by data about users’ behaviour.

As developers, we need to ask where our responsibility lies in this ethics crisis. We are not all CEOs of popular social networking apps. Most of us work for companies, developing apps with minor individual influence as part of a large team. We do what we are told and work to meet the company’s goals and performance targets. It’s easy to just program an application and not feel invested in its consequences; we are often just the hands on the keyboard, not the mind behind the design and the decisions. This needs to change. We need to be more aware of the intent and influence of our applications, monitor the results, and try to do better for consumers. We are responsible for the results, whether or not they were intended.

Still, we can advocate for building privacy into the architecture and design of our applications. We can push for less intrusive features. We can encourage people to take breaks while they play our games and use our applications.

The massively multiplayer online role-playing game (MMORPG) World of Warcraft needed to curb its userbase’s addiction to the game. Stories of addicted players were hitting the news and generating bad press; and, likely less importantly to the business, addiction is ethically bad. Blizzard had created a game so enjoyable, time-consuming, and vast that people were willing to ignore their real-life responsibilities, and the game began to affect their lives negatively. Blizzard first tried to solve this with a fatigue penalty for people who played too long, an addition that was met with heavy criticism from users. They then inverted it, incentivizing breaks by granting a “rested” experience bonus after you logged off for a certain amount of time, and this was met with a positive response. It was a change that helped users and had ethics in mind. Other games, like MapleStory, included a reminder to take a break after two hours of gameplay, followed by hourly reminders. By implementing strategies like this in our application design, we can create a better technology landscape.
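As a sketch of the idea, here is a minimal version of a “rested bonus” calculation. The names and rates are illustrative, not Blizzard’s actual values; the point is that the mechanic rewards logging off instead of punishing play.

```typescript
// Illustrative "rested bonus": players accrue an XP multiplier while
// logged out, nudging them to take breaks without penalizing play.
const BONUS_RATE_PER_HOUR = 0.05; // +5% XP per hour spent logged off
const MAX_BONUS = 1.0;            // bonus caps out at +100% XP

interface PlayerSession {
  lastLogoutAt: number; // epoch milliseconds of the last logout
}

// XP multiplier earned for time spent away from the game.
function restedMultiplier(player: PlayerSession, now: number): number {
  const hoursRested = (now - player.lastLogoutAt) / (1000 * 60 * 60);
  const bonus = Math.min(hoursRested * BONUS_RATE_PER_HOUR, MAX_BONUS);
  return 1 + bonus; // e.g. 1.25 means +25% XP
}

// A player who logged off 10 hours ago earns 1.5x XP on their return.
const player: PlayerSession = { lastLogoutAt: Date.now() - 10 * 3600 * 1000 };
console.log(restedMultiplier(player, Date.now())); // ~1.5
```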

[Image: World of Warcraft screenshot of several characters posing with weapons. Promotional image courtesy of Blizzard.]

The EU introduced Directive 2009/136/EC, also known as the ePrivacy Directive or “Cookie Law,” to broadly protect visitors to websites. The directive forces websites to inform users of risks, restricts surveillance, and requires that users be notified when cookies or trackers are in use. It puts the onus to do good on developers rather than on users.
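In code, the spirit of the directive can be as simple as refusing to load any tracker until the user opts in. Here is a minimal sketch of that gate; loadAnalytics is a hypothetical stand-in for an application’s tracking script, and a real site would show a proper consent banner rather than confirm().

```typescript
// Consent-first analytics: no tracking is loaded until the user agrees.
function hasConsented(): boolean {
  return localStorage.getItem("analytics-consent") === "granted";
}

function requestConsent(): void {
  // Placeholder prompt; a real banner would explain exactly what data
  // is collected, why, and how to withdraw consent later.
  const granted = window.confirm(
    "May we use analytics cookies to improve the site?"
  );
  localStorage.setItem("analytics-consent", granted ? "granted" : "denied");
}

function loadAnalytics(): void {
  // Hypothetical stand-in: inject the tracking script here.
  // This function is only ever reached after explicit consent.
}

if (localStorage.getItem("analytics-consent") === null) requestConsent();
if (hasConsented()) loadAnalytics();
```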

I once had a heated discussion with a former coworker about the ethics of machine learning, after we discussed camera AI that had difficulty recognizing black people. Surely the algorithm just found it harder due to light and aperture, right? Or could it be that the program was designed by people who didn’t consider differences in skin tone and facial features? I took the side that machine learning is biased because a person programs the application: there is a software engineer giving the algorithm its measure of success. My peer staunchly defended the inhuman nature of machine learning: these algorithms can be programmed to be as biased, or as unbiased, as we make them, and are inherently unbiased until we give them instructions. I think we were arguing the same side. In the end, someone does need to give the instructions; the technique is unfeeling, but its use is the problem. The question is: how do we ensure the people programming these applications don’t build bias into the algorithm? That’s a question we didn’t answer in that discussion.
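One concrete safeguard does follow from that question: measure a model’s success per demographic group instead of only in aggregate, so a system that performs well “on average” but poorly for one group fails the check before release. A minimal sketch, with illustrative data structures and a hypothetical fairness threshold:

```typescript
// Per-group evaluation: surface bias that an aggregate accuracy hides.
interface Evaluation {
  group: string;    // e.g. a self-reported skin-tone category
  correct: boolean; // did the model's prediction match the label?
}

function accuracyByGroup(results: Evaluation[]): Map<string, number> {
  const tallies = new Map<string, { correct: number; total: number }>();
  for (const r of results) {
    const t = tallies.get(r.group) ?? { correct: 0, total: 0 };
    t.total += 1;
    if (r.correct) t.correct += 1;
    tallies.set(r.group, t);
  }
  const accuracies = new Map<string, number>();
  for (const [group, t] of tallies) {
    accuracies.set(group, t.correct / t.total);
  }
  return accuracies;
}

// Gate a release: fail if any group trails the best group by too much.
function passesFairnessGate(results: Evaluation[], maxGap = 0.05): boolean {
  const accs = [...accuracyByGroup(results).values()];
  return Math.max(...accs) - Math.min(...accs) <= maxGap;
}
```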

We live with biases all the time; as humans, we constantly make decisions. Ethics can help guide us through these quandaries. What is going to do the most good? What is evil?

There are many frameworks for dissecting ethics that weigh good against evil, positive features against negative ones. Technology undisputedly provides us with many benefits: social media platforms allow us to connect, communicate, and be entertained. But a lot of bad also comes from them, intended or not. Do the benefits outweigh the “evil” these platforms are causing? While we don’t need to be experts in ethics, software engineers can make decisions in our code that are inherently “good.”

The Center for Humane Technology (whose co-founder, Tristan Harris, is the lead subject of the aforementioned documentary) outlines a set of principles of humane technology. It is a great place to begin thinking about these issues.

What are the steps we can take as software engineers? Here are a few:

  1. Create an ethical guideline for your team to follow (“don’t be evil” may be too general, but it’s a good starting point)
  2. Limit data collection to what is necessary and/or inform your users of the intent behind the data collection (2009/136/EC was a great start in Europe)
  3. Include stop-gaps that nudge user behaviour in a more positive direction (e.g., the WoW rested model)

The world is starting to notice ethics in technology. Mark Zuckerberg, Facebook’s CEO, testified before the U.S. Congress to answer questions about privacy and data collection. Lawmakers are crafting laws to make user privacy the default. We have seen social media platforms forced to take a stand against extremist groups. We know that technology has contributed to the destabilization of countries and influenced elections.

Software engineers need to become a larger part of this conversation and we need to do better.
