Do algorithms have their own business ethics?

Algorithms are not unbiased, and business leaders need to understand how algorithms can influence a company’s ethical character and reputation, writes Robert Elliott Smith

Commenting in Fast Company in 2017, an executive vice president at a leading accrediting body for business education said that teaching ethics and social responsibility has ‘… become a foundational expectation for what schools do’.

Most businesspeople would probably expect this at a time when transformative social issues, such as rising inequality, political polarisation and climate change, can affect and be affected by every action a company might take.

However, a growing number of a business’s actions are not initiated by its executives or employees. Instead, businesses today increasingly employ algorithms that act on their behalf. These programmatic business agents will become even more autonomous in the future. Given this and their assumed artificial ‘intelligence’ (AI), it is time to ask: do algorithms have business ethics?

As an AI consultant with 30 years’ experience, I believe they do. I also think that, for the good of society and business, those ethics require serious consideration and reshaping by the business leaders with whom the responsibilities of companies ultimately rest. That requires business leaders to know more about algorithms, their operation and their intrinsic ethical implications, particularly since the potential impact of ethical missteps on businesses and the societies within which they operate (and on which they ultimately depend) has never been more significant.

Tech giants under increasing scrutiny

In our online, multicast culture, reputational damage can spread like wildfire. The 2018 Harris Poll, which measures positive public perceptions of companies, reported that Google slid from being one of the public’s top 10 companies to one of its top 30 in a single year. Meanwhile, Facebook’s reputation has taken a severe beating in the past two years.

A 2018 poll from market research firm Honest Data, for example, showed that it was the most negatively perceived tech company, with one third of Americans believing that Facebook’s very existence harms society. These rapid changes in opinion are undoubtedly related to a constant stream of news stories about these companies’ influence on society, from data privacy scandals to allegedly racist search results to powerful, and possibly nefarious, influence on elections.

Today’s rapid social response to perceived and real business missteps parallels the desire of governments to regulate technology companies. Both the EU and the US have opened taxation and anti-trust investigations into the tech giants. This increasing scrutiny and regulatory pressure affects not only the largest firms, but also disruptive companies such as Uber, Airbnb, Deliveroo, and many others that are rapidly remodelling our social and commercial infrastructures. This so-called ‘techlash’ is unsurprising, given the massive influence that these businesses are having on the societies that generate their profits and losses.

Accountability and the evolving business landscape

Half a century ago, the most important and successful companies in the world made money from the flow of energy and materials on which society depended (mainly electricity, oil, steel, and manufactured goods such as cars). By contrast, many of the most prominent companies of today, for instance, Google and Facebook, aren’t producers of physical products at all; they only host and disseminate information.

Even the world’s largest retailer, Amazon, is primarily a product information, ordering and logistics company. Uber, Airbnb, Deliveroo, and other sharing, or gig-economy, companies own almost no material assets. Despite their more ephemeral character, these organisations are as much a part of the social infrastructure as any power grid or highway system. But, unlike the companies that operated the physical infrastructure of the past, contemporary technology companies are harder to hold accountable for their social impact, because their actions and those of their customers are more difficult to separate.

For example, in the big industries of the mid-20th century, a board decision on the location of a factory could change the laying of roads, the routing of power lines, or even the direction of a river. These decisions could make or break not just communities, but entire ecosystems too. Informational businesses now have similarly profound effects on the world, with social networks influencing elections, the ordering of search results affecting commercial outcomes, and new business models – such as Airbnb or Deliveroo – greatly impacting communities (housing availability, real estate prices) and the environment (congestion, pollution).

The missing element in ethical analyses

That said, the effects of today’s informational companies require the interaction of millions of people, each making their own ethical choices that translate into real-world consequences. For instance, the distribution of polarising misinformation (so-called ‘fake news’) on social networks such as Facebook explicitly requires the actions of individual users sharing (and in some cases, producing) that information. On this basis, any negative impact on society can be dismissed as the inevitable outcome of ‘democratic’ actions over which the company has no control.

However, the missing element in this ethical analysis is an understanding of the nature and emergent effects of the algorithms at play in the system. Facebook’s algorithms, for example, act as agents autonomously curating news and information, even though they create neither the content nor the links along which it is shared. Those algorithms work towards particular aims programmed by Facebook, and those aims have ethical, social and even moral dimensions. Thus, they effectively operate as gatekeepers and broadcasters, amplifying or de-recommending information in line with their programmatic goals.

We are typically taught that technology has no morals; only its uses do. Moreover, most people have a limited understanding of what algorithms are and how they operate, and virtually no knowledge of the social and historical contexts from which algorithms originate. We assume that algorithms are unbiased and objective, and thus that the historical and social settings from which they emerged are irrelevant.

While Shakespeare’s plays are never taught without an understanding of the Elizabethan era, and we don’t learn about Aristotle without knowing something about ancient Greece, the science and maths from which algorithms spring are taught without their historical contexts. For science, maths, and therefore algorithms, it’s as if these contexts don’t matter, because these domains are considered inherently objective, and therefore free of any historical biases or outdated thinking.

On closer inspection, it is clear that the historical contexts from which many of the foundational assumptions behind algorithms arise are most certainly not free from bias. For instance, one of the primary actions of algorithms today is data reduction. Businesses that deal with ‘big data’ must turn that data into manageable facts on which they can make business decisions. Few leaders of big data businesses are aware that scientists involved in the eugenics movement in the late 19th century created the first data reduction algorithms, and that those algorithms (and many others based on similar assumptions) are still widely used today.

The development of one of the most important statistical techniques employed by data reduction algorithms, both then and now – an early form of factor analysis – was originally motivated by the desire to use exam results (generated by the first waves of universal education in the English-speaking world) to prove that there was an inherited ‘general’ intelligence factor (called the ‘g-factor’).

While these academic efforts were initially benign, they soon became a self-fulfilling prophecy: the assumptions buried in the algorithms virtually assured ‘proving’ that a single g-factor lay behind students’ differences in performance, supporting the eugenics point of view. Other explanations – for instance, differences in a student’s economic background and opportunities – were implicitly buried in a fog of assumptions and mathematical analyses. Similar tendencies to overlook hidden assumptions (and thus, biases) are even more likely today, given the massive scale of algorithms and their datasets.
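To see how such an assumption can guarantee its own ‘proof’, consider the following sketch in Python. It uses principal component analysis as a modern stand-in for those early factor-analysis methods, and the data are entirely synthetic: student scores are constructed from two independent causes, yet a one-factor reduction still reports a single dominant factor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 1,000 students whose exam scores are driven by TWO hidden,
# independent causes: an ability factor and an economic-opportunity factor.
n_students, n_exams = 1000, 8
ability = rng.normal(size=n_students)
opportunity = rng.normal(size=n_students)

# Every exam loads positively on both causes, plus individual noise.
load_a = rng.uniform(0.5, 1.0, size=n_exams)
load_o = rng.uniform(0.5, 1.0, size=n_exams)
scores = (np.outer(ability, load_a)
          + np.outer(opportunity, load_o)
          + rng.normal(scale=0.5, size=(n_students, n_exams)))

# A one-factor reduction (here, the leading eigenvalue of the
# correlation matrix) always 'finds' a single dominant factor,
# because all the exams are positively correlated, no matter how
# many distinct causes actually produced those correlations.
corr = np.corrcoef(scores, rowvar=False)
top_eigenvalue = np.linalg.eigvalsh(corr)[-1]
print(f"Share of variance 'explained' by one factor: {top_eigenvalue / n_exams:.0%}")
```

The single factor ‘explains’ most of the variance not because a single cause exists, but because the one-factor representation can report nothing else.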

Simplifying biases and assumptions

Inevitably, algorithms applied to highly complex systems (such as people) have simplifying biases of this kind. Recent, careful mathematical analyses reveal that, as datasets (for example, those provided by social networks and other online services today) become massive, the vast majority of conclusions that can be drawn from them are spurious.
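A toy demonstration of this effect requires nothing more than NumPy and pure noise; every ‘relationship’ the code finds below is spurious by construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 observations of 2,000 mutually independent random variables:
# a stand-in for a wide, noisy dataset. By construction, every
# correlation in this data is spurious.
n_obs, n_vars = 50, 2000
data = rng.normal(size=(n_obs, n_vars))

corr = np.corrcoef(data, rowvar=False)
np.fill_diagonal(corr, 0.0)  # ignore trivial self-correlations

# Across ~2 million variable pairs, chance alone produces correlations
# strong enough to pass for genuine findings.
print(f"Strongest 'relationship' found in pure noise: {np.abs(corr).max():.2f}")
```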

How, therefore, do algorithms ever find any facts in the massive datasets of today? The answer is that algorithms are never unbiased. To draw conclusions from data at all, algorithms must make assumptions in how they operate on and represent that data. For instance, in the development of the g-factor, there is an implicit assumption that correlations between school exam results indicate general intelligence, rather than other factors that influence school performance. When reducing massive data to conclusions, implicit representational biases of this sort about complex phenomena are inevitable, and in every case they are (consciously or not) engineered into algorithms.

That is not necessarily a problem if the assumptions and their ethical and social implications are clearly understood, along with the inevitable representational biases and their possible consequences. Unfortunately, maintaining such clarity is always a challenge. An historical example is how ‘g-factor’ science led to extremely damaging consequences in the real world. First developed to help identify students who needed more support, the IQ test was quickly adopted by eugenicists as a way of measuring the dubiously scientific g-factor.

US immigration policy of the early 20th century was designed to exclude the ‘feeble-minded’, and IQ testing was employed at Ellis Island to make this exclusion ‘scientific’. The testing indicated lower IQs among poorer people from groups (such as the Irish and Italians) who were immigrating in large numbers and were the subject of rampant prejudice at the time.

Bogus g-factor science also lent support to laws prohibiting racial intermarriage and enforcing sterilisation of the mentally less able. Some of those involved in these US examples of eugenics policies went on to advise the Nazi government in their implementation of the Nuremberg Laws in Germany. Although the quantitative study of human intelligence has had many benign uses since its invention, there has been a constant battle to avoid its use in racial and gender-based discrimination that most businesses would find unethical or socially irresponsible today.

These outcomes are primarily due to a lack of realisation that reducing complex phenomena, such as human intelligence, to a single number (or even a modest set of numbers) carries inevitable biases. Algorithms inevitably attempt to quantify and categorise through simple features (including analyses like the g-factor and resultant numbers like IQ).

Unfortunately, characterising the complexities of people by simple features and categories is at the core of racism, misogyny and other forms of intolerance. Therefore, it’s important to understand that algorithms simplify and carry representational biases. Simplification is in their nature and, thus, informs their ethics.

Another critical aspect of algorithmic ethics is that these programmes can generate powerful emergent effects when they interact in complex systems, such as human social interactions. For instance, recent studies by myself and others have shown that, in social networks, algorithmic agents that doggedly distribute biased opinions online inevitably lead to the formation of so-called ‘echo chambers’ and ‘filter bubbles’, where human participants become segregated into like-minded groups and only hear opinions that strengthen their pre-existing views.

Such results show that single-minded network participants (for instance, algorithmic ones) can entirely determine and control this polarisation of opinion. This effect is particularly disturbing when algorithmic tendencies to adopt representationally biased, simplifying models of people are combined with goals of, for instance, optimising towards profit or greater political influence.
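A minimal simulation of this dynamic is sketched below. It is a bounded-confidence opinion model in the spirit of Deffuant-style studies, not a reproduction of the studies cited above, and its parameters are purely illustrative. Two ‘zealot’ agents stand in for algorithmic accounts that broadcast fixed, extreme opinions and never update:

```python
import numpy as np

rng = np.random.default_rng(2)

# 100 agents hold opinions in [0, 1]. Agents 0 and 1 are 'zealots':
# algorithmic accounts pinned to the extremes that never change.
n_agents, steps, tolerance, mu = 100, 20000, 0.3, 0.5
opinions = rng.uniform(0.0, 1.0, size=n_agents)
zealots = {0: 0.0, 1: 1.0}

for _ in range(steps):
    i, j = rng.integers(n_agents, size=2)
    # Agents interact only when their views are already close; when
    # they do, each moves toward the other (the echo-chamber mechanism).
    if abs(opinions[i] - opinions[j]) < tolerance:
        delta = mu * (opinions[j] - opinions[i])
        opinions[i] += delta
        opinions[j] -= delta
    for k, fixed in zealots.items():
        opinions[k] = fixed  # the algorithmic agents never move

print("agents pulled near 0:", int((opinions < 0.2).sum()),
      "| near 1:", int((opinions > 0.8).sum()))
```

Runs of this kind typically end with the population split into like-minded clusters anchored by the two fixed agents, despite starting from uniformly spread opinions.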

These academic results also suggest that the massive influence on elections of firms such as the now-defunct Cambridge Analytica may not be due to the accuracy of their market segmentation, but may instead hinge on the intrinsic nature of receiving news that is algorithmically selected, ordered and sent down the fixed connections of a social network.

Smart IoT and machine-to-machine communications

These effects – algorithms’ simplification of people and their emergent influences on human society – are aspects of algorithmic ethics that will only increase in importance with new developments such as the smart internet of things (Smart IoT). Smart IoT is a concept for a future where more things in our lives have electronic identities, online connectivity and the ability to compute. The embryonic Smart IoT of today includes your smartphone, but also things you wear, such as an Apple Watch, things in your home, including Amazon Echo, and elements of your ‘smart’ connected car, such as its GPS navigation system.

However, Smart IoT envisions an even more connected world in which household appliances utilise RFID (radio-frequency identification) and other technologies to deliver new services (for example, a refrigerator that knows what food it contains and can order more when supplies are low). Personal IoT devices, meanwhile, could include apparel that senses more intimate details of your physical state to monitor your health continuously. Some of the more ‘far-out’ visions of Smart IoT include ‘smart dust’ – computing motes that harvest their energy (from the sun, radio waves, or even vibrations) to form an ambient micro-sensor network that is persistent and omnipresent.

While this vision may be somewhat fantastical, real-world infrastructure to support far more online objects is just around the corner. 5G, the next generation of mobile communication, is focused far more on machine-to-machine communications than on merely improving download speeds. This is vital for the Smart IoT devices of the future, which will need to communicate better with one another to make their own algorithmic decisions and take autonomous actions in the world.

Potential applications of this technology heighten the importance of examining algorithmic ethics today. Currently, networks such as Facebook and Twitter are established by active participation: who we ‘friend’ and follow, and the profile details and posts we provide. In some academic speculations about Smart IoT applications, ubiquitous devices may in future autonomously determine the connections between us and the things around us.

For instance, imagine that a hidden social network could be formulated by devices monitoring not only your location relative to other people, but also your purchases, and even the fluctuations of your heartbeat when certain people are nearby. Such data would form a foundation on which techniques similar to those employed by firms like Cambridge Analytica could be used to categorise people, as in the sketch below.
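A hypothetical sketch of that categorisation step follows; the feature names and numbers are invented for illustration, and the clustering method is an ordinary off-the-shelf one rather than anything attributed to a specific firm:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Hypothetical per-person feature vectors assembled from ambient
# device data. All values are synthetic.
n_people = 200
features = np.column_stack([
    rng.exponential(30.0, size=n_people),  # weekly minutes co-located with person X
    rng.exponential(20.0, size=n_people),  # weekly spend at venue Y (GBP)
    rng.normal(0.0, 5.0, size=n_people),   # heart-rate change near person X (bpm)
])

# Ordinary clustering then reduces each person to one of a handful of
# categories: exactly the kind of simplifying representation discussed
# above, here applied to intimate sensor data.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print("people per category:", np.bincount(labels))
```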

Imagine, then, that the information you get on your smartphone is not just determined by your online friend network, but by real-time data about who you are with and how you are feeling at the time. Further imagine how influential such messages could be on your behaviour, and on the behaviour of society as a whole. All of this is possible with your smartphone, your contactless credit card, and a personal health-monitoring watch. Smart IoT will only deepen and strengthen these possibilities and their potential influence.

Why we need to understand algorithmic ethics

This hyper-connected world vision only increases the importance of understanding algorithmic ethics. It also increases the importance of future cyber security, in very particular ways. For instance, consider the recent report that a health-monitoring watch provided data about the movement of soldiers on a base, thus leaking information to potential enemies. This is another characteristic of algorithms’ simplifying nature: they create information that is ideal for treating individuals as a group, not only for marketing and social influence, but also as a potential basis for malicious attack.

The result is a paradoxical situation for personal data rights and protections. For instance, recent reports indicate that the Japanese government has authorised attempts to hack its citizens’ personal devices to determine whether they are secure, in an effort to deter IoT-based hacking threats surrounding the upcoming 2020 Olympic Games in Tokyo. While this action is questionable, the concern is reasonable: in a 2017 article for New York‘s Intelligencer, cyber security expert and Harvard University fellow Bruce Schneier called Smart IoT a ‘world-sized robot’ that we have no idea how to control.

To mitigate the massive and inevitable influence of algorithms on the future, it is necessary first to gain an understanding of how they operate and how their so-called ‘intelligence’ differs from our superior human capabilities. This does not require all business leaders to become experts in computer science and software development, but it does mean that future business ethics training must cover how algorithms form an integral part of a business’s ethical and social character and will, through their operation, contribute to building that company’s reputation.

This will require an understanding of how algorithms inherently simplify, and how their single-minded pursuit of programmatic goals can have dramatic emergent effects when unleashed in human society, particularly if those simplifications and goals aren’t transparent to business leaders.

One avenue to this understanding is through the study of the often-troubling history of scientific quantification and categorisation of people, from which algorithms have emerged. To avoid repeating the mistakes of the past, at light speed and global scale, new business ethics must insist on an adjustment to algorithmic design to specifically promote effects that are more desirable for society and the long-term future of technology businesses. Fortunately, there is emerging science that shows how programmers might technically adjust algorithms to promote more diverse and socially constructive outcomes rather than the rigid categories, dogged goals and polarising regularity being observed today.

While this is essential at the technical level, it is also necessary to diversify technology companies at a human level, so future design and development can benefit from different perspectives and world views that will be invaluable when creating the representational frames that algorithms require for their operation.

Furthermore, business leaders must play a role in determining the regulatory regimes governing algorithms in the future, to ensure the health of the societies on which their businesses ultimately rest. Consider that, in the era of broadcast media, regulations such as the US fairness doctrine of 1949 were introduced to ensure the provision of balanced news coverage to the public. Similarly, careful algorithm management and regulation are required to provide desirable social outcomes. A new science exploring the promotion of positive emergent effects needs to be part of understanding algorithmic ethics.

The starting point is realising that algorithms do have ethics, and ensuring they have the ethics of social responsibility is ultimately good business sense.

How to think about the business ethics of algorithms

  • Question how algorithms may act autonomously on your business’s behalf.
  • Realise that algorithms inherently simplify real-world complexities, particularly those of people.
  • Understand that simplifying algorithms can have complex, unexpected, emergent effects in online society, which could impact your business’s mission and reputation.
  • Know that shaping these effects is possible, but may require reconsideration of the drives programmed into algorithms (market effects alone, for example, may not ensure desirable and ethical outcomes).
  • Engage in the future of shaping and regulating algorithms for better societies and, in turn, better business.

Dr Robert Elliott Smith is CTO for BOXARR Ltd, a researcher and teacher at University College London, an experienced AI consultant, and an extensive author on that subject, including the book Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All.
