AI has the potential to benefit or harm society in equal measure. Kevin Lee-Simion introduces the topic and offers insights from commentators in business and education
The use of artificial intelligence (AI) has the potential to blur the line between what is real and what is fake, and this could have a catastrophic impact on society.
Is AI a danger? Could it manipulate our perceptions through the use of ‘deepfakes’ – fabricated videos and images in which one person’s face is replaced with another’s, making it appear that the subject was somewhere they were not, or said something they did not say?
Everyone has a view about AI and whether it is beneficial or harmful to society. For example, Mark Zuckerberg, Co-Founder and CEO of Facebook, believes AI will benefit society, stating ‘I am optimistic’ when questioned in 2017 about how it could affect the world.
However, Elon Musk, the US business magnate, investor and engineer, has warned US governors that the potential dangers of AI are not imaginary and that the technology should be regulated.
Although the way in which AI is used today might seem like something out of science fiction, this technology is available to the majority of people. Organisations such as Amazon use AI every day and there are mobile apps with the power to create deepfakes.
In this article, our commentators give their views on AI, discussing how it may develop in the future, and the dark side of this technology.
AI will shape our future – why businesses must adapt now to reap the benefits
Duncan Tait, SEVP and Head of EMEIA and Americas at Fujitsu, outlines how AI can be used in less ethical ways, such as spreading propaganda online; how innovative organisations are arising because of it; and why it is important to teach students about the ethical and moral dilemmas of this technology.
Artificial intelligence is well past the time of predictions or ‘what ifs’. Business leaders must prepare for AI now, both to ensure their company’s future competitiveness and to protect their workforce.
AI is quickly becoming embedded in our everyday lives, from chatbots that answer online queries to personalised assistants at work that help with admin tasks. Fujitsu’s Social Command Centre has assistants that employees can use to troubleshoot issues such as retrieving their computer password.
The workforce is where the real change is happening. AI is a key way in which companies can support their employees to do more, and shift the nature of their work from admin-based tasks to strategic roles that contribute to longer-term goals. For instance, law firms are using machine learning to examine contracts and identify errors, allowing junior lawyers to concentrate on tasks that provide more valuable experience.
Companies are quickly recognising the potential benefits of AI. However, research conducted by Fujitsu shows that only 11% of businesses have an AI strategy in place. At board level, we are seeing executives getting involved in planning, but this has failed to translate into wider strategic initiatives.
Organisations must look to the areas of the business that will most benefit from automation of tasks or where AI can be used to extend and amplify existing skill sets. HR is often a natural starting point, as it’s a process-heavy department and plays an important role in building a robust business. AI can do everything from analysing incoming CVs for the best candidates, to pinpointing employees’ skills and matching them to needs across the business.
However, HR isn’t the only area where AI creates value. In sales and marketing, it can provide a deeper understanding of a company’s customers, while in finance and accounting, AI can assist in fraud detection and prevention. In the supply chain, AI is a vital tool for automating planning and fulfilment functions; and it’s also improving cyber-security capabilities and enhancing predictive maintenance. The applications are almost endless, but realising them relies on companies understanding where automation can lead to greater efficiency and better delivery of service for customers.
The real frontier for AI is the need to ‘show its working’; in other words, not only analysing data and presenting an answer, but going further to demonstrate exactly how it came to that answer. What deductions did it use? What data did it draw on? What steps were taken during its deliberations?
At Fujitsu, we have developed two pieces of technology to deliver this ‘explainable AI’ function. Our Deep Tensor technology uses deep learning to look for and learn from patterns in data, while our Knowledge Graph technology enables businesses to turn data into visual structures with which deep learning can work.
Explainable AI is critical in industries such as healthcare and financial services, where decisions are under far more scrutiny.
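To make the idea concrete, here is a minimal sketch of one generic explainability technique, permutation importance. This is not Fujitsu’s Deep Tensor or Knowledge Graph – simply a common way of asking a trained model which inputs drove its answers, shown here on a public healthcare-style dataset using scikit-learn:

```python
# A minimal sketch of one generic explainability technique: permutation
# importance. This is not Fujitsu's Deep Tensor or Knowledge Graph, just a
# common way of asking a trained model which inputs drove its answers.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()  # a public healthcare-style dataset
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# the bigger the drop, the more the model relied on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```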
It’s true that AI can be used to help target certain groups of people with propaganda messages online. The Facebook and Cambridge Analytica scandal showed how it’s all too easy to use people’s private data to pinpoint individuals who will be open to receiving certain political or social messages on the internet.
However, while it might seem as if AI is the source of the problem, it is also the solution. For example, AI can identify fake news, abusive content, hate speech and propaganda online, and allow organisations to remove the content before it has caused harm. Factmata – an AI start-up – is doing just this.
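Factmata’s actual system is proprietary, but a toy text classifier gives a feel for the general approach. Everything below – the example posts and their labels – is invented purely for illustration:

```python
# A toy sketch of content triage by text classification. Factmata's real
# system is proprietary and far more sophisticated; the posts and labels
# below are entirely invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "Scientists confirm the vaccine passed all safety trials",
    "SHOCKING: miracle cure THEY don't want you to know about",
    "Council announces new bus timetable from March",
    "Share before it's deleted! Secret plot exposed!!!",
]
labels = [0, 1, 0, 1]  # 0 = looks legitimate, 1 = flag for human review

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(texts, labels)

# Anything flagged goes to a human moderator rather than straight to
# deletion; the model only triages.
print(classifier.predict(["Unbelievable!!! Doctors HATE this one trick"]))
```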
There are obviously some key players in the AI market, including Google DeepMind, Facebook and OpenAI. At Fujitsu, our solution, Zinrai, brings together diverse AI techniques and development threads. In fact, Zinrai itself is neither a service nor a product, but a collective framework for the broad family of AI capabilities that are available to customers.
It’s also interesting to see the innovative new AI companies being founded. Each year, the World Economic Forum compiles a list of technology pioneers, and 2018 saw AI dominate the list. BenevolentAI and CognitiveScale – both notable names in AI – were included, alongside Narrative Science and Gamalon.
The AI landscape is diverse and as long as this diversity continues to grow, we can mitigate the risk of an AI monopoly or duopoly forming.
Finally, it’s important to understand the risks that come with unbridled development of AI applications, and what it will mean for society at large. There is certainly a need to educate students about the ethical and moral dilemmas around AI’s inevitable impact on the workforce. But an alarmist viewpoint will benefit no one. We must look to the positives that AI can bring, and then figure out the best and most equitable way for these to be realised both at work and in our everyday lives.
The very real future of AI
Ben Pring, author and Vice President and Director of Cognizant’s Center for the Future of Work, discusses how AI has an influence on everything we do, from the music we listen to, to the virtual assistant on our phones.
No doubt you’ve seen lots of AI in movies and TV over the years (HAL, Samantha and Jarvis all spring to mind) and are now starting to see it in the ‘real’ world. You’re right to be paying attention. AI raises lots of big questions which require big answers – and many of them are going to be found by really intelligent people like you. Here are a few to be chewing over.
The rise of AI is the great story of our time. Decades in the making, the smart machine is leaving the laboratory (and the movie lot) and, with increasing speed, is infusing itself into many aspects of our lives: our phones (Siri), our cars (Waze), the planes we fly in (fly-by-wire), the way we bank (Erica), and the way we choose what music to listen to (Spotify).
Within the next few years, AI will be all around us, embedded in many higher-order pursuits. It will educate our children, heal our sick and lower our energy bills. It will catch criminals, increase crop yields, and help us uncover new worlds of augmented and virtual reality.
AI is being used in the creation of ‘deepfakes’ (https://bit.ly/2qFMJ3a) but will also be used to identify fakes and protect reality. A new generation of ‘watermarking’ software is currently being developed to maintain confidence in what is and isn’t real.
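The core idea behind such watermarking can be sketched very simply: hide a known bit pattern inside the media itself and check for it later. The toy example below embeds a signature in the least significant bits of some made-up pixel values; real provenance tools are vastly more robust:

```python
# A minimal sketch of the idea behind watermarking: hide a known bit
# pattern inside the media itself, then check for it later. Real
# provenance tools are vastly more robust; the pixel values are made up.

def embed(pixels, mark):
    """Write each bit of `mark` into the lowest bit of successive pixels."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit
    return out

def extract(pixels, length):
    """Read the low bit of the first `length` pixels back out."""
    return [p & 1 for p in pixels[:length]]

mark = [1, 0, 1, 1, 0, 0, 1, 0]                       # the hidden signature
image = [200, 13, 97, 54, 180, 33, 76, 91, 120, 64]   # toy greyscale pixels
stamped = embed(image, mark)

print(extract(stamped, len(mark)) == mark)  # True: the watermark is intact
```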
Whether you are managing a large enterprise or starting your first job, deciding what to do about the new machine – this new cocktail of AI, algorithms, bots, and big data – will be the single biggest determinant of your future success. The next wave of digital titans probably won’t be start-ups from Silicon Valley; instead, it will be made up of established companies in more ‘traditional’ industries, in places such as Baltimore, Birmingham, Berlin and Brisbane – led by a new generation of young, progressive professionals who figure out how to leverage existing industry knowledge with the power of new machines.
The FAANG vendors (Facebook, Apple, Amazon, Netflix, Google); the big banks; the government; and a slew of start-ups ranging from X.AI to ASSAP: how much power do they have? A lot. Some people think too much (https://bit.ly/2HhGqu6).
Some see only the dark side of this shift, and indeed, many of today’s headlines forecast a grim future in a ‘jobless economy’ as robots take over our livelihoods. But the coming digital boom and build-out will be highly promising for those who are prepared. In fact, it will usher in once-in-a-century growth prospects as we reengineer our infrastructure, our industries, and our institutions.
Similar to the previous industrial revolutions, this one will steamroll those who wait and watch, and will unleash enormous prospects and prosperity for those who learn to harness the new machine. The frontiers of the future aren’t simply about substituting labour with software; they’re about building the new machines that will allow us to achieve higher levels of human performance.
As Kris Hammond, Chief Technology Officer of leading AI developer Narrative Science puts it: ‘AI is not a mythical unicorn. It’s simply the next level of productivity tool.’
Those who win in the coming great digital build-out, who seize the incredible rewards, who make history, will be those who stop debating and start building – and, rather than predicting the future, go out and invent it, hand-in-hand with the new machines.
Ben Pring is co-author, with Malcolm Frank and Paul Roehrig, of What To Do When Machines Do Everything: How to Get Ahead in a World of AI, Algorithms, Bots, and Big Data (Wiley, 2017).
AI’s dark side is not inherent
Joshua Gans, author and Jeffrey S. Skoll Chair in Technical Innovation and Entrepreneurship at the Rotman School of Management at the University of Toronto, argues that AI can expose bias in our decision making and explains the impact this can have.
Artificial intelligence provokes images of robots taking over and causing harm to people. But that picture of AI remains, at best, a future concern. Today’s AI technologies are really statistical kittens on their own. It is only when people do not understand what these technologies are doing that a dark side emerges.
Take, for instance, the numerous occasions on which AI has been accused of bias or racism. We are not at the stage where AI can be racist; if it appears to be discriminatory, it is because we are using it in that fashion. AI can, for instance, expose bias in our decision making. Indeed, that is the point – to get rid of human biases that are harmful.
However, when we train AI to mimic or predict human behaviour, it does a good job. If you want an AI to interact with people on Twitter, it will act like people on Twitter; if you want an AI to identify nationalities based on the colour of people’s skin, then it is sometimes going to make mistakes that mirror harmful human biases, such as using animals to disparage people. The point here is not that these things are good, but that this is what happens when there is ‘garbage in’: you get garbage out.
There is a bright side to this. AI, based on machine learning, is just a branch of statistics. It is very powerful and will open up many new applications, as we outline in our book, Prediction Machines: The Simple Economics of Artificial Intelligence (Harvard Business Review Press, 2018). But it will fall into all the traps of statistics. It can miss variables. It can be used by people to infer causation when it is really only finding correlations. And it can be used as a ‘black box’, which creates harms of misapplication.
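One of those traps – mistaking correlation for causation – is easy to demonstrate with synthetic data. Below, a hidden ‘missing variable’ drives both x and y, so they correlate strongly even though neither causes the other:

```python
# A toy demonstration of one trap: a missing variable (confounder) drives
# both x and y, so they correlate strongly even though neither causes the
# other. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)
confounder = rng.normal(size=10_000)   # the variable the analysis misses
x = confounder + rng.normal(scale=0.5, size=10_000)
y = confounder + rng.normal(scale=0.5, size=10_000)

# x never appears in the formula for y, yet they look tightly linked:
print(np.corrcoef(x, y)[0, 1])  # roughly 0.8: correlation, not causation
```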
However, unlike humans, AI tends to lay these biases bare for all to see. That means biases can easily be corrected. Don’t want race or gender to factor into AI predictions? Then programme the AI to ignore or correct for that. Want to make sure you understand what is driving an AI prediction? Then run it again and again with different constraints and find out what it says – much like an interrogation. AI will not lie or evade; you’ll get to the truth.
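Here is a minimal sketch of both suggestions on synthetic data: the historical labels are deliberately ‘garbage in’, biased by a protected attribute, and retraining without that attribute and comparing the two models is the kind of interrogation Gans describes. All of the data and feature names are invented:

```python
# A minimal sketch of both suggestions, on synthetic data. The historical
# labels are deliberately 'garbage in': they depend on a protected
# attribute. Retraining without that column and comparing the two models
# is the kind of interrogation described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
protected = rng.integers(0, 2, n)   # e.g. a gender flag (invented)
skill = rng.normal(size=n)          # a legitimate predictor (invented)
# Biased past decisions: outcomes partly driven by the protected attribute.
hired = (skill + 1.5 * protected + rng.normal(size=n) > 1).astype(int)

X_full = np.column_stack([skill, protected])
X_blind = skill.reshape(-1, 1)      # same data with the attribute removed

full = LogisticRegression().fit(X_full, hired)
blind = LogisticRegression().fit(X_blind, hired)

# The full model leans heavily on the protected column; the blind model
# cannot, by construction.
print("weight on protected attribute:", full.coef_[0][1])
print("blind model weight on skill:", blind.coef_[0][0])
```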
What this all means is that AI will be a powerful tool, but it is no substitute for skilled tool use. We don’t need to teach MBAs the dark side of AI any more than we need to teach them the dark side of statistics. What we need to show them is that AI has the potential to correct our past and current sins, and we should take the opportunity to do just that.
Joshua Gans is the co-author of Prediction Machines: The Simple Economics of Artificial Intelligence (Harvard Business Review Press, April 2018).
Using AI ethically and responsibly
Professor Steve Muylle from Vlerick Business School argues that care is needed to ensure AI does not fall into the wrong hands, causing destruction. He also explores the idea of increased regulation surrounding AI in the future.
Though AI was founded as an academic discipline in 1956, it has only really come to the forefront of media and business attention within the past few years. Its positive and innovative implementation in areas such as education, healthcare and even tackling crime has highlighted the benefits AI can bring to society and business. The implementation of AI has accelerated rapidly in these areas, making processes more efficient and giving businesses the ability to interact with a larger audience.
However, with this increased attention towards AI, we have also seen many concerns about the ‘darker’ sides to this technology.
We have all seen the articles stating that robots will become so advanced they will take over the world, evoking some sort of Terminator-style reality in which humans cease to exist due to a superior robot race.
This is not quite the case. However, there is a dark side to this technology of which we must be aware. Almost in line with the rise in prominence of AI technology, we have seen the phrase ‘fake news’ coined.
AI, when in the wrong hands, can easily be used to spread false information, for instance through deepfake videos, in which images are superimposed onto existing footage almost seamlessly – effectively creating uncannily realistic fakes.
Like any new technology, AI can be used for bad as well as good. Take nuclear technology, for instance: in the right hands, it can be used to generate energy for vast numbers of people; in the wrong hands, it can be used to create destructive nuclear weapons. AI is not dissimilar, and we have already seen its use in the military, with image recognition being used in drones to launch attacks.
Yet, when in the right hands, AI can be used for substantial good, which is why we need to educate the next generation of business leaders about both the good and bad aspects of AI and commit them to using this technology in an ethical, unbiased and responsible way. At Vlerick Business School, on our Digital Strategy programme, we discuss all aspects of AI and teach our students the importance of being ethical and responsible with the tools at their disposal, so that the technology is a benefit to society, not a threat.
In the future, it is likely that we will see more regulation surrounding the use of AI, in order to ensure that it is always implemented ethically, responsibly and for the good of society and people. With AI technology being implemented more and more, and developing constantly at a rapid rate, government and business must invest in the necessary skills and knowledge to stay on top of this evolution and create geostrategic regulations to ensure it is only used for good. This is likely to be woven into the treaties that countries already have.
Overall, when the technology is in the hands of the right people, AI is a force for good. It can speed up business processes, allow businesses to engage with a larger number of stakeholders than otherwise humanly possible, and act as a vital support for people in their jobs, completing tasks that are time-consuming, menial or, frankly, too difficult.
In business schools, for example, there has been talk that AI could spell the end of the role of the professor. But this is not the case: AI can be implemented to aid, rather than replace, professors in their teaching.
Using AI, we could create teaching assistants, so that professors can engage with a larger number of students; or we could take a more individual, learning-centric approach, using AI to tailor modules, resources and related reading to each student – a more personalised learning experience than any one professor could ever have the time to provide.
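As a hypothetical illustration of that tailoring idea, a simple content-based matcher could rank modules against a student’s stated interests. The module names and descriptions below are invented, and a real system would draw on far richer signals:

```python
# A toy sketch of the tailoring idea: rank modules against a student's
# stated interests by text similarity. The module names and descriptions
# are invented; a real system would draw on far richer signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

modules = {
    "Digital Strategy": "platforms data-driven business models disruption",
    "AI Ethics": "bias fairness responsibility regulation of algorithms",
    "Supply Chain Analytics": "forecasting logistics optimisation planning",
}
student_interest = "I want to understand fairness and bias in algorithms"

vec = TfidfVectorizer()
matrix = vec.fit_transform(list(modules.values()) + [student_interest])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Recommend modules in order of relevance to this student's interests.
for name, score in sorted(zip(modules, scores), key=lambda t: -t[1]):
    print(f"{name}: {score:.2f}")
```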
Overall, AI is a force for good and can benefit both businesses and business schools greatly, but we must ensure it is used ethically and responsibly.
Steve Muylle is a Professor and Partner at Vlerick Business School, and the Academic Director of the Online MBA. His research interests include the digitisation of processes, products, strategy and work.