What do we really know about AI? 6 important clues from the LinkedIn data

Analysing the AI conversation on LinkedIn reveals the most influential ideas and opinions, who’s thinking what, and whether the rest of us should be worried

June 25, 2018


Few concepts in professional life stir more ideas and emotions than Artificial Intelligence (AI). There are passionate differences of opinion about what it is and what it’s capable of, whether it will simply destroy jobs or create them as well, whether it can improve our human capabilities or make them redundant. Nowhere will you find a broader and more representative range of these opinions than in the AI conversation on LinkedIn. Analysing it reveals important clues about our understanding of AI. We can see the thought-leaders and themes dominating the conversation, the motives that business leaders and technology companies have, and the likely consequences for different industries and sectors.

If you’re a marketer, then this conversation matters to you. It matters because you are a professional and a human being with an interest in how AI will be used by businesses, and in how decision-makers see the roles of humans and machines in the future. It matters because you can’t plan to make effective use of AI in marketing without an understanding of what’s available and how it overlaps with human creativity. It also matters because AI is one of the most important thought-leadership themes of our time. If your business has a point of view on the future of your category, then you need to be able to put that point of view in the context of AI. And to do that, you need a sense of the point the conversation has reached.

We’ve analysed LinkedIn data on the AI posts that drove the most engagement during the first quarter of 2018 – and on those that have taken the lead in developing the conversation since then. We’ve looked at the conflicting claims and opinions out there, the recurring themes, the balance of positive and negative coverage, and the very different agendas that different businesses have. That process has thrown up six important clues about the point that AI has reached, and about how well decision-makers and thought-leaders understand the implications. Here are those findings:

The AI agenda is being set by a handful of businesses and influencers
The thought-leadership agenda around AI is closely associated with three businesses in particular. Microsoft and IBM were responsible for several of the most influential AI posts of Q1. Lili Cheng, Corporate VP of Microsoft AI and Research, arguing why You Shouldn’t be Afraid of AI, and Four Predictions from IBM about emerging technologies both drove particularly widespread engagement. Also among the most influential were Microsoft CEO Satya Nadella, who shared stories of inspiring AI projects that make a difference to people’s lives, and the same company’s President and Chief Legal Officer Brad Smith and EVP of AI and Research Harry Shum, who introduced The Future Computed, a new book exploring the role of AI in society.

The third enterprise playing a particularly significant role in driving the conversation is McKinsey, whose employees are among the most prolific AI authors of all, with a clear focus on advising businesses on how to gain competitive advantage from machine learning.

Several individual influencers are also helping to shape the discussion – and leading it in different directions from those of the major thought-leading businesses. Forbes contributor and LinkedIn Influencer Bernard Marr wrote several posts, covering new initiatives from the likes of Tesla and predictions from influencers and leaders at IBM. Jose Ferreira and Shelly Palmer contributed thoughtful opinions about the economic and social impact of AI. Former US Secretary of State Henry Kissinger wrote an in-depth opinion piece in The Atlantic, which balanced the likely benefits of AI against its potential impacts on human consciousness, morality and our own capacity for learning and self-knowledge.

[Chart: Top Posts by Total Actions on LinkedIn (Jan–Mar 2018)]

The AI conversation is actually two very different conversations
The vast majority of AI-related posts focus either on the cool, competitively advantageous and planet-improving things that the technology can do – or the dangers that it poses to jobs and society. Very few try to tackle both.

There are interesting discussions taking place on LinkedIn about the ethical, economic and philosophical questions that AI throws up: how does an economy function when the majority of people are no longer needed as units of production, but are still required to act as consumers? Will AIs really supplant people because they are better at particular tasks or just because they are cheaper? Are we raising the standard of thinking or dumbing it down? What authority is an AI responsible to? However, none of these discussions take place in articles exploring the implications of AI for business – or what business leaders should do about it.

This division is best illustrated by the contrast between Kissinger’s Atlantic article and the McKinsey feature on The Economics of Artificial Intelligence, which were published within a few weeks of each other in May. Kissinger warns that the basis on which AIs make decisions is artificially narrow. He warns, too, of the dangers of AIs taking decisions that they cannot explain in human terms and therefore cannot be held accountable for. The McKinsey piece, in contrast, urges businesses to “Trust the machines”, complaining that human recruiters, for example, often override the recommendations of AIs, even though AIs are better on average at predicting how people will perform in the future. The conflicting viewpoints show that there are two competing visions of the future at work: one among those thinking about the wider consequences of AI, and one among businesses working out what to do with the technology.


Can business leaders and professionals afford to think about the wider consequences?
Thought-leaders on both sides of the discussion acknowledge that decision-makers are under pressure to gain competitive advantage from a technology they don’t fully understand. Wait too long and they could find themselves out-competed and locked out of future growth in their sectors. Because of this, there tends to be far more detailed discussion of the potential gains from AI than of the risks. Nobody in a decision-making position is incentivised to take ownership of the risks.

McKinsey’s articles argue that businesses need to revamp their ways of working in order to take advantage of AI, while the technology itself is still developing. The assumption is that AI will quickly come to dominate daily life and that businesses have to make sure they are the ones leading the charge.

Is democratising AI the way to manage its impact?
Not all of the companies offering AI advice to professionals take the same approach. Several of Microsoft’s posts offer a vision of an AI-driven economy that is subtly but significantly different from McKinsey’s. Satya Nadella’s post repeatedly puts the emphasis on the human intelligence and imagination behind life-changing applications of AI – and on using the technology as a tool rather than as a strategic decision-maker in its own right. Lili Cheng’s post talks about how “AI needs to recognize when people are more effective on their own – when to get out of the way, when not to help, when not to record, when not to interrupt or distract.”

This isn’t just talk. Read Smith and Shum’s introduction to their book The Future Computed and you’re introduced to Microsoft’s business model around AI. It doesn’t involve selling competitive advantage to a few large businesses. It’s based on “working to democratize AI in a manner that’s similar to how we made the PC available to everyone. This means we’re creating tools to make it easy for every developer, business and government to build AI-based solutions and accelerate the benefit to society.”

Does making AI available to everyone protect people from its disruptive impact by helping all businesses and professionals to adapt? Will it prevent a handful of companies and people from taking control of the economy? Microsoft’s vision will still affect the way people work – and the type of work that’s available to them. However, one of the encouraging things about the Smith and Shum post is the way it acknowledges these issues:



While we believe that AI will help solve big societal problems, we must look to this future with a critical eye. There will be challenges as well as opportunities. We must address the need for strong ethical principles, the evolution of laws, training for new skills and even labour market reforms. This must all come together if we’re going to make the most of AI.

Could we find tech companies lobbying and advising governments on how to regulate the development of AI? Will those businesses with the most to gain from the technology take the lead in ensuring an informed and transparent discussion around it? There are some positive signs – but if you ask me, we need more businesses sharing more detail on what the ethical use of AI looks like.

We need a common definition of what AI means
Part of the problem is the lack of a common understanding around what AI actually is, what it can do, and what it can’t. Is AI better at completing logical tasks – or just cheaper and more efficient? Can it really outperform humans on image recognition? Can it be trusted to make predictions about the future? Should decision-makers trust algorithms or the human beings interpreting them? You’ll have very different answers to these questions depending on what you read.

This is partly because people are discussing very different elements of AI under the same broad term. A huge amount of the engagement generated during May, for example, focused on a demonstration Google gave of its Duplex AI, which mimicked a real person when phoning to make bookings at hair salons and restaurants. This was impressive (although, to be honest, I think I’d figure out I wasn’t talking to a fellow human at several points). However, it’s important to remember that this particular AI wasn’t equipped to do anything other than make a booking while pretending to be human. It couldn’t optimise a media plan, drive a car, generate a shortlist of candidates for a recruiter, recognise cat pictures or beat world champions at chess or Go. The AI universe is filled with different types of deep learning algorithms, each focused on its own specific task, and each throwing up very different risks and opportunities. They might give the appearance of human-like intelligence – but ask them to do anything else and you’d quickly realise you weren’t dealing with a human at all.

Influential AI articles on LinkedIn make statements in passing about how emerging AI technologies such as Generative Adversarial Networks (GANs) can give computers an imagination, or about the way that AIs learn by replicating the human brain. These are big claims – and there should be more discussion about how true they really are. The fact is that nobody is modelling AIs on how the human brain works – because nobody fully knows how the human brain works. There have been many competing theories over the years as to how we think, learn and remember, and many different ideas about how human consciousness and imagination function. None has yet provided definitive answers. Before we start planning the future on the basis that computers can match or improve on the human brain, it’s worth remembering that we don’t even know ourselves what the human brain is capable of.

As Kissinger, Ferreira and others argue, Artificial Intelligence is most definitely not the same as human intelligence. As we plan how to direct, use and control it, it’s vital that we don’t underestimate its abilities. But it’s vital that we don’t overestimate them either.

There’s a lot of discussion on LinkedIn and elsewhere about how AI can solve the world’s most difficult problems by bringing a new level of hyper-intelligence to bear on the issues. However, there’s also a far less publicised conversation taking place about how AI will simply commoditise thinking and focus on coming up with decisions that are just good enough to keep things moving the way they are.

The McKinsey piece on The Economics of AI and Ferreira’s Influencer post, Will Robots Take My Job?, both agree that, as Ferreira puts it, “AI often disrupts an industry not by improving on human performance, but merely by doing it well enough and cheaper.” Nuances in the discussion of AI’s capabilities matter, because they will have a huge influence over the type of economy and society that it gives rise to.

We are still in an era of Narrow AI
The technical term for these AIs, focused on particular tasks and particular sectors, is ‘Narrow AI’. Every example quoted in the posts on LinkedIn falls into this category: the life-changing AI applications in Satya Nadella’s post, Google’s conversation-replicating machine, the ideas in posts on AI and cyber-security, AI and the legal profession, and 10 promising AI applications in health care. The AIs that we are discussing at this point are trained to apply deep learning to very specific problems. They are not capable of the ‘General Intelligence’ that would see them applying their logic to the wider world around them. Don’t be fooled by the appearance of Alexa or Duplex: AIs that seem to behave like thinking human beings are really just impersonating one or two aspects of what human beings do.

There’s therefore a certain irony in the fact that the second most likely term to be associated with AI in discussion on LinkedIn is ‘The Singularity’. It’s second only to machine learning, which many people use interchangeably with AI itself. The Singularity is the hypothetical moment when Artificial General Intelligence gives rise to machines that can program themselves and other AIs, exponentially accelerating the growth of their own intelligence to a point where it overtakes that of humans and leads to a dangerous and unpredictable world. It was first floated as an idea back in 1965, and it’s what many people think of when they imagine a Terminator-style rise of the machines that threatens humanity with extinction.


The frequency with which LinkedIn posts refer to the Singularity suggests that there are a lot of people out there who are worried about the development of AI. But it also suggests that they don’t fully understand the types of risk it gives rise to. Artificial General Intelligence is still a long way away, and conscious Artificial General Intelligence with its own motivations and agenda is arguably further still. Does that mean we can relax about AI and just get on with the task of using it to solve problems, work smarter, increase efficiencies and boost revenues? Of course not. The way that we use Narrow AIs will still have huge repercussions for the way that economies operate, the way that decisions are made, and the way that human beings themselves think and feel. If we don’t pay close enough attention, then the limitations of these AIs could prove as damaging as their capabilities.

The conversation around AI on LinkedIn is increasingly well-informed and increasingly diverse. When I look at the businesses and thought-leaders involved, and the influence their views are achieving, it gives me some confidence that, between ourselves, we can surface many of the issues that matter around this technology. However, the conversation is not yet complete. There are nuances missing, points that get overlooked by one side or the other, and gaps in thinking and in action that could prove damaging in the future.

If you’re a marketer for a business with skin in the AI game, don’t be content to sit on the sidelines and leave the conversation to others. This is an area that is emerging and changing at great speed. Considered opinions and informed thought-leadership have a vital role to play in helping to move the debate forward. If you’re not happy with the way the conversation is shaping up so far, then there’s never been a better time to step in and change it.
