Blog

Machine Learning: Are We Nearly There Yet?

Back in 1950, Alan Turing proposed that a ‘learning machine’ could become artificially intelligent. For over half a century it seemed that very little progress was being made. Until now. Today we see a plethora of advancements in voice and visual recognition; autonomous driving is becoming a reality, and the likes of Professor Stephen Hawking and Elon Musk are warning about the perils of uncontrolled artificial intelligence.

Are we nearly there yet?

Machine Learning

The term ‘machine learning’ was coined by Arthur Samuel, while working at IBM back in 1959, to describe how computers can learn from, and make predictions on, data. For several decades, pioneering research didn’t result in anything we would really want to call ‘intelligent’, although Arthur did manage to build a computer that could play checkers better than he could. But being ‘better’ doesn’t in itself imply intelligence: a simple calculator can do sums better than the majority of us; a hammer can hit nails better than a human fist. Neither is particularly ‘clever’.

More recently though, advances in machine learning are starting to show promise. Your bank will likely call you if its machine learning indicates a transaction is potentially fraudulent. If you say out loud ‘Hey Siri’, near an iPhone or Apple Watch, it will do a quick double-bleep and be ready to act on your next question or statement, such as telling you what time it is, or what it thinks about you saying, ‘Siri is stupid’. But is it really thinking?

To explain how ‘Hey Siri’ triggers a response, the so-called ‘smart’-phone is constantly sampling the acoustic waveform at 16,000 samples per second. Every 0.2 seconds of audio is fed into a Deep Neural Network, which assesses the probability that you just said, ‘Hey Siri’. If it didn’t quite catch that you said, ‘Hey Siri’, then for a few seconds its algorithms operate a little more sensitively, so that even if you repeat yourself without making any additional effort to be clearer or closer to the microphone, Siri will likely be triggered. The more you use Siri, the better it understands you, because it learns your accent and other characteristics of your voice. But is it really thinking?
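To make that mechanism concrete, here is a minimal sketch of a wake-word trigger loop in Python. It is emphatically not Apple’s implementation: the thresholds, the window handling and the score_window scorer (standing in for the trained Deep Neural Network) are all illustrative assumptions.

    SAMPLE_RATE = 16_000                     # samples per second, as described above
    WINDOW_SAMPLES = int(SAMPLE_RATE * 0.2)  # roughly 0.2 s of audio is scored at a time

    NORMAL_THRESHOLD = 0.90                  # illustrative trigger threshold
    RELAXED_THRESHOLD = 0.75                 # lower threshold used briefly after a near-miss
    RELAXED_WINDOWS = 25                     # ~5 s of extra sensitivity (25 windows x 0.2 s)


    def listen(audio_stream, score_window):
        """Feed 0.2 s chunks of audio to a scorer and decide when to wake up.

        score_window stands in for the trained neural network: it takes a list
        of samples and returns the probability that they contain 'Hey Siri'.
        """
        window, relaxed_for = [], 0

        for sample in audio_stream:          # 16,000 samples arrive every second
            window.append(sample)
            if len(window) < WINDOW_SAMPLES:
                continue

            p = score_window(window)
            window = []

            threshold = RELAXED_THRESHOLD if relaxed_for > 0 else NORMAL_THRESHOLD
            relaxed_for = max(relaxed_for - 1, 0)

            if p >= threshold:
                return 'triggered'           # double-bleep and start listening
            if p >= RELAXED_THRESHOLD:
                # A near-miss: stay extra sensitive for a few seconds so that a
                # simple repetition is more likely to trigger.
                relaxed_for = RELAXED_WINDOWS

The adaptation the paragraph mentions – getting better at recognising your accent over time – would live inside the scorer itself, which is beyond this sketch.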

The approach to engineering ‘Hey Siri’, based on a neural network, is deliberately intended to be analogous to the neural networking of the human brain. Humans also constantly process acoustic waveforms, responding to frequencies of up to 20,000 cycles per second. Once a sound wave reaches your ear, your brain’s neural network can recognise it in just 0.05 seconds (see The Speed of Hearing). That is, the human brain is only four times faster than Siri!

So, are we nearly there yet? Let’s now try to answer these three questions:

  1. Are today’s machines really thinking?
  2. Are today’s machines really intelligent?
  3. Are we nearly there yet in producing full artificial intelligence?

Machine Intelligence

Philosophers and scientists have debated at length, across many volumes, whether machines could ever, even in principle, think for themselves or be regarded as truly intelligent. To avoid getting distracted by that line of thinking, or by what it even means to think, let’s just look at the technical reality as of now.

If you tell Siri it’s stupid, it will come back with a few stock responses, pretty much proving the point. Far more impressive is the much-heralded AlphaGo from Google, which beat the 18-time world Go champion, Lee Sedol, 4-1 in 2016. (For those unfamiliar with Go, it is an ancient board game with relatively simple rules but around 10^761 possible games, compared, for example, to the estimated 10^120 possible in chess.) AlphaGo required hours and hours of training, by humans, on Go gameplay. You could say then that it had a head start, given that its achievements were based on specialist human training augmented with the calculation and memory capabilities of a machine. But then came AlphaGo Zero. This new version was able to teach itself how to play Go with just the basic rules of the game, and no human supervision at all. Moreover, it needed only three days to become as good as the AlphaGo that beat Lee Sedol, and just 21 days to reach the level that beat Ke Jie earlier this year.

Unsupervised machine learning is at the frontier of artificial intelligence. It’s made possible by advances in ‘big data’. When you combine big data with machine learning algorithms, you get what has been dubbed ‘deep learning’. The ‘deep’ refers to the many layers of the neural networks involved, which allow the machine to keep training itself, as time goes by, on the new data it receives. This is the technology that is enabling Google to automate its picture and speech recognition, Netflix and Amazon to suggest what you may want to watch or buy next, and computers to predict court case outcomes. All these technologies rely on huge amounts of data being fed through algorithms designed to spot what we would call ‘meaningful’. And there’s the rub. Neither the machines, nor the algorithms, have a clue what is actually ‘meaningful’. They are just designed to work – and always with narrowly defined data sets – to produce results that we ourselves judge to be positive.

It is no surprise, then, that there is a burgeoning industry to oversee deep learning algorithms and get human feedback to monitor the results. For example, one of the services my own company (NashTech) provides is an image-processing service for one of the world’s largest eCommerce providers. We translate descriptive search terms into mathematical variables which constitute an object’s visual signature. In other words, we ‘tag’ images. However, our focus is only on eCommerce images – not the full range of all possible images, which is still a huge ask for machine learning. Our algorithms tag 40-60 million images per month with descriptions referring to their colour, object type, size, etc. In tandem, however, we run a manual service where samples are checked by human operatives. They are shown an image on screen and asked to confirm that it has been tagged correctly, e.g. is Image 1 a green T-shirt; is Image 2 a pair of pink shoes; etc. When mistakes are spotted, this information is fed back into the algorithms to improve their future success rate.
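As a rough illustration of that feedback loop (and only an illustration – the function and field names below are hypothetical, not our production system), the human-in-the-loop check boils down to something like this:

    import random


    def review_sample(tagged_images, sample_rate=0.01):
        """Pick a small random sample of machine-tagged images for human review."""
        k = max(1, int(len(tagged_images) * sample_rate))
        return random.sample(tagged_images, k)


    def collect_corrections(sample, ask_operator):
        """Show each sampled image to an operator and record any mis-tagged ones.

        ask_operator(image) is assumed to return (confirmed, corrected_tags),
        i.e. the answer to a question such as 'is this a green T-shirt?'.
        """
        corrections = []
        for image in sample:
            confirmed, corrected_tags = ask_operator(image)
            if not confirmed:
                corrections.append((image['id'], image['tags'], corrected_tags))
        return corrections


    def feed_back(corrections, retrain):
        """Feed confirmed mistakes back into training to improve future tagging."""
        if corrections:
            retrain(corrections)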

To date, it’s only us humans that can properly understand what data actually means. While we can fine-tune algorithms to meet our expectations (like being a champion Go player) the definition of success is ours alone. The machine has no idea how well or badly it is doing: we ourselves have to make that call. If you give a machine inaccurate or unreliable data, then the results will be inaccurate or unreliable. So, while the processing power and available data sets have advanced spectacularly since the early days of computing, the end result is still the same: garbage in; garbage out.

The Future

Some readers may argue that it is only a matter of time before computers are powerful enough to discern meaning without human intervention. And besides, some humans believe in conspiracy theories and fake news, so we too are not immune to garbage in, garbage out.

Perhaps, but what will definitely slow down – if not derail – continuing progress with machine learning is the controls that governments are already putting in place. For example, the European Union’s General Data Protection Regulation (which takes effect in May 2018) gives consumers the right not to be subject to purely automated decisions based on their personal data, and requires that they be given ‘meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing’ (GDPR Article 13, Section 2(f); Article 14, Section 2(g); Article 15, Section 1(h)).

This is problematic for machine learning because gone are the days of top-down, rule-based programming, where decisions are formed from clear-cut ‘if… then… else’-type logic. With bottom-up, data-driven machine learning, where new data is being processed all the time, it is virtually (if not totally) impossible to unpick how, and why, a particular decision was made at a particular point in time. To be fair, the same is true of human decision-making, but machines lack our inherent ability to provide an introspective narrative. (Whether the stories we tell ourselves are enough to satisfy the regulator is another matter!)
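The contrast is easy to see in code. The sketch below is purely illustrative – the feature names, thresholds and toy data are made up – but it shows why the first style of decision is explainable line by line, while the second is not:

    from sklearn.linear_model import LogisticRegression

    # Top-down, rule-based: the reason for every decision is legible in the code.
    def approve_loan_rules(income, existing_debt):
        return income > 30_000 and existing_debt < 10_000

    # Bottom-up, data-driven: the 'rules' are weights learned from historical data.
    X = [[25_000, 5_000], [60_000, 2_000], [40_000, 20_000], [80_000, 1_000]]  # toy history
    y = [0, 1, 0, 1]                                                           # past outcomes

    model = LogisticRegression().fit(X, y)
    decision = model.predict([[35_000, 8_000]])[0]
    # The decision emerges from learned coefficients rather than explicit
    # if/then/else logic, and it can change whenever the model is retrained
    # on newer data.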

Advances in techniques like Local Interpretable Model-Agnostic Explanations (LIME) may help programmers reverse-engineer decisions made by machine learning. This technique analyses the results of machine learning by making minor tweaks to the input variables to ascertain how they affect the decision. In doing so, it is hoped that forensic analysts can pinpoint the data that led to a particular result. However, I would argue that this will still be insufficient to understand the rationale behind machine decisions made at today’s cutting edge. Deep learning algorithms are fed huge quantities of data, and as new insights are made, the underlying algorithms are adjusted accordingly. Hence, making slight changes to the input variables today may never explain what could have happened two weeks ago, or whenever an earlier decision got made. The outcomes could be entirely different.
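To show the idea in the roughest terms, here is a minimal sketch of the perturb-and-fit approach that LIME popularised. It is not the lime library itself, and the sample count, noise scale and weighting kernel are illustrative assumptions:

    import numpy as np
    from sklearn.linear_model import Ridge


    def explain_locally(predict_fn, instance, n_samples=1000, scale=0.1):
        """Approximate a black-box model around one instance with a linear surrogate.

        predict_fn: the black-box model's scoring function
        instance:   the input to explain, as a 1-D numpy array
        Returns one weight per feature; large absolute weights suggest the
        features that mattered most for this particular prediction.
        """
        rng = np.random.default_rng(0)

        # 1. Make minor tweaks to the input variables around the instance.
        perturbations = instance + rng.normal(0.0, scale, size=(n_samples, instance.size))

        # 2. See how the black-box decision changes for each tweaked input.
        predictions = predict_fn(perturbations)

        # 3. Weight the tweaks by how close they stay to the original instance.
        distances = np.linalg.norm(perturbations - instance, axis=1)
        weights = np.exp(-(distances ** 2) / (2 * scale ** 2))

        # 4. Fit a simple, interpretable model locally; its coefficients are the
        #    'explanation' of the black-box decision at this point.
        surrogate = Ridge(alpha=1.0).fit(perturbations, predictions, sample_weight=weights)
        return surrogate.coef_

Even so, such an explanation describes the model as it stands today; as argued above, a model that is continually retrained may give a quite different answer for the same inputs a fortnight later.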

While GDPR does not demand a ‘right to explanation’ as is widely and repeatedly claimed (see https://academic.oup.com/idpl/article/7/2/76/3860948) it does still require a ‘right to be informed’ with ‘meaningful information’. This requirement will be incredibly hard to satisfy and could scupper advances in machine learning, at least in those areas that involve personal data such as loan applications, medical advice, job screening, etc.

Are We Nearly There Yet?

Until machines are able to understand the data they are processing, and have the introspective ability to rationalise decisions based on that data (at least as proficiently as we do – recognising that we’re not perfect either!), then no, we are still a long way off creating a true artificial intelligence. And these aren’t the only criteria for claiming success. In particular, we are still unable to tackle general artificial intelligence, and instead deliberately limit ourselves to specific applications based on narrowly defined data sets. An all-purpose artificial intelligence that can see, hear, drive a car, create music, successfully challenge a GDPR lawyer, and so on, is a very long way off.

That said, artificial intelligence – right now – has gone from being an academic curiosity to a mainstream revenue generator. The market research firm Tractica says that ‘the revenue generated from the direct and indirect application of AI software will grow from $1.4 billion in 2016 to $59.8 billion by 2025’ – a staggering increase of more than 4,000%! And Gartner predicts that by 2020, ‘the average person will have more conversations with bots than with their spouse’.

What’s spurring the sudden advance in artificial intelligence has less to do with the mathematical models that machine learning algorithms depend upon; these haven’t progressed much in several decades. Rather, it’s the explosion in available data sets and the recent emergence of specialised computer infrastructure – such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs) – which push vast streams of data through those algorithms at ever-increasing speeds.

With all this raw computing power and data availability, are we nearly there yet in artificially creating intelligence? Not really. Not while our current machines are just big number crunchers with no ability to comprehend what those numbers actually mean. Will machines take over our jobs? Yes, in some cases, although it’s been widely argued that artificial intelligence will also introduce new jobs, such as training machines to perform better (see, for example, the MIT Sloan Management Review). That said, perhaps AlphaGo Zero demonstrates that machines of the future won’t need a great deal of human tutoring: given the rules and the ability to apply them rapidly and iteratively to the available data, they may be able to figure things out for themselves.

For now, machine learning, deep learning, and artificial intelligence in general are extremely valuable tools – whether or not we classify them as actually ‘intelligent’. Except, perhaps, in 2020 when you may spend more time talking with bots than your spouse!

 

Alistair Johnston, Director, Programme Management
Email: alistair.johnston@nashtechglobal.com 
Call: +44 (0)7817 010142