August 29, 2018
AI has the power to revolutionise how businesses operate and the way we live. It’s why 69% of CIOs have implemented, or expect to implement, intelligent automation in IT in the coming years.
But with all the hype around this emerging technology, it’s far too easy to race ahead of ourselves, jumping straight into solutions without first understanding what’s required to deliver a successful AI product.
At NashTech we’ve broken down the AI development process into three stages:
- Big Data
- Machine Learning
- Artificial Intelligence
This article shines a light on these essential steps to AI success and, in the coming weeks, we will dive deeper into each element to better understand just what is required. I’ll also be discussing the topic at Japan ICT Day on 30th August. Click here to attend.
When starting out on an AI journey, business leaders often don’t realise just how important data management is to the process. To be clear, it is without doubt the most important step in the process.
Businesses must draw on the expertise of data scientists and engineers to consolidate, clean and analyse the data, so that the useful, relevant information is kept and organised appropriately.
While issues such as null values, duplicates and disparate data sources might have gone unnoticed in the past, they can cause major headaches for an AI project and must be resolved.
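As a minimal sketch of what that clean-up step involves, the snippet below uses pandas on entirely hypothetical transaction records: two disparate sources are consolidated, then exact duplicates and rows with null values are dropped.

```python
import pandas as pd

# Hypothetical transaction records from two disparate sources
sales_a = pd.DataFrame({"customer_id": [1, 2, 2], "amount": [100.0, None, 250.0]})
sales_b = pd.DataFrame({"customer_id": [2, 3], "amount": [250.0, 80.0]})

# Consolidate the sources into a single table
combined = pd.concat([sales_a, sales_b], ignore_index=True)

# Drop exact duplicates and rows with a null amount
cleaned = combined.drop_duplicates().dropna(subset=["amount"])

print(cleaned)
```

Real projects involve far more than this (schema alignment, outlier handling, validation rules), but the principle is the same: only useful, consistent records should reach the modelling stage.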
Often, when we suggest machine learning as a step towards AI, we are met with a lot of raised eyebrows. “Isn’t machine learning a type of AI?” is the most common response.
The confusion is understandable. By definition, for a machine to be artificially intelligent, it must perform tasks in an intelligent way. In other words, it must be able to figure out and adapt to different situations by itself.
Machine learning, more specifically, is about building machines that can process the data they are provided with, learn from it and make better decisions. It’s best to view it as a subset of artificial intelligence.
What is immediately apparent is the reliance on data for effective machine learning, and why it is the first step to be addressed. Once the data is prepared and agreed upon, algorithms are created to replicate the human decision-making process.
If we consider how marketers have historically used and compared data to predict and influence buyer behaviour, we see that commonalities in demographics, psychographics, sociographics and so forth were first established. These commonalities would have formed the foundation for marketing campaigns that attempted to appeal to market segments based on their values, attitudes, gender and income.
Similarly, developers can teach computers to identify patterns in data that enable them to make decisions. So in the case above, the machine would analyse the data and conclude who, what, where and how to market a company’s product. And today’s technology allows marketers to scale from granular activities that might target a specific individual up to global initiatives.
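To illustrate the idea of learning decision rules from demographic patterns, here is a toy sketch using scikit-learn. The customer attributes and campaign responses are invented for illustration, not drawn from any real campaign.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical customer attributes: [age, income (thousands), prior purchases]
X = [
    [25, 30, 0],
    [34, 55, 2],
    [45, 80, 5],
    [52, 95, 7],
    [23, 28, 1],
    [48, 90, 6],
]
# Whether each customer responded to a past campaign (1 = yes)
y = [0, 0, 1, 1, 0, 1]

# The tree learns decision rules from the patterns in the data
model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Predict whether a new, unseen customer is likely to respond
prediction = model.predict([[40, 75, 4]])
```

In practice a marketer would train on thousands of records and many more attributes, but the workflow is the same: historical data in, learned decision rules out.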
In Japan we are working with a growing number of companies, across a number of industries, to harness the potential.
For one e-commerce client, we are cleansing and analysing 20 years of transaction data to define KPIs for each lead and raise the deal-closing rate. For a manufacturer, we are utilising the data collected daily from the production line to better understand malfunction rates, define best practice for operators and offer advice to its clients.
Machine learning goes a long way to delivering a more autonomous solution, which often meets the needs of businesses. However, its ability to make predictions about the future is still restricted by the data available. The machine can’t imagine a scenario in the absence of information, and if it makes an error, an engineer is still required to make adjustments. In other words, it’s not yet fully autonomous.
This is where deep learning – the pinnacle of artificial intelligence today – comes into play. While it may be considered yet another subset of machine learning, what differentiates deep learning is that the algorithms used can establish by themselves whether their predictions are accurate or inaccurate. They have the ability to reflect on their decisions and improve based on past experience.
Put simply, where machine learning uses algorithms to parse data, learn from that data, and make informed decisions based on what it has learned, deep learning structures algorithms in layers to create an ‘artificial neural network’ that can learn and make decisions on its own, and most closely resembles human-like artificial intelligence.
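The layered structure can be sketched in a few lines of NumPy. This is only the forward pass of a tiny two-layer network with made-up weights; a real deep learning model would also train those weights via backpropagation on large datasets.

```python
import numpy as np

def relu(x):
    # A common activation function: pass positives through, zero out negatives
    return np.maximum(0, x)

# Randomly initialised weights for a tiny two-layer network
rng = np.random.default_rng(0)
w1 = rng.normal(size=(3, 4))   # input (3 features) -> hidden layer (4 units)
w2 = rng.normal(size=(4, 1))   # hidden layer -> single output score

def forward(x):
    hidden = relu(x @ w1)      # first layer extracts simple patterns
    return hidden @ w2         # second layer combines them into a score

x = np.array([0.5, -1.2, 0.3])
score = forward(x)
```

Each layer transforms the output of the one before it, which is what lets deep networks build up increasingly abstract representations of the input.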
In our work with Janison, a leading digital assessment and learning provider, we used deep learning algorithms (recurrent neural networks including Long Short-Term Memory networks, convolutional neural networks, and encoder-decoder models). This enabled Janison to increase the accuracy of its scoring system, splitting scores into finer intervals than basic machine learning allows, meaning a more powerful, more reliable tool for its users.
While advances in technology may someday result in plug-and-play solutions, for now businesses must follow a clear path for their AI development process and this is the best approach we’ve seen yet.
At NashTech, we don’t just develop, we also consult. After helping our clients to first build up their systems and understand the business flow and system operation, we guide them in finding the ‘target’ of data utilisation. Only at this stage do we apply the machine learning or deep learning technologies that generate the final AI results. Our ability to move between the roles of IT development team and business analyst, and turn idle data into a business generator, is the root of our success.