Improving inventory management and accelerating time to insight
NashTech helped Swiggy develop a Guided Analytics application. With it, Swiggy creates data visualisations, interactive and scheduled dashboards, and inventory forecasting models, and generates forecasts for its business intelligently and collaboratively.
Swiggy is India’s leading on-demand delivery platform with a tech-first approach to logistics and a solution-first approach to consumer demands. With a presence in 500 cities across India, they deliver unparalleled convenience driven by continuous innovation.
From starting as a hyperlocal food delivery service in 2014 to becoming a logistics hub of excellence today, their capabilities result not only in lightning-fast delivery for customers but also in a productive and fulfilling experience for their employees.
The challenge: Replacing an inefficient, cumbersome, manual process
Swiggy's analysis was completed every month via an inefficient, cumbersome, manual process that began with CSV extracts, which led to understocking or overstocking. Overstocking can force decisions such as marking down an item's price to increase sales turnover, while limited stock results in lost sales and dissatisfied customers who then purchase from the competition.
A scalable, flexible, transparent, and easy-to-update solution was needed, one that also significantly accelerated time to insight. Swiggy wanted to predict at least three months of sales for 50 items at 10 different stores.
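To give a feel for the scale of that requirement, the sketch below builds an entirely hypothetical sales history (not Swiggy's data) for 50 items across 10 stores and produces a naive three-month baseline forecast, the kind of simple benchmark a forecasting platform would aim to beat:

```python
import random
import statistics

random.seed(0)
STORES, ITEMS, HORIZON = 10, 50, 3  # 10 stores, 50 items, 3-month horizon

# Hypothetical history: 24 months of unit sales per (store, item) pair.
history = {
    (store, item): [random.randint(20, 200) for _ in range(24)]
    for store in range(STORES)
    for item in range(ITEMS)
}

# Naive baseline: forecast each of the next 3 months as the mean of the
# last 12 months of sales for that (store, item) pair.
forecast = {
    key: [round(statistics.mean(series[-12:]))] * HORIZON
    for key, series in history.items()
}

print(len(forecast))  # 500 (store, item) pairs, each with a 3-month forecast
```

Even this toy version makes the scale clear: 500 item-store combinations, each needing its own forecast, is far beyond what a monthly manual CSV process can sustain.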
The solution: An End-to-End Data pipeline and automated Data Science Solution for Inventory Forecasting
To solve these problems, we built the Forecasting Platform (KFP), a web application developed with KNIME that allows decision-makers and stakeholders to be as involved as data engineers and data scientists in creating the pipeline.
The solution provides several advantages over historical forecasting solutions:
- Configurable, dynamic platform. Allows the underlying forecasting process to be customised by changing the parameters, datasets, or models, which can be done within minutes or hours to provide a timely forecast.
- Faster, flexible processing with big data. End-to-end pipelines can be run, in most cases, multiple times a day and are only limited by the computational spending users are prepared to incur. Companies can choose to generate forecasts on any cadence that they want or need.
- Rich set of prediction models. Machine learning allows for a quick change in models that fit what companies are trying to forecast. The biggest strength of KNIME and, as a result, the KFP, is the ability to plug in advanced models such as neural networks and random forest algorithms with no code (but coding is possible when needed), making the forecast sophisticated and accurate without increasing the complexity.
- Accuracy measurement. Enables measuring the accuracy of the forecast following the principles of machine learning systems. Machine learning algorithms inherently come with accuracy measurements and versioned datasets (training, test, and production), giving valuable feedback early on.
- Ability to react to black swan events. Reduces the risk of missing out on key global events by allowing for quick changes. Many companies miss out on critical events due to the lack of an easy way to integrate external events.
- Discipline due to the implemented forecasting process. Forecasting processes are generally very well established and too rigid to change. For a well-tuned supply chain, flexibility is needed to incorporate stakeholder feedback, configure different forecasting parameters, and integrate it into a legacy system. The KFP can be managed independently and integrated into already-existing business processes.
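The model and accuracy points above can be illustrated with a short sketch. In KNIME this is assembled visually from nodes; the Python below is only an analogous, self-contained example on synthetic data (not Swiggy's), showing a random forest trained on lag features with a time-based train/test split and a MAPE accuracy check:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Hypothetical monthly demand: trend + yearly seasonality + noise.
months = np.arange(120)
demand = 100 + 0.5 * months + 20 * np.sin(2 * np.pi * months / 12) \
    + rng.normal(0, 5, size=120)

# Lag features: the previous 12 months predict the next month.
LAGS = 12
X = np.array([demand[t - LAGS:t] for t in range(LAGS, len(demand))])
y = demand[LAGS:]

# Time-based split: the model never trains on the future.
split = len(y) - 12
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Accuracy measurement: mean absolute percentage error on the held-out year.
mape = float(np.mean(np.abs((y_test - pred) / y_test)) * 100)
print(f"MAPE on the held-out year: {mape:.1f}%")
```

Swapping the `RandomForestRegressor` for a neural network or another estimator changes one step of the pipeline, which is the same plug-in flexibility the KFP exposes without code.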
We helped Swiggy develop a Guided Analytics application. With it, Swiggy creates data visualisations, interactive and scheduled dashboards, and inventory forecasting models, and generates forecasts for its business intelligently and collaboratively by:
- Maintaining a single data repository for all reports.
- Ingesting data from different data files.
- Configuring parameters for the forecasting process.
- Creating data visualisation dashboards in an easy and guided way.
- Using already available statistical, machine learning, and AI-based algorithms.
- Using an in-built email service for collaborating on results.
Once the reports and visualisations are generated, data scientists, business users, and domain experts can collaborate on the final results.