In recent years, the amount of information and data available to companies about their supply chains has been steadily increasing. Web traffic, social networks, software, and sensors monitor shipments, suppliers, and customers, while ever-growing historical data show past inventory levels and sales. This development is both a boon and a bane. On the one hand, the various data sources enable companies to visualize and forecast flows in their supply chains. On the other, the range of information and data is vast and often unstructured, so a company has to make smart choices in order to select the right data for a particular application.
When used correctly, an analysis of the right data enables companies to enhance the visibility of their supply chains. There are many different definitions of supply chain visibility, but in general, it refers to the knowledge of and control over inventory, orders, and shipments and the various events and costs that affect them. Some examples include the extent to which a company can ascertain the location of delivery trucks, the effects of weather on spare parts' availability, and which item to supply next in order to meet customer demands.
Given the time and cost pressures on companies today, it is surprising that organizations rarely employ a rigorous approach to making systematic use of their data in order to benefit from an efficient, up-to-date view of their supply chain activities. According to Adam Hamzawi, chief executive officer of the information technology (IT) consultancy eTURNING and a former Capgemini consultant, only 20 percent of companies fully exploit the data they have at their disposal.1 The remaining 80 percent are missing an opportunity: Smart data management is linked to supply chain visibility—a strategic advantage that enables companies to outpace competitors in supply chain performance.
Indeed, recent business reports have highlighted the need for supply chain visibility and noted that the lack thereof may undermine a business's general and financial performance.2 That may explain why over the past five years, prioritizing visibility within organizations and across their end-to-end supply chains has moved to top management's agenda. Nevertheless, the results of a recent survey among 111 managers in supply chain functions in international businesses in Switzerland suggest that the real-time visibility of their supply chains is mediocre, averaging 3.9 on a 7-point scale.3 This troubling gap should be a mandate for action.
How can companies achieve the kind of supply chain visibility they need? We believe that a three-step approach involving data gathering and analysis, alignment of data sources and visibility requirements, and information sharing allows companies to leverage the potential of both the data itself and the information-intensive environments of their supply chains.
Gather and analyze the right data
Each year the amount of stored data around the globe increases by 40 to 60 percent.4 This huge and steadily growing amount of information and data (originating from both internal/company sources and external/public sources) holds enormous potential benefits for companies if they tap into it. But to do so effectively, business leaders have to understand their companies' true information needs. Utilizing the right data provides a better information base, which translates into superior decisions. In fact, companies that employ data-driven decision-making processes outperform their peers by 5 percent in productivity and 6 percent in profitability.5
There are many ways the right data, properly analyzed, can provide the information companies need in order to make improvements within their supply chains. Data from customer service and social media will enable research and development (R&D) engineers at a consumer-focused company to craft the kinds of products consumers really desire. Obtaining data that identify where drivers are wasting their time is essential to improving the efficiency of a delivery fleet. Accurate, real-time, stock-level information will help managers improve delivery reliability for orders. And machines equipped with sensors that measure parts wear enable higher utilization of production equipment by transforming maintenance from a scheduled activity into a demand-driven one.
After evaluating what kind of data will best support their decision-making processes, business leaders need to formulate a clear strategy for how to obtain this data and then make sense of it. Their strategy should include the standardization of IT platforms and interfaces to increase companywide availability of data.
Finding new meanings in existing data
As part of the second step of the journey—alignment of data sources and visibility requirements—companies must structure and analyze established data sources in terms of their potential to meet their information needs.
In most cases, statistical algorithms perform that task better than human decision makers, who typically take irrelevant contextual information into account. This advantage is especially apparent in low-validity environments characterized by a high degree of uncertainty and unpredictability. An illustrative example of this phenomenon is the prediction of future wine prices. The standard practice is for a circle of skilled experts to rate fine wine after harvest and then predict which bottles will become the most valuable. Strikingly, researchers found that a simple linear regression analysis of three features of the weather conditions during the growing season outperformed the experts' appraisals.6 For supply chain executives, the implication of this example is that they should abandon the practice of having people analyze key performance indicators (KPIs) and develop action plans based on contextual factors, and instead rely on decision-making algorithms.
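To make the idea concrete, here is a minimal sketch of such a regression in Python. The three weather features and every number below are illustrative assumptions for the sake of the example, not the data from the actual study:

```python
# Minimal sketch: predicting (log) wine prices from three weather
# features with ordinary least squares. All data is hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per vintage: assumed features are growing-season temperature
# (deg C), harvest rainfall (mm), and winter rainfall (mm).
X = np.array([
    [17.1, 160.0, 600.0],
    [16.7,  80.0, 690.0],
    [17.2, 130.0, 502.0],
    [16.1, 292.0, 420.0],
    [17.3,  38.0, 582.0],
])
# Log of the observed market price for each vintage.
y = np.log([37.0, 63.0, 45.0, 22.0, 90.0])

model = LinearRegression().fit(X, y)

# Predict the price of a new vintage from its weather record alone.
new_vintage = np.array([[17.0, 110.0, 610.0]])
print(f"predicted price index: {np.exp(model.predict(new_vintage))[0]:.1f}")
```

The point is less the particular library than the discipline: the model sees only the predictive inputs and cannot be swayed by the contextual noise that biases human judges.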
In many cases it is not even necessary to collect new data to enhance decision-making processes. A fresh look into a data warehouse can open up many new business opportunities, because the stored data often contains an abundance of unused but potentially useful information. This may happen because individual pieces of data are collected simply for documentation and then are stored to comply with regulatory statutes. In other cases it may be because analysis takes place on a descriptive level only, and no action is taken as a result of the findings. This latter situation is all too common; many companies have elaborately designed dashboards that present useful information but are not backed up by processes and workflows that translate those findings into actions.
Many optimization problems in supply chain management represent instances where a closer look at available data can improve decision making. One example is lot sizing in production planning. Although most companies retain ordering data in structured form, "educated guessing" guides many production dispatchers' day-to-day routines. This practice leads to wasted resources, especially when inputs are perishable and material and setup costs are high, as they are in the pharmaceutical industry.
Today many optimization routines in enterprise resource planning (ERP) systems are still based on variations of the classic linear optimization method or on heuristics derived from it. Despite their advantages, models applying linear optimization carry a considerable disadvantage that makes them unsuitable for many real-life scenarios: their specification requires information that may be fragmented or simply not available at the required level of detail. In addition, the underlying data structure may shift dramatically without the model accounting for the new pattern.
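For contrast, the sketch below sets up the kind of stripped-down linear production plan that such routines solve well. All demands, costs, and capacities are hypothetical, and real lot-sizing adds setup costs and integer decisions, which is exactly where a purely linear formulation starts to strain:

```python
# Minimal sketch: a three-period production plan as a linear program.
# Hypothetical numbers; no setup costs or integer lot sizes.
from scipy.optimize import linprog

demand = [40, 90, 30]          # units required per period
prod_cost = [2.0, 2.5, 2.2]    # unit production cost per period
hold_cost = 0.3                # cost of carrying one unit one period
capacity = 70                  # production limit per period

# Variables: x0..x2 = production, s0..s2 = end-of-period inventory.
c = prod_cost + [hold_cost] * 3

# Flow balance per period:
# production + opening stock - closing stock = demand.
A_eq = [
    [1, 0, 0, -1,  0,  0],
    [0, 1, 0,  1, -1,  0],
    [0, 0, 1,  0,  1, -1],
]
bounds = [(0, capacity)] * 3 + [(0, None)] * 3

res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds)
# The optimal plan pre-produces in period 0 to cover the period-1 peak.
print("production plan:", [round(v, 1) for v in res.x[:3]])
```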
Such challenges can be met by applying nonparametric algorithms to the evaluation problem. Mother Nature provides an example of such an algorithm: "Survival of the fittest" is a robust method for solving adaptation (that is, optimization) challenges. "Genetic algorithms" mimic the evolutionary process and natural selection. Applied to the lot-sizing problem, a genetic algorithm exchanges permutations of orders, analogous to the recombination of chromosomes in natural reproduction. The results are compared against a target function, and well-fitting permutations are chosen for new iterations. This method has clear advantages. In a research project, when we compared the results obtained with this algorithm to the current state of human decision making at a contract pharmaceutical manufacturer, we identified a savings potential of up to 8.7 percent of the total purchasing costs of the product group where the algorithm was applied. Moreover, the genetic algorithm outperformed the classic linear modeling approach by 26 percent. Our findings support the opinion of many experts that existing internal data often hold sufficient information to optimize networks.
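A minimal sketch of this idea, applied to a toy order-sequencing problem, appears below. The orders, the cost function (one unit of setup cost per changeover between product families), and all parameters are hypothetical stand-ins, not the model from our project:

```python
# Minimal sketch of a genetic algorithm over order permutations.
# The target function penalizes changeovers between product families.
import random

orders = ["A1", "A2", "A3", "B1", "B2", "C1", "C2"]

def cost(seq):
    # Hypothetical target function: each switch between product
    # families (the first letter) incurs one unit of setup cost.
    return sum(1 for a, b in zip(seq, seq[1:]) if a[0] != b[0])

def crossover(p1, p2):
    # Order crossover: keep a slice of parent 1, fill from parent 2.
    i, j = sorted(random.sample(range(len(p1)), 2))
    middle = p1[i:j]
    rest = [o for o in p2 if o not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(seq, rate=0.2):
    seq = seq[:]
    if random.random() < rate:           # occasionally swap two orders
        a, b = random.sample(range(len(seq)), 2)
        seq[a], seq[b] = seq[b], seq[a]
    return seq

population = [random.sample(orders, len(orders)) for _ in range(30)]
for _ in range(100):
    population.sort(key=cost)            # survival of the fittest
    parents = population[:10]
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(20)]
    population = parents + children

best = min(population, key=cost)
print("best sequence:", best, "setup changeovers:", cost(best))
```

Each generation keeps the fittest permutations and breeds new candidates from them, exactly the compare-against-a-target-function loop described above.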
Still, sometimes it is worthwhile to think outside the box. Simply combining separate data domains can yield exciting insights. For instance, by juxtaposing its point-of-sale data with severe weather warnings, Wal-Mart discovered a remarkable pattern: In areas threatened by hurricanes, not only did the demand for emergency-relief equipment increase, but people also hoarded Pop-Tarts (a sweet breakfast pastry sold in North America). This nonintuitive finding was produced by a simple correlation analysis; it now helps the retailer ensure that regions facing a potential natural disaster have sufficient supplies of water, shovels—and Pop-Tarts.7
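The sketch below reproduces the flavor of such an analysis on hypothetical data: point-of-sale figures are joined with a hurricane-warning flag by region and week, and each product's sales are correlated with the flag:

```python
# Minimal sketch: correlating product sales with hurricane warnings.
# All figures are invented for illustration.
import pandas as pd

pos = pd.DataFrame({
    "region":    ["FL", "FL", "TX", "TX", "OH", "OH"],
    "week":      [1, 2, 1, 2, 1, 2],
    "pop_tarts": [120, 310, 95, 280, 100, 105],
    "batteries": [80, 260, 60, 240, 75, 70],
})
warnings = pd.DataFrame({
    "region": ["FL", "TX"],
    "week":   [2, 2],
    "hurricane_warning": [1, 1],
})

# Juxtapose the two data domains, defaulting to "no warning."
merged = pos.merge(warnings, on=["region", "week"], how="left")
merged["hurricane_warning"] = merged["hurricane_warning"].fillna(0)

# Which products co-move with severe-weather warnings?
for product in ["pop_tarts", "batteries"]:
    r = merged[product].corr(merged["hurricane_warning"])
    print(f"{product}: correlation = {r:.2f}")
```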
There are cases, however, where such retrospective insights are not sufficient to meet the challenges of a particular environment. Real-time data will then be needed to transform supply chains into dynamically adapting networks. In that case there is no choice but to seek new data sources.
A new age of real-time visibility
By tapping sources like Web search queries and social media in addition to data provided by sensors and mobile devices, companies gain access to a massive and steadily growing flow of data. This flow, characterized by its unprecedented volume, velocity, and variety, is known as "big data." The potential economic implications of big data are huge; for instance, the consulting firm McKinsey & Company expects the U.S. health care industry alone to create US $300 billion in value by using big data to drive efficiency and quality.8
The use of unstructured data sources, such as Web search queries, has proved especially useful in increasing the accuracy of predictions for outcomes of events in the immediate future. For example, it has been demonstrated that Web search queries can predict influenza epidemics9 or the commercial success of movies, music, and computer games more accurately and more quickly than can traditional approaches.10 Big data analytics solutions help users not only to understand what has happened in the past, but also to analyze what is happening as it happens, and then to simulate the impact of any related decisions, a point stressed by Jan-Willem Adrian of Quartet FS, a supply chain software provider whose in-memory aggregation and analytics technology works with streaming data.11
In a supply chain context, the major advantage of big data is the velocity of data availability, enabling (almost) real-time monitoring or forecasting. To take advantage of this, the Swiss industrial company ABB collects and consolidates information from structured and unstructured data sources to increase the resilience of its supply chain against a variety of risks. (See the sidebar for more about how ABB's supply chain has benefited from big data analysis.)
Having information instantly available transforms daily operations. It allows a supply chain to dynamically adapt to requirements for the near future or even to real-time customer demand. In an exploratory research project with a parcel-delivery service in Switzerland, for instance, we were able to improve the accuracy of the company's weekly predictions of key business clients' shipping volume by up to 34 percent by adding publicly available search query data to autoregressive models. Such models predict future developments based on historical data and are the prevalent forecasting method for a wide range of applications. The new procedure helps the parcel carrier improve short-term resource allocation during peak periods.
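A simplified version of that augmentation looks like the sketch below: lagged volumes form the autoregressive core, and last week's search-query index enters as an extra regressor. All numbers are invented for illustration:

```python
# Minimal sketch: an autoregressive forecast of shipping volume,
# augmented with a search-query index. Hypothetical data throughout.
import numpy as np
from sklearn.linear_model import LinearRegression

volume = np.array([510, 495, 530, 560, 540, 580, 610, 600, 640, 655], float)
queries = np.array([42, 40, 45, 50, 47, 53, 58, 56, 61, 63], float)

lags = 2
X = np.column_stack([
    volume[lags - 1:-1],    # shipping volume one week back
    volume[:-lags],         # shipping volume two weeks back
    queries[lags - 1:-1],   # search-query index one week back
])
y = volume[lags:]

model = LinearRegression().fit(X, y)

# One-step-ahead forecast for next week.
x_next = np.array([[volume[-1], volume[-2], queries[-1]]])
print(f"forecast: {model.predict(x_next)[0]:.0f} parcels")
```

Dropping the query column from X recovers the plain autoregressive baseline, which makes the contribution of the public data directly measurable.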
In transportation operations, such capabilities can have a major impact on both efficiency and cost. For example, a major U.S. airline improved the prediction algorithm it uses for estimating the time of arrival of approaching aircraft at its major hubs by connecting publicly available data about weather and flight schedules with internal data such as feeds from radar stations. The improved estimates that resulted reduce idle time for ground crews and could save the airline several million dollars annually.12
In mega-cities, traffic jams are a daily burden for commuters and commercial traffic. To avoid standstills, some express delivery services use global positioning system (GPS) data to dynamically adapt their routes during last-mile delivery. In one pilot project, DHL is using GPS data provided by taxis to dynamically adapt the routes of its delivery vehicles to real-time traffic conditions.13 Another example is UPS, which gathers traffic data from its delivery vehicles and uses that information for route optimization in its On-Road Integrated Optimization and Navigation (ORION) system. When their routes are optimized to the current traffic flow, the truck drivers save fuel and time on their daily runs through the city.14
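A toy version of such dynamic re-routing is sketched below, using a hypothetical street graph whose edge weights are travel times in minutes; when a live feed reports congestion, the cheapest route is simply recomputed:

```python
# Minimal sketch: re-routing a delivery vehicle on live travel times.
# The street graph and all timings are hypothetical.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("depot", "A", 5), ("depot", "B", 7),
    ("A", "customer", 10), ("B", "customer", 6),
], weight="minutes")

def best_route(graph):
    return nx.shortest_path(graph, "depot", "customer", weight="minutes")

print("planned route:", best_route(G))   # via B: 13 minutes

# A traffic feed reports congestion on the B leg; update and re-route.
G["B"]["customer"]["minutes"] = 25
print("updated route:", best_route(G))   # via A: 15 minutes
```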
On the consumer side, mobile apps are reinventing the taxi market in megalopolises. Patrons can find a cab suited to their personal preferences and pay with the app. To avoid supply shortages, the providers dynamically adapt fares during rush hours or inclement weather. The application of this process to business logistics would give small and medium-sized enterprises access to transportation services without intermediaries. This would be a sea change from today's freight exchanges, which carry big transactional costs and where the final price of a service is open to negotiation.
As is clear from the examples above, real-time-enabled supply chains enhance operational efficiency by allowing companies to make real-time adjustments in response to demand and capacity fluctuations. The business case, however, must be evaluated for each scenario.
In addition, any effort to benefit from the analysis and application of big data is subject to the same success factors as any other business endeavor; that is, it is critical to set clear goals and requirements and not to overestimate the capabilities of new technologies. Modern technologies like in-memory processing (computing on data held in main memory rather than on disk, which yields a large speed advantage) make predictions and searches faster, not better! Prior to systemwide rollouts, therefore, it is best to prove the applicability, and especially the profitability, of a solution through a pilot project and data collection coupled with utilization scenarios.
Three steps to visibility
We have described how the smart use of classic and novel data sources can help companies reduce costs and adapt to changing environments. Companies that seize those opportunities can realize a 26-percent performance improvement from big data analysis, according to the consulting firm Capgemini.15 To extract this value, it is essential to align data collection and analysis efforts with the visibility requirements, and to neither over- nor under-engineer these processes. The following three-step approach will help companies achieve supply chain visibility in an efficient manner:
1. Set goals and explore visibility needs companywide. The first step toward alignment consists of a companywide stocktaking of visibility requirements and information availability, conducted by a cross-functional team with top management's support. The result of this step is a decision map that breaks down the decisions that usually are made to achieve corporate goals and sub-goals, along with the visibility levels they require. The visibility needs for the goals of reducing inventory levels, increasing sales, and protecting the supply chain from risks, for example, will vary depending upon the dynamism of their environments. Decisions made in a dynamically changing environment demand continuous visibility. In contrast, when the environment remains relatively static within the decision horizon, the visibility need is discrete.
2. Match data collection to visibility needs. The decision map created in the initial step identifies each function's visibility needs, which are deduced from the corporate goals. The second step calls for action: Data collection must be aligned with those visibility requirements. As summarized in Figure 1, data collection fits the visibility needs when data characteristics meet the analytic requirements. In the case of discrete visibility needs, the use of historical data sources is sufficient to provide solid decision support; recall the example of Wal-Mart analyzing point-of-sale data and discovering an increased demand for Pop-Tarts in regions threatened by hurricanes. The forecast proved accurate because the environmental dynamism was low and the historical data reliable. However, in a setting where the environment can change dramatically, there is a need for continuous visibility, which calls for data provision at high volume, velocity, and variety. The example of ABB (detailed in the sidebar) shows how valuable continuous visibility can be: The use of real-time weather forecasts, social media, and newscasts in combination with internal ERP systems allowed the company to counteract the impact of the 2011 Thailand floods on its supply chain.
When data collection and visibility needs do not match well, putting them in a "misfit" quadrant, companies will incur additional costs. They may, for example, have needlessly invested in sophisticated information systems, which will diminish their financial performance. However, if failing to analyze and respond to environmental changes would have a significant detrimental effect on the supply chain or on customer satisfaction, then investments in such information systems will be both justified and wise.
3. Distribute data across the company and reevaluate processes. Lastly, the information must be provided to decision makers in standardized formats, and in a timely way. Again, for discrete information needs, scheduled query updates will provide the right amount of visibility, whereas a need for continuous visibility calls for a "push-information" flow. Data access should not be limited to the primary addressees, but should instead be made available to all possible stakeholders, as human creativity will drive new applications. Once a well-fitted information system has been established, periodic evaluations will ensure that information collection still matches the visibility needs and that the underlying assumptions continue to hold true.
In conclusion, the application of big data analysis in a supply chain management context provides significant opportunities for improvement, but before engaging in costly experiments it is paramount to exploit the data already at hand. By following a structured approach and taking into account the limitations of both novel and traditional data sources, companies can achieve an optimal level of visibility in their supply chains and maximize the value extracted from their data warehouses.
Notes:
1. A. Hamzawi, eTURNING internal document (2014).
2. World Economic Forum, Building Resilience in Supply Chains (2013).
3. V. Trost, "Cross-industry comparison of supply chain visibility—Do complex supply chains have a higher supply chain visibility?" (master's thesis, ETH Zurich, 2014).
4. F. J. Ohlhorst, Big Data Analytics: Turning Big Data into Big Money (Hoboken, N.J.: John Wiley & Sons, 2012).
5. A. McAfee and E. Brynjolfsson, "Big Data: The Management Revolution," Harvard Business Review 90, no. 10 (2012): 60-68.
6. O. Ashenfelter, "Predicting the Quality and Prices of Bordeaux Wine," The Economic Journal 118, no. 529 (2008): F174-F184.
7. M. A. Waller and S. E. Fawcett, "Click Here for a Data Scientist: Big Data, Predictive Analytics, and Theory Development in the Era of a Maker Movement Supply Chain," Journal of Business Logistics 34, no. 4 (2013): 249-252.
8. J. Manyika, M. Chui, B. Brown, J. Bughin, R. Dobbs, C. Roxburgh, and A. Hung Byers, Big Data: The Next Frontier for Innovation, Competition, and Productivity, McKinsey Global Institute (2011).
9. J. Ginsberg, M. H. Mohebbi, R. S. Patel, L. Brammer, M. S. Smolinski, and L. Brilliant, "Detecting Influenza Epidemics Using Search Engine Query Data," Nature 457, no. 7232 (2009): 1012-1014.
10. S. Goel, J. M. Hofman, S. Lahaie, D. M. Pennock, and D. J. Watts, "Predicting Consumer Behavior with Web Search," Proceedings of the National Academy of Sciences 107, no. 41 (2010): 17486-17490.
11. J. W. Adrian, Quartet FS internal document (2014).
12. McAfee and Brynjolfsson (2012).
13. DHL, "Intelligent transport hits the road" (2014).
14. UPS Inc., "ORION Backgrounder" (2013).
15. Capgemini, "Big data—Finding the value" (2013).
The Switzerland-based engineering company ABB's technology can be found in factories, trains, power plants, and even in space. To protect its global supply chain from disruption, ABB sets benchmarks for supply chain risk management and applies a well-balanced mix of risk management strategies.
For example, multisourcing with local suppliers increases the flexibility of its production network and builds resilience into the core design of its supply chain. The company also applies legal measures whenever feasible, but because such measures do not by themselves prevent risk, ABB goes far beyond them.
The engineering company proactively addresses the risks its supply chain faces. Regular risk reviews estimate exposures both locally and globally so that identified risks can be reduced to an acceptable level. These reviews also provide the basis for business contingency plans, which define responsibilities and mitigation measures.
A key risk mitigation tool is a software program called Emergency Dashboard. The dashboard is activated whenever a natural or man-made event creates a real or potential disruption to ABB's supply chain. By tapping several different types of data sources, it determines ABB's exposure, by business line and supplier, in the regions affected by adverse events. This enables the company to put together a proactive mitigation plan, which in the past has helped ABB either completely eliminate or minimize losses due to a disruption.
For example, in 2011, when a monsoon was heading toward mainland Thailand, the Emergency Dashboard prevented supply chain disruptions by alerting the responsible managers, who immediately took action. During the subsequent flooding, the combination of prepared managers and an information advantage successfully protected ABB's supply chain from disruptions, a benefit many other companies did not achieve.