In a description of the National Science Foundation-sponsored center for Smart Process Manufacturing (http://www.rockwellautomation.com/resources/downloads/rockwellautomation/pdf/about-us/company-overview/TIMEMagazineSPMcoverstory.pdf) the authors suggest that market disruptions such as a “$3000 automobile or a $300 personal computer” might be outcomes. Plant integration, plant optimization, and manufacturing knowledge are listed as the phases needed to reach this reality. What are the barriers to such an evolution in manufacturing? How much integration of people, process, and technology needs to happen to transform existing manufacturing? Will leadership for this transformation come from small, agile companies that, when successful, will be integrated into larger ones, or can the large companies lead such a transformation? Finally, how global will this phenomenon need to be to transform supply chains?
In the article The Analytics Supply Chain, the author explains an interesting concept that appears to be a bottleneck for big data analytics projects at firms. The issue lies within the analytics supply chain itself. Deploying big data projects has become more and more popular, but the results are not always satisfactory. Often projects take too long to source the data, build the models, and deliver the analytics-based solutions to decision makers in an organization. In a twist of fate, the analytics supply chain becomes the issue for supplying the analytics that were meant to help the supply chain. The author suggests looking at the analytics supply chain in terms of the customers being decision makers and the products being consumed being analytical models. Just as in a normal supply chain, bad inputs usually equate to bad outputs.
When thinking of data as raw material and models as output, bad or incomplete data usually results in poor or incomplete models. Furthermore, sourcing enough data for complete models sometimes takes too long, and thus substitutes such as spreadsheets are used. The article states that between 20% and 80% of spreadsheets have been found to have errors, and as one might imagine, errors lead to the proliferation of different versions of the truth. A complete model built on complete, accurate data is therefore necessary. Such models can take time to build, and one can think of unfinished models as inventory. Inventory does not contribute to the bottom line, so producing complete models in a reasonable amount of time becomes important. Perhaps this requires a more precise approach or hiring more people, but it is important to recognize that data-driven models that are too complex or incomplete will normally not deliver the analytics-based decisions as anticipated.
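The "bad inputs, bad outputs" point can be made concrete with a simple data-quality gate run before any modeling starts. This is a minimal sketch of the idea, not anything from the article; the field names, records, and tolerance are all invented for illustration:

```python
# Minimal data-quality gate: flag a dataset before modeling if too many
# records are missing required fields. Field names and the tolerance are
# hypothetical, purely for illustration.

def quality_report(records, required_fields, max_missing_ratio=0.05):
    """Count records with missing required fields; mark the dataset unusable
    if the ratio of bad records exceeds the tolerance."""
    bad = 0
    for rec in records:
        if any(rec.get(f) is None for f in required_fields):
            bad += 1
    ratio = bad / len(records) if records else 1.0
    return {"bad_records": bad, "bad_ratio": ratio,
            "usable": ratio <= max_missing_ratio}

# Toy data: one record is missing its 'demand' value.
data = [
    {"sku": "A1", "demand": 120, "lead_time_days": 4},
    {"sku": "A2", "demand": None, "lead_time_days": 6},
    {"sku": "A3", "demand": 95,  "lead_time_days": 5},
]

report = quality_report(data, ["sku", "demand", "lead_time_days"])
print(report)  # 1 of 3 records is bad -> dataset flagged as not usable
```

A gate like this is cheap to run and stops bad raw material from becoming a bad model downstream.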
Finally, the author makes some suggestions for identifying whether an analytics supply chain is in need of repair. If analytics projects are hindered by a lack of IT, data, and/or other scarce technical resources, there might be an issue in the analytics supply chain. If a firm’s ability to proceed with new analytical models is hindered by constant maintenance of older systems, there may be an analytics supply chain issue. If big data systems have been deployed but the results don’t seem to justify the investment, perhaps it’s time to evaluate the analytics supply chain rather than scrapping big data projects because they appear to be a waste of money. Evaluations like this could save firms money in multiple ways, not to mention the time invested in preexisting projects.
Has anyone ever heard of evaluating the analytics supply chain?
Have you ever run across models that were too complex to implement?
Have you seen any instances where big data projects were scrapped because results were not produced fast enough?
Small business owners and entrepreneurs are starting to employ autonomous machines in collaborative and even artistic ways, and that may point to a new chapter of artisanal and small-batch manufacturing in the U.S. This report looks at the example of TLAC, a 2D and 3D design, print, and publishing shop in Toronto, Canada, that creates books for self-published authors.
The automation industry is seeing a shift in its labor force. As many current workers get ready to retire, a younger workforce has yet to arrive to take their place. In response to this shortfall, the advanced robotics market has grown significantly. Advanced robotic systems and collaborative robots are taking center stage at a time when manufacturing industries need them the most. This is a report based on findings by the Boston Consulting Group and others.
In the article Why Manufacturers Need Process Mining – A New Type of Data Analytics, the author extols the benefits of what he calls a new type of data analytics: process mining. Process mining can be used to reduce inventory costs, identify production bottlenecks, improve on-time delivery, and optimize logistics between production sites, distribution centers, and end clients. It’s hard to argue that process mining is actually a new type of data analytics for manufacturers, but it does follow a few of the key rules for successful big data usage. First, as the name implies, it concentrates on one process. Implementing successful big data projects normally requires concentration on one area for improvement. That specificity is usually what allows big data projects to succeed, and the article looks at why process mining, and big data analytics as a whole, can succeed in saving manufacturers money.
Process mining specifically looks at a very important factor in manufacturers’ processes: KPIs, or key performance indicators. KPIs are exactly what they sound like, the main factors that measure performance and the overall success of a process or project. Process mining’s value lies in the fact that it initially makes one look at the KPIs of a process. It also challenges manufacturers to validate whether they are measuring the right KPIs and to understand whether the data they are gathering can be related back to those KPIs. Process mining, as all big data analytics does, uses software to do the difficult work of visualizing the processes and highlighting specific variances impacting KPIs. One example used in the article is the examination of throughput times and the ability to identify that specific vendors are not meeting their lead-time commitments. These types of analyses and results are exactly what big data projects are meant to achieve.
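The vendor lead-time example can be sketched in a few lines: given delivery records, compute each vendor’s average lead time and flag those exceeding their committed lead time. The vendor names, commitments, and records below are invented for illustration and are not from the article:

```python
from statistics import mean

# Hypothetical delivery records: (vendor, actual lead time in days).
deliveries = [
    ("Acme", 7), ("Acme", 9), ("Acme", 8),
    ("Birch", 4), ("Birch", 5),
]

# Hypothetical committed lead times per vendor, in days.
commitments = {"Acme": 6, "Birch": 5}

# Group actual lead times by vendor.
by_vendor = {}
for vendor, days in deliveries:
    by_vendor.setdefault(vendor, []).append(days)

# Flag vendors whose average lead time exceeds their commitment.
late = []
for vendor, times in by_vendor.items():
    avg = mean(times)
    if avg > commitments[vendor]:
        late.append((vendor, avg))
        print(f"{vendor}: averaging {avg:.1f} days against a "
              f"{commitments[vendor]}-day commitment")
```

A real process-mining tool works from event logs and reconstructs the whole process flow, but the core variance-versus-KPI comparison is no more mysterious than this.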
Another point the article makes is that process mining encourages manufacturers to identify inefficiencies and problems within the process. It encourages companies to embrace their issues. This mindset is absolutely necessary for continuous improvement, and it is a key to any big data project. Embracing issues can be difficult, especially for older, ingrained processes, but it is the only way to eliminate them. It certainly is not the easiest thing to do, and it requires a humble mindset entering an improvement project. Process mining has all the key factors of successful big data software, and it could be very useful for manufacturers that want to embrace the big data revolution.
Do you believe addressing KPIs first is the best way to approach a big data project?
Do you think process mining is actually a new or different type of big data analytics or just a rebranding of basic big data?
Do you think that some manufacturers are reluctant to implement big data projects because they do not want to know their inefficiencies?
In the article IoT And Big Data At Caterpillar: How Predictive Maintenance Saves Millions Of Dollars, the author examines an interesting case of Caterpillar saving significant amounts of money using big data and the IoT. The best part of this case study is that Caterpillar is seeing a very quick ROI on its big data investment, which is not something that can be said for most companies. As a Caterpillar manager put it, you don’t have to look for a “grand slam” with big data; sometimes you just need multiple smaller applications of big data to experience significant savings. In this instance, gathering as much data as possible seems to be the best approach, and pairing experts in the processes with experts in the data to analyze the insights gleaned helps realize real value.
Caterpillar utilized big data in its Marine Division, mainly to analyze fuel consumption for its customers, as that most affects the bottom line. Sensors on the ships monitored everything from generators, engines, GPS, and refrigeration to fuel meters, and Caterpillar utilized Pentaho’s data and analytics platform. Insights gained include a correlation between fuel meter readings and the amount of power used by refrigeration containers, and the finding that running more generators at lower power instead of maxing out a few was more efficient. The cost savings added up to more than $650,000 per year. Another insight concerned optimizing a ship’s hull-cleaning schedule. Data collected from cleaned and uncleaned ships showed that cleanings should be performed once every six months instead of once every two years. The savings associated with this optimization were $400,000 per ship.
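The hull-cleaning result is, at bottom, a break-even calculation: a fouled hull burns progressively more fuel, while each cleaning costs a fixed amount, so some interval minimizes the average monthly cost. This toy sketch shows the shape of that trade-off; every figure in it is invented for illustration (the inputs were simply chosen so the toy optimum lands at six months) and none of it is Caterpillar’s actual data:

```python
# Toy break-even model for a hull-cleaning interval. All figures are
# hypothetical and chosen only to illustrate the trade-off.

CLEANING_COST = 25_000         # dollars per cleaning (hypothetical)
EXTRA_FUEL_PER_MONTH = 1_400   # added fuel cost per month of fouling (hypothetical)

def cost_per_month(interval_months):
    """Average monthly cost of a cleaning interval: the cleaning fee
    amortized over the interval, plus the average fouling penalty, which
    grows linearly from 0 just after a cleaning to its maximum just before
    the next one."""
    fuel_penalty = EXTRA_FUEL_PER_MONTH * (interval_months - 1) / 2
    return CLEANING_COST / interval_months + fuel_penalty

# Search candidate intervals from monthly up to every two years.
best = min(range(1, 25), key=cost_per_month)
print(best, round(cost_per_month(best)))  # cheapest interval, in months
```

With real sensor data the fouling penalty would be estimated from observed fuel-rate drift rather than assumed linear, but the structure of the decision is the same.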
In the grand scheme of big data, predictive maintenance analytics seems to be the most powerful tool consistently being used. With data being generated just about anywhere you could imagine via the IIoT, understanding trends becomes easier and easier. Interestingly, and contrary to previous articles, Caterpillar believes that you can’t collect too much data, pointing out how cheap data storage is. In the words of a Caterpillar manager, you can’t see “relationships about relationships” in the data if you don’t collect it. Although a more-is-better approach is definitely not what some companies have subscribed to, it seems to be working well for Caterpillar’s Marine Division as it continues to pull value out of big data and analytics.
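The core of predictive maintenance can be sketched very simply: watch a sensor’s trailing average and raise an alert when it drifts past a healthy baseline, so the part is serviced before it fails. The readings, baseline, and tolerance below are hypothetical toy values, not anything from the Caterpillar case:

```python
# Minimal predictive-maintenance check: flag a machine for service when the
# rolling average of a sensor reading drifts above its healthy baseline.
# Readings, baseline, and tolerance are hypothetical.

def maintenance_alert(readings, baseline, window=3, tolerance=0.10):
    """Return the index at which the trailing-window average first exceeds
    the baseline by more than `tolerance` (fractional), or None."""
    for i in range(window, len(readings) + 1):
        avg = sum(readings[i - window:i]) / window
        if avg > baseline * (1 + tolerance):
            return i - 1  # index of the reading that triggered the alert
    return None

# Vibration readings trending upward as a bearing wears (toy data).
vibration = [1.0, 1.01, 1.02, 1.05, 1.12, 1.18, 1.25]
idx = maintenance_alert(vibration, baseline=1.0)
print(idx)  # alert fires once the 3-reading average exceeds 1.10
```

Production systems use far richer models, but the “relationships about relationships” Caterpillar describes start from exactly this kind of trend detection, and the more sensors collected, the more such trends there are to correlate.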
Quick returns on big data investments seem to be rare, so do you think that companies just aren’t utilizing big data correctly?
Do you believe in the more is better approach with regard to collecting data?
Do you believe Caterpillar is more likely to invest in big data projects in other parts of their company due to the success in the marine division?
In a recent article on the website DailyMail, the development of extraterrestrial drones is discussed. Jupiter’s distant moon Europa is believed to harbor an ocean beneath its ice. NASA scientists and engineers are working to develop submersible drones that can operate in these icy waters to gather data and detect microbial life. A major requirement is that the drones operate autonomously and manage their own resources. Only time will tell whether these drones will accomplish the mission NASA hopes for.
In the article A Signal in the Noise: How to Best Manage Big Data, the author advocates ways to improve manufacturers’ use of data, along with his idea of the best way to approach big data analytics and data management. The author likens finding and using big data in the manufacturing environment to finding “a needle in the haystack.” With so much data produced by even small firms, managing, organizing, and analyzing the data can become overwhelming, if not impossible. Industrial Internet of Things-based systems are estimated to create $4 trillion to $11 trillion in new economic value for manufacturers by 2025, according to the McKinsey Global Institute. With such large value to be had, ignoring or underutilizing data could be a catastrophic mistake for manufacturers.
In a study of manufacturers conducted by the MPI Group, 76% of respondents reported plans to increase their use of smart devices in the next two years, while 66% plan to increase their investment in IIoT-enabled products over the same period. With many firms taking the necessary steps to collect and use data, a strategic approach is necessary. The author suggests a number of steps to jump-start the transformation to using data analytics. Recognizing human limits and the burden of isolation is the first suggested step. Here the author is advocating that firms understand that human teams are simply not capable of getting the insights that technology can. The next step is to forget the traditional supply chain cycle and embrace the complexity of modern supply chains. With the IIoT, data can be captured all along the supply chain, which can offer useful insights not previously possible.
Finally, the author advocates understanding four different measures before taking on new data management and analytics initiatives. Finding the actual problem, the “signal in the noise,” is the first and foremost issue at hand. Unless a project has direction, it will almost certainly fail. Next is understanding the business case: one has to understand the strategic advantage behind any data management implementation. Lastly, finding where optimization is most desperately needed and identifying the experts best suited to handle the data being generated is key. Overall, the biggest pitfall in bringing on new technology is the belief that good things will just happen. An understanding of the problem, the right people, and a tailored process will allow the technology to do the work it is supposed to.
Do you believe that understanding big data is really about finding the proverbial needle in a haystack?
Do you believe the increase in smart device usage automatically translates to more useful big data, or just more data in general?
As with most strategic IT plays, projects start off to gain a strategic advantage but quickly become part of the IT infrastructure. Do you think we are in the middle stage between big data being a strategic advantage and a necessary IT infrastructure requirement for manufacturers?