The 6C Framework to Build a Connected Factory—Part 2
ABSTRACT
Embracing Industry 4.0 (I4.0) depends on the effective collection and utilization of data. The benefits of I4.0 drive operational excellence in connected factories by elevating productivity, uptime, and quality. While the rationale for Industry 4.0 adoption is widely acknowledged, the challenge lies in the practical implementation of a connected factory. Successful adoption requires resolving both technical and human challenges.
The proposed 6C Framework is structured around six key components to solve these challenges: Criteria, Connect, Communicate, Collect, Consume, and Culture.
Understanding this high-level framework is foundational, as it guides countless strategic management decisions when implementing data collection. This framework is deconstructed into tactical aspects to ensure proper technical and cultural questions are considered throughout the data collection process. The 6C Framework addresses the human and technical aspects of data collection to aid in implementing I4.0.
COLLECT
The Collect phase focuses on storing data within IT systems. It is the culmination of all the effort of the previous phases of the framework. The company plans the data collection process and details (Criteria), generates the data with sensors or hardware (Connect), and uses protocols to transfer the data (Communicate), with the goal of storing the data for immediate or future use (Collect).
The tools used in the Collect phase are typical skill sets of IT professionals. Data is stored in databases, which are managed on servers. Databases offer a structured method for storing data. Database tables are created to hold specific types of data, including numbers, dates, times, characters, and others. The IoT platform organizes the data communicated from the hardware so it can be written into these database tables. This is referred to as structured data. In contrast, unstructured data storage holds data that does not fit the structured database format, such as text files, video, and images. Both storage options are discussed within this section.
Structured Data. The data generated by sensors often ends up in a structured format. As companies embark on their data collection journey within I4.0, they will need to understand structured, or relational, databases.
A database is usually stored on a server dedicated to managing databases. A server can hold multiple databases. A database is a collection of tables, which contain columns and rows of information. Each row represents a record with a unique identifier and all associated data. Each column contains individual data attributes of that record.
Structured databases are called relational databases because they establish relationships between different tables using primary and foreign keys. To illustrate this, consider two tables used by sales order software. One table, the CustomerTable, contains all the information about the customer, such as name, shipping address, primary contact, phone number, payment terms, and so on. This table includes a primary key, such as a unique number or customer ID that increments with each row. The second table, the OrderTable, contains information about each order placed with the factory.
During the order process, the customer is selected in the software. Instead of the OrderTable duplicating all the customer information (name, phone number, address, et cetera), it includes a single column for the CustomerID, a foreign key that links back to the unique customer identifier in the CustomerTable. Relational databases can join this information together. Figure 8 provides a visualization of this concept.
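As an illustration, the sketch below builds these two tables in SQLite (using Python's built-in sqlite3 module) and joins them; the table and column names follow the example above, while the sample data is hypothetical:

```python
# Minimal sketch of the CustomerTable/OrderTable relationship in SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# CustomerTable: CustomerID is the primary key.
cur.execute("""
    CREATE TABLE CustomerTable (
        CustomerID INTEGER PRIMARY KEY,
        Name TEXT,
        Phone TEXT
    )""")

# OrderTable: CustomerID is a foreign key pointing back to CustomerTable.
cur.execute("""
    CREATE TABLE OrderTable (
        OrderID INTEGER PRIMARY KEY,
        CustomerID INTEGER REFERENCES CustomerTable(CustomerID),
        PartNumber TEXT,
        Quantity INTEGER
    )""")

cur.execute("INSERT INTO CustomerTable VALUES (1, 'Acme Corp', '555-0100')")
cur.execute("INSERT INTO OrderTable VALUES (100, 1, 'PN-123', 50)")

# A join pulls the customer details into the order without duplicating them.
for row in cur.execute("""
        SELECT o.OrderID, c.Name, c.Phone, o.PartNumber, o.Quantity
        FROM OrderTable o
        JOIN CustomerTable c ON c.CustomerID = o.CustomerID"""):
    print(row)  # (100, 'Acme Corp', '555-0100', 'PN-123', 50)
```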
When collecting manufacturing data, there are two primary data storage philosophies to consider: wide tables and tall tables. A wide table has records that span multiple columns with related information. For example, a quality inspection entry might record the test ID number, date, test type, quantity checked, part number, and results. Wide tables result in fewer rows of data but more columns. In contrast, a tall table contains one of these observations per row, linked by the test identification number. This means fewer columns of data but many more rows.
Figure 9 provides a visual comparison of the structure differences between tall and wide tables for two example inspections.
Tall and wide tables contain the same information but structure it differently, and each structure has advantages and drawbacks. People are familiar with wide table structures, as data is often presented this way in daily work. Wide tables, however, can be challenging when the schema changes. Imagine that a new observation for the inspector’s name needs to be added to the tables. In a wide table, this requires a new column, and all previous rows stored in the database will have an empty or NULL value for it, since this data was not known at the time of entry. In a tall table, new variables can be added or removed at any time without changing the table’s data model; each is just another row of data storing different information. Another benefit of tall tables is that software used to create charts and plots of large datasets, which will be discussed in the Consume phase, may prefer, or even require, data in a tall table format.
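As a rough illustration, the following sketch restructures a wide inspection table into a tall one using the pandas library (assumed available); the column names are illustrative rather than the exact schema of Figure 9:

```python
# Sketch of the same inspection data in wide form, then melted to tall form.
import pandas as pd

wide = pd.DataFrame({
    "TestID":     ["T01", "T02"],
    "Date":       ["2024-01-05", "2024-01-06"],
    "TestType":   ["Visual", "Dimensional"],
    "QtyChecked": [10, 5],
})

# melt() converts wide to tall: one (TestID, Variable, Value) row per
# observation. Adding a new observation later is just another row, with
# no change to the table's data model.
tall = wide.melt(id_vars="TestID", var_name="Variable", value_name="Value")
print(tall)
```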
Unstructured Data. Text files, PDFs, emails, images, videos, and audio files are all examples of unstructured data. Typically, factories begin with structured data storage but eventually need to store the richer datasets that unstructured data provides.
These datasets often offer valuable insights for advanced analytics, including AI tools like large language models (LLMs), making them important to consider in an I4.0 implementation.
Unstructured data storage is also referred to as blob storage or, sometimes, a data lake. These are specialized storage systems capable of handling files such as images, videos, and more. Blob storage and data lake are not fully interchangeable terms, since a data lake can contain both structured and unstructured data.
Unfortunately, unstructured storage is often more difficult to search, sort, and analyze than the schemas of relational databases.
It is also important to consider semi-structured data, which can exist in unstructured storage. Semi-structured data uses tags and metadata to help organize information within the file. Examples of semi-structured file types include XML (Extensible Markup Language) documents and JSON (JavaScript Object Notation) files. An example of a JSON file for test T01 from the tall table in Figure 9 can be seen in Figure 10. The structured nature of the data is apparent without the need for a formal database.
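The short Python sketch below writes and reads back such a semi-structured record; the field values are illustrative stand-ins for the actual example in Figure 10:

```python
# Sketch of a semi-structured JSON record for a test like T01.
import json

record = {
    "TestID": "T01",
    "Date": "2024-01-05",
    "TestType": "Visual",
    "QtyChecked": 10,
    "Result": "Pass",
}

# The tags (keys) carry the structure, so no formal database schema is
# required to interpret the file.
text = json.dumps(record, indent=2)
parsed = json.loads(text)
print(parsed["Result"])  # Pass
```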
Structured, unstructured, and semi-structured storage provide options for different types of data. As a manufacturing company’s data collection implementation matures, it will gain experience with all of these storage types. Initially, the focus should be on structured databases, which can handle all but the most complex data. This initial focus is important because many commercially available software solutions for reporting and dashboarding are designed to work with relational databases. These details are reviewed next in Consume.
CONSUME
Collecting data without utilizing or learning from it is known as the collector’s fallacy. The Consume phase of the framework aims to prevent this fallacy by transforming gathered data into actionable information and knowledge. Employees then use this information to solve problems and improve the business.
The Consume phase involves technical aspects related to the programming of the software used to interface with the databases and manipulate the data, but this is not the most important part of the phase. The core function of this phase is ensuring that people are utilizing the data generated to make decisions and act on improvements. People are central to this phase of the framework.
A well-defined Criteria phase will make the Consume phase straightforward. The exact type of data should be specified, with plans clearly outlined for dashboards, reports, and alerts. Additionally, employees should be highly interested in obtaining this information to drive decisions. Well-defined Criteria ensure the immediate utilization of data in the Consume phase, removing any excuse for not using data. The transformation of this data into information can occur through real-time dashboards (RTDs), historical reporting, alarming and alerting, and predictive analytics.
Real-Time Dashboards. A real-time dashboard (RTD) visually displays key metrics and the status of equipment and processes within the factory. An RTD automatically updates with the latest information, enabling operators, supervisors, and management to understand the factory’s real-time performance. Although RTDs can refresh frequently, the speed of the updates varies. Some RTDs update every second, while others, such as those showing product volumes with long process cycles, may update every few minutes or even hourly. These dashboards can show more than current values; they often contain recent historical data where trends or patterns can aid in the decision-making process.
There are numerous software providers that offer real-time dashboard solutions. Often, IoT platforms include a module for creating RTDs. Access to tag values in PLCs (Programmable Logic Controllers) makes it straightforward to feed this data to the visuals used in dashboards. Management must invest time in evaluating RTD software options, as they are a critical part of the data journey. These dashboards are usually displayed directly on the factory floor and are the most visible representation of the data collection process to employees. While RTDs are often one-directional, they do not have to be. Employees can monitor data but can also enter notes or comments into these dashboards, turning them into data entry screens. The design and implementation of these dashboards are critical for success. An example of an RTD can be seen in Figure 11.
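To make the mechanics concrete, the sketch below shows a bare-bones refresh loop of the kind RTD software performs internally; the read_tag function and tag name are hypothetical placeholders for whatever interface the chosen IoT platform or PLC driver provides:

```python
# Minimal sketch of a dashboard refresh loop.
import random
import time

def read_tag(tag_name: str) -> float:
    # Hypothetical placeholder: returns a simulated PLC tag value.
    return 20.0 + random.uniform(-1.0, 1.0)

def refresh_dashboard() -> None:
    temperature = read_tag("Line1/Oven/Temperature")
    print(f"Oven temperature: {temperature:.1f} C")

# Update once per second; slower processes might refresh every few
# minutes or on the hour, as noted above.
for _ in range(3):
    refresh_dashboard()
    time.sleep(1)
```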
Historical Reporting. Real-time dashboards are important for the immediate, tactical decision-making that occurs daily. In contrast, historical reporting provides insights into long-term trends. Historical reporting shows performance through time, such as the daily maximum amp draw on an HVAC motor for the past six months, or business metrics like monthly scrap rates or uptime for the past five years. The objective of historical reporting is to understand past performance and identify trends. It helps validate improvements or pinpoint areas needing further attention. Historical reporting is important for monitoring the strategic health of a company.
Manufacturers have numerous software options for historical reporting, each offering various methods for delivering reports. Many systems provide a web-based reporting interface similar to an RTD, where users can log in and view different data visuals. Another common option is scheduled email reports, sent to employees at set dates or times, with the report appearing in the email body or as an attachment. An example of a historical report is seen in Figure 12.
Historical reporting highlights one of the strong links between the Collect and Consume phases. This data is often stored in a relational database. Part of the software process involves requesting data from these databases for the charts, reports, and other visuals. This request process is called a query.
Different languages are used to send queries based on the type of database. SQL (Structured Query Language) is one such programming language, used for processing the data within a relational database.
Understanding SQL and other database commands is essential for unlocking data from databases and making it available to be used.
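As a small illustration, the sketch below runs a typical historical-reporting query against a SQLite database; the ScrapLog table and its columns are hypothetical examples, not a prescribed schema:

```python
# Sketch of a historical-report query using SQLite and SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ScrapLog (EntryDate TEXT, Qty INTEGER)")
cur.executemany("INSERT INTO ScrapLog VALUES (?, ?)", [
    ("2024-01-10", 4), ("2024-01-22", 7), ("2024-02-03", 2),
])

# Aggregate scrap by month -- the kind of query behind a report like
# the one in Figure 12.
query = """
    SELECT strftime('%Y-%m', EntryDate) AS Month, SUM(Qty) AS TotalScrap
    FROM ScrapLog
    GROUP BY Month
    ORDER BY Month
"""
for month, total in cur.execute(query):
    print(month, total)  # 2024-01 11, then 2024-02 2
```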
Alarming and Alerting. Alarms and alerts are indications of an abnormal condition in the process or equipment. For decades, alarms have existed in manufacturing equipment.
However, these alarms were limited to operators within sight or earshot of the equipment. When something went wrong, the operator would need to recognize and respond to the alarm, typically calling supervisors or maintenance team members to assist and troubleshoot. Manual reactions and communication are the norm in unconnected factories.
With data connections of I4.0, companies can leverage automated data analysis to monitor and generate alarms and alerts throughout the organization. Companies are no longer reliant on employees visually monitoring and responding to data at equipment. Instead, a computer program evaluates data as it comes into the system, comparing the new data to predefined rules or thresholds.
Once a rule is broken, such as a process temperature getting too high or a holding tank nearing overflow, the computer triggers a notification sent to the relevant personnel across the organization. This automatic analysis becomes critical as data connections grow to cover the entire factory and all of its equipment.
It is important to understand the difference between an alarm and an alert, as these terms are often used incorrectly. The International Society of Automation (ISA) publishes the standard on alarming, ANSI/ISA-18.2, Management of Alarm Systems for the Process Industries. According to this standard, an alarm requires an immediate response, while an alert is a notification that may not require a timely response.26 The key difference is whether immediate action is needed by employees.
Alarms and alerts can be visible or audible notifications of process or equipment anomalies. Blinking lights or buzzers can be used for alarms or alerts in critical areas in the factory. Sometimes, the people requiring notification may not be physically near these areas. This is where I4.0 technology can be utilized. Dashboards in a maintenance area could start flashing when any motor within the factory starts to fail. Similarly, a dashboard in the manufacturing engineering office could highlight failed quality checks that require an engineer to verify a process. The technology exists to push alarms and alerts via emails, texts, and even phone calls.
The first step in sending alarms and alerts is developing the rules for the software to follow. IoT platforms often include modules for this purpose, but other software options are also available. Generally, the software needs access to the tag or sensor, either directly or via the database where the value is stored. It monitors the value and compares it to the predefined rules. These rules usually include threshold values that trigger notifications when exceeded. Alarming and alerting software often has additional functionality, such as tracking response times, escalating alarms if there is no response, and maintaining a history of alarms. This data can be used to improve response times, thereby reducing downtime and process inefficiencies within the factory.
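A minimal sketch of such a rule follows, including a deadband so a value hovering near the threshold does not re-trigger notifications; the tag name, limits, and notify helper are hypothetical:

```python
# Sketch of threshold-based alarm evaluation with a deadband (hysteresis).
from dataclasses import dataclass

@dataclass
class AlarmRule:
    tag: str
    high_limit: float
    deadband: float      # value must drop this far below the limit to clear
    active: bool = False

def notify(message: str) -> None:
    # Hypothetical placeholder for an email/text/phone notification.
    print("ALARM:", message)

def evaluate(rule: AlarmRule, value: float) -> None:
    if not rule.active and value > rule.high_limit:
        rule.active = True
        notify(f"{rule.tag} high: {value}")
    elif rule.active and value < rule.high_limit - rule.deadband:
        rule.active = False  # alarm clears; response time could be logged here

rule = AlarmRule(tag="Tank1/Level", high_limit=90.0, deadband=5.0)
for reading in [88.0, 92.5, 91.0, 84.0]:
    evaluate(rule, reading)  # notifies once at 92.5, clears at 84.0
```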
The benefit of these notifications is that employees only consume the data when action is needed. One challenge with I4.0 data is the amount of information that can be automatically distributed. People become desensitized and unresponsive to automatic notifications when there are too many of them. The company improves its chance of success by developing a robust alarming policy focused on true anomalies that require a timely response. Employees will then understand that the alarms are critical to the business, with meaningful thresholds that drive immediate action. This approach avoids nuisance alarms caused by poorly designed thresholds and over-notification.
Additionally, the software will provide metrics on responses and give management the opportunity to measure and drive improvement.
This is a basic introduction to the topic of alarming and alerting, a critical method of utilizing data in the Consume phase. Additional effort and research are needed to become fully knowledgeable about alarms and alerts, along with a robust understanding of the equipment and the values that signal process anomalies. Re-engaging employees after they experience a poorly designed alarming and alerting system is difficult, so this method of consuming manufacturing data needs to be executed well from the start.
Predictive Analytics. Artificial intelligence (AI) and machine learning (ML) represent the future and pinnacle of data consumption for I4.0.
Predictive analytics utilize historical manufacturing data to identify patterns, enabling future predictions or the detection of process anomalies. AI and ML create these prediction models without being explicitly programmed.
While entire books cover the applications of AI and ML, this section highlights the significance and types of predictive analytics. Generative and applied AI are defined, with examples discussed, to provide a general overview of advanced analytics. AI and ML are poised to revolutionize the future of manufacturing, but these tools are complex and require substantial effort and data to yield results.
Generative and applied AI are important subsets of the broader field of artificial intelligence, each serving different purposes and having distinct functionalities. Generative AI creates new content, typically text and images, based on learning patterns from existing datasets. In contrast, applied AI focuses on solving specific, practical problems by applying machine learning algorithms to real-world data and tasks. Both cases require datasets to be utilized to help train the AI, though the type of data and problems they address differ significantly.
Large language models (LLMs), a tremendous growth area of generative AI, are used to understand, summarize, and generate new content from datasets. As discussed in the Collect phase, unstructured data – such as PDFs of equipment manuals, emails of daily production notes, or previous work instructions – can be fed into LLMs to generate summaries and insights into the factory. A typical interface for LLMs is a chatbot. Using chatbots with LLMs can eliminate the time-consuming process of humans sifting through all the unstructured data generated in a factory.
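As a conceptual sketch only, the snippet below gathers text files and passes them to an LLM for summarization; the ask_llm function is a hypothetical placeholder, since the actual call depends entirely on the chatbot or LLM service a company adopts:

```python
# Sketch of feeding unstructured factory documents to an LLM.
from pathlib import Path

def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder for a call to an LLM service.
    return "Summary of the provided production notes..."

notes = "\n".join(p.read_text() for p in Path("production_notes").glob("*.txt"))
answer = ask_llm(f"Summarize recurring downtime causes in these notes:\n{notes}")
print(answer)
```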
Applied AI focuses on identifying patterns in data to make predictions about the specific processes it was trained on. Classic examples of applied AI include using equipment data to predict machine failure or part quality, or applying computer vision to images. Process data is collected and used to train these algorithms, which then make future predictions based on current process conditions. Ideally, applied AI enables predictive maintenance to be performed before a motor fails, optimizes a process to avoid scrap, or helps complete visual inspections with cameras to improve customer quality. Applied AI uses various types of machine learning algorithms, ranging from easy-to-understand models to complex ‘black-box’ algorithms, where the reasoning behind a prediction is not transparent.
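For a flavor of applied AI, the sketch below trains a simple classifier to flag likely machine failures from sensor readings using scikit-learn (assumed available); the data is synthetic, standing in for validated historical process data:

```python
# Sketch of applied AI: predicting machine failure from sensor readings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Features: [vibration, temperature]; label: 1 = failed soon after reading.
X = rng.normal(loc=[2.0, 60.0], scale=[0.5, 5.0], size=(200, 2))
y = (X[:, 0] > 2.5).astype(int)  # synthetic rule standing in for history

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# Predict from current process conditions.
print(model.predict([[3.1, 62.0]]))  # likely [1]: flag for maintenance
```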
Based on the author’s experience, real-time dashboards, historical reports, and alarming/alerting should be prioritized over predictive analytics for manufacturing companies in the Consume phase. The data that feeds into predictive analytics must be validated and well understood. The adage “garbage in equals garbage out” is especially true for predictive analytic tools. Real-time dashboards and historical reporting are the first steps in validating manufacturing data. Additionally, these tools enable management to drive company improvements that are easy to explain to employees. The reasoning behind some AI and ML models is not always clear and can be difficult or impossible to explain. While real-time dashboards and reports are easily scalable across diverse equipment, predictive analytics may require specialized data, programming, and work on each piece of equipment separately.
This lower prioritization does not imply that predictive analytics are unimportant – quite the opposite. Predictive analytics is likely the most important use of data consumption in the long term for factories. However, the implementation involving the data, software, algorithms, and prediction can be complex and challenging. Early and quick wins are crucial when starting a data collection implementation. Predictive analytics is unlikely to provide quick wins and may become a resource drain on the technical team with little useful results to show initially.
Additionally, as will be discussed in the next phase, employee acceptance must be considered. Employees who struggle to utilize foundational dashboards and reports to improve the organization will not blindly embrace AI and machine learning. There will be challenges in using the data that predictive analytics provides. Culture is important in the overall success of not only the Consume phase but the entire framework.
CULTURE
The technologies of Industry 4.0 and data collection are highly specialized. Sensors, controllers, software, and coding represent technical decisions and challenges for a company. Research publications and studies often focus on these technical components. However, technology is only half the conversation. The employees are the other half. From experience, people can sometimes be the more challenging piece of the puzzle.
From the late 1980s to the start of the 2010s, the United States experienced a consistent increase in labor productivity in the manufacturing sector because of the technologies brought about by the third industrial revolution. The labor productivity metric is published by the US Bureau of Labor Statistics. Interestingly, since 2010, productivity has plateaued, as illustrated in Figure 13. As mentioned in the introduction, Industry 4.0 was coined in 2011, a timing that ironically coincides with the start of the labor productivity plateau. This initially seems counterintuitive, given that industrial revolutions typically bring significant changes and improvements to manufacturing. Technology may be available, but productivity gains only come when people are willing to adopt and utilize it.
This conflict with technology adoption is seen throughout history. In the early 1800s, the Luddites were textile workers who destroyed mechanized looms, fearing the technology would cost them their jobs. In 1831, 150 to 200 tailors rioted and destroyed dozens of sewing machines, believing the machines would lower their wages and increase unemployment. The earlier story about Henry Ford showed the workforce leaving assembly lines en masse due to changes in working conditions; Ford only succeeded in changing this culture after a dedicated effort to change pay and employee hours. This fear of technology is not just a thing of the past. Terms like “computerphobia” and “cyberphobia” were used to describe people with significant fears of using or touching computers. A study from the early 1980s found that about one-third of 500 college students and corporate managers had some level of cyberphobia. Fast forward to today, and it is hard to open a newsfeed or listen to the news without hearing about the fear of AI ending humanity.
People do not change easily; it takes education, time, and effort. Manufacturing companies should expect these same needs when adopting I4.0 and data collection.
Companies will only achieve the benefits of streamlined decision-making, improved productivity, and increased quality when their employees fully embrace the use of data. People have been integrated into many of the Cs of this framework already; Criteria and Consume are phases dependent on the people in the process. The last C, Culture, examines the overall human aspect of implementing I4.0 within a company. A company’s culture can be defined as the human behaviors, norms, and patterns within a given company. Within the company’s culture is a subset of cultural behaviors associated with the use of data and data-driven decision making.
Culture is a difficult topic, which is why it is not always covered in publications on the technical implementation of I4.0. There is no single correct method for implementing I4.0; what works for one company may not work for the next. Manufacturers like to follow steps that produce a known outcome, like a standard operating procedure that achieves the desired product. This does not exist with culture, and, therefore, it does not exist when implementing I4.0 technologies. There is no set of instructions to follow that arrives at the same destination as a competitor in the industry. This is why many I4.0 implementations have not been successful over the past decade. This framework is needed to guide management down the right path, allowing them to adopt the implementation that is right for their company’s culture.
Even without a set of detailed directions, there are concepts that companies need to consider to ensure their compass is pointing in the right direction for their data journey. The first of these is recognizing and addressing the skill-set gap that likely exists in manufacturing companies. Throughout this framework, there have been numerous mentions of sensors, hardware, communication protocols, databases, reporting, coding, and software.
These terms involve OT and IT skill sets that are not commonplace in manufacturing companies, especially small- and medium-sized manufacturers. In these companies, people often serve multiple roles. For example, the manufacturing engineering manager might also help run the IT department for a small company because he or she has an interest in computers. There is a chance that the manager may be the entire IT department.
Similarly, a recent graduate might oversee a small company’s IT system because they have experience setting up servers for gaming in college. The resourcefulness of manufacturing companies is always amazing. However, the tradeoff is that these employees lack the specialized skill sets and time to fully develop and advance the IT and OT components necessary for I4.0.
One of the first realizations when implementing I4.0 is the need to understand programming languages and databases. Data flows into and out of databases through software; no coding means no data. One way to build this skill set on the I4.0 team is to hire college graduates with computer science degrees. This approach leverages their education and skill sets to develop and program the software necessary to extract data from the manufacturing equipment. They are trained in manufacturing, and in turn, the manufacturing team is trained in IT; both educations are essential for success. Hiring skill sets outside the norm can be challenging for small- and medium-sized manufacturers due to budget constraints, but management must be resourceful in overcoming this challenge to aid the rapid adoption of I4.0 technology.
Another concept that will help companies improve their data culture is ensuring management leads with data-driven decision making rather than gut feelings. Data needs to become part of the daily conversation at all levels of the organization. Company-wide metrics need to be defined and translated into department metrics, which are then cascaded down to operator-facing, cell-level metrics. These cell-level metrics need to be tactical and easily understood by the operators. Simplified metrics, such as the number of parts produced or total uptime hours, are preferred over the metrics managers typically use, like efficiency or uptime percentages. The goal is to provide a tactical measurement that operators can relate to and influence directly. Increasing efficiency by 5% can be confusing or difficult to grasp for some people.
However, most people can understand the value of producing two more parts per shift. These two metrics may be identical, but extra parts can be physically handled by the operator, while a percentage is a calculation held in the mind. The physical nature of manufacturing creates the need for tangible metrics for operators.
Having the goal-setting system cascade and disseminate targets is only half of the problem. The other half involves building a culture of responsibility and accountability for achieving these targets. Regular reviews and communication of metric results are needed at all levels of the company, creating a symbiotic relationship within the organization. Front-line employees must be held accountable for delivering on the metrics set by their supervisors. In turn, supervisors are held accountable by managers, managers by vice presidents, and so on. Responsibility flows in the opposite direction: vice presidents must ensure that reviews are conducted with managers and that managers have the necessary resources to succeed. Managers are responsible for removing barriers for supervisors, and supervisors are responsible for aiding front-line employees. These are two-way streets that need to foster dialog and conversation around the metrics.
This drives an understanding of what the metrics aim to achieve and, more importantly, integrates these data-driven metrics into the company’s norms and behaviors, becoming part of the company’s culture. This does not happen by accident; it takes consistent effort from all levels of the organization.
Another critical concept to consider is the significant change in daily work that some employees will experience when using data. For the author, a trained engineer, using data to drive decisions was always the norm; however, this is not the case for everyone in manufacturing. In many factories, data has never existed or been available to solve daily problems.
Historically, supervisors and managers made decisions with limited or possibly no data. These decisions built experience and confidence in their decision-making ability over the years, and possibly decades, they worked. This approach is disrupted within a data-driven I4.0 company. The data now exists and is readily available to assist with many manufacturing decisions: scheduling operators based on efficiencies, showing the performance of a dozen machines in real time on a supervisor’s mobile phone, identifying quality issues as they occur, and communicating performance to all levels of the organization in real time. The role of many supervisors and managers will drastically change with the increased use of data. While it is unlikely the Luddites will return with pitchforks to destroy I4.0 data collection systems and servers, the amount of change in work content for employees, supervisors, and managers will be monumental. As history has shown, not all people adapt well to such changes.
The interconnectedness of data and breaking the data silo mentality is another necessary cultural shift. The third industrial revolution began generating data, but the networking and connected systems did not exist yet. Data silos were created because they were the only option.
ERP software and its data did not communicate with the timekeeping software employees punched in and out daily. The motor performance data from the building management system did not share information with the maintenance software. The dimensional data from the quality CMMs was isolated from everything else. As networks were built, the ability to connect and communicate this data became available. Today, many companies still behave as if the data from these systems is siloed and should not be connected. The policies of some software providers can hamper this connectivity.
By leveraging companies’ desire to implement I4.0, software providers are building out modules and storage options they can sell as a service with recurring monthly costs. Some providers are forthright with the data, providing it to the customer to leverage. Others, however, are less scrupulous and try to maximize software sales while holding the data hostage. Management of manufacturing factories needs a policy and approach for selecting and integrating the multitude of software systems that help run a factory. Will management accept a timekeeping software vendor that only allows data to be accessed through its own reporting? This could prevent a connection to the ERP system that could automate inventory allocation, avoiding a manual process for employees. What if this software is already in place? What will be management’s behavior and pattern in these cases? Management is responsible for defining the culture of sharing and connecting data. The approach with vendors also cascades to the approach of connecting and sharing data between departments within the company. Transitioning away from a siloed data mentality is an important part of the culture of data.

To support the data journey, training needs to occur within the company. This includes technical training on new software packages and the hardware that will be utilized. Additionally, training must cover general data and quality concepts that may not currently be used. With real-time streaming process data, quality methods like statistical process control (SPC) charts and capability studies can be performed continuously, with every new piece of data captured. Advanced analytics skills, such as machine learning, should be reviewed at all levels of the organization so employees understand why these tools are being used, even if they are not programming the algorithms themselves. In some cases, basic mathematical concepts like calculating percentages and reading graphs may need to be part of the training program. The need for and level of training will differ between companies. Culture is driven by behaviors, and training is an important method to reshape employees’ behaviors and willingness to change.
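As one example of the SPC concepts mentioned above, the sketch below computes individuals-chart control limits from a set of measurements and checks each new streaming value against them; the measurements are illustrative:

```python
# Sketch of a streaming SPC check: individuals-chart control limits
# (mean +/- 2.66 * average moving range) applied to new values.
measurements = [10.02, 9.98, 10.05, 10.01, 9.97, 10.03, 10.00, 9.99]

mean = sum(measurements) / len(measurements)
moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

ucl = mean + 2.66 * mr_bar  # 2.66 is the standard I-chart constant
lcl = mean - 2.66 * mr_bar

def check(value: float) -> None:
    status = "in control" if lcl <= value <= ucl else "OUT OF CONTROL"
    print(f"{value}: {status} (limits {lcl:.3f} .. {ucl:.3f})")

check(10.04)  # each new streaming value is checked as it arrives
check(10.30)  # flagged as out of control
```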
Culture drives the adoption of technology. Changing culture requires consistent management effort over time; change will not be immediate. The questions and comments in this section, like all sections of the framework, are designed to make management think deeply about the steps needed for the adoption of I4.0. The examples shared are a subset of the author’s experiences. There are many other potential strategies to influence behavior and change culture that can be researched and implemented. The objective here was to introduce the topic and emphasize the significant influence of Culture on the adoption of I4.0, though each company’s culture is unique.
CONCLUSIONS
The 6C Framework offers a structured approach to translate the concepts of I4.0’s factory digitization into practical applications. The phases––Criteria, Connect, Communicate, Collect, and Consume––are essential for planning, executing, and benefiting from data utilization in manufacturing. This process is influenced by the data Culture of the company, encircling all the phases in the framework.
The framework begins with people coming together to plan the data process. Criteria defines the need, giving clarity to the other phases, and includes identifying the problems the data will help solve once the implementation is complete. With this foundation in place, data collection moves into the hardware and data definition phase, Connect, whose goal is to deploy the hardware or software that creates the data. Once the data is available, Communicate outlines how it moves through the network via communication protocols and an IoT platform. The IoT platform delivers the data to long-term storage in Collect, where the type of data and how it is structured are important items to address. With the data in place, the company can leverage it to make improvements in the Consume phase. Real-time dashboards, historical reporting, alarming and alerting, and predictive analytics are tools that deliver the data to employees. Culture, or how employees interact and behave with data, influences all these steps.
Management can use this framework to guide the decisions necessary for implementing a large-scale data collection system within any size factory. The 6C Framework is more than just a strategic tool. The same phases can be tactically applied to any individual data collection problem. The goal with this framework is to translate the theory and concepts of data collection into tangible, actionable phases that any company can implement.
Building a connected factory comes with challenges. Different processes, equipment vendors, age of equipment, and software require various technical implementation patterns to be developed to harness the manufacturing data. Additionally, employees need to be actively engaged in planning the criteria and consuming the collected data. The human aspects can easily be ignored given all the technical challenges and decisions needed when starting I4.0 implementation. For the engineers and technicians implementing this I4.0 data collection, the focus on technical decisions and equipment testing can overshadow the people component. This oversight is a recipe for failure, which has plagued many implementation attempts in the industry thus far.
Successful data implementation involves integrating the technology (OT), the data (IT), and the people. The purpose of data in Industry 4.0 is to optimize decision-making and enhance the productivity of employees involved in manufacturing. Data serves as a starting point in a long-term Industry 4.0 improvement journey. When leveraged properly, data can revolutionize manufacturing. This 6C Framework provides a guide for manufacturers to embark on the digital transformation journey.