Make Your Machines Talk to You

Kim Phelan

Returning five years ago to the foundry division of the West Coast business founded by his grandfather in 1952, Ryan Showalter was in for a shock.

It wasn’t because he’d forgotten the ins and outs of the metalcasting process or because the equipment was inferior. In the early 2000s, he’d worked in the pattern shop for several years before venturing out into the world of software development. When he came back to help manage operations at Fresno Valves & Castings, Inc. (an AFS corporate member in Selma, California), Showalter expected what any GM would naturally want at his fingertips, regardless of foundry size: daily, real-time visibility into performance, production, and predictive trends throughout the business. But the absence of modern metrics reporting was frustrating.

“I wanted better visibility of the system as a whole,” he said. “I needed to see more high-level metrics of what was going on––they were there, but you had to really dig to find anything. For example, our Hartley [sand mixing] system was still printing every single batch it made on a dot matrix printer, with a summary toward the end of the day.

“So, if I wanted to see what the current moisture levels were, or compactability, or whatever else, I’d have to actually go to the machine and scroll through the menu until I got that compactability, or I’d have to go down to the sand lab and figure out what particular batch we were on, or what batch happened an hour ago. It was kind of a big hassle.”

Using his skills and knowledge from the software industry, Showalter embarked on a journey of more than a year to connect and collect data from the systems he most needed to keep track of: sand system variables, baghouse differential pressures, power consumption, machine cycle time, weather trends (to monitor storm water systems and anticipate humidity issues that affect cores and pouring), and furnace temperatures. He placed priority on three goals: creating alerts when settings go out of bounds, reducing manual recordkeeping, and making trends easy to view.

He had a considerable mix of equipment brands, ages, and control logic to corral into his visibility pen. The foundry was built in 1974, and Fresno Valves had added and removed plenty of equipment over the years as requirements and technologies changed. Showalter quickly discovered that while most of the foundry’s equipment had valuable metrics data available, the PLCs (programmable logic controllers) the equipment ran on were isolated. PLCs, the industrial computers used to control electro-mechanical processes, are themselves diverse in size, form, and function. The control logic for Fresno Valves’ equipment ranged widely, from legacy relay logic to modern control systems, including PLC-5, DirectLogic, Siemens, and more.

Equipment type and logic silos aside, his quest to get machines to talk to him was also impeded by a set of additional constraints, most tied to budget. First, the company preferred all its data to remain on the premises, and management was averse to subscription fees for cloud hosting and other web-related services––“pay once and be done with it” was the mindset. Showalter also wanted to avoid any novel or customized software development, as well as significant infrastructure, to keep costs to a minimum. Likewise, he preferred any acquired software or hardware to be inexpensive, so that if the system did not pan out as expected, the sunk cost would be minimal.

He tells the rest of his story in the following Modern Casting Q&A. 

MC: Right out of the gate, it feels like this conversation should have one of those “Do not attempt this at home” disclaimers––it’s a rather advanced technical subject that’s not exactly for novices. What’s the skill level required to implement the kind of system you assembled?

Showalter: I think anyone with software development experience would be able to piece together a similar solution if the pieces of the puzzle were laid out in front of them. One of the most difficult aspects is knowing what tools are available to help you solve the problem. I had the fortune of designing metrics collection systems for web applications in a former life, so I had a rough idea of what the parts of the infrastructure needed to look like, but I went through several iterations of solutions before I arrived at something I was happy with. By sharing some of these ideas, I hope I can give some inspiration to others who might be interested in similar goals.

MC: As you began assessing how to collect machine metrics, what specific foundry equipment and logic mix were you dealing with? 

Showalter: We have two Sinto FBO automatic molding machines running SLC 5 controls. Our Hartley green sand controller is running a PLC-5. Our cold box core machines are Gaylord VSTB-19s running a DirectLogic platform, as is our GMD Thermal Reclaim system. We have three 1,000-kW coreless induction furnaces that are controlled by an AJAX Magne-com panel running an oddball Z-World PLC. There are also several other pieces of equipment running on MicroLogix and Siemens controllers.
The biggest piece of legacy equipment still in service is our sand system control panel. It was installed when the foundry was first built back in 1974, and it runs entirely on relay logic. I had a contractor come out one time to quote on redoing it––we were putting some VFDs in for our baghouses and other things. The first thing he said when he looked at this panel was, ‘Oh my God, this is an antique; this should be in a museum!’ I said, ‘What are you talking about? We have been using this every single day since the ’70s.’ And he goes, ‘Wow, this is really incredible––I would leave this exactly like it is. It’s a piece of history. Don’t even touch this.’

Not a big help! But there are probably some other foundries that can relate to this kind of challenge.

MC: Can you explain the basic infrastructure for collecting the data you needed to see?

Showalter: Essentially, the entire infrastructure revolves around the idea of clients publishing messages (which can include metrics) to a central location (the message broker server) over a network connection. Clients can be PLCs that have support for the message format directly or “IoT Gateways,” which are hardware devices that communicate with the PLC over a known protocol (such as Modbus) and can translate those metrics to the message broker format. 

The message broker speaks a communications protocol called “MQTT.” Clients can publish messages to the broker over various topics. For example, to report a furnace temperature, a client might publish the value “2100” to the topic “meltdeck/furnace/temperature.” The neat thing about MQTT is that not only can a client publish messages to topics, but it can also subscribe to topics and be alerted whenever a topic changes. This allows a metrics collection server to subscribe to the topics you are interested in monitoring and ingest them whenever they change for further processing.
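In code, publishing that furnace temperature takes only a few lines. Below is a minimal sketch using the Eclipse Paho Python client, the library Showalter mentions later in this interview; the broker hostname and topic are hypothetical placeholders, not Fresno Valves’ actual configuration.

    # Minimal MQTT publish sketch using the Eclipse Paho Python client
    # (pip install paho-mqtt). Hostname and topic are hypothetical.
    import paho.mqtt.publish as publish

    # The topic names the metric; the payload carries its current value.
    publish.single(
        "meltdeck/furnace/temperature",
        payload="2100",
        hostname="broker.example.local",  # your MQTT message broker
        port=1883,                        # default unencrypted MQTT port
    )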

The metrics collection server is a front end to a database that is optimized for storing lots of metrics sequentially in time. It will take each metric, normalize it, and store it in the database. Several open-source software packages implement this sort of metrics collection behavior, including Graphite, InfluxDB, and Prometheus. Some of these projects have built-in support for subscribing to MQTT brokers, and others support it via plug-ins. Once the metrics are in your database, the only question left is how to visualize them. This is where a tool such as Grafana becomes useful.
Grafana is an enterprise-grade tool for visualizing data. It can query dozens of different data sources and produce stunning, real-time visualizations of your data. Many different types of visualizations are supported, from basic line graphs to complex 2D heatmaps and more. The time range can be easily adjusted to zoom in on specific events, or dashboards can be created that automatically update in real time so the data can be displayed on common-area kiosks. Grafana is really what makes this project shine. Collecting data is almost useless if you cannot quickly and easily visualize it to see trends.
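Grafana itself is point-and-click, but under the hood it issues queries like the one sketched below against the data source. This example pulls the last hour of a metric from Graphite’s HTTP render API; the hostname and metric path are hypothetical placeholders.

    # Sketch: fetch the last hour of one metric from Graphite's render API.
    # Hostname and metric path are hypothetical.
    import json
    import urllib.request

    url = ("http://graphite.example.local/render"
           "?target=meltdeck.furnace.temperature"
           "&from=-1h&format=json")
    with urllib.request.urlopen(url) as resp:
        series = json.load(resp)

    for value, timestamp in series[0]["datapoints"]:
        print(timestamp, value)   # Graphite returns [value, epoch] pairs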

MC: What exactly is MQTT?

Showalter: MQTT stands for Message Queuing Telemetry Transport. It is a lightweight network protocol that was developed at IBM in the late ’90s to transmit oil pipeline telemetry for SCADA systems. So, imagine that you have an oil derrick out in the middle of nowhere that needs to transmit telemetry over a satellite uplink so it can be monitored at a central control room; this protocol is ideal for that. But it is also just as good if all your sensors are hooked up to the same network with a high-bandwidth connection to each other; it is still very efficient over that network.

It also allows many clients to communicate via that central message broker I was talking about. 
MQTT has a very mature ecosystem for developers, too, so I don’t have to write a protocol from scratch in the programming language that I use––there are already software libraries out there that serve as a good foundation. If I wanted to write a program that subscribes to the MQTT broker or publishes a metric, it is very straightforward using a language like C++, Java, or Python and, for example, the Paho library provided by the Eclipse Foundation.
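Subscribing is just as compact as publishing. Here is a minimal sketch with the Paho Python client (1.x callback API); the broker hostname and topic filter are hypothetical placeholders.

    # Minimal MQTT subscribe sketch (paho-mqtt 1.x callback API).
    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        # (Re)subscribe to every melt deck topic after each connect.
        client.subscribe("meltdeck/#")

    def on_message(client, userdata, msg):
        # Called whenever a publisher updates a subscribed topic.
        print(msg.topic, msg.payload.decode())

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("broker.example.local", 1883)
    client.loop_forever()   # blocking network loop; dispatches callbacks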

MC: If a device does not support emitting MQTT, how do you translate?

Showalter: You can purchase devices called “IoT Gateways” or “MQTT Gateways,” which communicate with a PLC over Modbus, EtherNet/IP, or some other protocol. These devices provide a nice point-and-click web interface where you can define which registers you want to expose from the PLC and which topic you want to post those registers to on the MQTT broker.

It’s also possible to create a custom MQTT Gateway with a little bit of programming. As I explained earlier, MQTT software libraries are very mature, so if you can access the data you want from a program you write, you can easily publish that data to the broker.
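As a rough illustration of that idea, the sketch below polls one holding register from a PLC over Modbus/TCP and republishes it to the broker. It assumes the pymodbus and paho-mqtt Python libraries; the hostnames, register address, and topic are hypothetical, and pymodbus call signatures vary somewhat between versions.

    # Sketch of a homemade Modbus-to-MQTT gateway. Assumes pymodbus 3.x
    # and paho-mqtt; hostnames, register address, and topic are hypothetical.
    import time

    import paho.mqtt.publish as publish
    from pymodbus.client import ModbusTcpClient

    plc = ModbusTcpClient("plc.example.local")  # PLC speaking Modbus/TCP
    plc.connect()

    while True:
        # Read one holding register (here, an imagined furnace temperature).
        result = plc.read_holding_registers(address=0, count=1)
        if not result.isError():
            publish.single(
                "meltdeck/furnace/temperature",
                payload=str(result.registers[0]),
                hostname="broker.example.local",
            )
        time.sleep(5)   # poll every few seconds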

MC: What about the message broker––how do you choose? 

Showalter: There are lots of different brokers to choose from––remember, the broker receives messages and determines how to forward them. There are open-source solutions that your IT department can fully manage, and there are also hosted solutions that offer enterprise-grade support.

It really just depends on what your needs are, how much you want to pay, how involved you want to be, and whether or not you want it hosted. There are lots of options.

For self-hosted, some choices include: VerneMQ, Eclipse Mosquitto, and EMQ X. For remotely hosted, you could look at CloudMQTT, HiveMQ Cloud, and AWS IoT Core.

MC: So, to break it down again, your equipment has sensors or PLCs, which are clients. The clients publish prescribed messages to the message broker via MQTT, and that data is ingested and stored by the metrics collection server, which your visualization software queries to display the metrics in a useful format. Are we missing anything here?

Showalter: One caveat is, none of these metrics collection options––such as Prometheus, InfluxDB, and Graphite (which I use)––will ingest MQTT right off the bat, but many if not all of them have plugins that add that capability. The plugin configures the metrics collection server to speak to your MQTT broker, so it’s pretty much plug-and-play, as long as you have the right plugin installed for however you’re transmitting your messages.
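To make the plugin’s job concrete, here is a rough sketch of what such a bridge does internally: subscribe to the broker and forward each message to Graphite’s Carbon listener using its plaintext protocol (“path value timestamp” over TCP port 2003). The hostnames are hypothetical placeholders; in practice you would use the collector’s own MQTT plugin rather than a hand-rolled version like this.

    # Sketch of an MQTT-to-Graphite bridge (paho-mqtt 1.x callback API).
    # Hostnames are hypothetical placeholders.
    import socket
    import time

    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        client.subscribe("#")    # forward every topic on the broker

    def on_message(client, userdata, msg):
        # Turn the MQTT topic into a dotted Graphite metric path.
        path = msg.topic.strip("/").replace("/", ".")
        line = f"{path} {msg.payload.decode()} {int(time.time())}\n"
        # One connection per message keeps the sketch simple (not efficient).
        with socket.create_connection(("graphite.example.local", 2003)) as sock:
            sock.sendall(line.encode())

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("broker.example.local", 1883)
    client.loop_forever()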

MC: Finally, as of today, what have you successfully integrated to the point of actually receiving real-time data?

Showalter: So far, the melt deck is partially complete, but I still have work to do on that. The sand system monitoring is done in our sand lab; our power monitoring, with FBO 2 and FBO 3 metrics, is done, too. I’m monitoring the weather as well––we have our own weather meter that we put in.

And then my projects that are currently in progress include:

  • Working on our baghouse differential pressures, getting those all into the system. 
  • Our thermal reclaimer metrics––we’re logging all that by hand right now, and I’d like to eliminate that entirely. 
  • Core room metrics––similar to how we’re doing the molding metrics monitoring, I’d like to also know how quickly we’re operating in the core room, especially for our cold box machines. 
  • Alerts for weather conditions––alerting or alarms in general is something that I want to set up soon (see the sketch after this list for the basic idea). 
  • Tighter integration of molding metrics with job data. Eventually, the idea is to scan in jobs so that on the molding counter screen you can actually see which jobs are being made and how long the cycle time is for each particular job in real time. 
  • And upgrading the relay logic––then eventually I’d like to integrate all this into an affordable SCADA solution.     
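
As a closing illustration, here is a minimal sketch of the kind of out-of-bounds alert Showalter wants to set up: query the last few minutes of a metric from Graphite and flag readings past a threshold. The hostname, metric path, and threshold are hypothetical placeholders.

    # Sketch of a threshold alert against Graphite's render API.
    # Hostname, metric path, and threshold are hypothetical.
    import json
    import urllib.request

    THRESHOLD = 2900  # imagined upper furnace temperature limit (F)

    url = ("http://graphite.example.local/render"
           "?target=meltdeck.furnace.temperature&from=-5min&format=json")
    with urllib.request.urlopen(url) as resp:
        datapoints = json.load(resp)[0]["datapoints"]

    for value, timestamp in datapoints:
        if value is not None and value > THRESHOLD:
            # Here you might send an email, a text, or a dashboard flag.
            print(f"ALERT: furnace at {value}F at epoch {timestamp}")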
