Generative AI like ChatGPT is already showing a ton of utility in areas like marketing and software programming—and it can write a heck of an entertaining limerick for your break room fridge. But a lot of manufacturers are still struggling to see the potential productivity leaps for them. Coming at a time of historically tight labor in factories, especially among workers with high levels of technical skill, unlocking even some productivity gains from AI would be a huge win for factory operators. But how?
Pavan Muzumdar, chief operating officer of Automation Alley, Michigan’s Industry 4.0 knowledge center, has some ideas. He and his colleagues have been studying and developing use cases for the mid-sized manufacturers they serve, some of which have found their way into a new playbook they’ve made available.
Muzumdar will be leading a session on bringing generative AI into the factory at our upcoming annual Manufacturing Leadership Summit on May 7 & 8 in Detroit (join us!).
In a conversation with Chief Executive ahead of the event, we asked Muzumdar what he’s seeing, and how he’s deploying, generative AI like ChatGPT right down to the factory floor. What follows has been edited for length and clarity:
What are you finding right now about the state of generative AI, and the usage of and desire for it, in the manufacturing companies that you work with and talk to?
The manufacturing industry tends to work incrementally because manufacturing traditionally has been an incremental science, art and all of the above, right? Manufacturing does not change in leaps and bounds or has not traditionally done that. So, when you get into manufacturing excellence, everything you’ve learned is that you change in small steps continuously.
But with generative AI and all digital technologies, change is exponential. So, there’s this impedance mismatch, if you will, between the way the traditional has changed and the way the new is changing. If you are a traditionalist, you are looking for a reason sometimes to say, “Well, that technology isn’t up to snuff, right?”
In the popular news, we’ll see things like, “Hey, generative AI told me to add, you know, five tablespoons of salt in a recipe that’s clearly inappropriate.” So [a manufacturing executive might say] gen AI doesn’t work, and that’s true for that particular point in time.
But gen AI is changing so rapidly. It’s no longer going to make a simple mistake like that. It is going to start converging to the right solution. If you form your opinion on the technology as of a particular point in time and ignore it, what’s going to happen is six months down the road you’ll find out, oh my God, my competitors are using this technology very effectively because it’s not making as many mistakes anymore, and it’s now useful. It can lead to blind spots.
Just because it’s making a mistake today, don’t assume it’s going to make mistakes three months down the road, because there are very smart people figuring out how to make it not make mistakes. That’s a big thing that we see.
What you’re saying is that a zero-defect culture can work against some of the innovation you might want in your shop through experimentation. What are the tips you’re giving people to try to balance those two things? Because on the face of it they seem pretty incompatible with each other.
Our collection of do’s and don’ts for gen AI is to use gen AI to get started on something, use gen AI for ideation, use gen AI to kind of get you a little bit further ahead. But don’t use gen AI for completion.
Be very cautious in using gen AI for factual information because gen AI has this feature, or you can call it a bug, of what they call hallucination. It’ll make stuff up, right? That’s the one thing you have to be very careful of. If you’re asking it for factual information, if you’re asking it for a specification of something, make sure you can verify that independently. It can get it right, but it can also lie to you because it doesn’t know the difference between true and false; it’s not doing it intentionally.
The other thing is, whatever you ask AI to do, ask it so that the output that you get is in a chunk that you can independently verify. You can get great value, but make sure that you’re doing this “trust but verify” type of approach. Get it in chunks. It can turn out to be extremely productive but be careful that you’re not abdicating your responsibility in asking it to do something blindly.
And then, specifically for our manufacturers that are using traditional technologies, gen AI can help you with old technologies. One very popular use in the finance world is gen AI helping with COBOL programming. COBOL, if you know, is a very old programming language that is still used in commercial applications. They’re using gen AI to help modern programmers who have never learned COBOL write new COBOL code, debug it or maintain it; the gen AI helps them decipher some of that old code.
An example: if you have a CNC machine and no CNC programmer on staff, but someone understands general programming, that person can actually be quite productive by asking gen AI to get them started on CNC programming, inductively learning how to do it just with the help of gen AI. But again, don’t trust it fully; learn how to use it, and then verify by running physical tests to see that it’s actually giving you the right results. It can still be very, very valuable.
This is one of the things that I think is underrated by many manufacturers: a gen AI like ChatGPT has digested almost every user’s manual, how-to guide and best-practice guide for almost every machine that’s sitting on your shop floor, right?
That’s correct.
What else on the shop floor might we be surprised to hear that gen AI is really potentially great for?
I like to look at it as any problem that we have that we have not yet figured out, process-wise, how to solve. I always ask myself, “Hey, could I use gen AI not to solve the problem for me, but to accelerate my understanding?” As you said, this system, this LLM, this being, whatever you want to call it, has ingested all this information. How can I leverage that and solve the problem? Sometimes it’s not so much about the answers, but what questions can I come up with that I might be able to pose, and how could I iterate through that and make the process inclusive?
It’s not AI, it’s human plus AI. How can I empower my own staff to be much more open about it, knowing that it has limitations, but even things with limitations can have substantial value in making us much more productive.
AI is very useful to get started on something. Let’s say we’re having some trouble on the shop floor keeping the area neat, or following certain processes. AI can be a great way to say, “Hey, give me a checklist of maybe the top five things my shop floor crew should check at the end of the day.” And you’ll see it’ll come up with stuff you never thought about. Now you can maybe put that on a board that says, “Five things to check before we leave for the day.”
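For teams that want to standardize this kind of request rather than retyping it, the ask Muzumdar describes can be reduced to a reusable prompt template. This is a minimal sketch; the function name, wording and parameters are illustrative, not from Automation Alley’s playbook, and the resulting text can be pasted into ChatGPT or sent through any chat-model API:

```python
def end_of_day_checklist_prompt(area: str, trouble_spots: list[str], items: int = 5) -> str:
    """Build a prompt asking a chat model for an end-of-day shop-floor checklist.

    The return value is the prompt itself, not the checklist -- per the
    "trust but verify" advice, review the model's answer before posting it.
    """
    spots = "\n".join(f"- {s}" for s in trouble_spots)
    return (
        f"We run a {area} and are having trouble keeping it neat and "
        f"following our processes consistently.\n"
        f"Known trouble spots:\n{spots}\n"
        f"Give me a checklist of the top {items} things the crew should "
        f"verify before leaving for the day, one short line each."
    )

# Example usage:
prompt = end_of_day_checklist_prompt(
    "CNC machining shop floor",
    ["chips left around machines", "tools not returned to the crib"],
)
```

Treat whatever the model returns as a draft to edit with the crew, not a finished standard work instruction.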
So put it on the team, but don’t make it the boss.
Exactly. It should never be the boss. It is an assistant. We are always in charge. We always need to be in charge.