Testing the Unanticipated: How Instrumental Brings AI to Quality Inspection

There is a lot of talk of applying artificial intelligence and machine learning to manufacturing. But the big question is still: How can AI actually lead to better manufacturing outcomes? For Palo Alto, CA-based Instrumental, the best opportunity lies in using AI for quality assurance and testing.

Anna-Katrina Shedletsky (Image source: Instrumental)

By aggregating images from cameras placed throughout the manufacturing line, Instrumental uses a combination of cloud- and edge-based machine learning algorithms to detect product defects and failures throughout the manufacturing process. The company’s system is already in use at name brands including FLIR, Motorola, and Pearl Auto.

Design News spoke with company founder and CEO Anna-Katrina Shedletsky about Instrumental’s machine learning technology, how her time as an engineer at Apple sparked her vision for the company, and the company’s larger ambition to transform supply chains forever using AI.

Design News: You spent time at Apple working on the Apple Watch. How did you transition from working at Apple to where you are now, having founded Instrumental?

Anna-Katrina Shedletsky: My background is as a mechanical engineer. I was at Apple for six years. I led a couple of product programs there, and then I was tapped to lead some of the product design for the very first Apple Watch.

The role was very interesting because I was responsible for the engineering team that we flew to China. We stood on the [assembly] line and we’d try to find issues. I had visibility into all that minutiae, and I was really seeing how the tail can wag the dog in terms of these seemingly small defects on the line having major impacts.

DN: Was this something that was unique to Apple?

Shedletsky: No, all the companies in the space have this problem. Something like a machine stretching your battery too much can cause your Samsung Galaxy Note 7 problem. You can create these inadvertent perfect storms where these quite small things can impact when you ship the product, the schedule, how much money the company is going to make on it, how many returns there are – all of that stuff.

In my role I was continually frustrated by not having good tools to actually fix those small and minute problems – some of which are actually very big from a business perspective. So I decided to leave Apple and start a company to build that technology.

DN: Can you walk us through Instrumental’s machine learning technology and how it works?

Shedletsky: The whole thing we’re trying to do is to actually find defects that our customers have not yet anticipated.

If you know that there’s a certain type of failure mode that could happen, as an engineer you can try to design it out or minimize it. You can also put tests in place. What Instrumental does is focus on finding these unanticipated defects.

Our neural networks learn from a very small sample of units – about 30 pieces is enough to get started. The algorithms learn what is normal based on just normal input – we don’t need perfect units or defect samples – and from there we can set up tests to essentially identify new defects that weren’t anticipated. We can also still look for defects that you do anticipate. So if you do find the defect you can set up an ongoing test to make sure you can catch those units in the future.
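Instrumental has not published the details of its models, but the general approach described here – learn what “normal” looks like from a few dozen typical units and flag deviations – can be illustrated with a minimal sketch. The per-pixel statistical model below is a simplified stand-in for a neural network; all names, thresholds, and data shapes are illustrative assumptions.

```python
# Hypothetical sketch of defect detection trained only on "normal" units.
# A simple per-pixel statistical model stands in for Instrumental's
# (unpublished) neural networks; the point is that no defect samples or
# "perfect" units are required, only ~30 typical ones.
import numpy as np

def fit_normal_model(normal_images):
    """Learn per-pixel mean and variation from a small set of aligned,
    grayscale images of typical units."""
    stack = np.stack(normal_images).astype(np.float64)      # shape: (N, H, W)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6      # avoid divide-by-zero

def anomaly_score(image, mean, std):
    """Score how far a new unit deviates from learned 'normal'.
    High scores mark regions an engineer should review."""
    z = np.abs((image.astype(np.float64) - mean) / std)      # per-pixel z-scores
    return z.max(), z                                         # worst pixel + heat map

# Example: train on ~30 normal units, then screen a new unit off the line.
normal_images = [np.random.rand(480, 640) for _ in range(30)]   # placeholder data
mean, std = fit_normal_model(normal_images)
score, heat_map = anomaly_score(np.random.rand(480, 640), mean, std)
print("flag for review" if score > 6.0 else "looks normal")      # threshold is illustrative
```

In this toy version, an image that deviates strongly anywhere from the learned normal is flagged, whether or not that failure mode was anticipated; an anticipated defect could then be turned into a dedicated, ongoing test.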

Instrumental uses machine learning algorithms to determine which product details are important or not and to flag potentially serious issues for engineers.

DN: In your sampling of about 30 images or so are you basically telling the algorithm, “this is what the ideal product should look like” from various angles and it’s looking for any deviations from that?

Shedletsky: Actually it’s even better than that. If you have to produce an ideal unit, that’s really hard to do in real life. What Instrumental is trying to do is actually determine which differences matter and which differences don’t matter, and highlight the ones that matter.

As an engineer I’m very creative; I can come up with a thousand different ways that something can go wrong and fail. But I can’t create a thousand different tests. It’s not practical.

The beauty of machine learning is that we learn over time from the data set about what differences matter. An example of a difference that doesn’t matter might be the orientation of the screw head. So our algorithms become desensitized to that particular area.

However, if we’ve seen 20,000 images that all look pretty much the same and then suddenly there’s an image where something is different, there’s a really high probability that that difference is important.

From an engineering perspective we’re building hundreds, thousands, or in some cases even millions of units a day. I can’t look at all of these images, but if a computer can show me the top one percent that are interesting I will look at those because I’m trying to see if there are true defects. Then I can set up enduring tests that make sure that we can find those defects in the future.
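The triage step Shedletsky describes – surface only the most unusual fraction of a day’s images for human review – is straightforward to sketch. The snippet below assumes each image already has an anomaly score (for example from the model sketched earlier); the function name, data, and one-percent cutoff are illustrative assumptions, not Instrumental’s actual pipeline.

```python
# Hypothetical sketch of surfacing the "top one percent" most unusual images
# for engineer review, given (filename, anomaly_score) pairs.
def top_percent(scored_images, fraction=0.01):
    """Return the most anomalous fraction of images, highest scores first,
    so an engineer reviews only the handful most likely to be real defects."""
    ranked = sorted(scored_images, key=lambda item: item[1], reverse=True)
    keep = max(1, int(len(ranked) * fraction))
    return ranked[:keep]

# Example: 20,000 images from a day's build -> review roughly the top 200.
scored = [(f"unit_{i:05d}.png", float(i % 997) / 997) for i in range(20_000)]  # fake scores
for filename, score in top_percent(scored)[:5]:
    print(filename, round(score, 3))
```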

DN: Is there any sort of limit to how many tests or experiments an engineer could have running on a particular product?

Shedletsky: We haven’t found it yet. In terms of running live on the line, tests do add some time, but we’re talking about milliseconds. Today, if you want to add an inspection test for a human, any kind of test is going to add a second or two. Human inspectors can typically only inspect about five things total per person. So if you wanted to test five different ways something could fail, that would take up your whole human, and that human will probably take 10 seconds to do that inspection. We can do those five tests much faster and in real time.

Now, if you ran, say, a thousand tests, I think it would slow us down initially, but we would work hard to bring that speed back up. Our customers are typically running 10 to 15 tests on an individual kind of inspection – an inspection being a whole view of the product.

DN: There’s also a physical component to this, right? Can you talk a bit more about the hardware setup required?

Shedletsky: We create images from an array of cameras that are deployed wherever the manufacturing lines are. The reason is that you want checkpoints that capture states of assembly before you close up the unit, when it’s hard to get back at it.

We put our vision systems in places where vision hasn’t typically been deployed in the past. Typically, industrial vision is deployed for a very, very narrow purpose like, “I want to measure this gap.” We’re using it in a different way that’s more generalized.

DN: Are these proprietary cameras? How does the camera system get implemented?

Shedletsky: The cameras are not proprietary. They’re off-the-shelf. We can integrate if customers already have cameras that are producing images. Many of our customers do have cameras that are in reasonable spots where they want the data records; they just want this augmentation for new defects as well.

DN: How big of an infrastructure change is it for companies that don’t already have large integrated camera systems? What’s the implementation process like?

Shedletsky: The reason we built our own hardware stations is so that we can deploy them incredibly quickly. We do not rely on our customers to have significant infrastructure in place. We sit down with our customers, we figure out where we’re going to put stations on the line, and then four weeks later we show up with equipment. We literally plug it into the wall and then we’re ready to go.

We’re not taking [assembly] lines down to do this. We get everything set up. It takes about an hour or two per station. We calibrate, we train everybody, and then they just run their build and we’re just like a station on their line.