World’s first AI-powered LEGO sorting machine built with 10K LEGO bricks

There are some individuals who find sorting LEGO pieces therapeutic, but most of us loathe the task. And there are entirely non-LEGO machines that could do it, but what fun is that? Some people have tried to use LEGO to build sorting machines, yet their limitations quickly became apparent. Enter Daniel West and his incredible Universal LEGO Sorting Machine! This baby uses artificial intelligence, with the most extensive parts index to date, to sort parts at a speedy one brick every 2 seconds!

What makes this sorting machine so unique is that it utilizes a concept called a convolutional neural network. The network uses existing databases to learn to recognize nearly every piece ever produced, even when positioned in different ways. That’s over 25 million 3D images! What databases, you ask? They’re ones that you may use on a regular basis, such as the LDraw parts library and Rebrickable. We asked Daniel about some other aspects of the project, and the numbers are certainly impressive. He estimates that the build uses roughly 10,000 LEGO elements, including six LEGO motors. It also uses several non-LEGO parts, including nine servo motors and a Raspberry Pi brain. It’s split into three modules: part separation, scanning, and bucket distribution.
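Daniel hasn’t published his exact network architecture here, but for the curious, a convolutional neural network for part classification can be sketched in a few lines of Keras. Everything in this snippet, including the layer sizes, the 224×224 input resolution, and the made-up NUM_PART_CLASSES figure, is an illustrative assumption rather than a detail of Daniel’s actual model:

```python
import tensorflow as tf

NUM_PART_CLASSES = 3000  # hypothetical count of distinct LEGO part classes

# A minimal convolutional classifier: stacked convolution + pooling layers
# extract visual features, then a softmax layer scores each part class.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),          # one camera frame of a part
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_PART_CLASSES, activation="softmax"),  # one score per part
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

A network like this would be trained on labeled renders of parts, so the final layer ends up producing one probability per part number for each photo the scanner takes.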

The build process and concept design

As Daniel tells us, the inspiration came in 2011 after seeing other sorting machines on YouTube, but work didn’t start until 2016, while he was studying “computer vision” at university. Unfortunately, early tests didn’t work, until a year later when he realized he should incorporate AI. The building portion took about six months, with many trial-and-error iterations. The most difficult parts to get right were the vibrating feeder and the output buckets. But the real triumph was the programming, which took a whopping 2.5 years! For the programming geeks out there: Daniel wrote the code in Python and used TensorFlow as the machine learning framework. Processing 25 million images also required a huge amount of computing power: Daniel used Amazon Web Services (AWS) to squeeze roughly two years of CPU core time into just over a day!
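To put that AWS figure in perspective, here is a quick back-of-the-envelope calculation. The two-years-of-CPU-time and just-over-a-day numbers come from the article; the 26-hour wall-clock figure is our own assumption, not a detail of Daniel’s setup:

```python
# Back-of-the-envelope check of the AWS claim: how many CPU cores does it
# take to squeeze roughly two years of single-core work into "just over a day"?
cpu_core_hours = 2 * 365 * 24   # ~2 years of CPU core time (from the article)
wall_clock_hours = 26           # "just over a day" -- assumed here as 26 hours

cores_needed = cpu_core_hours / wall_clock_hours
print(f"Roughly {cores_needed:.0f} cores running in parallel")  # ~670 cores
```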

Some of you might have an excellent grasp of how all of that works, but for the laypeople out there, Daniel has a video that explains it more simply.

While building this marvelous machine, Daniel tells us that he realized these complex problems were nearly impossible to solve with LEGO-produced electronics. The possibilities of LEGO may be virtually endless, but this project required some more serious technology. For example, bright lighting, a high-resolution camera, and a special conveyor belt were all necessary. Additionally, using LEGO servo motors and Mindstorms brains would have been too expensive. Daniel also says that his goal was actually not to build the machine out of LEGO, but that LEGO lent itself perfectly to some of the structure.

Moving forward

Daniel is continuing to fine-tune his work. He has already written more than one article about the project on Towards Data Science, and he hopes to write an academic paper on the topic as well. When asked about drawing up early building plans, he tells The Brothers Brick that he didn’t use any, saying, “I think one of the key benefits of working with LEGO is that redesigning or changing the shape of something is so nondestructive, it allows you to be really flexible and agile when it comes to design.” It’s uncertain whether he will offer building instructions to people who want one of their own, but the good news is that he hopes to turn his work into an open-source dataset! He’s excited to see what others come up with. It’s safe to say that his advice to builders attempting something similar would be to go for it and not give up.


Although it’s less advanced, check out another impressive LEGO sorting machine built by the BrickIt team using only LEGO. Or if you’re stuck doing it the old-fashioned way, consider this sorting tips essay. And if you’re a fan of both LEGO and AI (if you’re here, of course you are!), check out Braille LEGO and AI.

The world’s most advanced AI can’t tell what’s in these photos. Can you?

Is that a manhole cover or dragonfly sitting on a table? Is that a green iguana or just a squirrel running with some nuts? Is that a unicycle or a crocodile crossing the road? To humans, the answer is obvious. But the best image-identifying artificial intelligence in the world hasn’t a clue.

That’s because each of these images was carefully selected to fool state-of-the-art image recognition technology. They’re part of a collection of 7,000 images curated by researchers from UC Berkeley, the University of Washington, and the University of Chicago.

“Current [machine learning] models are brittle,” says Dan Hendrycks, a PhD student in computer science at UC Berkeley who was an author on the paper. “While other research uses artificial data to study robustness, we show that models make egregious and highly consistent mistakes on real data [with real photos].”

Unicycle [Image: courtesy Dan Hendrycks]

To understand why that matters, let’s rewind. Over the past few years, image recognition has gotten really good, really fast. That’s largely thanks to an ever-growing open data set created at Stanford University called ImageNet. The collection now consists of over 14 million photos, each labeled with identifiers like “tree” and “sky.” This massive database is a training set: a reference that new AI systems use to learn how to identify images, much like a toddler can reference a picture book to slowly learn new words. AI systems trained with ImageNet (which you’d probably know best from Microsoft services like Bing) have gotten extremely accurate, able to identify objects with accuracy as high as 95%. That’s actually better than humans performing the same job!
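Bing’s internal systems aren’t public, but you can get a feel for what “trained with ImageNet” means with any off-the-shelf pretrained model. This sketch uses ResNet50 from Keras, a standard ImageNet-trained network; the file name tree.jpg is just a placeholder for a local photo:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)

# Load a network pretrained on ImageNet's 1,000 classes.
model = ResNet50(weights="imagenet")

# "tree.jpg" is a placeholder for any local photo you want to classify.
img = tf.keras.utils.load_img("tree.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2%}")   # the model's three best guesses
```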

But closing that last 5% accuracy gap has proven to be an extremely difficult problem. Since 2017, computers haven’t been getting more accurate at identifying images. That’s why researchers are exploring ways to understand the few images that computers can’t seem to parse. The team behind the new collection scoured Flickr by hand, looking for photos they thought might confuse software. They tested them against AI models trained on ImageNet, and if the images proved confusing, the photos were added to their new data set, which they dubbed ImageNet-A. It’s basically the anti-ImageNet. The 7,000 photos in this collection drop the accuracy of AI from over 90% to a mere 2%. Yes, you read that right. Ninety-eight times out of 100, the best vision AI models in the world will be confused by these photos.
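The 90%-to-2% comparison comes from exactly this kind of test: run an ImageNet-trained model over the ImageNet-A photos and count how often its top guess matches the human label. Building on the previous snippet, here is a rough sketch; the folder layout and the direct string comparison of labels are simplifying assumptions (the real evaluation also maps ImageNet-A’s 200 classes onto ImageNet’s 1,000):

```python
import pathlib
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)

model = ResNet50(weights="imagenet")
root = pathlib.Path("imagenet_a")          # hypothetical local copy of the image set

correct = total = 0
for path in root.glob("*/*.jpg"):
    true_label = path.parent.name          # assume folder names hold the human labels
    img = tf.keras.utils.load_img(str(path), target_size=(224, 224))
    x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))
    top1 = decode_predictions(model.predict(x, verbose=0), top=1)[0][0][1]
    correct += int(top1 == true_label)
    total += 1

print(f"Top-1 accuracy: {correct / total:.1%}")
```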

The question of why AI systems don’t understand these images is complex. Teaching AI today tends to involve throwing a lot of data into a black box—in other words, you can only judge its accuracy based on its final conclusion, not the process it took to get there. If that black box sees enough variations of a tree that it begins identifying new trees in new photos, we consider it successful. (This repetitious task is known as machine learning.) The problem is, we don’t know why the AI has decided that a tree is a tree. Is it the shape? Color? Context? Texture? Is it because trees have some unifying core geometry that humans have never recognized? We don’t know. AI is judged by its answers, not its reasoning. That means we can get all sorts of unexpected bias from AI, which poses a major problem when AI systems are being used in technology like autonomous cars or fields like criminal justice. It also means that image recognition systems aren’t intelligent in any real way; they’re more like match game savants.

Sea Lion [Image: courtesy Dan Hendrycks]

Building ImageNet-A is about tricking AI in order to discover why certain images confuse these systems. For example, when an AI mistakes one of the images, a photo of a squirrel, for a sea lion, the lack of deeper intelligence and reasoning starts to become clear. The system is relying on the animals’ texture alone, not considering their relative size or shape for identification. “Photos which require understanding an object’s shape seem most likely to trick a model,” says Hendrycks.

With ImageNet-A, researchers have successfully found 7,000 blind spots in vision AI. Does that mean these images can just go into a new training set to fix the models’ shortcomings? Probably not. “As there is much diversity and complexity in the real world, training on these images would likely not teach models how to robustly manage the full range of visual inputs,” says Hendrycks. “Collecting and labeling, say, 1 trillion images may resolve some model blind spots, but patching each previous blind spot will likely fail when novel scenarios occur and when the world changes.”

In other words, simply adding more and more photos to current machine learning data sets cannot fix the core shortcomings of the models’ logic. There will always be images a computer hasn’t seen enough of before to identify accurately. So what can researchers do to close that lingering 5% gap? Hendrycks says they need to develop new methodologies, outside the bounds of modern-day machine learning, to create more sophisticated AI systems. Or, you know, they could not do that, and let us humans keep our smug superiority over machines for just a little bit longer.