deepfakes:-the-looming-threat-of-2020

I’m watching a clip from the movie The Shining. Shelley Duvall is hiding from her crazed husband as he chops down the door with an axe. Jim Carrey sticks his head through the opening and cackles the iconic line: “Here’s Johnny!”

…Jim Carrey is not in The Shining.


What you’re seeing is not a Hollywood special effect. It wasn’t done with After Effects, green screen, or with costuming and makeup. The video is a fake created by deep learning artificial intelligence – a deepfake. And anyone with a powerful computer and enough time can make one.

You might have heard of deepfakes before, or glimpsed headlines discussing the technology. You might even have laughed at YouTube videos on channels such as Ctrl Shift Face that swap the faces of celebrities into iconic roles, with humorous and sometimes unsettling results (once you’ve seen any of the bizarre deepfakes involving Nicolas Cage, you can never un-see them).

But deepfakes, once confined to darker corners of the internet, are becoming a serious threat. In the US, particularly as the 2020 election season rapidly approaches, AI experts are warning that deepfakes could become a powerful tool for spreading misinformation and manipulating the public. With enough effort a bad actor could create a video of any political candidate saying nearly anything. And in today’s climate of social media outrage and algorithm-driven content distribution, there’s no telling how far it could spread before someone caught it.

It’s time engineers, developers, and technologists all had a serious discussion about deepfakes.

(Image source: Adobe Stock)

The Origin Of Deepfakes

No one person has taken credit for originally developing deepfakes. The technology owes its existence to a confluence of advances, ranging from ever-more-sophisticated computer vision algorithms and neural networks to increasingly powerful GPU hardware.

The first deepfakes seem to have emerged on the internet in 2017, when an anonymous Reddit user called “Deepfakes” began distributing illicit, altered videos of celebrities online. Other Reddit users followed suit, and it wasn’t long before a community had sprung up around distributing both the deepfakes themselves and the tutorials and software tools needed to create them.

In an interview with Vice [NSFW link], one of the first outlets to take an extensive look at deepfakes, the Reddit user outlined how comparatively easy the process is:

“I just found a clever way to do face-swap. With hundreds of face images, I can easily generate millions of distorted images to train the network. After that if I feed the network someone else’s face, the network will think it’s just another distorted image and try to make it look like the training face.”

But it wasn’t all fun and games. Far from it. When they first appeared, deepfakes had one particularly popular and disturbing use case – pornography. Much of the early deepfake content available was pornographic films created using the faces of celebrities like Gal Gadot, Scarlett Johansson, and Taylor Swift without their consent.

As the videos proliferated, there was a crackdown: Reddit shut down its deepfake-related communities, pornographic websites removed the content, and sites like GitHub refused to distribute deepfake software tools.

If private citizens weren’t that concerned yet, it was probably because sites got somewhat ahead of the problem. Left unchecked, it wouldn’t have been long before deepfake pornography spread from celebrities to everyday people. Anyone with enough publicly available photos or video of themselves on a platform like Facebook or Instagram could potentially become a victim of deepfake revenge porn.

In 2018, Rana Ayyub, an investigative journalist from India, fell victim to a deepfake plot intended to discredit her as a journalist. Ayyub detailed her ordeal in an article for The Huffington Post:

“From the day the video was published, I have not been the same person. I used to be very opinionated, now I’m much more cautious about what I post online. I’ve self-censored quite a bit out of necessity.

“Now I don’t post anything on Facebook. I’m constantly thinking what if someone does something to me again. I’m someone who is very outspoken so to go from that to this person has been a big change.

“I always thought no one could harm me or intimidate me, but this incident really affected me in a way that I would never have anticipated…

“…[Deepfakes] is a very, very dangerous tool and I don’t know where we’re headed with it.”

How Deepfakes Work

On the surface, the process of creating a deepfake is fairly straightforward. First, you need enough images of your target (ideally hundreds or more) showing their face in as many orientations as possible – the more images you can get, the better the results, which is why celebrities and public figures are such easy targets. If you think it might be difficult to get hundreds or thousands of images of someone, remember that a single second of video can contain 60 frames of their face.
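To make that data-gathering step concrete, here is a minimal sketch using OpenCV’s bundled Haar-cascade face detector. It is illustrative only: the video path is a hypothetical stand-in, and serious deepfake tools use far more robust detectors and face-alignment steps.

```python
# Illustrative sketch: harvesting face crops from a video with OpenCV.
# "interview.mp4" is a hypothetical stand-in path.
import os
import cv2

os.makedirs("faces", exist_ok=True)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("interview.mp4")
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:          # end of video
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.3,
                                                  minNeighbors=5):
        cv2.imwrite(f"faces/face_{saved:05d}.png", frame[y:y+h, x:x+w])
        saved += 1
cap.release()
print(f"saved {saved} face crops")
```

A few minutes of 60-fps footage processed this way yields thousands of training crops, which is why public figures with hours of available video make such easy targets.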

Then you need a target video. The AI can’t change skin tone or structure, so it helps to pick a target and source with similar features. Once a deep learning algorithm is trained on a person’s facial features, additional software can superimpose that face onto another person’s in your target video. The results can be spotty at times, as many videos online attest, but done right, and with enough attention to detail, they can be seamless.

In an interview with Digital Trends, the anonymous owner of the Ctrl Shift Face YouTube channel (the channel responsible for the Jim Carrey/The Shining videos, among others) discussed how simple, yet time-consuming the process is:

“I’m not a coder, just a user. I don’t know the details about exactly how the software works. The workflow works like this: You add source and destination videos, then one neural network will detect and extract faces. Some data cleanup and manual extraction is needed. Next, the software analyzes and learns these faces. This step can sometimes take a few days. The more the network learns, the more detailed the result will be. In the final step, you combine these two and the result is your deepfake. There’s sometimes a bit of post-process needed as well.”

On one hand, the relative ease at which this can be done with little to no coding experience is certainly disconcerting. On the other however, deepfakes are an impressive demonstration of the sophistication of AI today.

At the core of deepfakes is a neural network called an autoencoder. Put simply, an autoencoder is designed to learn the important features of a dataset so it can create a representation of that data on its own. If you feed a face into an autoencoder, its job is to learn the distinguishing characteristics that make up the face and then construct a lower-dimensional representation of it – in this case called a latent face.

Deepfakes work by training a single encoder to create a generalized representation of a face and having two decoders share that representation. If you have two decoders – one trained on Person A’s face, the other on Person B’s – then by feeding the encoder either face you can transpose Person A’s face onto Person B’s (or vice versa). If the encoder is trained well enough, and the representation is generalized enough, it can handle facial expressions and orientations in a very convincing way.

Since faces in general are very similar in overall shape and structure, a latent face created by the encoder from Person A’s face can be passed to a decoder trained on Person B’s face to good effect. The result at the other end is Person A’s original video, but wearing Person B’s face.
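Here is a deliberately simplified PyTorch sketch of that shared-encoder/two-decoder arrangement. The layer sizes are arbitrary and the training loop is omitted; production tools such as Faceswap use much deeper convolutional networks.

```python
# Simplified sketch of the deepfake autoencoder: one shared encoder,
# one decoder per identity. Sizes are arbitrary; real tools go deeper.
import torch
import torch.nn as nn

LATENT = 256  # dimensionality of the "latent face"

encoder = nn.Sequential(              # shared across both identities
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
    nn.Linear(1024, LATENT),
)

def make_decoder() -> nn.Sequential:  # one per person
    return nn.Sequential(
        nn.Linear(LATENT, 1024), nn.ReLU(),
        nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
    )

decoder_a, decoder_b = make_decoder(), make_decoder()

# Training minimizes reconstruction error: decoder_a rebuilds Person A's
# frames from the shared latent space, decoder_b rebuilds Person B's.
# The swap itself is then one line: encode A's frame, decode it with B's
# decoder, producing A's pose and expression with B's face.
frame_a = torch.rand(1, 3, 64, 64)    # stand-in for a 64x64 video frame
swapped = decoder_b(encoder(frame_a)).view(1, 3, 64, 64)
```

The key design choice is that the encoder never learns either identity specifically, only “faceness”; identity lives entirely in the decoders, which is what makes the swap possible.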

As long as you have two subjects similar enough and a computer with enough processing power, the rest just takes time. Faceswap – one of the more readily available deepfake apps – runs on Windows 10, Linux, or macOS and recommends a newer Nvidia GPU for processing. “Running this on your CPU means it can take weeks to train your model, compared to several hours on a GPU,” according to Faceswap’s documentation.

top-10-tech-failures-from-2019-that-hint-at-2020-trends
  • As the last year of the last decade, 2019 had a lot to live up to. Within the span of 10 short years, service apps like Uber, Lyft, Airbnb and others on mobile phones became big business. Mobile phone companies introduced amazing personal features like voice assistants (e.g., Siri and Alexa), iCloud connections for fast video streaming, and very high-resolution HD cameras. Not to be outdone, the automobile was transformed with automation tech and electrification. A Tesla electric vehicle even made it into space.

    Space technology flourished in the last decade with the commercialization of space rockets, the launch of hundreds upon hundreds of communication satellites and the increasing popularity of Cubesats. Back on earth, homes and buildings became smarter while alternative forms of energy continued to improve in efficiency. And the list goes on.

    But there were several notable failures in the last decade, many seeming to culminate in 2019. Here is the short list of the 10 tech failures most worthy of mention, in no particular order.

  • #1 Glitchy Spacecraft Launch

    Boeing suffered several major setbacks this year. The first was an incomplete demonstration flight of its new astronaut capsule. The mission of Boeing’s CST-100 Starliner spacecraft began successfully but suffered technical problems that prevented it from reaching the International Space Station (ISS). Many observers believe that the Starliner capsule atop its Atlas rocket simply burned too much fuel as it climbed into space, leaving an insufficient amount to reach the ISS. Some have suggested the failure stemmed from a glitchy timer system that shut off the rocket thrusters too soon.

    The demonstration test wasn’t a complete failure as the Starliner did land successfully in the deserts of New Mexico.

  • #2 Andromeda Strain revisited?

    Remember the Andromeda Strain? It was a techno-thriller novel from 1969 written by Michael Crichton that centered around the efforts of a team of scientists investigating the outbreak of a deadly extraterrestrial microorganism in Arizona.

    Fast forward to 2019. A company in Israel launched its first lunar lander, which unfortunately crash-landed on the moon. The small robotic spacecraft, called Beresheet, was created by SpaceIL and Israel Aerospace Industries (IAI). It failed just moments before landing on the moon.

    This was an unmanned operation, but not one devoid of life. A US-based nonprofit had added tardigrades, or water bears, to the capsule. These microscopic, eight-legged creatures could survive in a dormant state through harsh conditions, and maybe even on the moon.

    In other words, earth-based lifeforms have now been introduced to the moon’s ecosystem. Without some water, the tardigrades aren’t likely to revive and spread. But this failure highlights the need for planetary protections – both on the moon and earth.

    It should be noted that the goal of the Arch Mission Foundation was not to contaminate the moon but rather to, “create multiple redundant repositories of human knowledge around the Solar System.” The foundation tests out technologies for long-lasting archives, like securing information in DNA strands or encapsulating insects in artificial amber. In addition to water bears, the Arch’s payload included nickel sheets nanopatterned with thousands of pages of Wikipedia and other texts.

    One of Arch’s first missions was launched by SpaceX on the Falcon Heavy rocket and is now entering an orbit around the Sun that will last millions of years. The first books in the Solar Library were Isaac Asimov’s Foundation Trilogy. Can you guess where they are located? In the glovebox of the cherry-red Tesla Roadster now orbiting the Sun.

  • #3 Communication Failures (again)

    Both Boeing and the FAA have been cited for oversight breakdowns that contributed to the 737 Max failures. But the actual cause of the tragedy that resulted in the crash of two Boeing 737 Max aircraft seems to be broad failures in the automated system that controls the new planes. The report by the Joint Authorities Technical Review panel said that assumptions about critical aspects of the plane’s design were “not adequately reviewed, updated, or validated.”

    This lack of communication and incorporation of warnings from the engineering teams is a common problem with very complex, modern systems, e.g., the Challenger Space Shuttle and others.

  • #4 Disappearing Bitcoin Miners

    While 2019 was overall a profitable year for the semiconductor chip development market, there were a few noticeable declines. One was in system-on-chip (SoC) devices made specifically for bitcoin mining. The cost of mining bitcoins increased dramatically in 2019, leading to a drop in demand for SoC-based mining hardware.

    In essence, it took much more effort for bitcoin miners to solve the equations required to validate transactions on the Bitcoin network. This increase in mining difficulty reflects the increased competition.

    Another slowdown was in the market for automotive chips and electronics, as companies and drivers realized that autonomous car technology won’t really be ready for several more years. This corresponds well to Gartner’s famous “trough of disappointment” portion in its hype cycle for emerging technologies.

  • #5 Cloud Buckets

    A new type of cybersecurity issue has emerged in which millions of people have had their personal information exposed through file storage systems known as cloud buckets. Such storage areas typically consist of public resources that are easily accessed by a variety of web service applications. Cloud buckets are like public file folders which contain user information.

    Placing sensitive user data in the cloud offers companies the capability to offload their security to big firms like Google, Apple, Amazon, or Microsoft. The problem is that the buckets are not configured by these firms but rather by the companies that use their cloud networks.

    Not all of these companies are storing their customer information properly. This lack of security is easy pickings for identity thieves. It is an example of readily available information that doesn’t require any hacking.
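    As a concrete illustration of what “misconfigured” means here, the sketch below uses AWS’s boto3 Python SDK to flag S3 buckets that lack a public-access block. It is a simplified audit under stated assumptions (AWS credentials already configured), not a complete security review.

    ```python
    # Simplified audit sketch: flag S3 buckets with no public-access block.
    # Assumes AWS credentials are configured; not a complete security review.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)
            blocked = all(cfg["PublicAccessBlockConfiguration"].values())
        except ClientError:
            blocked = False   # no public-access block configured at all
        if not blocked:
            print(f"WARNING: {name} may be exposed to public access")
    ```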

  • #6 Hacks of the Year

    Speaking of hacks, this year saw even more cybersecurity breaches. In 2018, there were 500 million personal records stolen, according to the Identity Theft Resource Center. But that number was minuscule compared to the 7.9 billion records exposed in 2019 by over 5,000 breaches, as reported by Risk Based Security. Compared to the 2018 Q3 report, the total number of 2019 breaches was up 33.3 percent and the total number of records exposed more than doubled, up 112 percent. Here’s just a small sampling of the more infamous breaches (more details here):

    > ElasticSearch Server Breach

    > Canva Data Breach

    > Facebook App Data Exposure 

    > Orvibo Leaked Database

    > Social Media Profiles Data Leak

    Sadly, the common theme in many of these data exposures is that data aggregators obtained and used personal information in ways the owners never imagined or consented to. This is a legal problem as much as a technical one.

  • #7 Google Glass

    In 2019, Google announced a new $999 Glass augmented reality headset that looked suspiciously like the failed Google Glass of the past.

    Early in 2012, Google co-founder Sergey Brin debuted Google Glass. A year later, the founder and head of the Google Glass Project, Babak Parviz, delivered a keynote about the technology at the IEEE Hot Chips event at Stanford.

    One of the leading ongoing smartphone trends is ever-improving screen resolution on ever-larger screens. During his keynote, Parviz argued that there was a physical limit to this trend and that glasses offered the next evolution in display form factor, i.e., immersion with one’s surroundings. This would be especially important in augmented reality applications.

    Originally, Google Glass was a standalone unit (not yet cloud-based) that included internet access, voice controls, and a camera for pictures and videos. It accomplished all of this with dual-core processors running at more than 1 GHz. Five MEMS sensors captured all the environmental data, and a two-dimensional touch panel sat on the side of the frame.

    Why was this technology a failure? It wasn’t because of the technology itself, but rather because it wasn’t clear to customers what problem it solved or why they needed it. Additionally, many felt it was intrusive, since a user of the device could take pictures and short film snippets of people without their knowledge.

    In January 2015, Google announced that it would no longer be developing Google Glass. But that wasn’t the end of the project. Instead, Google pivoted to the business sector, launching Glass Enterprise Edition for workplaces like factories in 2017. This year, Google announced the new Glass augmented reality headset.

  • #8 Folding Phone

    Samsung’s Galaxy folding phone was billed as a new dawn in display technology. The phone unfolded into a 7.3-inch Dynamic AMOLED display.

    Unfortunately, the company had to postpone the launch of the folding phone after early review models broke, delaminated, and got filled with gunk. The problems seemed to stem from a weak hinge as well as debris getting inside the device.

    As with many new technologies, the price tag also presented a barrier to anyone but early adopters. A reengineered and improved version is now on sale for nearly $2,000.

  • #9 Machine-Bias or Garbage-in, Garbage-out

    The challenge of machine bias came clearly into focus in 2019. Similar to human bias, machine bias occurs when a silicon-based machine’s learning process makes erroneous assumptions due to the limitations of its data set and pre-programming criteria. One example of machine bias was recently revealed in Apple’s new credit card, which contains an algorithm to decide how trustworthy (or risky) a user might be. This evaluation used to be done by trained humans but is now often performed by AI-based algorithms.

    Apple’s credit card was shown to have a gender bias: men were more likely to get a higher credit limit than women. The bias was highlighted when a male entrepreneur was assigned a spending limit 10 times higher than that of his wife, even though the couple share a joint account.

    How does a machine get a bias? A report from IBM Research outlines two main ways AI systems can inherit biases. First, the AI software might contain errors and dependencies. Second, the data set from which the AI learns its task may have flaws and biases. These data points come from the real world, which contains many biases, e.g., favoring white men to the exclusion of women and minorities. Algorithms are only as smart as the data you feed them. This is a modern update of the old computing adage: “garbage in, garbage out.”
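    Auditing for this kind of bias can start very simply. The sketch below applies the common “four-fifths rule” heuristic to toy decision data; the numbers are invented, and real fairness audits are considerably more involved.

    ```python
    # Toy disparate-impact check using the "four-fifths rule" heuristic.
    # The data is invented; real fairness audits are far more involved.
    import pandas as pd

    audit = pd.DataFrame({
        "gender":   ["M", "F", "M", "F", "M", "F", "M", "F"],
        "approved": [1,   0,   1,   1,   1,   0,   1,   0],
    })
    rates = audit.groupby("gender")["approved"].mean()  # approval rate per group
    ratio = rates.min() / rates.max()
    print(rates)
    print(f"disparate-impact ratio: {ratio:.2f}  (flag if below 0.80)")
    ```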

  • #10 Software App Failures

    No list of tech failures would be complete without mention of the apps that didn’t make it. The range of the applications that failed is wide.

    Consider first the British Airways (BA) glitch, in which the airline’s computer system completely went down during a peak travel season. Over a hundred BA flights were cancelled and nearly 300 delayed. Thousands of passengers were affected. Sadly, this wasn’t the first time the system had failed, which suggests a systemic problem that has not been properly addressed by management.

    Or how about the 2019 Facebook failure that prevented users from viewing or loading images from the news feed? Several other social media apps had similar problems, including Instagram, WhatsApp, and Messenger. In each case, users were prevented from sending messages, media files, and the like. Facebook claimed the problem was the result of an accident during routine maintenance.

    Other notable app failures or hacks from 2019 include Apple’s FaceTime bug and the Ring security camera intrusions. The latter may have been more of a customer problem, as Ring notes that the system invasions were likely the result of hackers gaining access to families’ accounts through weak or stolen login credentials.

liberty-defense's-radar-can-detect-undetectable-ghost-guns
Hexwave can detect guns and other dangerous items on people in real time and at a walking pace. (Image source: Liberty Defense)

On the day of his 16th birthday, Nathaniel Berhow, a student at Saugus High School in Santa Clarita, Calif., shot and killed two of his classmates and wounded three others. The tragedy mirrored so many others that have been happening all over the US at an alarming rate in recent years. But one particular detail of the .45 caliber handgun Berhow used in the shooting caught the attention of law enforcement and media.

The gun was a “ghost gun,” so-called because it contained no serial number – making it difficult, if not impossible, for law enforcement to trace. Ghost gun parts can even come from multiple sources, further muddying the chain of custody for law enforcement.

The gun was assembled from a kit of parts purchased online. Sellers of such kits argue that since they are only selling gun components rather than fully assembled firearms, they are legally in the clear.

Ghost guns can even be assembled using 3D-printed parts, printed by a seller or by the buyer in their own home. Files to print gun parts are readily available on the Internet and their legality is the subject of an ongoing legislative debate.

The modular nature of these guns, as well as their plastic components, also means it can be easy to sneak them past metal detectors.

The Saugus High School shooting was only the latest in a series of increasingly high-profile incidents involving ghost guns. But while law enforcement and government agencies are grappling with legislating and tracing ghost guns and 3D-printed guns, other groups and companies are working on the ground level to develop technologies to make these firearms easier to detect.

One such company is Vancouver, British Columbia-based Liberty Defense, a spin-off from MIT’s Lincoln Lab. Liberty Defense has developed a technology it calls Hexwave that combines machine learning and computer vision with an array of radar antennas to detect weapons regardless of their material composition. The goal is to augment existing security systems in airports, malls, stadiums, and other public spaces with a mobile system capable of detecting not only concealed weapons, but bullets, magazines, explosives, and even larger weapon components as well.

3D Printed Gun
A 3D-printed firearm, called “The Liberator,” made almost entirely of printed plastic parts.

(Image source: Defense Distributed)

Bill Riker, CEO of Liberty Defense, told Design News that Hexwave is aimed at detecting weapons and other articles in dynamic, urban environments – places more chaotic and unpredictable than the queues at airports.

“In the airports and other places, it’s a very rigorous and effective process, but it takes a long time. And you’ve got a very captured audience,” Riker said. “In the urban environment – schools, stadiums, and malls – you’ve got a much less controlled environment, but you have an equally growing need for security.”

Riker said Liberty Defense’s technology allows for both indoor and outdoor detection zones, which is “important because of the need to be able to accommodate a layered defense when you are really trying to buy time for a security operation to proactively prevent a weapon from coming into a facility.” Hexwave can also be deployed covertly via kiosks or other means such as being hidden in walls.

Riker, who has over 35 years of experience working in security systems with Smiths Detection, DRS Technologies, General Dynamics, and the U.S. Department of Defense, likens Hexwave’s sensor technology to an insect’s eye – where a number of eyelets function together as one unit. Here, instead of eyelets, an array of low-frequency radar antennas (200 times lower power than Wi-Fi, according to the company) creates a three-dimensional point cloud of a person – capturing over 400,000 data points, according to Liberty Defense. “Think about it like a child’s coloring book. You’re connecting all the dots, except it’s in three dimensions,” Riker said.

“Part of the great work that MIT did here was to craft a design that can create multiple transmit and receive antennas,” Riker explained. “Typically, an antenna array has its transmitting and receiving off the same dish. This array actually separates them and you’re able to have essentially a multistatic display configured into a monostatic display.”

That point cloud is then converted into an image that a deep learning neural network analyzes to look for weapons and suspicious objects – all done in real time.
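The sketch below illustrates that general pipeline stage in PyTorch: rasterize a point cloud into a 2D image, then run a small classifier over it. To be clear, this is a generic illustration, not Liberty Defense’s model; the projection, network shape, and class labels are all invented for the example.

```python
# Generic illustration (NOT Liberty Defense's model): rasterize a 3D
# point cloud into a depth image, then classify it with a small CNN.
import torch
import torch.nn as nn

def project_to_image(points: torch.Tensor, size: int = 128) -> torch.Tensor:
    """Map (x, y, z) points in [-1, 1] to a size x size depth image."""
    img = torch.zeros(size, size)
    xy = ((points[:, :2] + 1) / 2 * (size - 1)).long().clamp(0, size - 1)
    img[xy[:, 1], xy[:, 0]] = points[:, 2]   # where points collide, one depth is kept
    return img.unsqueeze(0)                  # add a channel dimension

cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 3),   # invented classes: threat / clear / anomaly
)

cloud = torch.rand(400_000, 3) * 2 - 1              # ~400k points, per the article
logits = cnn(project_to_image(cloud).unsqueeze(0))  # batch of one person
```

Keeping both the projection and the inference on the local unit, as Riker describes, is what makes real-time operation without a cloud lookup possible.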

“We teach [the AI] a whole different series of weapons and as new weapons occur, we teach it and then we send that out by Wi-Fi into all the different systems to update them on a regular basis,” Riker said. “What’s important here is you don’t have to go from the cloud to a lookup table. All the analysis is done on the unit, therefore it’s super fast.”

Rather than distinguishing specific models or types of guns, what Hexwave looks for is identifiable shapes. “At this point, it’s general, but we can tell the difference between a plastic handgun and a metal handgun,” Riker said. “You can see if it’s a revolver or if it’s a semiautomatic pistol. You can also see explosives, including pyrotechnics. In that case you can actually see the accelerants within the container.”

He continued. “There are all kinds of high-strength polymers that you can build 3D-printed guns out of. We can see that because it’s got a physical form to it, meaning, regardless of the material, there’s going to be some degree of reflectivity there. And so when you’re emitting energy against it that’s going to have a return that gives you that three-dimensional look.”

Once the AI has made its analysis, it can notify a human guard using a tablet. Riker said Hexwave can also be implemented into security systems to control things like door lock systems, video management systems, and command and control in cases where an immediate response is needed.

Hexwave uses an array of antennas to create a 3D point cloud image of people that pass by its towers. (Image source: Liberty Defense)

Liberty Defense says about 1,000 people can pass through a Hexwave-enabled security post. But there are limitations in terms of queuing people through the system. Entranceways in typical stores, malls, and even schools can be chaotic – with lots of people entering and exiting simultaneously in clusters. Hexwave requires each person to be scanned one at a time – which means single-file lines.

“The radar, or the energy we’re using, is non-ionizing. So we’re out of the x-ray side, past millimeter wave, and into a different area,” Riker said. “And so what ends up occurring here is you can’t see through people. But, quite frankly, you can’t see through multiple people with x-ray systems either.”

Riker said the Hexwave system does offer increased throughput over typical security stations and metal detectors – as well as more accuracy for security personnel – which Liberty Defense believes will ease any burden created by asking people to line up.

“So you get your increased throughput from two things really: Number one, you’re passing through at a walking pace and you typically should not have to divest yourself of the typical articles on your body like a cell phone, a set of keys, or something like that. And then there’s what the system does when it identifies something that’s either a known threat, clearly not a threat, or an anomaly.

“Let’s say there’s an odd-shaped object on you that’s really big that may look like a bar of soap. Why are you carrying a bar of soap on you? That’s kind of an odd thing to have, so the system will say something’s wrong here. It’s an anomaly. But the system also shows where that object is on your body so that a security official can go ahead and say, ‘Okay, please step over here for a second; what is that in your back pocket?’ ”

The real time speed of the system also offers an advantage over systems such as what the public may be used to in airports, for example.

“The airports do a great job. The technology is very effective. Airport scanners or the body scanners are typically using a four-millimeter wave and they are creating an image. It’s just that it’s not in real time. The reason why you can’t have little kids go through it is not because of the emission level. It’s because the kids typically can’t stay still. If you move, you blur the image.

“For our system, it’s capturing the images several times a second. Little kids, adults, everyone can just keep walking through,” Riker said.

This anomaly detection feature becomes particularly important as criminals will often find ways around any new security system given enough time and experience. Different facilities can instruct the system to flag certain large items that shouldn’t be carried in (think large bags at sports stadiums) and human guards can intervene if something like a radar jammer or scrambler causes interference in the system. “If you’ve got a handgun or something like that in a radar-proof bag, it’s going to see a big bag on your body, flag the anomaly, and trigger the subsequent inspection,” Riker said.

In April 2019, Liberty Defense was granted an experimental license by the FCC to begin testing Hexwave. The company is planning to roll out beta tests in the first half of 2020. As of this writing, 11 sites have agreed to participate, including Rogers Arena in Vancouver, the University of Wisconsin, and the Virginia state capitol building in Richmond, VA. Utah Attorney General Sean Reyes has committed to testing Hexwave in his state. And the Maryland Stadium Authority will be testing the technology in various sporting venues.

Under federal law all firearms must be able to be detected by a metal detector, and all major components must be detectable by x-ray. The Undetectable Firearms Act of 1988 requires guns to be detectable by metal detectors even “after removal of grips, stocks, and magazines,” and the barrel, the slide or cylinder, or the frame or receiver must be detectable by x-ray.

However, the Giffords Law Center to Prevent Gun Violence cautions: “The Undetectable Firearms Act does not specify what portion of the firearm must be detectable by a metal detector. This could allow an individual to create a mostly plastic but technically compliant firearm, using a 3D printer or other technology, that contains metal in an extraneous part of the firearm that could be removed prior to entering a security area.”

Riker said Liberty Defense is hoping to provide the public with a degree of safety and peace of mind that legislation hasn’t been able to. “That peace of mind seems to be eroding right now. So there’s a great potential to help recover that in our society,” he said. “I think we’re pretty confident that we’ll be able to stay ahead of the threat curve.”

Chris Wiltz is a Senior Editor at  Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.


ai-enables-design-of-spray-on-coating-that-can-generate-solar-energy

Solar power is an increasingly common source of renewable energy throughout the world. Now researchers have found a way that could make generating energy from the sun even more ubiquitous: a spray coating that can be applied to bridges, houses, or even skyscrapers so they can be energy self-sufficient.


A team from the University of Central Florida used artificial intelligence (AI), or machine learning, to optimize the materials used to make perovskite solar cells (PSCs). The organic-inorganic halide perovskite material used in PSCs converts light into usable electrical energy.

Perovskite cells have long been viewed as the future of solar cells because the material has key advantages over the silicon used in today’s cells, including higher efficiency and lower production cost.

Another benefit of perovskite solar cells is that they can be produced not just in a solid-state form for typical solar panels, but also in a liquid state, which expands the flexibility of how solar-energy-generation technology can be delivered, researchers said. This could ultimately pave the way for any object or structure to generate its own electricity from the sun, which could revolutionize how people use energy.

AI as a solution

Perovskite materials have one major drawback, however: they are difficult to fabricate into a usable and stable form, and scientists spend a lot of time just finding the right combination of materials to overcome this limitation while reaping the material’s benefits. That’s where the new method developed by a team led by Jayan Thomas, an associate professor at the university’s NanoScience Technology Center, comes in.

Thomas’ team reviewed more than 2,000 peer-reviewed publications about perovskites and collected more than 300 data points that they then fed into a machine-learning system they created, he said. The system analyzed the information and predicted which recipe for perovskites would optimize the material for solar-energy generation.
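The article doesn’t spell out the team’s exact model, but the general pattern – learn a mapping from published recipe parameters to reported performance, then rank new candidates – can be sketched in a few lines of scikit-learn. The features and numbers below are hypothetical stand-ins, not the UCF team’s data.

```python
# Hypothetical sketch of the recipe-ranking idea; features and values
# are invented stand-ins, not the UCF team's actual dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Invented recipe features: halide ratio, anneal temp (C), precursor molarity
X = np.array([[0.8, 100, 1.2],
              [0.5, 120, 1.0],
              [0.9,  90, 1.4],
              [0.7, 110, 1.1]])
y = np.array([18.2, 15.1, 19.6, 17.0])   # reported cell efficiency (%)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Rank untried candidate recipes by predicted efficiency
candidates = np.array([[0.85, 95, 1.3], [0.60, 110, 1.1]])
best = candidates[np.argmax(model.predict(candidates))]
print("most promising candidate recipe:", best)
```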

The results of the study show that AI can be used to craft perovskite materials for the creation of highly efficient technology to make this type of renewable energy more prevalent, Thomas said. “This can be a guide to design new materials as evidenced by our experimental demonstration,” he said. Researchers published a paper on their work in the journal Advanced Energy Materials.

Indeed, if the team’s technology lives up to its potential, researchers can create a world standard for PSC, paving the way for spray-on solar cells in the near future. “This is a promising finding because we use data from real experiments to predict and obtain a similar trend from the theoretical calculation, which is new for PSCs,” said Thomas. “Perovskites have been a hot research topic for the past 10 years, but we think we really have something here that can move us forward.”

Elizabeth Montalbano is a freelance writer who has written about technology and culture for more than 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco and New York City. In her free time she enjoys surfing, traveling, music, yoga and cooking. She currently resides in a village on the southwest coast of Portugal.


best-ai-stories-of-2019
(Image source: Adobe Stock)

We’ve picked our favorite AI-related stories from 2019.

The 10 greatest issues AI needs to face

While we celebrate the positive impacts of artificial intelligence, let’s not forget there’s also a lot to be concerned about.

The Apple Card Is the Most High-Profile Case of AI Bias Yet

Apple Card users have alleged that its credit decision algorithm discriminates against women.

How AI at the Edge Is Defining Next-Generation Hardware Platforms

Moving AI from the cloud to the edge was a big trend in 2019. Chris Cheng, distinguished technologist on the hardware machine learning team at Hewlett Packard, takes a look at some of the latest research being done on AI inference at the edge.

(Image source: OpenAI)

OpenAI’s Robot Hand Taught Itself How to Solve a Rubik’s Cube

Rubik’s Cube Solving Robot Hand Sparks Debate in the AI Community

Using novel neural networks, OpenAI enabled a robotic hand to learn how to solve a Rubik’s Cube on its own. Concerns regarding the hand have created a debate among engineers and AI experts on social media.

What’s the State of Emotional AI?

Artificial intelligence that can recognize human emotions – emotional AI – has been gaining momentum. But something’s missing. How long until we’ll be seeing it in our devices and cars?

(Image source: TuSimple)

UPS Has Invested in Autonomous Trucks After Ongoing Tests

TuSimple’s Autonomous Trucks Are Being Tested by the USPS

In 2019, TuSimple entered into partnerships with UPS and the US Postal Service to test self-driving trucks for hauling mail freight.

The New Raspberry Pi 4 Is All About AI and Embedded IoT

The Raspberry Pi has grown from a hobbyist machine to an IoT developer platform capable of even handling machine learning applications. Here’s our hands-on look.

A Look at the US/China Battle for AI Leadership

The US and China are waging a behind-the-scenes war over who will emerge as the global powerhouse of artificial intelligence. Where do each country’s strengths and weaknesses lie?

There’s a Diversity Crisis in the AI Industry

A lack of racial and gender diversity at the companies creating AI ties closely with issues of bias and racial discrimination in artificial intelligence algorithms, according to a new NYU study.

(Image source: Pixabay)

Can Trump’s New Initiative Make American AI Great Again?

A look at President Trump’s executive order aimed at accelerating America’s lead in artificial intelligence.

AI Could Make Quantum Computers a Reality

New research is examining the use of artificial intelligence to handle the calculations necessary for quantum computers to function.

2019's-10-best-books-for-engineers-and-technologists

Engineers will find something of interest in these selections, from Heaviside and Silicon Valley, to sustainable manufacturing, organs-on-a-chip, and more.

  • Don’t know what to get the engineer in your life? Here’s a mix of easily understood, yet engaging, books combined with a few hardcore technical works. All of these books were published in 2019, except for two that remain worthy of note today.

  • The Forgotten Genius of Oliver Heaviside: A Maverick of Electrical Science

    By: Basil Mahon

    Publisher: Prometheus

    With the release of the film The Current War, it’s easy to forget the contributions of Oliver Heaviside. While the “current war” focused on the competition between Edison, Westinghouse, and Tesla to bring electricity to all of America, Heaviside (a contemporary of Edison and Westinghouse) was focused on the electrical engineering technology that would help bring mass communication to the country.

    Heaviside gave us the unit step function (remember calculus class?), coaxial cable, and the small coils placed in series with every telephone line to improve the signal by providing inductive loading.

    From the publisher:

    “This biography of Oliver Heaviside profiles the life of an underappreciated genius and describes his many contributions to electrical science, which proved to be essential to the future of mass communications. Oliver Heaviside (1850 -1925) may not be a household name, but he was one of the great pioneers of electrical science: His work led to huge advances in communications and became the bedrock of the subject of electrical engineering as it is taught and practiced today. His achievements include creating the mathematical tools that were to prove essential to the proper understanding and use of electricity, finding a way to rid telephone lines of the distortion that had stifled progress, and showing that electrical power doesn’t flow in a wire but in the space alongside it.

    At first his ideas were thought to be weird, even outrageous, and he had to battle long and hard to get them accepted. Yet by the end of his life he was awarded the first Faraday Medal. This story will restore long-overdue recognition to a scientist whose achievements in many ways were as crucial to our modern age as those of Edison’s and Tesla’s.”

  • Make, Think, Imagine: Engineering the Future of Civilization

    By: John Browne

    Publisher: Pegasus Books

    From the publisher:

    “Today’s unprecedented pace of change leaves many people wondering what new technologies are doing to our lives. Has social media robbed us of our privacy and fed us with false information? Are the decisions about our health, security and finances made by computer programs inexplicable and biased? Will these algorithms become so complex that we can no longer control them? Are robots going to take our jobs? Will better health care lead to an aging population which cannot be cared for? Can we provide housing for our ever-growing urban populations? And has our demand for energy driven the Earth’s climate to the edge of catastrophe? John Browne argues that we need not and must not put the brakes on technological advance. Civilization is founded on engineering innovation; all progress stems from the human urge to make things and to shape the world around us, resulting in greater freedom, health and wealth for all. Drawing on history, his own experiences and conversations with many of today’s great innovators, he uncovers the basis for all progress and its consequences, both good and bad. He argues compellingly that the same spark that triggers each innovation can be used to counter its negative consequences. This book provides a blueprint for how we can keep moving towards a brighter future.”

  • The Code: Silicon Valley and the Remaking of America

    By: Margaret O’Mara

    Publisher: Penguin

    Margaret O’Mara worked in the White House of Bill Clinton and Al Gore in the earliest days of the commercial Internet. There she saw firsthand how deeply intertwined Silicon Valley was with the federal government–and always had been–and how shallow the common understanding of the secrets of the Valley’s success actually was.

    In this work, she tells the story of mavericks and visionaries, but also of powerful institutions creating the framework for innovation, from the Pentagon to Stanford University. It is also a story of a community that started off remarkably homogeneous and tight-knit and stayed that way, and whose belief in its own mythology has deepened into a collective hubris that has led to astonishing triumphs as well as devastating second-order effects.

  • The Design of Coffee: An Engineering Approach

    By: William Ristenpart, Tonya Kuhl

    Publisher: CreateSpace Independent Publishing Platform

    Here’s another work that was published a few years ago but is relevant this year for its emphasis on cross-discipline collaboration, a trend noted in the chemistry industry.

    From the publisher:

    “[This book] provides a non-mathematical introduction to chemical engineering, as illustrated by the roasting and brewing of coffee. Hands-on coffee experiments demonstrate key engineering principles, including material balances, chemical kinetics, mass transfer, fluid mechanics, conservation of energy, and colloidal phenomena. The experiments lead to an engineering design competition where contestants strive to make the best tasting coffee using the least amount of energy – a classic engineering optimization problem, but one that is both fun and tasty! 

    Anybody with access to a sink, electricity, and inexpensive coffee roasting and brewing equipment can do these experiments, either as part of a class or with your friends at home. The Design of Coffee will help you understand how to think like an engineer – and how to make excellent coffee!”

  • Human Compatible: AI and the Problem of Control

    By: Stuart Russell

    Publisher: Viking

    From the publisher:

    “Creating superior intelligence would be the biggest event in human history. Unfortunately, according to the world’s pre-eminent AI expert, it could also be the last. In this book on the biggest question facing humanity, the author explains why he has come to consider his own discipline an existential threat to his own species, and lays out how we can change course before it’s too late. There is no one better placed to assess the promise and perils of the dominant technology of the future than Russell, who has spent decades at the forefront of AI research. Through brilliant analogies and prose, he explains how AI actually works, how it has an enormous capacity to improve our lives – but why we must ensure that we never lose control of machines more powerful than we are. Here Russell shows how we can avert the worst threats by reshaping the foundations of AI to guarantee that machines pursue our objectives, not theirs.”

  • Organ-on-a-Chip: Engineered Microenvironments for Safety and Efficacy Testing

    By: Julia Hoeng (Editor), David Bovard (Editor), Manuel Peitsch (Editor)

    Publisher: Academic Press/Elsevier

    From the publisher:

    “[This book] contains chapters from world-leading researchers in the field of organ on a chip development and applications, with perspectives from life sciences, medicine, physiology and engineering. The book details the field, with sections covering the major organ systems and currently available technologies, platforms and methods. As readers may also be interested in creating biochips, materials and engineering best practice, these topics are also described. Users will learn about the limitations of 2D in-vitro models and the available 3D in-vitro models (what benefits they offer and some examples). Finally, the MOC section shows how the organ on a chip technology can be adapted to improve the physiology of in-vitro models.”

  • Sustainable Engineering Products and Manufacturing Technologies

    By: Kaushik Kumar (Editor), Divya Zindani (Editor), J. Paulo Davim (Editor)

    Publisher: Academic Press/Elsevier

    From the publisher:

    “[This book] provides the reader with a detailed look at the latest research into technologies that reduce the environmental impacts of manufacturing. All points where engineering decisions can influence the environmental sustainability of a product are examined, including the sourcing of non-toxic, sustainable raw materials, how to choose manufacturing processes that use energy responsibly and minimize waste, and how to design products to maximize reusability and recyclability. The subject of environmental regulation is also addressed, with references to both the US and EU and the future direction of legislation.

    “Finally, sustainability factors are investigated alongside other product considerations, such as quality, price, manufacturability and functionality, to help readers design processes and products that are economically viable and environmentally friendly.”

  • Introductory Electrical Engineering With Math Explained in Accessible Language

    By: Magno Urbano

    Publisher: Wiley

    From the publisher:

    “[This work] offers a text that explores the basic concepts and principles of electrical engineering. The author explains the underlying mathematics involved in electrical engineering through the use of examples that help with an understanding of the theory. The text contains clear explanations of the mathematical theory that is needed to understand every topic presented, which will aid students in engineering courses who may lack the necessary basic math knowledge.”

    “Designed to break down complex math concepts into understandable terms, the book incorporates several math tricks and knowledge, such as matrix determinants and multiplication. The author also explains how certain mathematical formulas are derived. In addition, the text includes tables of integrals and other tables to help, for example, find resistors’ and capacitors’ values. The author provides the accessible language, examples, and images that make the topic accessible and understandable.”

  • What Is Data Engineering?

    By: Lewis Gavin

    Publisher: O’Reilly Media, Inc.

    From the publisher:

    “The demand for data scientists is well-known, but when it comes time to build solutions based on data, your company also needs data engineers—people with strong data warehousing and programming backgrounds. In fact, whether you’re powering self-driving cars or creating music playlists, this field has emerged as one of the most important in modern business. In this report, Lewis Gavin explores key aspects of data engineering and presents a case study from Spotify that demonstrates the tremendous value of this role.”

  • Lithium-Ion Battery Failures in Consumer Electronics

    By: Ashish Arora, Sneha Arun Lele, Noshirwan Medora, Shukri Souri 

    Publisher: Artech House

    From the publisher:

    “This comprehensive resource caters to system designers who are looking to incorporate lithium-ion (Li-ion) batteries in their applications. Detailed discussion of the various system considerations that must be addressed at the design stage to reduce the risk of failures in the field is presented. The book includes technical details of all state-of-the-art Li-ion energy storage subsystems and their requirements and provides a system designer a single resource detailing all of the common issues navigated when using Li-ion batteries to reduce the risk of field failures.

    “The book details the various industry standards that are applicable to the subsystems of Li-ion energy storage systems and how the requirements of these standards may impact the design of their system. Checklists are included to help readers evaluate their own battery system designs and identify gaps in the designs that increase the risk of field failures. The book is packed with numerous examples of issues that have caused field failures and how a proper design/assembly process could have reduced the risk of these failures.”

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

the-history-and-future-of-the-bci

Mindflex, from Mattel and NeuroSky, used brainwaves to control a series of fans to navigate a floating ball through an obstacle course. (Image source: NeuroSky)

Remember about five years ago when several consumer headsets from NeuroSky and Emotiv were pitched as computer game controllers? For example, NeuroSky promoted its EEG headset as a training aid for young padawans to focus their minds through various exercises so they could control the movement of tablet-projected holograms. Based upon the wearer’s focus and concentration, the brainwave sensors on the EEG headset would allow them to interact with an application on a tablet.

NeuroSky and others (most notably, Emotiv) have created EEG headsets for gaming, neurological rehabilitation, and other applications. These headsets are examples of brain computer interface (BCI) technology – a computer-based system that acquires brain signals, analyzes them, and translates them into commands that are relayed to an output device like a computer, game console, medical prosthetic, or similar devices.

A BCI doesn’t actually read thoughts but rather detects the smallest of changes in the energy radiated by the brain when you think in a certain way, recognizing specific energy/frequency patterns. Some believe that the combination of humans and technology could be more powerful than artificial intelligence (AI) alone. For example, instead of relying on human perception and reasoning alone, one could use neurotechnologies to improve our perception, as in the case of interpreting a blurry security camera image before deciding whether to take action.
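Those energy/frequency patterns are typically estimated from band power in the EEG signal. Here is a minimal sketch of that first processing step using NumPy and SciPy; the signal is synthetic, and real BCIs add artifact rejection, spatial filtering, and a trained classifier on top.

```python
# Minimal sketch of a classic BCI front end: estimate how much EEG power
# falls in each frequency band. The signal here is synthetic.
import numpy as np
from scipy.signal import welch

FS = 256                                   # sample rate (Hz)
t = np.arange(FS * 4) / FS                 # four seconds of "EEG"
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # alpha + noise

freqs, psd = welch(eeg, fs=FS, nperseg=FS)  # power spectral density, 1 Hz bins

def band_power(lo: float, hi: float) -> float:
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[mask].sum() * (freqs[1] - freqs[0]))  # rectangle rule

bands = {"theta (4-8 Hz)": (4, 8), "alpha (8-13 Hz)": (8, 13),
         "beta (13-30 Hz)": (13, 30)}
print({name: round(band_power(*rng), 3) for name, rng in bands.items()})
# A strong alpha reading might be mapped to a "focused" command.
```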

In the future, perhaps the carbon-based mind will connect directly with silicon-based artificial intelligence, robots, and other minds through BCI technologies to greatly extend our existing senses or maybe add new ones.

In order to appreciate what BCI technologies have to offer, it helps to understand a bit of their history.

Infographic – History of Brain Computer Interface (BCI) Tech

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

5-trends-that-will-guide-automotive-technology-in-2020

Here are five trends that will be playing a key role in making cars safer and more efficient in the years to come.

  • Auto manufacturers have no option other than to realign their strategies in order to accommodate the looming revolution. Connected and electric cars are already on our roads, and the reality of fully autonomous cars is coming closer and closer. Technology is helping auto companies not only modernize their manufacturing processes but also gather, manage, and analyze data. There are also tons of data being generated by vehicles themselves. All of this data will soon be the guiding factor for the automotive industry going forward.

    Here are five trends that will be playing a key role in making rides smoother, safer, and more efficient.

  • 1.) Vehicle VPNs and automotive cybersecurity

    We might not quite be there yet, but we are certainly on the verge of adopting fully autonomous vehicles. There has been a lot of talk surrounding self-driven vehicles, especially in regard to their safety and security. But the promise of connected and autonomous vehicles, and vehicle-to-everything (V2X) communication, also opens up new avenues for hackers to attack our cars.

    Virtual Private Networks (VPNs), which allow users to create secure and private connections across even public networks, have been around for some time now. They even allow you to appear online as if you’re in another country. They have been successfully deployed by consumers and businesses as well as in many high-risk cybersecurity situations, including safeguarding government data.

    With the rise of connected vehicles, it is now clear that car owners and manufacturers are going to be adopting VPNs and other cybersecurity solutions to protect their connected and autonomous cars from cybersecurity threats.

    (Image source: Microchip Technology)

  • 2.) Multimodal mobility

    New options like ridesharing, e-scooters, and electric bikes are transforming the way we think about transportation. Powerful tools have made Big Data collection and analysis seamless. When this data is harnessed under a public-private partnership, it starts to bring flexible, multimodal mobility solutions to life. We are already witnessing this partnership change the travel and tourism industry through white-label journey planning apps. Going forward, urban transportation will get more efficient, streamlined, and, in the long run, sustainable thanks to the adoption of multimodal mobility.

    (Image source: VeoRide)

  • 3.) AI that understands drivers and passengers

    Real-time Big Data analysis enables vehicles to recognize user preferences and automatically adjust their settings in order to make rides more comfortable and customized. Image recognition and processing technologies are also being integrated into cars as a way of training vehicles to identify their owners and users without the need for car keys. Systems like the one being developed by Affectiva can even recognize the emotional states of drivers and passengers. Deep learning is already helping fleet operators monitor drivers remotely. Farther into the future, AI and brain-to-vehicle technologies will also be instrumental in the actualization of driverless car technology.

    (Image source: Affectiva)

  • 4.) Vehicle-to-everything (V2X) communication

    Decision making on our roads is now based on real-time, accurate, well-analyzed data, thanks to the Internet of Things (IoT). V2X technology is bringing connected cars to our roads with the ability to capture and digest data from other vehicles and infrastructure, and then act on that data to make our roads safer and more efficient. IoT connectivity will allow vehicles to assess the effectiveness of features such as their braking and steering systems, perform predictive maintenance, and even update their own firmware and software without human intervention – the sketch below illustrates the predictive-maintenance idea. Experts agree that V2X will also get a big boost from the emergence of 5G.
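
    A toy example of that predictive-maintenance reporting – the message fields are invented for illustration and do not follow any standard V2X message format:

    ```python
    from dataclasses import dataclass, asdict
    import json
    import time

    @dataclass
    class VehicleStatus:
        vehicle_id: str
        speed_kph: float
        braking: bool
        brake_pad_wear_pct: float  # wear estimate reported by the vehicle's sensors

    def needs_service(status: VehicleStatus, wear_threshold: float = 80.0) -> bool:
        # Flag the vehicle for maintenance before the part actually fails.
        return status.brake_pad_wear_pct >= wear_threshold

    msg = VehicleStatus("veh-001", 62.5, False, 83.0)
    payload = json.dumps({**asdict(msg), "ts": time.time()})
    if needs_service(msg):
        print("Schedule brake service:", payload)
    ```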

    (Image source: NXP Semiconductors)

  • 5.) More sensors on the road

    Cars are already packed with sensors, and ever more advanced sensors such as LiDAR and even thermal imaging are being implemented into autonomous cars. But more sensors will also be coming to our roads. Road scanning will use sensors and cameras to scan the road ahead, identifying any possible imperfections or hitches. Smart vehicles will then use that information to adjust their routes accordingly. WaveSense, a Boston-based sensor company, for example, is using ground-penetrating radar to help vehicles map topography.

    (Image source: WaveSense)

As a child, Ariana Merrill loved to figure out how cars worked, and this translated into a love and passion for mechanical engineering. For the past 12 years, Ariana has been helping communities thrive through careful monitoring and innovation of electrical and mechanical systems. A tech enthusiast living in New Jersey, she is a computer science and engineering graduate specializing in artificial intelligence, and she loves to write about how AI is paving the way forward in every industry.

10-semi-electronic-device-tech-reveals-from-ieee-iedm-2019

2019 IEEE IEDM event reveals latest node chips, chiplets, memories for AI, densest thin-film batteries, 400Gbits/s silicon photonics, quantum computing tools and much more.

  • The theme of this year’s 65th IEEE International Electron Devices Meeting (IEDM) was “Innovative Devices for an Era of Connected Intelligence.” As in previous years, major semiconductor players and international research organizations (e.g., imec, CEA-Leti, UC universities, and others) presented their latest detailed technologies for processors, memories, interfaces, and power devices. Additionally, the event covered quantum computing advances, medical uses, and other newer areas of application.

    Here are 10 of the major semiconductor “reveals” at the show for 2019.

  • Leading Edge 5nm Chip with Super Dense Memory

    Moore’s Law may be hitting the wall but it’s not dead yet. TSMC unveiled a complete 5nm technology platform that advanced silicon chip scaling (miniaturization) to the next process node. Reaching the 5nm node milestone was due in part to advances in lithography and improvements in process and packaging techniques.

    TSMC researchers described a 5nm CMOS process optimized for both mobile and high-performance computing. It offered nearly twice the logic density and a 15% speed gain or 30% power reduction over the company’s 7nm process. The process optimization incorporated extensive use of EUV lithography to replace immersion lithography at key points in the manufacturing process.

    TSMC’s 5nm platform also featured FinFETs and high-density SRAM cells. The SRAM could be optimized for low-power or high-performance applications, and the researchers say the high-density version was the highest-density SRAM ever reported. The researchers say high-volume production was targeted for 1H20.

  • Quantum computing 

    Great strides have been made in quantum computing. At the Semicon West/Electronic System Design (ESD) 2019 conference, IBM displayed its IBM Q Experience, a cloud-based quantum computer available for free to anyone with a web browser and an internet connection.

    Creating a quantum computer has been an amazing technological achievement, but like any computer it needs software. Imec – the international Flemish R&D nanoelectronics organization – presented the first step toward developing a systematic approach to the design of quantum computing devices.

    EDA chip design software such as TCAD is necessary to produce highly accurate models of semiconductor devices and their operation. To date, no analogous tools exist to model qubits, the basis of quantum computing, because the field is so new and complex. If these design tools did exist, the development of quantum computers could take place much more quickly.

    The Imec team has taken a step toward creating such a software framework, using multiphysics simulation methods to develop a comprehensive design methodology for qubits built in silicon. They modeled device electrostatics, stress, micro-magnetics, band structure and spin dynamics. Based on the results of these studies, they say that single-electron qubits in quantum dots can be induced and optimized in silicon MOSFETs with thin (<20nm) gate oxides. The researchers discussed critical aspects of their methodology, the parameters they modeled, and the next steps.

  • 3D Chiplets

    Intel presented a novel 3D heterogeneous integration process for chiplet creation. It is seen as an evolution of Moore’s Law, a way to keep the scaling, size and cost benefits continuing into the foreseeable future.

    Chiplets are a type of advanced packaging that offers a different way to integrate multiple dies into a package or system. There are a number of ways to make chiplets, but all use a library of modular chips – like Lego building blocks. These modular chips are assembled in a package that connects them using a die-to-die interconnect scheme.

    There are many other approaches to combining chip dies – e.g., 2.5D designs, in which dies are stacked on top of an interposer. But the hope with a chiplet approach is that it’s a faster and less expensive way to assemble various types of third-party chips like processors, memory, interfaces and the like.

    Here are the details: Intel believes that heterogeneous 3D integration will drive scaling. CMOS technology requires both NMOS and PMOS devices. Intel researchers used 3D sequential stacking architecture to combine these different devices. They first built Si FinFET NMOS transistors on a silicon wafer. On a separate Si wafer they fabricated a single-crystalline Ge film for use as a buffer layer. They flipped the second wafer, bonded it to the first, annealed them both to produce a void-free interface, cleaved the second wafer away except for the Ge layer, and then built gate-all-around (GAA) Ge-channel PMOS devices on top of it. The researchers say these results show that heterogeneous 3D integration is promising for CMOS logic in highly scaled technology nodes.

    This image shows a schematic and a cross-section of a fully processed 3D CMOS transistor structure achieved by this process; in the middle is a thickness contour map of the Ge transfer layer, showing good uniformity; and at right is a 3D cross-sectional view of the completed 3D CMOS chip showing Ge-channel GAA transistors on top of Si FinFET NMOS transistors.

  • AI That Doesn’t Forget

    Embedded STT-MRAM and other non-volatile memories (NVMs) are getting a lot of attention lately. NVM devices retain their contents even after power is removed. Embedded STT-MRAM is one NVM that shows particular promise for embedded cache memory in IoT and AI applications.

    At IEDM 2019, TSMC described a versatile 22nm STT-MRAM technology for AI while Intel talked about STT-MRAMs for use in L4 cache applications.

    In STT-MRAM writing, an electric current is polarized by aligning the spin direction of the electrons flowing through a magnetic tunnel junction (MTJ) element. Data is written by using the spin-polarized current to change the magnetic orientation of the information storage layer in the MTJ element. Intel improved the process and stack for L4 cache applications. STT-MRAM technology for L4 cache requires tighter bitcell pitches, which translate into smaller MTJ sizes and reduced available write current.

  • Organ Forceps With a Special Touch

    Our internal organs are slippery because they’re covered with blood and other body fluids, so grasping and pulling them with forceps can be challenging. Although contact-force sensors have been placed on the tips of forceps used in diagnostic laparoscopic and robotic surgeries, there currently is no way to know if they are slipping, other than visually via a monitor, which has limited usefulness. A Kagawa University team described a highly sensitive slip-sensing imager (sub-mm resolution) and novel algorithm that can, in effect, give forceps a sense of touch. The idea is to use the device to visualize the spatial distribution of the grasping force across the organ’s surface. The center of that distributed load is calculated, and as the forceps are moved the algorithm relates any corresponding movements of the load center to slippage. Built on an SOI wafer, the device’s force-sensor pixels consist of a 20µm–thick piezoelectric silicon diaphragm (400µm diameter) with a center contact, and with a force detection circuit integrated on the diaphragm. The diaphragm acts as a strain gauge as it flexes due to varying grasping force.
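
    The core of that algorithm is easy to sketch. Assuming the imager delivers a 2D map of grasping force per pixel, the load center is a weighted centroid, and slip is flagged when that centroid moves between frames. This is a simplified reconstruction of the idea as described, not the Kagawa team’s published code; load_center and slipping are illustrative helper names:

    ```python
    import numpy as np

    def load_center(force_map: np.ndarray):
        """Weighted centroid of the grasping-force distribution."""
        total = force_map.sum()
        if total == 0:
            return None  # nothing grasped
        ys, xs = np.indices(force_map.shape)
        return (float((ys * force_map).sum() / total),
                float((xs * force_map).sum() / total))

    def slipping(prev_center, curr_center, threshold_px: float = 2.0) -> bool:
        """Flag slip when the load center moves more than a threshold between frames."""
        if prev_center is None or curr_center is None:
            return False
        dy = curr_center[0] - prev_center[0]
        dx = curr_center[1] - prev_center[1]
        return (dx * dx + dy * dy) ** 0.5 > threshold_px
    ```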

  • Impedance Sensor for Fingerprint Imaging

    Researchers led by Cornell discussed the monolithic integration of a piezoelectric aluminum nitride (AlN) resonator into a CMOS-controlled, GHz ultrasonic impedance sensor/imager. The device measures changes in surface properties such as surface oxidation, materials, liquid viscosity and others, and is meant for use in wearable, IoT and smartphone systems to detect fingerprints with high resolution, determine tissue states, and for other applications. This is the first time monolithic fabrication – all in one chip or die – has been successfully demonstrated, and it led to small, power-efficient GHz sensing arrays with improved performance vs. the standard two-chip heterogeneous integration approach, thanks to less parasitic coupling and a higher signal-to-noise ratio.

  • Thin-Film Battery Goes High-Density

    The miniaturization of power sources hasn’t kept pace with the miniaturization of electronics. Although integrated electrochemical capacitors offer high power density, high frequency response and novel form factors, their low energy densities are of limited value for MEMS and autonomous device applications that require long periods between charging. CEA-Leti researchers discussed a thin-film battery (TFB) with the highest areal energy density yet reported (890 µAh/cm²) and a high power density (450 µAh/cm²). Built on silicon wafers using UV photolithography and etching for the successive deposition and patterning of each layer, the thin-film battery integrates a 20µm-thick LiCoO2 cathode in a Li-free anode configuration. It showed good cycling behavior over 100 cycles, and the fact that it was built using a wafer-level process opens up the possibility of tightly integrating this battery technology with future electronic devices.

  • Physically Unclonable Function (PUF) for Mobile and Smart Devices

    The spread of networked mobile devices and smart gadgets in the IoT landscape has created an urgent need to protect them with lightweight, low-power cryptographic solutions. A physically unclonable function (PUF) is a hardware-intrinsic security primitive: a circuit whose manufacturing variations give each chip a unique, effectively uncopyable fingerprint. UC Santa Barbara researchers discussed an ultra-low-power PUF that operates on the varying electrical resistances and current leakages that arise from intrinsic process variations in ReRAM crossbar arrays. The team built 4K-ReRAM passive crossbar circuit arrays fabricated with a CMOS-compatible process suitable for back-end-of-the-line (BEOL) integration. The arrays allow for an extremely large number of challenge-response pairs (the basis of a common cryptographic protocol), as well as 4x better density vs. other ReRAM architectures, plus a ~100x improvement in power efficiency and more robust security metrics.
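
    The challenge-response protocol built on such an array is simple to sketch; the security comes from the physical device, which can’t be copied. In this illustrative sketch – not the UCSB team’s implementation – puf_eval stands in for reading the ReRAM crossbar:

    ```python
    import secrets

    def enroll(puf_eval, n_pairs: int = 1000, challenge_bits: int = 64) -> dict:
        """While the device is still trusted, record responses to random challenges."""
        return {c: puf_eval(c)
                for c in (secrets.randbits(challenge_bits) for _ in range(n_pairs))}

    def authenticate(puf_eval, crp_table: dict) -> bool:
        """Replay a stored challenge; only the genuine device reproduces the response."""
        challenge, expected = crp_table.popitem()  # each pair is used once, then discarded
        return puf_eval(challenge) == expected
    ```

    Because each challenge-response pair is discarded after use, an eavesdropper who records one exchange learns nothing useful for the next – hence the value of an array that supports an extremely large number of pairs.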

  • Silicon photonics

    Data races around within data centers at very high speed via optical fiber, using silicon photonic (light-based) interfaces that operate at 100 Gbits/s. But cloud data center traffic is growing at nearly 30% per year, and there soon will be a need to increase data rates. An STMicroelectronics-led team described a new silicon photonics technology platform built on 300mm Silicon-on-Insulator (SOI) wafers, yielding devices that operate at 400 Gbits/s (each device has four channels, each of which operates at 100 Gbits/s, for a total of 400 Gbits/s).

    Optical coupling and polarization management are key requirements, and their devices incorporate a 60 GHz high-speed photodiode and a high-speed phase modulator. They also built devices with a supplementary SiN waveguide layer for higher coupling efficiency, to meet evolving data-transmission requirements. The researchers say the photonics platform has the potential to meet the requirements of applications other than data centers, too, such as automotive.

    The image is a photo of the chip-on-board assembly of an analog front-end (AFE) function implemented in a 400G-DR4 optical transceiver using the technology, alongside PAM4 signal eye diagrams at 106 Gbits/s per channel, used to measure high-speed signal quality.

  • 5G and beyond

    One of the challenges for chip makers is how to integrate III-V materials with silicon to make ultra-fast devices for 5G and other uses that are compatible with conventional CMOS technology. III-V compound semiconductors are obtained by combining group III elements (essentially Al, Ga, In) with group V elements (essentially N, P, As, Sb). This gives 12 possible combinations; the most important ones are probably GaAs, InP, GaP and GaN.

    IoT and 5G applications typically use sensors that transmit wireless data to an edge or cloud network. This requires a combination of RF capabilities with a small form factor and low operating power. A promising approach to achieving this combination is to create single chips that combine the capabilities of silicon CMOS with those of III-V devices, such as gallium nitride (GaN) and indium gallium arsenide (InGaAs). The unique properties of III-V compounds make them well suited for optoelectronics (LEDs) and communications (5G).

    At IEDM, Intel described how low-leakage, high-k dielectric, enhancement-mode GaN NMOS and Si PMOS transistors were built monolithically on a 300mm Si substrate. The goal was to combine GaN’s high-frequency/-temperature/-power attributes with silicon CMOS circuitry’s digital signal processing, logic, memory and analog capabilities, to create compact devices for next-generation solutions for power delivery, RF and system-on-chip (SoC) applications. The researchers say both device types demonstrated excellent performance across a range of electrical specifications.

    III-V materials offer higher electron mobilities than silicon, and HBTs made from them are very fast transistors often used for RF and other high-frequency applications. A key goal is to build them on 300mm silicon wafers instead of other substrates, to take advantage of silicon’s lower manufacturing costs. A team led by imec described how they used a unique nano-ridge engineering technique to build GaAs/InGaP HBTs on a 300mm silicon substrate.

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

the-10-greatest-issues-ai-needs-to-face
  • There are a lot of reasons to be excited about artificial intelligence. AI is transforming industries in innovative ways and even enabling entirely new business models to emerge.

    But there are also a lot of reasons to be cautious about AI. The 2019 AI Now Report, created by the AI Now Institute, takes a look back on the social impact of AI in 2019, and some of the most important issues surrounding the technology as it moves forward. The AI Now Institute is a non-profit, interdisciplinary research institute “dedicated to understanding the social implications of AI technologies.”

    “This year we saw a wave of pushback, as community groups, researchers, policymakers, and workers demanded a halt to risky and dangerous AI,” the report says.

    As AI moves into the next decade, we’ve outlined some of the most important issues it will have to grapple with in the coming years.

  • 1.) Algorithmic bias is already affecting us

    As more and more AI algorithms are implemented into decision-making processes in everything from real estate to healthcare, it is important for developers to be aware of the inherent biases within the datasets they use to train AI.

    Apple’s Apple Pay service recently came under fire from customers – including Apple co-founder Steve Wozniak – over allegations that the service’s approval system was assigning lower credit limits to female customers.

    Experts agree it will likely be impossible to completely safeguard systems against bias, but steps can be taken to mitigate its impact – starting with measuring it, as the sketch below illustrates.
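
    One simple measurement is to compare favorable-outcome rates across groups in the decision data. Here is a minimal sketch using a common fairness metric, demographic parity, on an invented dataset (the data and the function name demographic_parity_gap are illustrative, not from any cited audit):

    ```python
    import pandas as pd

    def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
        """Gap between the highest and lowest favorable-outcome rates across groups."""
        rates = df.groupby(group_col)[outcome_col].mean()
        return float(rates.max() - rates.min())

    # Hypothetical credit decisions, for illustration only
    decisions = pd.DataFrame({
        "gender":   ["F", "M", "F", "M", "F", "M"],
        "approved": [0,   1,   0,   1,   1,   1],
    })
    print(demographic_parity_gap(decisions, "gender", "approved"))  # ≈0.67 – a red flag
    ```

    A large gap doesn’t prove discrimination on its own, but it tells developers exactly where to look before a system ships.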

    (Image source: Apple)

  • 2.) Facial recognition is watching us

    Facial recognition is already here and being widely deployed throughout the world. In China, facial recognition technology has become part of surveillance and security systems and even allows customers to use their faces to access ATMs.

    While there is an argument for convenience and security, there are also widespread privacy and ethics concerns around using AI facial recognition. The city of Detroit is facing pushback over plans to add facial recognition to its Project Green Light – a camera system that allows police departments to monitor businesses and intersections in real time.

    In 2019 the cities of Oakland, Calif., Somerville, Mass., and San Francisco passed ordinances banning municipal use of facial recognition technology.

    By contrast, however, the Department of Homeland Security (DHS) announced plans to issue a proposed regulation that could require all travelers, including US citizens, to submit to face and other biometric scans at airports and other ports of entry.

    Regarding the DHS announcement, ACLU Senior Policy Analyst Jay Stanley had this to say:

    “Time and again, the government told the public and members of Congress that US citizens would not be required to submit to this intrusive surveillance technology as a condition of traveling. This new notice suggests that the government is reneging on what was already an insufficient promise.”

    (Image source: teguhjati pras from Pixabay)

  • 3.) Deepfakes are a reality

    If you want to see the power of deepfakes, you only need to browse YouTube channels like Ctrl Shift Face.

    This isn’t a special effect. With enough data (including images and audio), AI algorithms can reconstruct and superimpose individuals’ faces onto existing video footage. It makes for some entertaining viral videos, but there are wider, more frightening implications for deepfakes, as they can be used to create fraudulent videos of political figures, celebrities, and even private citizens. Left unchecked, deepfakes could become a powerful tool for the spread of misinformation.

    (Image source: Ctrl Shift Face)

  • 4.) Algorithms are ruining our social media experience

    Have you ever watched one video on YouTube or liked a post on Facebook or other social media only to be sent down a rabbit hole of increasingly questionable recommendations? That’s not an accident – that’s AI trying to predict what you’ll “like.” And by “like” we mean it’s trying to figure out what content you’re most likely to engage with – and that often means offending or shocking you. Algorithmic issues are being blamed for both a rise in the quantity of extremist content on social media as well as its proliferation. Google, Facebook, and others have pledged to search for ways to tamp down on the spread of dangerous and extremist content as well as misinformation. But many would argue the damage has already been done.
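
    The underlying mechanic is engagement-maximizing ranking. A toy version – all titles, probabilities, and the share_weight parameter are invented – shows why provocative content floats to the top:

    ```python
    # Toy feed ranking: items predicted to provoke the strongest reaction win,
    # regardless of accuracy or tone.
    videos = [
        {"title": "Calm explainer",       "p_click": 0.08, "p_share": 0.01},
        {"title": "Outrage-bait take",    "p_click": 0.31, "p_share": 0.12},
        {"title": "Conspiracy deep-dive", "p_click": 0.27, "p_share": 0.18},
    ]

    def engagement_score(v, share_weight=3.0):
        # Shares spread content to new viewers, so they are often weighted more heavily.
        return v["p_click"] + share_weight * v["p_share"]

    for v in sorted(videos, key=engagement_score, reverse=True):
        print(v["title"])  # the calm explainer comes last
    ```

    Nothing in that objective asks whether the content is true or healthy – which is precisely the problem critics point to.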

    (Image source: Pixelkult from Pixabay)

  • 5.) AI is a powerful tool for hacking

    Automation is meant to handle the dirty, dangerous, and repetitive tasks humans can’t or don’t want to perform, right? Well, the benefits cut both ways. More and more malicious hackers are leveraging AI technology to assist with sophisticated cybersecurity attacks. A well-trained algorithm can attack a target with a level of speed and efficiency that would be difficult for a single hacker, or even a large group of hackers, to match. Fortunately, cybersecurity companies like XM Cyber are fighting fire with fire, using machine learning algorithms to safeguard networks and sensitive systems as well.

    (Image source: XM Cyber)

  • 6.) AI developers lack diversity

    Issues with AI can be correlated to a lack of racial and gender diversity among the engineers and developers being hired at the top technology companies working on AI. The AI Now Institute has found that Black and Latinx workers are substantially underrepresented in the tech workforce, and women are particularly underrepresented as AI researchers.

    “Rather than recognizing the scale and systemic nature of the problem, tech companies have responded to mounting evidence of bias and misuse by primarily focusing on narrow diversity solutions,” the AI Now Institute said. “They have also attempted technical debiasing, working to ‘fix’ algorithms and diversify data sets, even though these approaches have proven insufficient and raise serious privacy and consent concerns. Notably, neither approach addresses underlying structural inequalities.”

    (Image source: PixLoger from Pixabay)

  • 7.) AI isn’t green

    As engineers come to terms with the realities of climate change and the need to develop greener technologies, AI is having its own energy crisis. The massive amount of compute power required for AI also comes with a massive energy bill.

    “As a whole, the industry’s energy dependence is on an exponential trajectory, with best estimates showing that its 2020 global footprint amounts to 3–3.6 percent of global greenhouse emissions, more than double what the sector produced in 2007,” the AI Now Institute said. “This is comparable to that of the aviation industry, and larger than that of Japan, which is the fifth biggest polluter in the world.”

    Tech companies are already implementing renewable energy sources and other means to make data centers more efficient. But the emergence of 5G and other advanced networking technologies only threatens to make the problem worse before it gets better. “In the worst-case scenario, this footprint could increase to 14 percent of global emissions by 2040,” the Institute warned.

    (Image source: Free-Photos from Pixabay)

  • 8.) AI helps privatize public infrastructure

    “Troubling partnerships between government and private tech companies also emerged as a trend this year, especially those that extended surveillance from public environments into private spaces like private properties and the home,” the AI Now Institute said.

    In 2019 the city of Detroit established the “Neighborhood Real-Time Intelligence Program,” a $9 million, state- and federally-funded initiative that would expand the city’s Project Green Light surveillance system to 500 intersections, in addition to the 500 businesses where it is already deployed, as well as add facial recognition technology to the system. The city has reported reduced crime in areas thanks to Project Green Light, but that hasn’t stopped privacy advocates from protesting the technology.

    In 2018, Amazon came under fire for offering to let police departments utilize its facial recognition software. The company has also negotiated with over 700 police departments in the US to give police access to videos from Ring smart home cameras if the footage can help with a criminal investigation, according to the AI Now Institute.

    (Image source: Pixabay)

  • 9.) Automation impacts people of color and the poor the most

    The debate about automation and labor likely won’t ever stop. But the narrative is taking new shape as more data emerges about specific groups affected by rapid automation due to AI.

    Depending on who you ask, automation will be a boon to the economy as well as personal productivity, or it will usher in a dystopian nightmare where humans struggle for basic needs while robots handle all of the jobs.

    “Both narratives are predicated on the assumption that automation in the workplace is inevitable and that automated systems are capable of performing tasks that had previously been the work of humans. What is missing from both conflicting narratives is the more nuanced prediction of who will be harmed and who will benefit from labor automation in the years to come,” the AI Now Institute said.

    The 2019 AI Now Report predicts that Black, Latinx, and low-wage workers in the US will be disproportionately impacted by increased levels of automation.

    (Image source: mohamed_hassan from Pixabay)

  • 10.) AI is removing the ‘human’ from human resources

    More and more companies are using AI to manage and oversee workers, and AI is even being implemented into the hiring process. Amazon, for example, uses an AI system to set shifting performance goals for its warehouse workers. Each worker is assigned a productivity “rate” to hit each day, based on their prior performance and the overall goals of the warehouse – a mechanism along the lines of the hypothetical sketch below.
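
    As a purely hypothetical sketch of how such a rate might be derived – this is not Amazon’s actual system; the function, blend weight, and numbers are invented – blending a worker’s own recent average with a warehouse-wide target already shows how the bar keeps ratcheting upward:

    ```python
    def next_daily_rate(prior_rates, warehouse_target, blend=0.5):
        """Hypothetical: blend a worker's recent average with the warehouse target."""
        personal_baseline = sum(prior_rates) / len(prior_rates)
        return blend * personal_baseline + (1 - blend) * warehouse_target

    # A worker averaging ~248 units/day gets pushed toward a 300-unit target.
    print(next_daily_rate([240, 255, 250], warehouse_target=300))  # ≈274 units/day
    ```

    Under any scheme like this, each day’s performance feeds the next day’s quota, with the consequences for falling behind described below.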

    “If a worker falls behind, they are subject to disciplinary action. In many warehouses, termination is an automated process (not unlike being “kicked off” a gig-economy platform),” the AI Now Institute said. “According to Abdi Muse, an organizer with Amazon warehouse workers in Minneapolis, if workers fall behind the algorithmically set productivity rate three times in one day, they are fired, however long they may have worked for the company, and irrespective of the personal circumstances that led to their ‘mistakes.’ ”

    “The introduction of AI-enabled labor-management systems raises significant questions about worker rights and safety. The use of these systems—from Amazon warehouses to Uber and InstaCart—pools power and control in the hands of employers and harms mainly low-wage workers (who are disproportionately people of color) by setting productivity targets linked to chronic injuries, psychological stress, and even death and by imposing unpredictable algorithmic wage cuts that undermine economic stability.”

    (Image source: iosphere / Freedigitalphotos.net)