the-12-best-innovations-of-ces-2020

Forget new TVs and smartphones. These are the real game changers introduced at CES 2020.

  • Now that the smoke has cleared from CES 2020, we can take a step back and see which technologies were the real innovations of 2020. Let’s be honest: CES can be a black hole of vaporware, false promises, and concepts intended to remain just that.

    We’ve compiled a list of our favorite technologies introduced at CES 2020 – innovations that we’re sure will have a lasting impact in 2020 and beyond.

  • AerNos AerSIP Gas Sensor

    The AerSIP from AerNos is a 5 x 5-mm, multi-gas sensing module that combines nanotechnology and machine learning algorithms to monitor indoor and outdoor air quality. The system-in-package (SIP) is an embedded plug-and-play solution that can be integrated into wearables, mobile devices, and other IoT devices and is capable of detecting hazardous gases and other dangers at parts-per-billion levels.

    (Image source: AerNos/CES)

  • AMD Ryzen 4000 Series Mobile Processor

    AMD’s Ryzen 4000 could be a game changer for high-end laptop users – particularly gamers and designers. AMD says its new Ryzen 4000 series is the world’s first 7-nanometer laptop processor. Designed for ultra-thin laptops, the Ryzen 4000 series features up to 8 cores and 16 threads and configurable 15W thermal design power. AMD pledges the Ryzen 4000 series offers up to four percent greater single-thread performance and up to 90 percent faster multithreaded performance than its competitors, as well as up to 18 percent faster graphics performance over competing chips.

    (Image source: AMD)

  • Atmosic Technologies M3 Battery-Free Bluetooth 5 SoC

    Atmosic says its M3 Battery-Free Bluetooth 5 SoC uses so little power that it can eliminate the need for battery power entirely in devices such as wearables, keyboards, mice, asset trackers, beacons, and remotes. The M3 integrates Atmosic’s Lowest Power Radio, On-demand Wake-Up, and Managed Energy Harvesting technologies to deliver what the company says is 10 to 100 times lower power than other SoCs, while still complying with Bluetooth standards. The M3’s radio uses two “ears” – one for listening in a low-power state to perceive incoming commands, and another that only wakes when alerted. The SoC uses energy harvesting technology to gather power from radio frequency, photovoltaic, thermal, and motion sources.

    (Image source: Atmosic)

  • Bot3 Zen-P VSLAM Deep Learning Module

    Bot3’s Zen-P VSLAM Deep Learning module integrates visual simultaneous localization and mapping (VSLAM) technology (a version of the same technology used in autonomous vehicles) into mobile robots ranging from industrial machines to smart home products. Bot3’s image processing algorithm, Pascal, allows for autonomous navigation without tracks, as well as indoor mapping and positioning (for instance, in warehouse applications).

    (Image source: Bot3)

  • BrainCo BrainRobotics Prosthetic Hand

    Many companies have been developing mind-controlled prosthetics for amputees and other disabled patients. What separates the prosthetic hand developed by BrainRobotics is the integration of AI technology. The BrainRobotics hand utilizes machine learning to allow the hand and its user to learn from each other over time – leading to more lifelike movements. The company is aiming to provide accurate and reliable prosthetics at an affordable price for all patients. BrainRobotics is a subsidiary of BrainCo, a software developer focused on brainwave measuring and monitoring.

    (Image source: BrainCo/BrainRobotics)

  • Fluent.ai MultiWake Word and Voice Control Engine

    Fluent.ai is a technology company focused on AI for voice interfaces and speech recognition. The company’s Multi-Wake Word and Voice Control Engine is an edge-based, noise-robust, and multilingual speech technology that consumes minimal power and storage, allowing it to be embedded in small devices. The solution is Cortex M4-based and supports four separate wake words and 100 multilingual commands, according to Fluent.ai.

    Fluent.ai has recently partnered with semiconductor designer Ambiq Micro to implement Fluent.ai’s software solutions into Ambiq’s ultra-small footprint, low-power microcontrollers. Ambiq’s MCU supports frequencies up to 96 MHz, and Fluent.ai’s solution requires only 16 MHz from the MCU. The new partnership means Fluent.ai and Ambiq will be releasing MCUs for OEMs looking for an easy way to add speech recognition and voice command functionality to their smart home devices and other products.

    (Image source: Fluent.ai / CES)

  • Intel Tiger Lake Chip

    When Intel announces a new chip, the whole world takes notice. The chipmaking giant is launching its latest chip for consumers this year. Dubbed Tiger Lake, the new chip is said to be optimized for AI performance, graphics, and USB 3 throughput. Rather than desktops, the new chips will be focused on mobile devices such as ultra-thin laptops and tablets. The first products featuring Tiger Lake are expected to ship later in 2020.

    (Image source: Intel)

  • Monster MultiLink Bluetooth Technology

    Sometimes it’s the most straightforward ideas that can make the biggest difference. Most of us love our Bluetooth wireless headphones and earbuds. The problem is they don’t create a sharable experience. What if you want to show your friend the video you’re watching without disturbing the people around you? Monster has debuted a new technology called Music Share that uses MultiLink technology to send Bluetooth audio to multiple devices in sync. The technology expands how Bluetooth headphones can be used and opens up new use cases ranging from air travel to fitness classes, as well as new avenues for social interaction.

    (Image source: Bluetooth SIG)

  • Murata Coral Accelerator Module

    Working in partnership with Coral and Google, Murata Electronics has developed what it is calling the world’s smallest AI module. The Coral Accelerator Module packages Google’s Edge TPU ASIC into a miniaturized footprint to enable developers to embed edge-based AI into their products and devices. The new module forms an integral part of Coral’s integrated AI platform, which also includes a toolkit of software tools and pre-compiled AI models.

    (Image source: Murata Electronics Americas)

  • Pollen Robotics Reachy Open-Source Robot

    Reachy is a robot developed by Pollen Robotics, in collaboration with the INCIA Neuroscience Institute in France, that is fully open source. The robot, which can be programmed using Python, is modular – employing a variety of 3D-printed grippers – and comes with prepackaged AI algorithms to allow developers to customize it for a variety of applications ranging from customer service to assisting the elderly or disabled.

    Read more about Reachy, and the rise of open-source robotics, here.

    (Image source: Pollen Robotics)

  • VRgineers 8K XTAL Headset

    VRgineers, a maker of premium VR headsets for enterprise applications in industries ranging from automotive to defense and military, has released a major upgrade to its flagship XTAL headset. The latest version of XTAL features 8K resolution (4K per eye), improved lenses with a 180-degree field-of-view, and a new add-on module for augmented reality and mixed reality functionality. The headset also still includes eye tracking as well as integrated Leap Motion sensors to enable controller-free navigation and interactions.

    (Image source: VRgineers)

  • zGlue ChipBuilder

    zGlue is a software company that develops tools for chipmakers and designers. Its latest offering, ChipBuilder 3.0, is a design tool for building custom silicon chips and accelerating time to market. The software suite features an expansive library of chipsets and allows engineers to capture schematics, route and verify designs, and download netlists. The tool lets engineers create realistic 3D models, code their own chips, and even place orders for physical chips via zGlue’s Shuttle Program.

    (Image source: zGlue / CES)

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

fiber-optic-sensor-moves-robot-in-near-real-time

Although not as prominent at CES 2020, fiber optic sensing technology has been a highlight of past shows. Fiber optic sensing measures changes in the backscattered light in a fiber cable, which occur when the fiber undergoes vibration or strain. When attached to an opto-electrical connection, fiber optic sensing can be used as a hyper-sensitive measurement device for electronic systems.
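
The underlying math is simple to sketch. As an illustration only – not NASA’s or Fraunhofer’s actual FOSS implementation – the standard fiber Bragg grating relation converts a measured shift in reflected wavelength into strain; the photo-elastic coefficient below is a typical value for silica fiber:

```python
# Illustrative sketch: turning a measured Bragg-wavelength shift in an
# optical fiber into axial strain, using the standard FBG relation
#   strain = delta_lambda / (lambda0 * (1 - p_e))

def strain_from_wavelength_shift(delta_lambda_nm, lambda0_nm=1550.0, p_e=0.22):
    """Return axial strain (dimensionless) from a Bragg wavelength shift.

    delta_lambda_nm : measured shift in the reflected wavelength (nm)
    lambda0_nm      : unstrained Bragg wavelength (nm); 1550 nm is common
    p_e             : effective photo-elastic coefficient of silica (~0.22)
    """
    return delta_lambda_nm / (lambda0_nm * (1.0 - p_e))

# A 1.2 nm shift around a 1550 nm Bragg wavelength is roughly 1,000 microstrain.
strain = strain_from_wavelength_shift(1.2)
print(f"{strain * 1e6:.0f} microstrain")
```

Sensitivities on this order – picometer-scale wavelength shifts resolving microstrain – are what make these fibers attractive for monitoring aircraft and civil structures.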

NASA, among other R&D agencies, began developing Fiber Optic Sensing Systems (FOSS) technologies over 5 years ago. Innovators at NASA’s Armstrong Flight Research Center began using FOSS to monitor the safety of aircraft structures in flight, but quickly found other uses for the technology in civil structures, transportation, oil and gas, medical, and many more spaces.

Image Source: Fraunhofer / SPIE Photonics / John Blyler

Germany’s Fraunhofer, one of Europe’s largest application-oriented research organizations, has been exploring a related technology: fiber optical 3D shape sensing. One application under study is real-time sensing of the fiber’s shape and position anywhere along its length. Such sensors provide highly accurate measurements as the fiber twists and bends at every point along the sensor.

A few years back, Fraunhofer showed the value of using fiber optic sensing to accurately control the movements of a robot. The video below provides a convincing demonstration of this technology.

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

human-augmentation-makes-list-of-2020-strategic-techs

Ever the adaptive species, human beings have long used technology to augment their natural capabilities. Even prehistoric cave dwellers used obsidian rocks to sharpen sticks into fine points, effectively extending their arms and thus the targets they could hit.

Today, humans use electronic and mechanical technology to physically augment their bodies. This is done by attaching or implanting some type of device to improve their capability to go beyond the current human experience, e.g., 3D-printing an appendage or interfacing directly with the digital world through a brain-computer interface (BCI). The former is an example of how technology can enhance a person’s physical capabilities, while the latter is related to cognitive improvements.

Like our cave-dwelling ancestors, many of today’s augmentations offer fairly simple improvements. Consider the example of an extra thumb. That’s exactly what Dani Clode, a grad student at London’s Royal College of Art (RCA), has done with her third thumb project. This augmentation is a 3D-printed prosthetic that simply extends a user’s grip. The extra thumb straps onto the hand, which connects to a bracelet containing wires and servos. The wearer controls the thumb via pressure sensors located under the soles of their feet. Pressing down with one foot sends a signal via a Bluetooth device that causes the thumb to grasp.

Be it simple or complex, human augmentation has made the list of Gartner Top 10 Strategic Technology Trends for 2020. The report cites several growing market areas where implanted or hosted physical augmentations improve both worker health and companies’ financial bottom lines. For example, the automotive and mining industries use wearables to improve worker safety. In other industries, such as retail and travel, wearables are used primarily to increase worker productivity.

The report lists four main categories of physical augmentation: sensory augmentation (hearing, vision, and perception), appendage and biological function augmentation (exoskeletons, prosthetics), brain augmentation (implants to treat seizures), and genetic augmentation (somatic gene and cell therapy). Each of these categories is worthy of a separate discussion. For now, the timeline below will focus on one portion of the second category, namely, prosthetics.

Modern human augmentation, sometimes called “Human 2.0,” would not be possible without the advances offered by semiconductor electronics (e.g., processors, memory, sensors, and wireless tech) and the related advancements in robotics. Thus, our brief timeline starts with the advent of the transistor.

liberty-defense's-radar-can-detect-undetectable-ghost-guns
Hexwave can detect guns and other dangerous items on people in real time and at a walking pace. (Image source: Liberty Defense)

On the day of his 16th birthday, Nathaniel Berhow, a student at Saugus High School in Santa Clarita, Calif., shot and killed two of his classmates and wounded three others. The tragedy mirrored so many others that have been happening all over the US at an alarming rate in recent years. But one particular detail of the .45 caliber handgun Berhow used in the shooting caught the attention of law enforcement and media.

The gun was a “ghost gun,” so-called because it contained no serial number – making it difficult, if not impossible, for law enforcement to trace. Ghost gun parts can even come from multiple sources, further muddying the chain of custody for law enforcement.

The gun was assembled from a kit of parts purchased online. Sellers of such kits argue that since they are only selling gun components rather than fully assembled firearms, they are legally in the clear.

Ghost guns can even be assembled using 3D-printed parts, printed by a seller or by the buyer in their own home. Files to print gun parts are readily available on the Internet and their legality is the subject of an ongoing legislative debate.

The modular nature of these guns, as well as their plastic components, also means it can be easy to sneak them past metal detectors.

The Saugus High School shooting was only the latest in a series of increasingly high-profile incidents involving ghost guns. But while law enforcement and government agencies are grappling with legislating and tracing ghost guns and 3D-printed guns, other groups and companies are working on the ground level to develop technologies to make these firearms easier to detect.

One such company is Vancouver, British Columbia-based Liberty Defense, a spin-off from MIT’s Lincoln Lab. Liberty Defense has developed a technology it calls Hexwave that combines machine learning and computer vision with an array of radar antennas to detect weapons regardless of their material composition. The goal is to augment existing security systems in airports, malls, stadiums, and other public spaces with a mobile system capable of detecting not only concealed weapons, but bullets, magazines, explosives, and even larger weapon components as well.

3D Printed Gun
A 3D-printed firearm, called “The Liberator,” made nearly entirely of plastic printed parts.

(Image source: Defense Distributed)

Bill Riker, CEO of Liberty Defense, told Design News that Hexwave is aimed at detecting weapons and other articles in dynamic, urban environments – places more chaotic and unpredictable than the queues at airports.

“In the airports and other places, it’s a very rigorous and effective process, but it takes a long time. And you’ve got a very captured audience,” Riker said. “In the urban environment – schools, stadiums, and malls – you’ve got a much less controlled environment, but you have an equally growing need for security.”

Riker said Liberty Defense’s technology allows for both indoor and outdoor detection zones, which is “important because of the need to be able to accommodate a layered defense when you are really trying to buy time for a security operation to proactively prevent a weapon from coming into a facility.” Hexwave can also be deployed covertly via kiosks or other means such as being hidden in walls.

Riker, who has over 35 years of experience working in security systems with Smiths Detention, DRS Technologies, General Dynamics, and the U.S. Department of Defense, likens Hexwave’s sensor technology to an insect’s eye – where a number of eyelets function together as one unit. Here, instead of eyelets, an array of low-frequency radar antennas (200 times lower power than Wi-Fi, according to the company) creates a three-dimensional point cloud of a person – capturing over 400,000 data points, according to Liberty Defense. “Think about it like a child’s coloring book. You’re connecting all the dots, except it’s in three dimensions,” Riker said.

“Part of the great work that MIT did here was to craft a design that can create multiple transmit and receive antennas,” Riker explained. “Typically, an antenna array has its transmitting and receiving off the same dish. This array actually separates them and you’re able to have essentially a multistatic display configured into a monostatic display.”

That point cloud is then converted into an image that a deep learning neural network analyzes to look for weapons and suspicious objects – all done in real time.
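
As a rough illustration of that conversion step – a hypothetical sketch, not Liberty Defense’s actual pipeline; the function name, grid size, and axes are assumptions – a 3D point cloud can be flattened into a fixed-size depth image that a neural network could consume:

```python
import numpy as np

def point_cloud_to_depth_image(points, grid=(64, 64)):
    """Project an (N, 3) point cloud (x, y, z) onto an x-y grid,
    keeping the nearest z return per cell (a simple depth map)."""
    points = np.asarray(points, dtype=float)
    h, w = grid
    xy = points[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    span = np.where(maxs > mins, maxs - mins, 1.0)   # avoid divide-by-zero
    ij = ((xy - mins) / span * (np.array([h, w]) - 1)).astype(int)
    depth = np.full((h, w), np.inf)
    for (i, j), z in zip(ij, points[:, 2]):
        depth[i, j] = min(depth[i, j], z)            # nearest return wins
    depth[np.isinf(depth)] = 0.0                     # empty cells -> background
    return depth

# 100,000 synthetic points collapse into one fixed-size image; the ~400,000
# data points the article cites would reduce the same way.
cloud = np.random.rand(100_000, 3)
image = point_cloud_to_depth_image(cloud)
print(image.shape)  # (64, 64)
```

Reducing an arbitrary-size point cloud to a fixed-size image is what lets a conventional convolutional detector run on the result in real time.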

“We teach [the AI] a whole different series of weapons and as new weapons occur, we teach it and then we send that out by Wi-Fi into all the different systems to update them on a regular basis,” Riker said. “What’s important here is you don’t have to go from the cloud to a lookup table. All the analysis is done on the unit, therefore it’s super fast.”

Rather than distinguishing specific models or types of guns, what Hexwave looks for is identifiable shapes. “At this point, it’s general, but we can tell the difference between a plastic handgun and a metal handgun,” Riker said. “You can see if it’s a revolver or if it’s a semiautomatic pistol. You can also see explosives, including pyrotechnics. In that case you can actually see the accelerants within the container.”

He continued. “There are all kinds of high-strength polymers that you can build 3D-printed guns out of. We can see that because it’s got a physical form to it, meaning, regardless of the material, there’s going to be some degree of reflectivity there. And so when you’re emitting energy against it that’s going to have a return that gives you that three-dimensional look.”

Once the AI has made its analysis, it can notify a human guard using a tablet. Riker said Hexwave can also be implemented into security systems to control things like door lock systems, video management systems, and command and control in cases where an immediate response is needed.

Hexwave uses an array of antennas to create a 3D point cloud image of people that pass by its towers. (Image source: Liberty Defense)

Liberty Defense says about 1,000 people can pass through a Hexwave-enabled security post. But there are limitations in terms of queuing people through the system. Entrance ways in typical stores, malls, and even schools can be chaotic – with lots of people entering and exiting simultaneously in clusters. Hexwave requires each person to be scanned one at a time – which means single-file lines.

“The radar or the energy we’re using is non-ionizing. So we’re out of the x-ray side, past millimeter wave, and into a different area,” Riker said. “And so what ends up occurring here is you can’t see through people. But, quite frankly, you can’t see through multiple people with x-ray systems either.”

Riker said the Hexwave system does offer increased throughput over typical security stations and metal detectors – as well as more accuracy for security personnel – which Liberty Defense believes will ease the burden of requiring people to line up.

“So you get your increased throughput from two things really: Number one, you’re passing through at a walking pace and you typically should not have to divest of your typical articles on your body like a cell phone, a set of keys, or something like that. And then that’s also when it does identify something that’s either a known threat, clearly not a threat, or as an anomaly.

“Let’s say there’s an odd shaped object on you that’s really big that may look like a bar of soap. Why are you carrying a bar of soap on you? That’s kind of an odd thing to have, so the system will say something’s wrong here. It’s an anomaly. But the system also shows where that object is on your body so that a security official can go ahead and say, ‘Okay, please step over here for a second; what is that in your back pocket?’ ”

The real time speed of the system also offers an advantage over systems such as what the public may be used to in airports, for example.

“The airports do a great job. The technology is very effective. Airport scanners or the body scanners are typically using a four-millimeter wave and they are creating an image. It’s just that it’s not in real time. The reason why you can’t have little kids go through it is not because of the emission level. It’s because the kids typically can’t stay still. If you move, you blur the image.

“For our system, it’s capturing the images several times a second. Little kids, adults, everyone can just keep walking through,” Riker said.

This anomaly detection feature becomes particularly important as criminals will often find ways around any new security system given enough time and experience. Different facilities can instruct the system to flag certain large items that shouldn’t be carried in (think large bags at sports stadiums) and human guards can intervene if something like a radar jammer or scrambler causes interference in the system. “If you’ve got a handgun or something like that in a radar-proof bag, it’s going to see a big bag on your body, flag the anomaly, and trigger the subsequent inspection,” Riker said.

In April 2019, Liberty Defense was granted an experimental license by the FCC to begin testing Hexwave. The company is planning to roll out beta tests in the first half of 2020. As of this writing, 11 sites have agreed to participate, including Rogers Arena in Vancouver, the University of Wisconsin, and the Virginia state capitol building in Richmond, VA. Utah Attorney General Sean Reyes has committed to testing Hexwave in his state. And the Maryland Stadium Authority will be testing the technology in various sporting venues.

Under federal law all firearms must be able to be detected by a metal detector, and all major components must be detectable by x-ray. The Undetectable Firearms Act of 1988 requires guns to be detectable by metal detectors even “after removal of grips, stocks, and magazines,” and the barrel, the slide or cylinder, or the frame or receiver must be detectable by x-ray.

However, the Giffords Law Center to Prevent Gun Violence cautions: “The Undetectable Firearms Act does not specify what portion of the firearm must be detectable by a metal detector. This could allow an individual to create a mostly plastic but technically compliant firearm, using a 3D printer or other technology, that contains metal in an extraneous part of the firearm that could be removed prior to entering a security area.”

Riker said Liberty Defense is hoping to provide the public with a degree of safety and peace of mind that legislation hasn’t been able to. “That peace of mind seems to be eroding right now. So there’s a great potential to help recover that in our society,” he said. “I think we’re pretty confident that we’ll be able to stay ahead of the threat curve.”

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.


best-ai-stories-of-2019
(Image source: Adobe Stock)

We’ve picked our favorite AI-related stories from 2019.

The 10 greatest issues AI needs to face

While we celebrate the positive impacts of artificial intelligence, let’s not forget there’s also a lot to be concerned about.

The Apple Card Is the Most High-Profile Case of AI Bias Yet

Apple Card users have alleged that its credit decision algorithm discriminates against women.

How AI at the Edge Is Defining Next-Generation Hardware Platforms

Moving AI from the cloud to the edge was a big trend in 2019. Chris Cheng, distinguished technologist on the hardware machine learning team at Hewlett Packard, takes a look at some of the latest research being done on AI inference at the edge.

(Image source: OpenAI)

OpenAI’s Robot Hand Taught Itself How to Solve a Rubik’s Cube

Rubik’s Cube Solving Robot Hand Sparks Debate in the AI Community

Using novel neural networks, OpenAI enabled a robotic hand to learn how to solve a Rubik’s Cube on its own. Concerns regarding OpenAI’s robot hand have created a debate among engineers and AI experts on social media.

What’s the State of Emotional AI?

Artificial intelligence that can recognize human emotions – emotional AI – has been gaining momentum. But something’s missing. How long until we’ll be seeing it in our devices and cars?

(Image source: TuSimple)

UPS Has Invested in Autonomous Trucks After Ongoing Tests

TuSimple’s Autonomous Trucks Are Being Tested by the USPS

In 2019, TuSimple entered into partnerships with UPS and the US Postal Service to test self-driving trucks for hauling mail freight.

The New Raspberry Pi 4 Is All About AI and Embedded IoT

The Raspberry Pi has grown from a hobbyist machine to an IoT developer platform capable of even handling machine learning applications. Here’s our hands-on look.

A Look at the US/China Battle for AI Leadership

The US and China are waging a behind-the-scenes war over who will emerge as the global powerhouse of artificial intelligence. Where do each country’s strengths and weaknesses lie?

There’s a Diversity Crisis in the AI Industry

A lack of racial and gender diversity at the companies creating AI ties closely with issues of bias and racial discrimination in artificial intelligence algorithms, according to a new NYU study.

(Image source: Pixabay)

Can Trump’s New Initiative Make American AI Great Again?

A look at President Trump’s executive order aimed at accelerating America’s lead in artificial intelligence.

AI Could Make Quantum Computers a Reality

New research is examining the use of artificial intelligence to handle the calculations necessary for quantum computers to function.

cool-and-super-cool-3d-printed-projects

Here’s a look back at several cool hobbyist-level gadgets and a few super cool printed car projects.

The price of 3D printers has become reasonable enough that hobbyists and businesses can own at least one machine. But once you’ve got it, what do you do with it? Engineers and techies will want to pursue DIY projects, repair machines and equipment, prototype their latest and greatest invention, or just have fun. All of these – but especially the latter – require an STL file and a 3D model. Here’s a very short list of places to get the coolest files for your 3D printer (and many are free).

Thingiverse

One of the biggest content repositories for 3D printer models on the internet is Thingiverse – the site offers close to 2 million STL files. The website is operated by MakerBot Industries, the creators of the Replicator series of 3D printers. The Thingiverse community mostly shares its STL files for free, in varying categories and levels of complexity.

Need a few gift ideas for the holidays? Try this imaginative bottle opener and cap gun, uploaded by 3Deddy via Thingiverse. Aside from the printed parts, all you’ll need are a set of M3 bolts, an elastic rubber band, and a penny or 10-cent euro coin.

Image Source: 3Deddy, via Thingiverse
the-history-and-future-of-the-bci

Mindflex, from Mattel and NeuroSky, used brainwaves to control a series of fans to navigate a floating ball through an obstacle course. (Image source: NeuroSky)

Remember about five years ago when several consumer headsets from NeuroSky and Emotiv were pitched as computer game controllers? For example, NeuroSky promoted its EEG Headset as a training aid for young padawans to focus their minds through various exercises so they could control the movement of tablet-projected holograms. Based upon the wearer’s focus and concentration, the brainwave sensors on the EEG headset would allow them to interact with an application on a tablet.

NeuroSky and others (most notably, Emotiv) have created EEG headsets for gaming, neurological rehabilitation, and other applications. These headsets are examples of brain computer interface (BCI) technology – a computer-based system that acquires brain signals, analyzes them, and translates them into commands that are relayed to an output device like a computer, game console, medical prosthetic, or similar devices.

A BCI doesn’t actually read thoughts but rather detects the smallest of changes in the energy radiated by the brain when you think in a certain way. A BCI recognizes specific energy/frequency patterns in the brain. Some believe that the combination of humans and technology could be more powerful than artificial intelligence (AI) alone. For example, instead of relying on human perception and reasoning, one could use neurotechnologies to improve our perception, as in the case of interpreting a blurry security camera image before deciding whether to take action.
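
Those “specific energy/frequency patterns” are typically estimated as power in EEG frequency bands. The sketch below is illustrative only – a synthetic signal and a plain FFT, not the proprietary signal chain of any NeuroSky or Emotiv headset:

```python
import numpy as np

def band_power(signal, fs, low_hz, high_hz):
    """Mean spectral power of `signal` between low_hz and high_hz (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return psd[mask].mean()

rng = np.random.default_rng(0)
fs = 256                                   # samples per second
t = np.arange(0, 4, 1.0 / fs)              # 4 seconds of data
# Synthetic trace: a strong 10 Hz "alpha" rhythm plus broadband noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(len(t))

alpha = band_power(eeg, fs, 8, 12)         # relaxed-focus band
beta = band_power(eeg, fs, 13, 30)         # active-concentration band
print(alpha > beta)                        # the injected alpha rhythm dominates
```

A headset-style application would compare band powers like these against a per-user calibration baseline to decide, for example, whether the wearer is “focused” enough to move the hologram.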

In the future, perhaps the carbon-based mind will connect directly with silicon-based artificial intelligence, robots, and other minds through BCI technologies to greatly extend our existing senses or maybe add new ones.

In order to appreciate what BCI technologies have to offer, it helps to understand a bit of their history.

Infographic – History of Brain Computer Interface (BCI) Tech

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

the-10-greatest-issues-ai-needs-to-face
  • There are a lot of reasons to be excited about artificial intelligence. AI is transforming industries in innovative ways and even enabling entirely new business models to emerge.

    But there are also a lot of reasons to be cautious about AI. The 2019 AI Now Report, created by the AI Now Institute, takes a look back on the social impact of AI in 2019, and some of the most important issues surrounding the technology as it moves forward. The AI Now Institute is a non-profit, interdisciplinary research institute “dedicated to understanding the social implications of AI technologies.”

    “This year we saw a wave of pushback, as community groups, researchers, policymakers, and workers demanded a halt to risky and dangerous AI,” the report says.

    As AI moves into the next decade, we’ve outlined some of the most important issues it will have to grapple with in the coming years.

  • 1.) Algorithmic bias is already affecting us

    As more and more AI algorithms are implemented into decision-making processes in everything from real estate to healthcare, it is important for developers to be aware of the inherent biases within the datasets they use to train AI.

    Apple’s Apple Card service recently came under fire from customers – including Apple co-founder Steve Wozniak – over allegations that the service’s approval system was assigning lower credit limits to female customers.

    Experts agree it will likely be impossible to completely safeguard systems against bias, but steps can be taken to mitigate its impact.

    (Image source: Apple)
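One simple way developers can surface this kind of bias is to compare model outcomes across groups before deployment. A minimal sketch of a demographic-parity check – the data, group names, and decisions below are purely illustrative:

```python
# Compare approval rates across groups in a model's decisions (illustrative data).
from collections import defaultdict

# (group, approved) pairs - e.g., output of a credit-limit model on a test set
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates)  # approval rate per group
print(gap)    # demographic-parity gap; a large gap warrants investigation
```

A check like this only detects one narrow kind of disparity; as the experts quoted above note, it mitigates rather than eliminates bias.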

  • 2.) Facial recognition is watching us

    Facial recognition is already here and being widely deployed throughout the world. In China, facial recognition technology has become part of surveillance and security systems and even allows customers to use their faces to access ATMs.

    While there is an argument for convenience and security, there are also widespread privacy and ethics concerns around AI facial recognition. The city of Detroit is facing pushback over plans to add facial recognition to its Project Green Light – a camera system that allows police departments to monitor businesses and intersections in real time.

    In 2019, the cities of Oakland, Calif., Somerville, Mass., and San Francisco passed ordinances banning municipal use of facial recognition technology.

    By contrast, however, the Department of Homeland Security (DHS) announced plans to issue a proposed regulation that could require all travelers, including US citizens, to submit to face and other biometric scans at airports and other ports of entry.

    Regarding the DHS announcement, ACLU Senior Policy Analyst Jay Stanley had this to say:

    “Time and again, the government told the public and members of Congress that US citizens would not be required to submit to this intrusive surveillance technology as a condition of traveling. This new notice suggests that the government is reneging on what was already an insufficient promise.”

    (Image source: teguhjati pras from Pixabay)

  • 3.) Deepfakes are a reality

    If you want to see the power of deepfakes, you only need to browse YouTube channels like Ctrl Shift Face.

    This isn’t a special effect. With enough data (including images and audio), AI algorithms can reconstruct and superimpose individuals’ faces onto existing video footage. It makes for some entertaining viral videos, but there are wider, more frightening implications for deepfakes, as they can be used to create fraudulent videos of political figures, celebrities, and even private citizens. Left unchecked, deepfakes could become a powerful tool for the spread of misinformation.

    (Image source: Ctrl Shift Face)

  • 4.) Algorithms are ruining our social media experience

    Have you ever watched one video on YouTube or liked a post on Facebook or other social media, only to be sent down a rabbit hole of increasingly questionable recommendations? That’s not an accident – that’s AI trying to predict what you’ll “like.” And by “like” we mean it’s trying to figure out what content you’re most likely to engage with – and that often means offending or shocking you. Algorithmic issues are being blamed for both the rise and the proliferation of extremist content on social media. Google, Facebook, and others have pledged to search for ways to tamp down on the spread of dangerous and extremist content as well as misinformation. But many would argue the damage has already been done.

    (Image source: Pixelkult from Pixabay)

  • 5.) AI is a powerful tool for hacking

    Automation is meant to handle the dirty, dangerous, and repetitive tasks humans can’t or don’t want to perform, right? Well, those benefits cut both ways. More and more malicious hackers are leveraging AI technology to assist with sophisticated cybersecurity attacks. A well-trained algorithm can attack a target with a level of speed and efficiency that would be difficult for a single hacker, or even a large group of hackers, to match. Fortunately, cybersecurity companies like XM Cyber are fighting fire with fire, using machine learning algorithms of their own to safeguard networks and sensitive systems.

    (Image source: XM Cyber)

  • 6.) AI developers lack diversity

    Issues with AI can be correlated to a lack of racial and gender diversity among the engineers and developers being hired at the top technology companies working on AI. The AI Now Institute has found that Black and Latinx workers are substantially underrepresented in the tech workforce, and women are particularly underrepresented as AI researchers.

    “Rather than recognizing the scale and systemic nature of the problem, tech companies have responded to mounting evidence of bias and misuse by primarily focusing on narrow diversity solutions,” the AI Now Institute said. “They have also attempted technical debiasing, working to ‘fix’ algorithms and diversify data sets, even though these approaches have proven insufficient and raise serious privacy and consent concerns. Notably, neither approach addresses underlying structural inequalities.”

    (Image source: PixLoger from Pixabay)

  • 7.) AI isn’t green

    As engineers come to terms with the realities of climate change and the need to develop greener technologies, AI is having its own energy crisis. The massive amount of compute power required for AI also comes with a massive energy bill.

    “As a whole, the industry’s energy dependence is on an exponential trajectory, with best estimates showing that its 2020 global footprint amounts to 3–3.6 percent of global greenhouse emissions, more than double what the sector produced in 2007,” the AI Now Institute said. “This is comparable to that of the aviation industry, and larger than that of Japan, which is the fifth biggest polluter in the world.”

    Tech companies are already implementing renewable energy sources and other means to make data centers more efficient. But the emergence of 5G and other advanced networking technologies only threatens to make the problem worse before it gets better. “In the worst-case scenario, this footprint could increase to 14 percent of global emissions by 2040,” the Institute warned.

    (Image source: Free-Photos from Pixabay)

  • 8.) AI helps privatize public infrastructure

    “Troubling partnerships between government and private tech companies also emerged as a trend this year, especially those that extended surveillance from public environments into private spaces like private properties and the home,” the AI Now Institute said.

    In 2019 the city of Detroit established the “Neighborhood Real-Time Intelligence Program,” a $9 million, state- and federally-funded initiative that would expand the city’s Project Green Light surveillance system to 500 intersections, in addition to the 500 businesses where it is already deployed, as well as add facial recognition technology to the system. The city has reported reduced crime in areas thanks to Project Green Light, but that hasn’t stopped privacy advocates from protesting the technology.

    In 2018, Amazon came under fire for offering to let police departments utilize its facial recognition software. The company has also negotiated with over 700 police departments in the US to give police access to videos from Ring smart home cameras if the footage can help with a criminal investigation, according to the AI Now Institute.

    (Image source: Pixabay)

  • 9.) Automation impacts people of color and the poor the most

    The debate about automation and labor likely won’t ever stop. But the narrative is taking new shape as more data emerges about specific groups affected by rapid automation due to AI.

    Depending on who you ask, automation will be a boon to the economy as well as personal productivity, or it will usher in a dystopian nightmare where humans struggle for basic needs while robots handle all of the jobs.

    “Both narratives are predicated on the assumption that automation in the workplace is inevitable and that automated systems are capable of performing tasks that had previously been the work of humans. What is missing from both conflicting narratives is the more nuanced prediction of who will be harmed and who will benefit from labor automation in the years to come,” the AI Now Institute said.

    The 2019 AI Now Report predicts that Black, Latinx, and low-wage workers in the US will be disproportionately impacted by increased levels of automation.

    (Image source: mohamed_hassan from Pixabay)

  • 10.) AI is removing the ‘human’ from human resources

    More and more companies are using AI to manage and oversee workers, and AI is even being implemented into the hiring process. Amazon, for example, uses an AI system to set shifting performance goals for its warehouse workers. Workers are assigned a “rate” of productivity to hit each day, based on their prior performance and the overall goals of the warehouse.

    “If a worker falls behind, they are subject to disciplinary action. In many warehouses, termination is an automated process (not unlike being “kicked off” a gig-economy platform),” the AI Now Institute said. “According to Abdi Muse, an organizer with Amazon warehouse workers in Minneapolis, if workers fall behind the algorithmically set productivity rate three times in one day, they are fired, however long they may have worked for the company, and irrespective of the personal circumstances that led to their ‘mistakes.’ ”

    “The introduction of AI-enabled labor-management systems raises significant questions about worker rights and safety. The use of these systems—from Amazon warehouses to Uber and InstaCart—pools power and control in the hands of employers and harms mainly low-wage workers (who are disproportionately people of color) by setting productivity targets linked to chronic injuries, psychological stress, and even death and by imposing unpredictable algorithmic wage cuts that undermine economic stability.”

    (Image source: iosphere / Freedigitalphotos.net)
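Part of what makes the rule described above troubling is how little logic it takes to automate a termination decision. A purely illustrative sketch – the threshold, rate values, and policy details here are assumptions, not Amazon’s actual system:

```python
# Illustrative "three strikes" productivity rule, as described in the report.
# All numbers and the policy itself are hypothetical stand-ins.
def flag_for_termination(hourly_rates, target_rate, max_misses=3):
    """Return True if a worker misses the target rate max_misses times in a day."""
    misses = sum(1 for r in hourly_rates if r < target_rate)
    return misses >= max_misses

print(flag_for_termination([95, 102, 88, 91], target_rate=100))  # True: three misses
print(flag_for_termination([105, 102, 99], target_rate=100))     # False: one miss
```

Note what the sketch omits: tenure, personal circumstances, and any human review – exactly the concerns the AI Now Institute raises.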

keynotes-worth-seeing-at-designcon-2020

What do these topics have in common?

  1. The Future of Fiber Optic Communications: Datacenter and Mobile
  2. Design for Security: The Next Frontier of Smart Silicon
  3. Microchips in Space: How Device Design Enables Amazing Astronomy

The answer is that all three use microchips and microsystems, but in very different ways and for different reasons.

In the first, complex systems-on-chip (SoCs) are integrated with fiber optics to enable dizzyingly fast connections between processors, memory, and interfaces in data centers and mobile devices across the world.

With so much going on in the world of fiber optic communications, it’s important for designers to keep up to date with the basic engineering issues. The catalyst for this interest is that the global fiber optics market is predicted to grow from $5 billion in 2018 to $9 billion by the end of 2025.

In his upcoming keynote at DesignCon 2020, Chris Cole, VP of Advanced Development at II-VI, will discuss past trends and new developments in fiber optics for datacenter and mobile applications. Two ongoing trends are the replacement of copper wires by fiber optics in the data center and the replacement of direct detection by coherent detection in optical systems.

Cole will also explain the major limitations of power and density in communications, as well as new technologies like Silicon Photonics (SiPh) and co-packaging. Silicon photonics is the study of the optical properties of the group-IV semiconductor and how they can be used to generate, manipulate, and detect light. Silicon is prevalent in photodetectors and solar cells, among other technologies.

To learn more, visit: The Future of Fiber Optic Communications: Datacenter and Mobile

Image Source: Imec
hydrails-are-the-future-of-rail-transportation

Alstom Transport’s Coradia iLint hydrogen fuel cell passenger train in service in Germany. (Image source: Alstom Transport)

With the concern about climate change and proposed solutions such as the Green New Deal that would phase out fossil fuels, there is a question of how freight and passenger trains could still operate. While conventional rail electrification could work in Europe and the denser parts of the US and Canada, the investment cost of an electrified infrastructure across vast portions of both countries could be prohibitive. The answer could be hydrogen fuel cells.

While hydrail – that is, hydrogen-fueled locomotives and self-propelled railcars – hasn’t gotten much press in the US, several hydrail projects have moved from the conceptual to the demonstration phase in Europe and Asia. Hydrail includes both hydrogen fuel cells and the combustion of hydrogen in an internal combustion engine. Fuel cells are the more promising of the two approaches because they can directly replace the diesel-powered generator in a diesel-electric locomotive. The traction motors don’t care whether the electricity comes from a generator or a fuel cell.

Hydrail vehicles would probably be hybrids, in which electrical energy from the fuel cell and from regenerative braking would be stored in batteries or ultracapacitors for use by the traction motors. Capturing braking energy via regenerative braking, rather than dissipating it in resistors as is the normal case today, reduces the amount of hydrogen that has to be carried onboard the locomotive.

Hydrogen-fueled trains have a carbon-free footprint provided the hydrogen is produced by electrolysis using electrical power provided by wind or solar. Fuel cells do not emit anything but water.

One idea is to produce hydrogen trackside in electrolysis plants along a rail line. This would be especially attractive in remote regions where there is ample room for solar farms or plenty of wind for turbines. Since hydrogen can be easily stored, it can be produced whenever the sun is shining or the wind is blowing. Alternatively, if the electricity comes from the electric grid, hydrogen could be produced during times of off-peak electrical demand.

A fuel cell locomotive would be at least as efficient as a diesel-electric one. The efficiency of electrolysis in converting water into hydrogen is 70 to 80 percent, while the efficiency of fuel cells in converting hydrogen to electricity is 40 to 60 percent. Multiplying the two stages, a fuel cell locomotive would be between 28 and 48 percent efficient. The efficiency of a diesel locomotive in converting diesel fuel to electricity is about 30 percent.
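The overall figure is simply the product of the stage efficiencies. A quick sketch of the arithmetic, using the percentages from the text:

```python
# Chained efficiency of the hydrogen pathway vs. diesel-electric (figures from the text).
def chain(*stages):
    """Multiply stage efficiencies to get overall efficiency."""
    result = 1.0
    for eff in stages:
        result *= eff
    return result

# Electrolysis (70-80%) followed by fuel cell conversion (40-60%)
low = chain(0.70, 0.40)    # 0.28 -> 28% worst case
high = chain(0.80, 0.60)   # 0.48 -> 48% best case
diesel = 0.30              # diesel-electric conversion efficiency

print(f"Fuel cell pathway: {low:.0%} to {high:.0%} (diesel-electric: {diesel:.0%})")
```

Note that this counts only electrolysis and the fuel cell; losses in hydrogen compression, storage, and transport would lower the real-world figure.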

The state of hydrails today

Fuel cell technology is ready to be used in fuel cell-powered trains. Several fuel cell 18-wheel truck projects are underway, some with trucks already on the road, and the technology could be transferred to hydrail applications. Toyota, working with Kenworth, is building 10 hydrogen fuel-cell Kenworth T680 Class 8 drayage tractors to reduce emissions at the Ports of Los Angeles and Long Beach. Anheuser-Busch has ordered up to 800 hydrogen fuel cell-powered Class 8 trucks from startup Nikola Motor Co. Engine manufacturer Cummins has shown a concept Class 8 tractor featuring a 90-kilowatt fuel cell; the system is scalable up to 180 kilowatts.

Of course, the power of a freight locomotive is much greater than that of an 18-wheeler – 2,000 to 4,500 kW versus 565 hp (about 420 kW) for Kenworth’s hydrogen-fueled T680. Fortunately, fuel cell “engines” can be scaled up by adding more fuel cell modules.
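Because fuel cell power plants scale by stacking identical modules, sizing a locomotive is largely a matter of counting modules. A back-of-the-envelope sketch using the power range from the text; the 200 kW module size is borrowed from the marine systems discussed below and is illustrative only:

```python
import math

def modules_needed(target_kw: float, module_kw: float) -> int:
    """Number of identical fuel cell modules needed to reach a target power."""
    return math.ceil(target_kw / module_kw)

# Freight locomotive power range from the text: 2,000-4,500 kW.
MODULE_KW = 200  # illustrative module rating
for target in (2000, 4500):
    print(f"{target} kW needs {modules_needed(target, MODULE_KW)} x {MODULE_KW} kW modules")
```

This ignores redundancy margins, packaging, and cooling, but it shows why modular stacks make the jump from trucks to locomotives plausible.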

Much larger fuel cell power plants are being planned for marine applications, from research vessels to container ships. A fuel cell ferry and a push boat are already under construction in Norway and France, respectively, as part of the FLAGSHIPS project. SW/TCH Maritime is building the Water Go Round, a hydrogen fuel cell-powered e-ferry for deployment in San Francisco and New York City. PowerCell Sweden AB and Havyard Group ASA are developing a large fuel cell vessel that will service Norwegian fjords. It will use many 200 kW fuel cell system modules connected in parallel for a total output of 3.2 MW. PowerCell and Havyard Group say the first of the four ships should be in operation in 2021.

The use of fuel cells to power passenger trains and shunting locomotives is less of a challenge than powering the heavy freight locomotives used in the US. Thus, of the more than 20 demonstrations of hydrail technology in 14 countries since 2005, most have been people movers. However, in Topeka, Kansas, in 2009, BNSF Railway debuted its Vehicle Projects HH20B, a switcher locomotive powered by hydrogen fuel cells producing 2,000 hp (1,490 kW).

Alstom Transport’s Coradia iLint, built in Germany, is considered to be the world’s first hydrogen fuel cell passenger train. Two pre-production Coradia iLint trains began operating in Germany in September 2018. Deployment of a fleet of some 60 trains is scheduled to commence in 2021. The current trains will be fueled at the world’s first hydrogen train refueling depot, with hydrogen generated on-site using wind power.

By using wind power and electrolysis to produce hydrogen for the fuel cells, the Coradia iLint trains have no carbon footprint. (Image source: Alstom)

San Bernardino County Transportation Authority (SBCTA) is ordering four hydrogen fuel cell-powered Fast Light Intercity and Regional Train (FLIRT) trainsets from Switzerland-based Stadler. The two-car, 108-passenger trains will operate at 79 mph between Redlands and the San Bernardino (CA) Metrolink station starting in 2024.

HydroFLEX, the first full-sized hydrogen-powered train in the UK, is currently being tested. It uses an existing Class 319 train set fitted with Ballard FCveloCity-HD fuel cells.

The province of Ontario, Canada has contracted with Alstom and Siemens to create concept designs for a self-propelled hydrogen-powered coach to be used on GO Transit lines in the greater Toronto and Hamilton area as an alternative to installing traditional electrification using overhead wires. It has also requested the design of a hydrogen-powered locomotive to pull GO coaches.

CSR Sifang’s 380-passenger urban tram uses a Ballard Power Systems FCveloCity fuel cell engine. (Image source: Ballard)

There have been several hydrogen fuel cell rail prototypes in Asia. In 2006, East Japan Railway Co. developed the world’s first hydrail railcar. This year, it announced that it is investing in the development of a two-car trainset using hydrogen fuel-cell technology from Toyota, hoping to have commercially viable technology ready by 2024. CSR Sifang Co Ltd. in China has built eight 380-passenger urban trams that use 200-kilowatt Ballard Power Systems FCveloCity fuel cell engines.

With hydrogen fuel cell technology being developed, and already used, in several transportation sectors, the long-awaited “hydrogen economy” may be just over the horizon.

Bill Siuru is a retired USAF colonel who has been writing about transportation technology for over 40 years. He has a bachelor’s degree in mechanical engineering from Wayne State University, a master’s degree in aeronautical engineering from the Air Force Institute of Technology, and a PhD in mechanical engineering from Arizona State University. He has taught engineering at West Point and the U.S. Air Force Academy. He has authored thousands of articles for automotive, aeronautical, and engineering publications.

DesignCon 2020 25th anniversary Logo

January 28-30: North America’s largest chip, board, and systems event, DesignCon, returns to Silicon Valley for its 25th year! The premier educational conference and technology exhibition, this three-day event brings together the brightest minds across the high-speed communications and semiconductor industries, who are looking to engineer the technology of tomorrow. DesignCon is your rocket to the future. Ready to come aboard? 

Register to attend!