top-10-electronic-enabled-tech-highlights-from-ces-2020

Not all cool tech involved robots and autonomous cars. Here’s a list of the other electronic tech featured at the show.

  • This year’s Consumer Electronics Show (CES) 2020 featured a range of marvels enabled by electronic technologies, covering application areas that include smart cities, AI edge intelligence, body haptics, security systems, real-time accident reports, uncooled thermal cameras, wearables and more.

    Here are the top 10 products and technologies that piqued the interest of the Design News editorial staff.

  • Smart Cities

    Why do major Japanese car manufacturers like to build smart homes and now cities? Several years ago, Honda built a zero-net energy smart home in partnership with UC-Davis. At this year’s CES, Toyota announced it will build a smart city to test their AI, robots and self-driving cars. Toyota’s Woven City will be built at the foothills of Mt. Fuji in Japan. The city will be the world’s first urban incubator dedicated to the advancement of all aspects of mobility, claims Toyota.

    The project is a collaboration between the Japanese carmaker and the Danish architecture firm Bjarke Ingels Group (BIG). Houses in Woven City will have in-home robotics to help with the more mundane tasks of daily life. The homes will have full connectivity, which will be needed for the sensor-based AI to automate many household chores, like restocking the refrigerator and taking out the trash. Power storage units and water purification systems will be hidden beneath the ground.

  • Intelligence At The Edge

    Blaize is a computing company that optimizes AI at scale wherever data is collected and processed, starting at the edge. The company enables a range of existing and new AI use cases in the automotive, smart vision, and enterprise computing segments. The company claims that developers can create new classes of products to bring the benefits of AI and machine learning to broad markets.

    The company has developed a fully programmable GSP architecture that utilizes task-level parallelism and streaming execution processing to deliver very low energy consumption, high performance and scalability. Blaize claims that, in comparison, existing GPUs and FPGAs exact a much higher energy price, while CPUs cost more and scale poorly, and all are subject to excessive latency due to their sequential execution processing architectures.

  • Full-Body Haptics Suit

    Haptics are all about the sense of touch. Now you can immerse your entire body – or at least 70 tactile points mainly around your torso – into the world of artificial experiences. The bHaptics TactSuit provides an audio-to-haptic feature that converts sound into haptic feedback felt in real time around your torso. For example, when a bomb explodes or you hear footsteps during a PC/VR game, you’ll feel the experience from the right direction. You’ll even be able to feel Samurai cuts and friendly hugs.
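    The direction-aware effect boils down to mapping audio levels to motor intensities. Below is a deliberately simple sketch of that idea (an illustration with assumed values, not bHaptics’ actual SDK): it takes one block of stereo game audio, measures the loudness of each channel, and converts those levels into vibration intensities for the left and right sides of a vest, so a blast panned to the right drives the right-side motors harder.

    ```cpp
    // Hypothetical audio-to-haptic mapping sketch (not bHaptics' SDK).
    // Louder audio on one channel -> stronger vibration on that side of the vest.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // RMS loudness of one block of samples in the range [-1, 1].
    double rms(const std::vector<double>& block) {
        if (block.empty()) return 0.0;
        double sum = 0.0;
        for (double s : block) sum += s * s;
        return std::sqrt(sum / block.size());
    }

    int main() {
        // One audio block from a game: an explosion panned to the right.
        std::vector<double> left  = {0.05, -0.04, 0.06, -0.05};
        std::vector<double> right = {0.60, -0.55, 0.58, -0.62};

        // Scale RMS loudness to a 0-100 motor intensity (scale factor is arbitrary).
        int leftIntensity  = std::min(100, static_cast<int>(rms(left)  * 150.0));
        int rightIntensity = std::min(100, static_cast<int>(rms(right) * 150.0));

        std::printf("left motors: %d, right motors: %d\n", leftIntensity, rightIntensity);
        return 0;
    }
    ```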

  • Security Comes In Many Forms

    There are many ways to protect your PC data and applications, from hardware-encrypted portable storage devices, backup solutions, file repair software, and data recovery, to digital forensics services. SecureData provides both products and services in these areas. At CES, the company demonstrated a secure USB drive which it claimed was the only hardware-encrypted flash drive in the world with keypad and Bluetooth authentication.

  • Wireless Six-Degrees Of Freedom (6DOF)

    Atraxa’s system tracks 6DOF motion without the need for optical cameras or infrared markers to be placed around the room, or mounted externally to the XR headset or controller. No line of sight – or wires – is required between the headset and controllers. Unhindered by wires or line-of-sight constraints, users can move freely in large spaces, and can even move from room to room without any room mapping or controller orienting (or reorienting). Tracking starts immediately and lasts without interruption.

    The tech combines electromagnetic (EM) and inertial technologies into a single sensor-fusion tracking platform. The IMU (inertial measurement unit) returns acceleration and angular velocity data. The EM tracker delivers true position and orientation data; it also establishes the tracking volume and local coordinate system. Atraxa comprises two main components: a tracker module and a receiver module. The tracker module houses the IMU and an EM transmitter coil that generates the magnetic field (i.e., the tracking volume). The tracker modules are embedded into the handheld controllers (or other peripherals).
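    To see why the two data sources complement each other, consider a deliberately simplified one-axis fusion loop (a sketch of the general principle, not Atraxa’s actual algorithm): the IMU’s acceleration is integrated at a high rate to keep the position estimate responsive, while the lower-rate EM position reading is blended in to correct the drift that pure integration accumulates.

    ```cpp
    // Simplified 1-D sensor-fusion sketch (not NDI/Atraxa's implementation):
    // integrate IMU acceleration for responsiveness, then blend in the absolute
    // EM position to correct drift. Gains, rates, and signals are illustrative only.
    #include <cstdio>

    int main() {
        const double dt    = 0.002;  // 500 Hz IMU update period (assumed)
        const double alpha = 0.02;   // how strongly EM data corrects the estimate

        double position = 0.0;       // fused position estimate (meters)
        double velocity = 0.0;       // estimated velocity (m/s)

        for (int step = 0; step < 1000; ++step) {
            // 1) Predict from the IMU: integrate acceleration (a constant 0.5 m/s^2
            //    with a small bias, standing in for real sensor data).
            double accel = 0.5 + 0.01;
            velocity += accel * dt;
            position += velocity * dt;

            // 2) Correct with the EM tracker every 10th step (lower update rate).
            if (step % 10 == 0) {
                double emPosition = 0.5 * 0.5 * (step * dt) * (step * dt);  // "true" path
                position += alpha * (emPosition - position);                // blend toward EM
            }
        }
        std::printf("fused position after 2 s: %.3f m\n", position);
        return 0;
    }
    ```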

  • Real-Time Accident Report

    Sooner or later, all of us get into an automotive accident. When that occurs, wouldn’t it be great to have a record of what happened? Through the use of embedded acceleration sensors, MDGo generates a real-time report in the case of a car crash, detailing each occupant’s injuries by body region. The company’s technology enables accurate delivery of needed services and support, informing optimal medical care in an emergency and supporting the claims process.

  • Smart Factory

    Could a factory think for itself or autonomously design a better car or aircraft? Can it eliminate waste? All of these questions fit into the realm of manufacturing intelligence. One company with experience in this area is Hexagon, claiming that their technologies are used to produce 85% of smartphones, 75% of cars and 90% of aircraft.

    Their Smart Factory approach aims to have fewer inputs, zero waste and high quality. All of this is achieved through sensor, software and autonomous solutions that incorporate data feedback to improve work and boost efficiency, productivity, and quality across industrial and manufacturing operations.

  • A Cool “Uncooled” Methane Gas Detector

    The FLIR GF77 Gas Find IR is the company’s first uncooled thermal camera designed for detecting methane. This handheld camera offers inspection professionals the features they need to find potentially dangerous, invisible methane leaks at natural gas power plants, renewable energy production facilities, industrial plants, and other locations along a natural gas supply chain. The gas detector provides methane gas detection capability at roughly half the price of cooled gas inspection thermal cameras, to empower the oil and gas industry to reduce emissions and ensure a safer work environment.

  • IoT Arduino Adds LoRaWAN Connectivity

    You can now connect your sensors and actuators over long distances via the LoRa wireless protocol or through LoRaWAN networks. The Arduino MKR WAN 1310 board provides a practical and cost-effective solution to add LoRa connectivity to projects requiring low power. This open-source board can be connected to: the Arduino IoT Cloud, your own LoRa network using the Arduino LoRa PRO Gateway, existing LoRaWAN infrastructure like The Things Network, or even other boards using the direct connectivity mode.
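    As a rough illustration of the LoRaWAN path, the sketch below shows the usual pattern with the Arduino MKRWAN library: join a network such as The Things Network over OTAA, then send a small payload. The keys and the sensor reading are placeholders, and the band should match your region.

    ```cpp
    // Minimal LoRaWAN uplink sketch for the MKR WAN 1310 using the MKRWAN library.
    // The AppEUI/AppKey values are placeholders; use the credentials issued by
    // your LoRaWAN network (e.g., The Things Network).
    #include <MKRWAN.h>

    LoRaModem modem;

    String appEui = "0000000000000000";                  // placeholder
    String appKey = "00000000000000000000000000000000";  // placeholder

    void setup() {
      Serial.begin(115200);
      if (!modem.begin(EU868)) {          // pick the band for your region
        Serial.println("Failed to start LoRa modem");
        while (true) {}
      }
      if (!modem.joinOTAA(appEui, appKey)) {
        Serial.println("Join failed");
        while (true) {}
      }
    }

    void loop() {
      modem.beginPacket();
      modem.print("22.5");                // placeholder sensor reading
      int err = modem.endPacket(true);    // true = request a confirmed uplink
      Serial.println(err > 0 ? "Uplink sent" : "Uplink error");
      delay(10 * 60 * 1000UL);            // respect duty-cycle limits: send rarely
    }
    ```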

  • Wearables, Ingestibles, Invisibles

    One of the keys to a healthy life is nutrition. But what exactly constitutes ‘healthy’ food for a specific person? To answer that question, you need to measure and analyze the processes inside the complex human digestive system. Imec is working on prototype technology that is up to that task. It’s called ingestible sensors.

    The company also develops wearables for medical and consumer applications that enable reliable, continuous, comfortable, and long-term health monitoring & management. This includes high-accuracy & low-power biomedical sensing technologies sometimes embedded into fabrics.

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

hyper-automation,-multi-experience,-and-securing-ai-(or-baby-yoda)

If you google “technology trends,” one of the companies that will appear in the top 10 hits will be Gartner. The research and advisory firm not only analyzes numerous markets in terms of technical innovations but also covers business aspects of technology for C-suite professionals.

For 2020, Gartner has produced a number of predictive reports, including those covering digital and strategic technologies. From those lists, I’ve selected three trends that appear vaguely familiar from the recent past, albeit with new names. Do you agree? Don’t hesitate to ping me with your take on these trends at: [email protected]

Trend: Hyper Automation

Gartner: “Automation uses technology to automate tasks that once required humans. Hyper automation deals with the application of advanced technologies, including artificial intelligence (AI) and machine learning (ML), to increasingly automate processes and augment humans. Hyper automation often results in the creation of a digital twin of the organization. As no single tool can replace humans, hyper automation today involves a combination of tools, including robotic process automation (RPA), intelligent business management software (iBPMS) and AI, with a goal of increasingly AI-driven decision making.”

My Take: Do we really need yet another word or phrase to represent the ongoing digitization process that will eventually enable a complete digital twin? One might just as well say that the creation of a digital twin – from improved modeling, simulations, sensors, etc. – has accelerated the pace of automation, thus creating a new hyperautomation or superautomation reality.

It’s really a chicken and egg perspective. Which came first – the creation of hyper automation systems that eventually result in a digital twin? Or did the creation of a digital twin from a sensor-rich ecosystem lead to improved automation of tasks previously performed by humans?

Regardless of the answer, there seems to be little doubt about the movement toward a more complete digital twin within the next decade. Mordor Intelligence anticipates that the digital twin market will witness a CAGR of 35.0% over the forecast period 2019–2024. Growth in IoT and cloud-based platforms, the surge in adoption of 3D printing technology in the manufacturing industry, and the objective of reducing project costs are some of the major factors driving the growth of the digital twin market. Mordor notes that IoT sensors have created a potential space for engineers to test and communicate with sensors integrated into operating products, delivering real-time prescriptive insights into system functioning and timely maintenance.

Which came first: Hyper automation or the digital twin? It’s your call.

top-10-tech-failures-from-2019-that-hint-at-2020-trends
  • As the last year of the last decade, 2019 had a lot to live up to. Within the span of 10 short years, service apps like Uber, Lyft, AirBnB and others on mobile phones became big business. Mobile phone companies introduced amazing personal features like voice assistance (e.g., Siri and Alexa), iCloud connections for fast video streaming, and very high-resolution HD cameras. Not to be outdone, the automobile was transformed with automation tech and electrification. A Tesla electric vehicle even made it into space.

    Space technology flourished in the last decade with the commercialization of space rockets, the launch of hundreds upon hundreds of communication satellites and the increasing popularity of Cubesats. Back on earth, homes and buildings became smarter while alternative forms of energy continued to improve in efficiency. And the list goes on.

    But there were several notable failures in the last decade, many seeming to culminate in 2019. Here is the short list of the 10 tech failures most worthy of mention, in no particular order.

  • #1 Glitchy Spacecraft Launch

    Boeing suffered several major setbacks this year. The first one was an incomplete demonstration flight of its new astronaut capsule. The mission of Boeing’s CST-100 Starliner spacecraft began successfully but suffered technical problems that prevented it from reaching the International Space Station (ISS). Many observers believe that the Starliner capsule on top of an Atlas rocket simply burned too much fuel as it climbed into space, leaving an insufficient amount to reach the ISS. Some have suggested the failure was from a glitchy timer system that turned off the rocket thrusters too soon.

    The demonstration test wasn’t a complete failure as the Starliner did land successfully in the deserts of New Mexico.

  • #2 Andromeda Strain revisited?

    Remember the Andromeda Strain? It was a techno-thriller novel from 1969 written by Michael Crichton that centered around the efforts of a team of scientists investigating the outbreak of a deadly extraterrestrial microorganism in Arizona.

    Fast forward to 2019. A company in Israel launched its first lunar lander, which unfortunately crash-landed on the moon. The small robotic spacecraft, called Beresheet, was created by SpaceIL and Israel Aerospace Industries (IAI). It failed just moments before landing on the moon.

    This was an unmanned operation, but not one devoid of life. A US-based nonprofit had added tardigrades, or water bears, to the capsule. These microscopic, eight-legged creatures could survive in a dormant state through harsh conditions, and maybe even on the moon.

    In other words, earth-based lifeforms have now been introduced to the moon’s ecosystem. Without some water, the tardigrades aren’t likely to revive and spread. But this failure highlights the need for planetary protections – both on the moon and earth.

    It should be noted that the goal of the Arch Mission Foundation was not to contaminate the moon but rather to, “create multiple redundant repositories of human knowledge around the Solar System.” The foundation tests out technologies for long-lasting archives, like securing information in DNA strands or encapsulating insects in artificial amber. In addition to water bears, the Arch’s payload included nickel sheets nanopatterned with thousands of pages of Wikipedia and other texts.

    One of Arch’s first missions was launched by SpaceX on the Falcon Heavy rocket and is now entering an orbit around the Sun for millions of years.  The first books in the Solar Library were Isaac Asimov’s Foundation Trilogy. Can you guess where they are located? The books containing Asimov’s Foundation Trilogy were placed in the glovebox of the Cherry Red Tesla Roadster that will soon be orbiting the Sun.

  • #3 Communication Failures (again)

    Both Boeing and the FAA have been cited for oversight breakdowns that contributed to the 737 Max failure. But the actual cause of the tragedy that resulted in the crash of two Boeing 737 Max aircraft seems to be broad failures in the automated system that controls the new planes. The report by the Joint Authorities Technical Review panel said that assumptions about critical aspects of the plane’s design were “not adequately reviewed, updated, or validated.”

    This lack of communication and incorporation of warnings from the engineering teams is a common problem with very complex, modern systems, e.g., the Challenger Space Shuttle and others.

  • #4 Disappearing Bitcoin Miners

    While 2019 was overall a profitable year for the semiconductor chip development market, there were a few noticeable declines. One was the system-on-chip (SoC) devices made specifically for bitcoin mining. The cost of mining for bitcoins dramatically increased in 2019, leading to a drop in the need for hardware SoC-based equipment.

    In essence, it took much more effort for bitcoin miners to solve the cryptographic puzzles required to validate transactions on the Bitcoin network. This increase in mining difficulty reflects the increased competition; the toy proof-of-work loop at the end of this item illustrates how raising the difficulty multiplies the work required.

    Another slowdown was in the market for automotive chips and electronics, as companies and drivers realized that autonomous car technology won’t really be ready for several more years. This corresponds well to Gartner’s famous “trough of disappointment” portion in its hype cycle for emerging technologies.
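    To make the difficulty point concrete, here is the toy proof-of-work loop mentioned above. It is not Bitcoin’s real algorithm: actual mining double-hashes an 80-byte block header with SHA-256, and std::hash stands in for the cryptographic hash here purely to show how requiring more leading zero bits multiplies the number of attempts a miner must make.

    ```cpp
    // Toy proof-of-work sketch. Real Bitcoin mining double-SHA-256 hashes a block
    // header; std::hash is used here only to illustrate how a higher difficulty
    // (more required leading zero bits) means more hashing attempts.
    #include <cstdint>
    #include <cstdio>
    #include <functional>
    #include <string>

    // Count attempts needed until the hash of (data + nonce) has `zeroBits`
    // leading zero bits.
    uint64_t mine(const std::string& data, int zeroBits) {
        std::hash<std::string> hasher;
        const uint64_t mask = ~uint64_t{0} << (64 - zeroBits);  // top `zeroBits` bits
        for (uint64_t nonce = 0;; ++nonce) {
            uint64_t h = hasher(data + std::to_string(nonce));
            if ((h & mask) == 0) return nonce + 1;  // found a valid "block"
        }
    }

    int main() {
        for (int bits = 8; bits <= 20; bits += 4) {
            std::printf("difficulty %2d bits -> %llu attempts\n",
                        bits, static_cast<unsigned long long>(mine("block data", bits)));
        }
        return 0;
    }
    ```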

  • #5 Cloud Buckets

    A new type of cybersecurity issue has emerged in which millions of people have had their personal information exposed through file storage systems known as cloud buckets. Such storage areas typically consist of public resources that are easily accessed by a variety of web service applications. Cloud buckets are like public file folders which contain user information.

    Placing sensitive user data in the cloud offers companies the capability to offload their security to big firms like Google, Apple, Amazon or Microsoft. The problem is that the buckets are not configured by these firms but rather by the companies who use their cloud networks.

    Not all of these companies are storing their customer information properly. This lack of security is easy pickings for identity thieves. It is an example of readily available information that doesn’t require any hacking.

  • #6 Hacks of the Year

    Speaking of hacks, this year saw even more cybersecurity breaches. In 2018, there were 500 million personal records stolen, according to the Identity Theft Resource Center. But that number was minuscule compared to the 7.9 billion records exposed in 2019 by over 5,000 breaches, as reported by Risk Based Security. Compared to the 2018 Q3 report, the total number of 2019 breaches was up 33.3 percent and the total number of records exposed more than doubled, up 112 percent. Here’s just a small sampling of the more infamous breaches (more details here):

    > ElasticSearch Server Breach

    > Canva Data Breach

    > Facebook App Data Exposure 

    > Orvibo Leaked Database

    > Social Media Profiles Data Leak

    Sadly, the common theme in many of these data exposures is that data aggregators obtained and used personal information in ways the owners never imagined or consented to. This is a legal problem as much as a technical one.

  • #7 Google Glass

    In 2019, Google announced a new $999 Glass augmented reality headset that looked suspiciously like the failed Google Glass from the past.

    Early in 2012, Google co-founder Sergey Brin debuted Google Glass. A year later, the founder and head of the Google Glass Project, Babak Parviz, delivered a keynote about the technology at the IEEE Hot Chips event at Stanford.

    One of the ongoing leading smart phone trends is the ever-improving screen resolution and larger screen size. During his keynote, Parviz argued that there was a physical limit to this trend, but glass offered the next display form factor evolution, i.e., immersion with one’s surroundings. This will be especially important in augmented reality applications.

    Originally, Google Glass was a standalone unit (not yet cloud-based) that included internet access, voice controls, and a camera for pictures and videos. It accomplished all of this with dual-core processors running at more than 1 GHz. Five MEMS sensors captured all the environmental data. It had a two-dimensional touch panel on the side of the glasses.

    Why was this technology a failure? It wasn’t because of the technology, but rather because it wasn’t clear to the customer what problem it solved or why they needed it. Additionally, many felt it was intrusive as a user of the device could take pictures and short film snippets of people without their knowledge.

    In January 2015, Google announced that they would no longer be developing Google Glass. But that wasn’t the end of the project. Instead, Google pivoted to the business sector by launching Glass Enterprise Edition for workplaces like factories in 2017. This year, Google announced the Glass augmented reality headset.

  • #8 Folding Phone

    Samsung’s Galaxy folding phone was billed as a new dawn in display technology. The phone levered open into a 7.3-inch dynamic AMOLED display.

    Unfortunately, the company had to postpone the launch of the folding phone after early review models broke, delaminated, and got filled with gunk. The problem seemed to be potential defects with a weak hinge, as well as substances found inside the device.

    As with many new technologies, the price tag also presented a barrier to anyone but early adopters. A reengineered and improved version is now on sale for nearly $2,000.

  • #9 Machine-Bias or Garbage-in, Garbage-out

    The challenge of machine bias came clearly into focus in 2019. Similar to human bias, machine bias occurs when the learning process for a silicon-based machine makes erroneous assumptions due to the limitations of a data set and pre-programming criteria. One example of machine bias was recently revealed in Apple’s new credit card, which contained an algorithm to decide how trustworthy (or risky) a user might be. This evaluation used to be done by trained humans but is now often performed by AI-based algorithms.

    Apple’s credit card was shown to have a gender bias. Males were more likely to get a higher credit limit than females. This bias was highlighted when a male entrepreneur was assigned a spending limit 10 times higher than that of his wife, even though they share a joint account.

    How does a machine get a bias? A report from IBM Research outlines two main ways AI systems could inherit biases. First, the AI software might contain errors and dependencies. Second, the data set from which AI learns its task may have flaws and bias. These data points come from the real world which contains many biases, e.g., favoring white men to the exclusion of women and minorities. Algorithms are only as smart as the data you feed them. This is a modern update of the old computer data expression, “garbage-in, garbage-out.”
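    A first, crude check for this kind of bias is simply to compare outcomes across groups in the decisions an algorithm produces. The hypothetical sketch below computes the average credit limit granted to two groups; a large gap does not prove discrimination on its own, but it is the kind of disparity that should trigger a closer audit of the training data and features.

    ```cpp
    // Hypothetical fairness spot-check: compare average approved credit limits
    // across two groups. Figures are made up; a real audit would also control
    // for income, credit history, and other legitimate factors.
    #include <cstdio>
    #include <string>
    #include <vector>

    struct Decision {
        std::string group;   // e.g., "male" or "female"
        double creditLimit;  // approved limit in dollars
    };

    double averageLimit(const std::vector<Decision>& decisions, const std::string& group) {
        double total = 0.0;
        int count = 0;
        for (const auto& d : decisions) {
            if (d.group == group) { total += d.creditLimit; ++count; }
        }
        return count ? total / count : 0.0;
    }

    int main() {
        std::vector<Decision> decisions = {
            {"male", 20000}, {"male", 15000}, {"male", 25000},
            {"female", 2000}, {"female", 3500}, {"female", 2500},
        };
        double male = averageLimit(decisions, "male");
        double female = averageLimit(decisions, "female");
        std::printf("average limit: male $%.0f, female $%.0f, ratio %.1fx\n",
                    male, female, female > 0 ? male / female : 0.0);
        return 0;
    }
    ```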

  • #10 Software App Failures

    No list of tech failures would be complete without mention of the apps that didn’t make it. The range of the applications that failed is wide.

    Consider first the British Airways (BA) glitch, in which the airline’s computer system went down completely during a peak travel season. Over a hundred BA flights were cancelled and nearly 300 delayed. Thousands of passengers were affected. Sadly, this wasn’t the first time the system had failed, which suggests a systemic problem that has not been properly addressed by management.

    Or how about the Facebook 2019 failure that prevented users from viewing or loading images from the newsfeed? Several other social media apps had a similar problem, including Instagram, WhatsApp and Messenger. In each case, users were prevented from sending messages, media files and the like. Facebook claimed the problem was the result of an accident during routine maintenance.

    Other app failures or hacks from 2019 include Apple’s FaceTime bug and the Ring security camera intrusions. The latter may have been more of a customer problem, as Ring notes that the system invasion was likely the result of hackers gaining access to the family’s account through weak or stolen login credentials.

12-reasons-why-5g-and-wireless-dominated-the-airways-in-2019

The first wave of 5G-enabled devices dominated 2019, including the Galaxy S10, OnePlus 7, and Huawei P30, among others. Major infrastructure improvements continued to be rolled out.

But 5G wasn’t the only new wireless tech hitting the market. Other wireless activities, like Wi-Fi 6, also began to appear. Below are 12 of the editor’s top picks for 2019.

Image source: Qualcomm

Qualcomm has big plans for 5G in 2020

From photos to gaming and AI applications, the Snapdragon 765 and 865 are both focused on bringing 5G to consumers at all levels.

Qualcomm’s latest Snapdragon platforms are aimed squarely at bringing 5G devices to consumers next year.


Image source: WiFi Alliance

How Wi-Fi 6 and 5G will transform factory automation

A key technology trend for automation and control in 2020 and beyond is the emergence of wireless communications including 5G, Wi-Fi 6, LoRaWAN and more. An obvious benefit for factory automation is the use of wireless communication for remote monitoring and remote operation of physical assets, but an equally important benefit is the ability to replace cables, unreliable Wi-Fi, and the many industrial standards in use today.

Image source: Veo Robotics

FreeMove turns any industrial robot into a cobot

Veo Robotics’ FreeMove platform gives standard industrial robots the ability to function as collaborative robots that work alongside human workers. Don’t get rid of your old industrial robot yet. You may be able to upgrade it into a collaborative robot.

Image source: Beckstrom.com

What are Beckstrom’s Laws of Cyber Security?

Prioritizing security efforts – critical for the IoT to survive – may best be done by considering the value of transactions over the size of the network.

The Internet of Things (IoT) has many defining characteristics, such as tiny, cheap and low power sensors, embedded computers, and connectivity. But one characteristic will rule them all, namely, security. In the very near future, the IoT will probably not exist if it isn’t secure.

Image source: Design News

5G May Soon Explode with New Factory Functionality

Just as the iPhone ushered in a new world of consumer connectivity, the ultra-fast 5G networking may jump-start new industrial capabilities.

Speedy 5G networking is beginning to show up; in some places it is already a reality. While we know it will speed up consumer applications such as streaming or downloads – like a movie in seconds – what will it mean for industrial settings? At a session at the Design and Manufacturing Minneapolis show last week, Joshua Ness, senior manager of 5G Labs at Verizon, explained the coming impact of 5G during his session, Prepping for the 5G Factory.

Image source: NUS

‘Smart’ Textiles Boost Data Connectivity Between Sensors for Wearable Tech

A new textile network of sensors can provide the basis for intelligent wearable technology with unprecedented communication capability.

Image source: AiFi MWCLA, Design News


AiFi Replaces Cashiers and Checkouts with AI

Walk in, get what you need, and walk out. AiFi uses AI to completely automate and streamline the shopping experience.

Imagine walking into a grocery store, picking up all the items you need, and walking right out the door. No need for a cashier or even a self-checkout: the store will automatically charge everything you take to your credit card.

Image source: WMG at the University of Warwick

A European University is Testing 5G for Self-Driving Vehicles

The University of Warwick, in collaboration with NI, will be conducting tests of 5G-enabled services for autonomous and connected vehicles.


Image source: MOXA

Edge Devices Leverage MQTT for IIoT Connectivity

The MQTT transport protocol has emerged as a top choice for implementing IoT connectivity, offering solutions using local gateways or direct to the cloud.

A primary challenge for IIoT automation and control applications is the ability to efficiently and effectively collect the data that becomes the grist for enterprise-level IoT decision making and analytics. And while a wide variety of edge devices has been introduced, a major concern is how to collect data from these devices.
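As a sketch of how lightweight an MQTT publisher can be on an edge device, the snippet below uses the open-source Eclipse Mosquitto client library (callable from C++) to push one sensor reading to a broker. The broker address, topic, and payload are placeholders, and a production gateway would add reconnection logic, TLS, and batching.

```cpp
// Minimal MQTT publish example using the Eclipse Mosquitto client library
// (link with -lmosquitto). Broker address, topic, and reading are placeholders.
#include <mosquitto.h>
#include <cstdio>
#include <cstring>

int main() {
    mosquitto_lib_init();
    struct mosquitto* client = mosquitto_new("edge-sensor-01", true, nullptr);
    if (!client) { std::fprintf(stderr, "failed to create client\n"); return 1; }

    if (mosquitto_connect(client, "broker.example.com", 1883, 60) != MOSQ_ERR_SUCCESS) {
        std::fprintf(stderr, "could not reach broker\n");
        return 1;
    }

    const char* topic = "plant1/line3/temperature";
    const char* payload = "{\"celsius\": 71.4}";
    mosquitto_publish(client, nullptr, topic,
                      static_cast<int>(std::strlen(payload)), payload,
                      1 /* QoS 1: at least once */, false /* not retained */);

    mosquitto_loop(client, 1000, 1);  // let the library flush the message
    mosquitto_disconnect(client);
    mosquitto_destroy(client);
    mosquitto_lib_cleanup();
    return 0;
}
```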

Image source: Molex

The 4 Major Challenges of Wireless In-Vehicle Charging

Wireless charging is the best way for automakers to meet consumer demand for better charging performance in their vehicles. But there are significant challenges to getting this innovation to market.

Today’s consumer expects to be connected via their mobile device wherever they go – especially in their vehicles, which have become more than transportation; they are a mobile extension of the owner’s living space.

Image source: LoRa Alliance

Everything You Need to Know about LoRa and the IoT

A guide to LoRa / LoRaWAN, the communications technology emerging as the leader among Low-Power Wide-Area Networks (LPWANs) entering the IoT space.

We’re entering a world in which WiFi and Bluetooth may no longer be the best communication technologies for Internet of Things (IoT) applications. The IoT is gaining more ground each year. Experts project there will be 75 billion connected devices by 2025.


Image Source: ODVA

Ethernet at the Device-Level

IoT-driven technology standards are creating new possibilities for reducing the cost and complexity of integrating Industrial Ethernet into constrained networks and devices.

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

best-ai-stories-of-2019
(Image source: Adobe Stock)

We’ve picked our favorite AI-related stories from 2019.

The 10 greatest issues AI needs to face

While we celebrate the positive impacts of artificial intelligence let’s not forget there’s also a lot to be concerned about.

The Apple Card Is the Most High-Profile Case of AI Bias Yet

Apple Card users have alleged that its credit decision algorithm discriminates against women.

How AI at the Edge Is Defining Next-Generation Hardware Platforms

Moving AI from the cloud to the edge was a big trend in 2019. Chris Cheng, distinguished technologist on the hardware machine learning team at Hewlett Packard, takes a look at some of the latest research being done on AI inference at the edge.

(Image source: OpenAI)

OpenAI’s Robot Hand Taught Itself How to Solve a Rubik’s Cube

Rubik’s Cube Solving Robot Hand Sparks Debate in the AI Community

Using novel neural networks, OpenAI enabled a robotic hand to learn how to solve a Rubik’s Cube on its own. Concerns regarding OpenAI’s robot hand that can solve a Rubik’s Cube have created a debate among engineers and AI experts on social media.

What’s the State of Emotional AI?

Artificial intelligence that can recognize human emotions – emotional AI – has been gaining momentum. But something’s missing. How long until we’ll be seeing it in our devices and cars?

(Image source: TuSimple)

UPS Has Invested in Autonomous Trucks After Ongoing Tests

TuSimple’s Autonomous Trucks Are Being Tested by the USPS

In 2019, TuSimple entered into partnerships with UPS and the US Postal Service to test self-driving trucks for hauling mail freight.

The New Raspberry Pi 4 Is All About AI and Embedded IoT

The Raspberry Pi has grown from a hobbyist machine to an IoT developer platform capable of even handling machine learning applications. Here’s our hands-on look.

A Look at the US/China Battle for AI Leadership

The US and China are waging a behind-the-scenes war over who will emerge as the global powerhouse of artificial intelligence. Where do each country’s strengths and weaknesses lie?

There’s a Diversity Crisis in the AI Industry

A lack of racial and gender diversity at the companies creating AI ties closely with issues of bias and racial discrimination in artificial intelligence algorithms, according to a new NYU study.

(Image source: Pixabay)

Can Trump’s New Initiative Make American AI Great Again?

A look at President Trump’s executive order aimed at accelerating America’s lead in artificial intelligence.

AI Could Make Quantum Computers a Reality

New research is examining the use of artificial intelligence to handle the calculations necessary for quantum computers to function.

10-semi-electronic-device-tech-reveals-from-ieee-iedm-2019

2019 IEEE IEDM event reveals latest node chips, chiplets, memories for AI, densest thin-film batteries, 400Gbits/s silicon photonics, quantum computing tools and much more.

  • The theme for this year’s 65th IEEE International Electron Devices Meeting (IEDM) was “Innovative Devices for an Era of Connected Intelligence.” As in previous years, major semiconductor players and international research organizations (e.g., imec, CEA-Leti, UC universities and others) presented the latest detailed technology for processors, memories, interfaces and power devices. Additionally, the event included quantum computing advances, medical uses and other newer areas of application.

    Here are 10 of the major semiconductor “reveals” at the show for 2019.

  • Leading Edge 5nm Chip with Super Dense Memory

    Moore’s Law may be hitting the wall but it’s not dead yet. TSMC unveiled a complete 5nm technology platform that advanced silicon chip scaling (miniaturization) to the next process node. Reaching the 5nm node milestone was due in part to advances in lithography and improvements in process and packaging techniques.

    TSMC researchers described a 5nm CMOS process optimized for both mobile and high-performance computing. It offered nearly twice the logic density and a 15% speed gain or 30% power reduction over the company’s 7nm process. The process optimization incorporated extensive use of EUV lithography to replace immersion lithography at key points in the manufacturing process.

    TSMC’s 5nm platform also featured FinFETs and high-density SRAM cells. The SRAM could be optimized for low-power or high-performance applications, and the researchers say the high-density version was the highest-density SRAM ever reported. The researchers say high-volume production was targeted for 1H20.

  • Quantum computing 

    Great strides have been made in quantum computing. At the Semicon West/Electronic System Design (ESD) 2019 conference, IBM displayed its IBM Q Experience, a cloud-based quantum computer available for free to anyone with a web browser and an internet connection.

    Creating a quantum computer has been an amazing technological achievement, but like any computer it needs software. Imec – the international Flemish R&D nanoelectronics organization – presented the first step toward developing a systematic approach to the design of quantum computing devices.

    EDA chip design software such as TCAD is necessary to produce highly accurate models of semiconductor devices and their operation. To date, no analogous tools exist to model qubits, the basis of quantum computing, because the field is so new and complex. If these design tools did exist, the development of quantum computers could take place much more quickly.

    The Imec team has taken a step to create such a software framework, using multiphysics simulation methods to develop a comprehensive design methodology for qubits built in silicon. They modeled device electrostatics, stress, micro-magnetics, band structure and spin dynamics. Based on the results of these studies, they say that single-electron qubits in quantum dots can be induced and optimized in silicon MOSFETs with thin (<20nm) gate oxides. The researchers also discussed critical aspects of their methodology, the parameters they modeled, and next steps.

  • 3D Chiplets

    Intel presented a novel 3D heterogeneous integration process for chiplet creation. It is seen as an evolution of Moore’s Law, a way to keep the scaling, size and cost benefits continuing into the foreseeable future.

    Chiplets are a type of advanced packaging which offers a different way to integrate multiple dies into a package or system. There are a number of ways to make chiplets, but all use a library of modular chips – like Lego building blocks. These modular chips are assembled in a package that connects them using a die-to-die interconnect scheme.

    There are many other approaches to combining chip dies, i.e., 2.5D dies that are stacked on top of an interposer. But the hope with a chiplet approach is that it’s a faster and less expensive way to assemble various types of third-party chips like processors, memory, interfaces and the like.

    Here are the details: Intel believes that heterogeneous 3D integration will drive scaling. CMOS technology requires both NMOS and PMOS devices. Intel researchers used 3D sequential stacking architecture to combine these different devices. They first built Si FinFET NMOS transistors on a silicon wafer. On a separate Si wafer they fabricated a single-crystalline Ge film for use as a buffer layer. They flipped the second wafer, bonded it to the first, annealed them both to produce a void-free interface, cleaved the second wafer away except for the Ge layer, and then built gate-all-around (GAA) Ge-channel PMOS devices on top of it. The researchers say these results show that heterogeneous 3D integration is promising for CMOS logic in highly scaled technology nodes.

    The accompanying images show a schematic and a cross-section of a fully processed 3D CMOS transistor structure achieved by this process; in the middle is a thickness contour map of the Ge transfer layer, showing good uniformity; and at right is a 3D cross-sectional view of the completed 3D CMOS chip showing Ge-channel GAA transistors on top of Si FinFET NMOS transistors.

  • AI That Doesn’t Forget

    Embedded STT-MRAM and other non-volatile memories (NVMs) are getting a lot of attention lately. NVM devices retain their memory even after the power is removed. Embedded STT-MRAM is one NVM that shows particular promise in the embedded memory space for cache memory in IoT and AI applications.

    At IEDM 2019, TSMC described a versatile 22nm STT-MRAM technology for AI while Intel talked about STT-MRAMs for use in L4 cache applications.

    In STT-RAM writing, an electric current is polarized by aligning the spin direction of the electrons flowing through a magnetic tunnel junction (MTJ) element. Data writing is performed by using the spin-polarized current to change the magnetic orientation of the information storage layer in the MTJ element. Intel improved the process and stack for L4 cache applications. STT-MRAM technology for L4 cache requires tighter bitcell pitches, which translate into smaller MTJ sizes and reduced available write current.

  • Organ Forceps With a Special Touch

    Our internal organs are slippery because they’re covered with blood and other body fluids, so grasping and pulling them with forceps can be challenging. Although contact-force sensors have been placed on the tips of forceps used in diagnostic laparoscopic and robotic surgeries, there currently is no way to know if they are slipping, other than visually via a monitor, which has limited usefulness. A Kagawa University team described a highly sensitive slip-sensing imager (sub-mm resolution) and novel algorithm that can, in effect, give forceps a sense of touch. The idea is to use the device to visualize the spatial distribution of the grasping force across the organ’s surface. The center of that distributed load is calculated, and as the forceps are moved the algorithm relates any corresponding movements of the load center to slippage. Built on an SOI wafer, the device’s force-sensor pixels consist of a 20µm–thick piezoelectric silicon diaphragm (400µm diameter) with a center contact, and with a force detection circuit integrated on the diaphragm. The diaphragm acts as a strain gauge as it flexes due to varying grasping force.
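    The core of the algorithm, computing the center of the grasping-force distribution and watching it move, can be shown with a tiny sketch. The following is a simplified illustration using a coarse force grid and an arbitrary threshold, not the Kagawa team’s actual implementation.

    ```cpp
    // Simplified slip-detection sketch: compute the force-weighted center of a
    // grasp from a small pressure grid, then flag slip when that center moves
    // more than a threshold between frames. Grid size and threshold are illustrative.
    #include <cmath>
    #include <cstdio>

    constexpr int N = 4;

    // Force-weighted centroid (center of the distributed load) of one frame.
    void loadCenter(const double grid[N][N], double& cx, double& cy) {
        double total = 0.0, sx = 0.0, sy = 0.0;
        for (int y = 0; y < N; ++y)
            for (int x = 0; x < N; ++x) {
                total += grid[y][x];
                sx += x * grid[y][x];
                sy += y * grid[y][x];
            }
        cx = total > 0 ? sx / total : 0.0;
        cy = total > 0 ? sy / total : 0.0;
    }

    int main() {
        // Two successive frames from the imager: the load center shifts to the right.
        const double frame1[N][N] = {{0, 0, 0, 0}, {0, 5, 2, 0}, {0, 4, 2, 0}, {0, 0, 0, 0}};
        const double frame2[N][N] = {{0, 0, 0, 0}, {0, 1, 5, 2}, {0, 1, 4, 2}, {0, 0, 0, 0}};

        double x1, y1, x2, y2;
        loadCenter(frame1, x1, y1);
        loadCenter(frame2, x2, y2);

        const double slipThreshold = 0.5;  // centroid movement (in cell widths) per frame
        double shift = std::hypot(x2 - x1, y2 - y1);
        std::printf("load center moved %.2f cells -> %s\n",
                    shift, shift > slipThreshold ? "possible slip" : "stable grasp");
        return 0;
    }
    ```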

  • Impedance Sensor for Fingerprint Imaging

    Researchers led by Cornell discussed the monolithic integration of a piezoelectric aluminum nitride (AlN) resonator into a CMOS-controlled, GHz ultrasonic impedance sensor/imager. The device measures changes in surface properties such as surface oxidation, materials, liquid viscosity and others, and is meant for use in wearable, IoT and smartphone systems to detect fingerprints with high resolution, determine tissue states, and for other applications. This is the first time monolithic fabrication – all in one chip or die –  has been successfully demonstrated, and it led to small, power-efficient GHz sensing arrays with improved performance vs. the standard two-chip heterogeneous integration approach, thanks to less parasitic coupling and a higher signal-to-noise ratio.

  • Thin-Film Battery Goes High-Density

    The miniaturization of power sources hasn’t kept pace with the miniaturization of electronics. Although integrated electrochemical capacitors offer high power density, high frequency response and novel form factors, their low energy densities are of limited value for MEMS and autonomous device applications that require long periods between charging. CEA-Leti researchers discussed a thin-film battery (TFB) with the highest areal energy density yet reported (890 µAh/cm²) and high power density (450 µAh/cm²). Built on silicon wafers using UV photolithography and etching for the successive deposition and patterning of each layer, the thin-film battery integrates a 20µm-thick LiCoO2 cathode in a Li-free anode configuration. It showed good cycling behavior over 100 cycles, and the fact it was built using a wafer-level process opens up the possibility to tightly integrate this battery technology with future electronic devices.

  • Physically Unclonable Function (PUF) for Mobile and Smart Devices

    The spread of networked mobile devices and smart gadgets in the IoT landscape has created an urgent need to protect them with lightweight and low-power cryptographic solutions. A physically unclonable function (PUF) is a hardware-intrinsic security primitive, or basic programming element. UC Santa Barbara researchers discussed an ultra-low-power PUF that operates on the varying electrical resistances and current leakages that arise from intrinsic process variations in ReRAM crossbar arrays. The team built 4K-ReRAM passive crossbar circuit arrays fabricated with a CMOS-compatible process suitable for back-end-of-the-line (BEOL) integration. The arrays allow for an extremely large number of challenge-response pairs (a common cryptographic protocol), as well as 4x better density vs. other ReRAM architectures plus a ~100x improvement in power efficiency and more robust security metrics.
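    The challenge-response idea behind a PUF can be illustrated with a toy model (not the UCSB ReRAM circuit): each device has a pattern of cell resistances fixed by random process variation, a challenge selects cells to compare, and the response depends on which cell in each pair has the larger resistance; the same challenge therefore yields a repeatable answer on one chip and a different answer on another.

    ```cpp
    // Toy PUF model: process variation is mimicked by a per-device random seed
    // that fixes an array of "cell resistances." A challenge picks pairs of cells;
    // each response bit says which cell of the pair has the larger resistance.
    // This is a conceptual sketch, not the UCSB ReRAM crossbar circuit.
    #include <cstdint>
    #include <cstdio>
    #include <random>
    #include <utility>
    #include <vector>

    class ToyPuf {
    public:
        explicit ToyPuf(uint32_t deviceSeed) {
            std::mt19937 gen(deviceSeed);  // stands in for process variation
            std::uniform_real_distribution<double> res(1000.0, 2000.0);  // ohms
            for (auto& r : cells_) r = res(gen);
        }
        // Challenge: a list of cell-index pairs. Response: one bit per pair.
        uint32_t respond(const std::vector<std::pair<int, int>>& challenge) const {
            uint32_t response = 0;
            for (size_t i = 0; i < challenge.size(); ++i) {
                bool bit = cells_[challenge[i].first] > cells_[challenge[i].second];
                response |= (bit ? 1u : 0u) << i;
            }
            return response;
        }
    private:
        double cells_[64] = {};
    };

    int main() {
        ToyPuf chipA(0xA11CE), chipB(0xB0B);  // two different physical devices
        std::vector<std::pair<int, int>> challenge = {{3, 40}, {7, 22}, {11, 50}, {5, 63}};
        std::printf("chip A response: %u\n", chipA.respond(challenge));  // repeatable on chip A
        std::printf("chip B response: %u\n", chipB.respond(challenge));  // different chip, different answer
        return 0;
    }
    ```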

  • Silicon photonics

    Data races around within data centers at very high speed via optical fiber, using silicon photonic (light-based) interfaces that operate at 100 Gb/s. But cloud data center traffic is growing at nearly 30% per year, and there soon will be a need to increase data rates. A STMicroelectronics-led team described a new silicon photonics technology platform built on 300mm Silicon-on-Insulator (SOI) wafers, yielding devices that operate at 400 Gbits/s (each device has 4 channels, each of which operates at 100 Gbits/s, for a total of 400 Gbits/s).

    Optical coupling and polarization management are key requirements, and their devices incorporate a 60 GHz high-speed photodiode and a high-speed phase modulator. They also built devices with a supplementary SiN waveguide layer for higher coupling efficiency, to meet evolving data-transmission requirements. The researchers say the photonics platform has the potential to meet the requirements of applications other than data centers, too, such as automotive.

    The accompanying image is a photo of the chip-on-board assembly of an analog front-end (AFE) function implemented in a 400G-DR4 optical transceiver using the technology, along with PAM4 signal eye diagrams at 106 Gbits/s per channel, used to measure high-speed signal quality.

  • 5G and beyond

    One of the challenges for chip makers is how to integrate III-V materials with silicon to make ultra-fast devices for 5G and other uses that are compatible with conventional CMOS technology. In addition to silicon, III-V compound semiconductors are obtained by combining group III elements (essentially Al, Ga, In) with group V elements (essentially N, P, As, Sb). This gives us 12 possible combinations; the most important ones are probably GaAs, InP, GaP and GaN.

    IoT and 5G applications typically use sensors that transmit wireless data to an edge or cloud network. This requires a combination of RF capabilities with a small form factor and low operating power. A promising approach to achieve this combination is to create single chips that combine the capabilities of silicon CMOS with those of III-V devices, such as gallium nitride (GaN) and indium gallium arsenide (InGaAs). The unique properties of III-V compounds make them well suited for optoelectronics (LEDs) and communications (5G).

    At IEDM, Intel described how low-leakage, high-k dielectric enhancement-mode GaN NMOS and Si PMOS transistors were built monolithically on a 300mm Si substrate. The goal was to combine GaN’s high-frequency/-temperature/-power attributes with silicon CMOS circuitry’s digital signal processing, logic, memory and analog capabilities, to create compact devices for next-generation solutions for power delivery, RF and system-on-chip (SoC) applications. The researchers say both device types demonstrated excellent performance across a range of electrical specifications.

    III-V materials offer higher electron mobilities than silicon, and HBTs made from them are very fast transistors often used for RF and other high-frequency applications. A key goal is to build them on 300mm silicon wafers instead of other substrates, to take advantage of silicon’s lower manufacturing costs. A team led by imec described how they used a unique nano-ridge engineering technique to build GaAs/InGaP HBTs on a 300mm silicon substrate.


John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier

the-10-greatest-issues-ai-needs-to-face
  • There are a lot of reasons to be excited about artificial intelligence. AI is transforming industries in innovative ways and even enabling entirely new business models to emerge.

    But there are also a lot of reasons to be cautious about AI. The 2019 AI Now Report, created by the AI Now Institute, takes a look back on the social impact of AI in 2019, and some of the most important issues surrounding the technology as it moves forward. The AI Now Institute is a non-profit, interdisciplinary research institute “dedicated to understanding the social implications of AI technologies.”

    “This year we saw a wave of pushback, as community groups, researchers, policymakers, and workers demanded a halt to risky and dangerous AI,” the report says.

    As AI moves into the next decade, we’ve outlined some of the most important issues the technology will have to grapple with in the coming years.

  • 1.) Algorithmic bias is already affecting us

    As more and more AI algorithms are implemented into decision-making processes in everything from real estate to healthcare, it is important for developers to be aware of the inherent biases within the datasets they use to train AI.

    Apple’s Apple Card service recently came under fire from customers – including Apple co-founder Steve Wozniak – over allegations that the service’s approval system was assigning lower credit limits to female customers.

    Experts agree it will likely be impossible to completely safeguard systems against bias, but steps can be taken to mitigate its impact.

    (Image source: Apple)

  • 2.) Facial recognition is watching us

    Facial recognition is already here and being widely deployed throughout the world. In China facial recognition technology has become a part of surveillance and security systems and even allowed customers to use their face to access ATMs.

    While there is an argument for convenience and security, there are also widespread privacy and ethics concerns around using AI facial recognition. The city of Detroit is facing pushback over plans to add facial recognition to its Project Green Light – a camera system that allows police departments to monitor businesses and intersections in real time.

    In 2019, the cities of Oakland, Calif., Somerville, Mass., and San Francisco passed ordinances banning municipal use of facial recognition technology.

    By contrast however, the Department of Homeland Security (DHS) announced that it has plans to issue a proposed regulation that could require all travelers, including US citizens, to submit to face and other biometric scans at airports and other ports of entry.

    Regarding the DHS announcement, ACLU Senior Policy Analyst Jay Stanley had this to say:

    “Time and again, the government told the public and members of Congress that US citizens would not be required to submit to this intrusive surveillance technology as a condition of traveling. This new notice suggests that the government is reneging on what was already an insufficient promise.”

    (Image source:  teguhjati pras from Pixabay )

  • 3.) Deepfakes are a reality

    If you want to see the power of deepfakes you only need to browse around YouTube, to channels like Ctrl Shift Face.

    This isn’t a special effect. With enough data (including images and audio), AI algorithms can actually reconstruct and superimpose individuals’ faces onto existing video footage. It makes for some entertaining viral videos, but there are wider, more frightening implications for deepfakes, as they can be used to create fraudulent videos of political figures, celebrities, and even private citizens. Left unchecked, deepfakes could become a powerful tool for the spread of misinformation.

    (Image source: Ctrl Shift Face)

  • 4.) Algorithms are ruining our social media experience

    Have you ever watched one video on YouTube or liked a post on Facebook or other social media only to be sent down a rabbit hole of increasingly questionable recommendations? That’s not an accident – that’s AI trying to predict what you’ll “like.” And by “like” we mean it’s trying to figure out what content you’re most likely to engage with – and that often means offending or shocking you. Algorithmic issues are being blamed for both a rise in the quantity of extremist content on social media as well as its proliferation. Google, Facebook, and others have pledged to search for ways to tamp down on the spread of dangerous and extremist content as well as misinformation. But many would argue the damage has already been done.

    (Image source: Pixelkult from Pixabay  )

  • 5.) AI is a powerful tool for hacking

    Automation is meant to handle the dirty, dangerous, and repetitive tasks humans can’t or don’t want to perform, right? Well, the benefits go both ways. More and more malicious hackers are leveraging AI technology to assist with sophisticated cybersecurity attacks. A well-trained algorithm can attack a target with a level of speed and efficiency that would be difficult for a single hacker, or even a larger group of hackers, to match. Fortunately, cybersecurity companies like XM Cyber are fighting fire with fire, using machine learning algorithms to safeguard networks and sensitive systems as well.

    (Image source: XM Cyber)

  • 6.) AI developers lack diversity

    Issues with AI can be correlated to a lack of racial and gender diversity among the engineers and developers being hired at the top technology companies working on AI. The AI Now Institute has found that Black and Latinx workers are substantially underrepresented in the tech workforce, and women are particularly underrepresented as AI researchers.

    “Rather than recognizing the scale and systemic nature of the problem, tech companies have responded to mounting evidence of bias and misuse by primarily focusing on narrow diversity solutions,” the AI Now Institute said. “They have also attempted technical debiasing, working to ‘fix’ algorithms and diversify data sets, even though these approaches have proven insufficient and raise serious privacy and consent concerns. Notably, neither approach addresses underlying structural inequalities.”

    (Image source: PixLoger from Pixabay)

  • 7.) AI isn’t green

    As engineers come to terms with the realities of climate change and the need to develop greener technologies, AI is having its own energy crisis. The massive amount of compute power required for AI also comes with a massive energy bill.

    “As a whole, the industry’s energy dependence is on an exponential trajectory, with best estimates showing that its 2020 global footprint amounts to 3–3.6 percent of global greenhouse emissions, more than double what the sector produced in 2007,” the AI Now Institute said. “This is comparable to that of the aviation industry, and larger than that of Japan, which is the fifth biggest polluter in the world.”

    Tech companies are already implementing renewable energy sources and other means to make data centers more efficient. But the emergence of 5G and other advanced networking technologies only threatens to make the problem worse before it gets better. “In the worst-case scenario, this footprint could increase to 14 percent of global emissions by 2040,” the Institute warned.

    (Image source: Free-Photos from Pixabay )

  • 8.) AI helps privatize public infrastructure

    “Troubling partnerships between government and private tech companies also emerged as a trend this year, especially those that extended surveillance from public environments into private spaces like private properties and the home,” the AI Now Institute said.

    In 2019 the city of Detroit established the “Neighborhood Real-Time Intelligence Program,” a $9 million, state- and federally-funded initiative that would expand the city’s Project Green Light surveillance system to 500 intersections, in addition to the 500 businesses where it is already deployed, as well as add facial recognition technology to the system. The city has reported reduced crime in areas thanks to Project Green Light, but that hasn’t stopped privacy advocates from protesting the technology.

    In 2018, Amazon came under fire for offering to let police departments utilize its facial recognition software. The company has also negotiated with over 700 police departments in the US to give police access to videos from Ring smart home cameras if the footage can help with a criminal investigation, according to the AI Now Institute.

    (Image source: Pixabay)

  • 9.) Automation impacts people of color and the poor the most

    The debate about automation and labor likely won’t ever stop. But the narrative is taking new shape as more data emerges about specific groups affected by rapid automation due to AI.

    Depending on who you ask, automation will be a boon to the economy as well as personal productivity, or it will usher in a dystopian nightmare where humans struggle for basic needs while robots handle all of the jobs.

    “Both narratives are predicated on the assumption that automation in the workplace is inevitable and that automated systems are capable of performing tasks that had previously been the work of humans. What is missing from both conflicting narratives is the more nuanced prediction of who will be harmed and who will benefit from labor automation in the years to come,” the AI Now Institute said.

    The 2019 AI Now Report predicts that Black, Latinx, and low-wage workers in the US will be disproportionately impacted by increased levels of automation.

    (Image source: mohamed_hassan from Pixabay)

  • 10.) AI is removing the ‘human’ from human resources

    More and more companies are using AI to manage and oversee workers. AI is even being implemented into the hiring process. Amazon, for example, uses an AI system to set shifting performance goals for its warehouse workers. Workers are assigned a daily “rate” of productivity to hit each day, based on their prior performance and the overall goals of the warehouse.

    “If a worker falls behind, they are subject to disciplinary action. In many warehouses, termination is an automated process (not unlike being “kicked off” a gig-economy platform),” the AI Now Institute said. “According to Abdi Muse, an organizer with Amazon warehouse workers in Minneapolis, if workers fall behind the algorithmically set productivity rate three times in one day, they are fired, however long they may have worked for the company, and irrespective of the personal circumstances that led to their ‘mistakes.’ ”

    “The introduction of AI-enabled labor-management systems raises significant questions about worker rights and safety. The use of these systems—from Amazon warehouses to Uber and InstaCart—pools power and control in the hands of employers and harms mainly low-wage workers (who are disproportionately people of color) by setting productivity targets linked to chronic injuries, psychological stress, and even death and by imposing unpredictable algorithmic wage cuts that undermine economic stability.”

    (Image source: iosphere / Freedigitalphotos.net)

keynotes-worth-seeing-at-designcon-2020

What do these topics have in common?

  1. The Future of Fiber Optic Communications: Datacenter and Mobile
  2. Design for Security: The Next Frontier of Smart Silicon
  3. Microchips in Space: How Device Design Enables Amazing Astronomy

The answer is that all use microchips and microsystems but in very different ways and for differing motivations.

In the first one, complex system-on-chips (SoC) are integrated with fiber optics to enable dizzyingly fast high-speed connections between processors, memory storage, and interfaces in data rooms and mobile devices across the world.

With so much going on in the world of fiber optic communications, it’s important for designers to keep up to date with the basic engineering issues. The catalyst for this interest is that the global fiber optics market is predicted to grow from 5 billion USD in 2018 to 9 billion USD by the end of 2025.

In his upcoming keynote at Designcon 2020, Chris Cole, VP of Advanced Development at II-VI, will discuss past trends and new developments in fiber optics for datacenter and mobile applications. Two ongoing trends are the replacement of copper wires by fiber optics in the data room as well as the replacement of direct detection by coherent detection in optical systems.

Cole will also explain the major limitations of power and density in communications, and new technologies like Silicon Photonics (SiPh) and co-packaging. Silicon photonics involves the study of optical properties of the group-IV semiconductor and how it can be used to generate, manipulate and detect light. Silicon is prevalent in photodetectors and solar cells, among other technologies.

To learn more, visit: The Future of Fiber Optic Communications: Datacenter

Image Source: Imec
the-9-most-disruptive-tech-trends-of-2019

What were the breakthrough technologies for 2019? The answer depends on who you ask. Several common themes have emerged, such as cobots, emerging energy sources, AI, and cybersecurity breaches. Let’s consider each in more detail.

1.) Robotics – collaborative robots (or cobots)

(Image source: OpenAI and Dactyl)

Remember Dum-E (short for dummy) from the first Iron Man movie? Dum-E was a cobot that helped Tony Stark create his flying robotic suit. It was a scaled-down, more human, interactive version of the traditional industrial-grade manufacturing-line arm robots.

Cobots are designed to work collaboratively alongside humans with a gentle touch, i.e., to not smash fingers or step on the toes of their work buddies. Doing so requires that cobots be much more aware of their location in relation to humans, via sensing and perception technologies. To achieve this goal, one company, Veo Robotics, uses a variety of 3D sensors placed around the robot’s workcell to aid in location awareness. The company’s sensors add an extra measure of safety by automatically slowing down the movement of the industrial cobots whenever a human co-worker comes close.
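Speed-and-separation monitoring of this kind boils down to scaling the robot’s motion by the distance to the nearest detected person. The sketch below is a generic illustration of that idea, not Veo Robotics’ FreeMove logic, and the distance thresholds are arbitrary rather than taken from any safety standard.

```cpp
// Generic speed-and-separation sketch: scale the cobot's speed by the distance
// to the closest person reported by the workcell's 3D sensors. Thresholds are
// illustrative, not taken from any vendor's safety specification.
#include <cstdio>

// Returns a speed factor in [0, 1] for a given distance to the nearest human (meters).
double speedFactor(double distanceMeters) {
    const double stopDistance = 0.5;       // inside this range: full stop
    const double fullSpeedDistance = 2.0;  // beyond this range: full speed
    if (distanceMeters <= stopDistance) return 0.0;
    if (distanceMeters >= fullSpeedDistance) return 1.0;
    // Ramp linearly between the two thresholds.
    return (distanceMeters - stopDistance) / (fullSpeedDistance - stopDistance);
}

int main() {
    const double distances[] = {0.3, 0.8, 1.5, 2.5};
    for (double d : distances) {
        std::printf("human at %.1f m -> run at %.0f%% speed\n", d, speedFactor(d) * 100.0);
    }
    return 0;
}
```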

To help supplement actual human activity, cobots are becoming more dexterous and moving beyond merely picking components on an assembly line. Robots need greater dexterity to pick up objects that have moved even slightly beyond their programmed parameters. Cobots cannot yet grasp any object just by looking at it, but they can now learn to manipulate an object on their own. 

OpenAI, a nonprofit company, recently introduced Dactyl, a dexterous robotic arm that taught itself to flip a toy building block in its fingers. Dactyl uses neural network software to learn how to grasp and turn the block within a simulated environment before the hand tries it out for real. According to the company, they’ve been able to train neural networks to solve the Rubik’s Cube Problem using reinforcement learning and Kociemba’s algorithm for picking the solution steps.

growth-returns-to-semiconductor-and-eda-tools-m&a-markets-–-for-now

2019 was a great time for merger and acquisition business in the semiconductor and electronic design automation tools industries. But what will the future hold?

There was an uptick in M&A activity in the semiconductor space in the first eight months of 2019. This was a welcome change after the market slowdown of 2017 and 2018. The combined value of 20-plus M&A agreement announcements reached over $28 billion for the purchase of chip companies, business units, product lines, intellectual property (IP), and wafer fabs between January and the end of August – according to the Fall 2019 edition of IC Insights’ McClean Report. This amount does not include transactions between semiconductor capital equipment suppliers, material producers, chip packaging and testing companies, and electronic design automation (EDA) software firms.

The activity in the first eight months of 2019 surpassed the $25.9 billion total for all of 2018.

Image Source: IC Insights