This week at CES, IBM announced that its newest quantum computer, Raleigh, doubled its Quantum Volume (QV). This is important because QV is a measure of the increasing capability of quantum computers to solve complex, real-world problems. But how does an increase in QV relate to existing measures such as semiconductor performance as dictated by Moore’s Law? Before answering that question, it’s necessary to understand what is really meant by Quantum Volume.
QV is a hardware-agnostic metric that IBM defined to measure the performance of quantum computers. It serves as a benchmark for the progress quantum computers are making toward solving real-world problems.
QV takes into account a number of factors affecting quantum computations, including the number of qubits, connectivity, and gate and measurement errors. Material improvements to the underlying physical hardware, such as increases in coherence times, reduction of device crosstalk, and software circuit compiler efficiency, can point to measurable progress in Quantum Volume, as long as all improvements happen at a similar pace, according to the IBM website.
Raleigh reached a Quantum Volume of 32 this year, up from 16 last year. This improvement stems from a new hexagonal lattice connectivity structure with better coherence. According to IBM, the lattice connectivity reduced gate errors and exposure to crosstalk.
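For context, IBM defines Quantum Volume as 2^n, where n is the width and depth of the largest “square” benchmark circuit a machine can execute successfully. Assuming that published definition (the article itself doesn’t spell it out), doubling QV from 16 to 32 means Raleigh successfully ran a benchmark circuit one qubit wider and one layer deeper, as the small sketch below illustrates.

```python
import math

# Quantum Volume is defined by IBM as QV = 2**n, where n is the width and
# depth of the largest "square" model circuit run successfully.
for qv in (16, 32):
    n = int(math.log2(qv))
    print(f"QV {qv} -> largest square circuit: {n} qubits x {n} layers")

# Output:
# QV 16 -> largest square circuit: 4 qubits x 4 layers
# QV 32 -> largest square circuit: 5 qubits x 5 layers
```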
Over the last year, a number of quantum computing achievements have been reached, notes IBM. Among the highlights was the offering of quantum computing services by a number of traditional cloud providers. Naturally, IBM was on that list. Another notable was Amazon, which in December 2019 first offered select enterprise customers the ability to experiment with quantum-computing services over the cloud.
The Amazon platform will let clients explore different ways to benefit from quantum computers by developing and testing quantum algorithms in simulations. For example, quantum computers could be used for simulating climate change, solving optimization problems, and tackling cybersecurity and quantum chemistry challenges, among others. Clients will also have access to early-stage quantum-computing hardware from providers including D-Wave Systems Inc., IonQ Inc. and Rigetti Computing.
Now let’s see how the Quantum Volume measurement relates to transistor performance as delineated by Moore’s Law.
Not all cool tech involved robots and autonomous cars. Here’s a list of the other electronic tech featured at the show.
This year’s Consumer Electronics Show (CES) 2020 featured a range of marvels enabled by electronic technologies, covering application areas from smart cities, AI edge intelligence, body haptics, security systems, real-time accident reports, and uncooled thermal cameras to wearables and more.
Here are the top 10 products and technologies that piqued the interest of the Design News editorial staff.
Smart Cities
Why do major Japanese car manufacturers like to build smart homes and now cities? Several years ago, Honda built a zero-net-energy smart home in partnership with UC-Davis. At this year’s CES, Toyota announced it will build a smart city to test its AI, robots, and self-driving cars. Toyota’s Woven City will be built at the foothills of Mt. Fuji in Japan. The city will be the world’s first urban incubator dedicated to the advancement of all aspects of mobility, claims Toyota.
The project is a collaboration between the Japanese carmaker and the Danish architecture firm Bjarke Ingels Group (BIG). Houses in Woven City will have in-home robotics to help with the more mundane tasks of daily life. The homes will have full-connectivity, which will be needed for the sensor-based AI to automate many household chores, like restocking the refrigerator and taking out the trash. Power storage units and water purification systems will be hidden beneath the ground.
Intelligence At The Edge
Blaize is a computing company that optimizes AI at scale wherever data is collected and processed, beginning at the edge. The company enables a range of existing and new AI use cases in the automotive, smart vision, and enterprise computing segments. The company claims that developers can create new classes of products to bring the benefits of AI and machine learning to broad markets.
The company has developed a fully programmable GSP architecture that utilizes task-level parallelism and streaming execution processing to deliver very low energy consumption, high performance, and scalability. Blaize claims that, in comparison, existing GPUs and FPGAs exact a much higher energy price, while CPUs cost more and scale poorly, and all are subject to excessive latency due to their sequential execution processing architectures.
Full-Body Haptics Suit
Haptics are all about the sense of touch. Now you can immerse your entire body – or at least 70 tactile points mainly around your torso – into the world of artificial experiences. The bHaptics TactSuit provides an audio-to-haptic feature that converts sound into haptic feedback felt in real time around your torso. For example, when a bomb explodes or you hear footsteps during a PC/VR game, you’ll feel the experience from the right direction. You’ll even be able to feel Samurai cuts and friendly hugs.
Security Comes In Many Forms
There are many ways to protect your PC data and applications, from hardware-encrypted portable storage devices, backup solutions, file repair software, and data recovery to digital forensics services. SecureData provides both products and services in these areas. At CES, the company demonstrated a secure USB drive that it claimed is the only hardware-encrypted flash drive in the world with keypad and Bluetooth authentication.
Wireless Six-Degrees Of Freedom (6DOF)
Atraxa’s system tracks 6DOF motion without the need for optical cameras or infrared markers to be placed around the room or mounted externally to the XR headset or controller. No line of sight or wires are required between the headset and controllers. Unhindered by wires or line-of-sight constraints, users can move freely in large spaces and even move from room to room; no room mapping or controller orienting (or reorienting) is required. Tracking starts immediately and lasts without interruption.
The tech combines electromagnetic (EM) and inertial technologies into a single sensor-fusion tracking platform. The IMU (inertial measurement unit) returns acceleration and angular velocity data. The EM tracker delivers true position and orientation data; it also establishes the tracking volume and local coordinate system. Atraxa comprises two main components: a tracker module and a receiver module. The tracker module houses the IMU and an EM transmitter coil that generates the magnetic field (i.e., the tracking volume). The tracker modules are embedded into the handheld controllers (or other peripherals).
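To make the division of labor between the two sensors concrete, here is a minimal, generic 1-D fusion sketch: the high-rate IMU data is integrated for responsiveness while slower position fixes correct the accumulated drift. This is a simple complementary filter for illustration only, not Atraxa’s actual (unpublished) algorithm, and all the numbers are made up.

```python
# Generic 1-D illustration of fusing IMU dead reckoning with slower position
# fixes (a simple complementary filter; NOT the vendor's actual algorithm).

def fuse(imu_accel, em_fixes, dt=0.01, blend=0.05):
    """imu_accel: acceleration samples (m/s^2), one per time step.
    em_fixes: EM position fixes (m), or None on steps with no fix."""
    pos, vel = 0.0, 0.0
    estimates = []
    for accel, fix in zip(imu_accel, em_fixes):
        vel += accel * dt            # integrate acceleration -> velocity
        pos += vel * dt              # integrate velocity -> position (drifts over time)
        if fix is not None:          # EM fix available: pull the estimate toward it
            pos = (1.0 - blend) * pos + blend * fix
        estimates.append(pos)
    return estimates

# Example: constant 1 m/s^2 acceleration for 1 s, with an EM fix every 10th sample.
samples = 100
accel = [1.0] * samples
truth = [0.5 * (i * 0.01) ** 2 for i in range(samples)]
em = [truth[i] if i % 10 == 0 else None for i in range(samples)]
print(f"final fused estimate: {fuse(accel, em)[-1]:.3f} m (true: {truth[-1]:.3f} m)")
```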
Real-Time Accident Report
Sooner or later, all of us get into an automotive accident. When that occurs, wouldn’t it be great to have a record of what happened? Through the use of embedded acceleration sensors, MDGo generates a real-time report in the case of a car crash, detailing each occupant’s injuries by body region. The company’s technology enables accurate delivery of needed services and support, helping provide optimal medical care in an emergency and streamlining the claims process.
Smart Factory
Could a factory think for itself or autonomously design a better car or aircraft? Can it eliminate waste? All of these questions fit into the realm of manufacturing intelligence. One company with experience in this area is Hexagon, which claims that its technologies are used to produce 85% of smartphones, 75% of cars, and 90% of aircraft.
Hexagon’s Smart Factory approach aims for fewer inputs, zero waste, and high quality. All of this is achieved through sensor, software, and autonomous solutions that incorporate data feedback to improve work and boost efficiency, productivity, and quality across industrial and manufacturing operations.
A Cool “Uncooled” Methane Gas Detector
The FLIR GF77 Gas Find IR is the company’s first uncooled thermal camera designed for detecting methane. This handheld camera offers inspection professionals the features they need to find potentially dangerous, invisible methane leaks at natural gas power plants, renewable energy production facilities, industrial plants, and other locations along a natural gas supply chain. The gas detector provides methane gas detection capability at roughly half the price of cooled gas inspection thermal cameras, to empower the oil and gas industry to reduce emissions and ensure a safer work environment.
IoT Arduino Adds LoRaWAN Connectivity
You can now connect your sensors and actuators over long distances via the LoRa wireless protocol or through LoRaWAN networks. The Arduino MKR WAN 1310 board provides a practical and cost-effective solution for adding LoRa connectivity to projects requiring low power. This open-source board can be connected to: the Arduino IoT Cloud, your own LoRa network using the Arduino LoRa PRO Gateway, existing LoRaWAN infrastructure like The Things Network, or even other boards using the direct connectivity mode.
Wearables, Ingestibles, Invisibles
One of the keys to a healthy life is nutrition. But what exactly constitutes ‘healthy’ food for a specific person? To answer that question, you need to measure and analyze the processes inside the complex human digestive system. Imec is working on prototype technology that is up to that task: ingestible sensors.
The company also develops wearables for medical and consumer applications that enable reliable, continuous, comfortable, and long-term health monitoring and management. This includes high-accuracy, low-power biomedical sensing technologies, sometimes embedded into fabrics.
John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.
Forget new TVs and smartphones. These are the real game changers introduced at CES 2020.
Now that the smoke has cleared from CES 2020, we can take a step back and see which technologies were the real innovations of 2020. Let’s be honest, CES can be a black hole of vaporware, false promises, and concepts intended to be just that.
We’ve compiled a list of our favorite technologies introduced at CES 2020 – innovations that we’re sure will have a lasting impact in 2020 and beyond.
AerNos AerSIP Gas Sensor
The AerSIP from AerNos is a 5 x 5-mm, multi-gas sensing module that combines nanotechnology and machine learning algorithms to monitor indoor and outdoor air quality. The system-in-package (SIP) is an embedded plug-and-play solution that can be integrated into wearables, mobile devices, and other IoT devices and is capable of detecting hazardous gases and other dangers at parts-per-billion levels.
(Image source: AerNos/CES)
AMD Ryzen 4000 Series Mobile Processor
AMD’s Ryzen 4000 could be a literal game changer for high-end laptop users – particularly gamers and designers. AMD says its new Ryzen 4000 series is the world’s first 7-nanometer laptop processor. Designed for ultra-thin laptops, the Ryzen 4000 series features up to 8 cores and 16 threads and configurable 15W thermal design power. AMD pledges the Ryzen 4000 series offers up to four percent greater single-thread performance and up to 90 percent faster multithreaded performance than its competitors, as well as up to 18 percent faster graphics performance over competing chips.
(Image source: AMD)
Atmosic Technologies M3 Battery-Free Bluetooth 5 SoC
Atmosic says its M3 Battery-Free Bluetooth 5 SoC uses so little power that it can even eliminate the need for battery power entirely in devices such as wearables, keyboards, mice, asset trackers, beacons, and remotes. The M3 integrates Atmosic’s Lowest Power Radio, On-demand Wake-Up, and Managed Energy Harvesting technologies to deliver what the company says is 10 to 100 times lower power than other SoCs, while still complying with Bluetooth standards. The M3’s radio uses two “ears” – one for listening in a low-power state to perceive incoming commands, and another that only wakes when alerted. The SoC uses energy harvesting technology to gather power from radio frequency, photovoltaic, thermal, and motion.
(Image source: Atmosic)
Bot3 Zen-P VSLAM Deep Learning Module
Bot3’s Zen-P VSLAM Deep Learning module integrates visual simultaneous localization and mapping (VSLAM) technology (a version of the same technology used in autonomous vehicles) into mobile robots ranging from industrial machines to smart home products. Bot3’s image processing algorithm, Pascal, allows for autonomous navigation without tracks, as well as indoor mapping and positioning (for instance, in warehouse applications).
(Image source: Bot3)
BrainCo BrainRobotics Prosthetic Hand
Many companies have been developing mind-controlled prosthetics for amputees and other disabled patients. What separates the prosthetic hand developed by BrainRobotics is the integration of AI technology. The BrainRobotics hand utilizes machine learning to allow the hand and its user to learn from each other over time – leading to more lifelike movements. The company is aiming to provide accurate and reliable prosthetics at an affordable price for all patients. BrainRobotics is a subsidiary of BrainCo, a software developer focused on brainwave measuring and monitoring.
(Image source: BrainCo/BrainRobotics)
Fluent.ai MultiWake Word and Voice Control Engine
Fluent.ai is a technology company focused on AI for voice interface and speech recognition. The company’s Multi-Wake Word and Voice Control Engine is an edge-based, noise robust, and multilingual speech technology that consumes minimal power and storage, allowing it to be embedded in small devices. The solution is Cortex M4-based and supports four separate wake words and 100 multilingual commands, according to Fluent.ai.
Fluent.ai has recently partnered with semiconductor designer Ambiq Micro to implement Fluent.ai’s software solutions into Ambiq’s ultra-small footprint, low-power microcontrollers. Ambiq’s MCU supports frequencies up to 96 MHz, and Fluent.ai’s solution requires only 16 MHz from the MCU. The new partnership means Fluent.ai and Ambiq will be releasing MCUs for OEMs looking for an easy way to add speech recognition and voice command functionality to their smart home devices and other products.
(Image source: Fluent.ai / CES)
Intel Tiger Lake Chip
When Intel announces a new chip, the whole world takes notice. The chipmaking giant is launching its latest chip for consumers this year. Dubbed Tiger Lake, the new chip is said to be optimized for AI performance, graphics, and USB 3 throughput. Rather than desktops, the new chips will be focused on mobile devices such as ultra-thin laptops and tablets. The first products featuring Tiger Lake are expected to ship later in 2020.
(Image source: Intel)
Monster MultiLink Bluetooth Technology
Sometimes it’s the most straightforward ideas that can make the biggest difference. Most of us love our Bluetooth wireless headphones and earbuds. The problem is they don’t create a sharable experience. What if you want to show your friend the video you’re watching without disturbing the people around you? Monster has debuted a new technology called Music Share that uses MultiLink technology to send Bluetooth audio to multiple devices in sync. The technology expands how Bluetooth headphones can be used and opens up new use cases ranging from air travel to fitness classes, as well as new avenues for social interaction.
(Image source: Bluetooth SIG)
Murata Coral Accelerator Module
Working in partnership with Coral and Google, Murata Electronics has developed what it is calling the world’s smallest AI module. The Coral Accelerator Module packages Google’s Edge TPU ASIC into a miniaturized footprint to enable developers to embed edge-based AI into their products and devices. The new module forms an integral part of Coral’s integrated AI platform, which also includes a toolkit of software tools and pre-compiled AI models.
(Image source: Murata Electronics Americas)
Pollen Robotics Reachy Open-Source Robot
Reachy is a robot developed by Pollen Robotics, in collaboration with the INCIA Neuroscience Institute in France, that is fully open source. The robot, which can be programmed using Python, is modular – employing a variety of 3D-printed grippers – and comes with prepackaged AI algorithms to allow developers to customize it for a variety of applications ranging from customer service to assisting the elderly or disabled.
Read more about Reachy, and the rise of open-source robotics, here.
(Image source: Pollen Robotics)
VRgineers 8K XTAL Headset
VRgineers, a maker of premium VR headsets for enterprise applications in industries ranging from automotive to defense and military, has released a major upgrade to its flagship XTAL headset. The latest version of XTAL features 8K resolution (4K per eye), improved lenses with a 180-degree field-of-view, and a new add-on module for augmented reality and mixed reality functionality. The headset also still includes eye tracking as well as integrated Leap Motion sensors to enable controller-free navigation and interactions.
(Image source: VRgineers)
zGlue ChipBuilder
zGlue is a software company that develops tools for chipmakers and designers. Its latest offering, ChipBuilder 3.0, is a design tool for building custom silicon chips and accelerating time to market. The software suite features an expansive library of chipsets and allows engineers to capture schematics, route and verify designs, and download netlists. The tool also allows engineers to create realistic 3D models, code their own chips, and even place orders for physical chips via zGlue’s Shuttle Program.
(Image source: zGlue / CES)
Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.
Standby power refers to the electronic and electrical power consumed by systems when the primary operational functions are waiting to be activated. Standby power needs are often overlooked by systems designers but are crucial considerations to ensure power is available for the smart devices that make up the Internet of Things (IoT).
Consider the design of a smart home, a dwelling that consumes zero net energy. To maintain zero net power consumption, the smart home must be capable of monitoring and controlling the main energy consumers – e.g., HVAC and lighting – as well as interfacing with energy sources such as solar panels/batteries and the power grid. Adding control and monitoring intelligence to the home will itself require energy. The trick is to make sure that the controlling and monitoring electronics don’t consume more power than the devices themselves. One part of this trick is to make sure that the smart systems pay attention to standby loads, those mischievous power-draining loads consumed by electronics and electrical appliances even when they are switched off but still drawing power in standby mode.
In addition to – or often as part of – controlling and monitoring electronics, connectivity transceivers like RF and wireless radios are another reason why standby power awareness is so important. Most of our modern appliances and control devices constantly consume a trickle of power to be ready to perform updates, connect to edge or cloud servers, listen for our voice commands, and the like.
Numerous studies attest to the amount of energy lost to standby power consumption by devices not in use. According to a report from the Natural Resources Defense Council (NRDC), an international nonprofit environmental organization, always-on but inactive devices can cost Americans $19B annually. That comes to about $165 per U.S. household on average – and 50 large (500-megawatt) power plants’ worth of electricity.
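A quick back-of-the-envelope check shows how those two figures line up; the household count below is an assumption roughly in line with census estimates, not a number from the NRDC report.

```python
# Sanity check of the NRDC figures; the household count is an assumption.
annual_waste_usd = 19e9       # $19B lost to always-on but inactive devices
us_households = 115e6         # assumed ~115 million U.S. households
print(f"~${annual_waste_usd / us_households:.0f} per household per year")
# prints: ~$165 per household per year
```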
Further, Berkeley Labs notes that standby power is roughly responsible for 1% of global CO2 emissions.
What are the best ways to reduce the impact of standby power? Let’s consider one approach that looked promising but so far has failed and another more integrated approach that has proven to be successful.
Image source: Natural Resources Defense Council (NRDC)
Last week, the Industrial Internet Consortium (IIC) and the Trusted IoT Alliance (TIoTA) announced they are joining forces and combining their memberships. The organizations will work together under the IIC umbrella to drive industry collaboration and research, foster open systems development, and promote best practices for trusted IoT systems such as blockchain and related distributed ledger technologies. The first formal meeting of the combined organization will be held in Athens, Greece on March 9-12, 2020.
The Industrial Internet Consortium has joined forces with the Trusted IoT Alliance. (Image source: IIC)
Blockchain Is Fully Engaging With IoT
While the two organizations were not necessarily conducting overlapping development of guidance and best practices, their work was closely related and ultimately directed toward a similar industrial membership. “The merger just made sense,” Richard Soley, executive director of the IIC, told Design News. “The IIC has been publishing lately about trustworthiness of identity and data, and also about distributed ledger technologies (DLTs). The DLTs, including blockchain, are an obvious way to achieve trustworthiness in a distributed fashion.”
Soley also noted that the work of the two organizations was moving closer and closer. “TIoTA has likewise been exploring how to build industrial internet systems using DLTs, and there was significant membership in common,” said Soley. “We were already working closely on the challenges and now we will be doing that as one organization. We look forward to welcoming TIoTA members to the comprehensive IIC program.”
Blending Guidance and Best Practices
The two organizations have separately made significant headway in creating guidance and best practices in IoT deployment and security. “Over its two-year lifespan, TIoTA has impacted the intersection of DLT and IoT, while in its five years, the IIC has established major influence in IoT across industries,” said Csilla Zsigri, a senior analyst at 451 Research. “This consolidation will strengthen the ability of the IIC to provide guidance and advance best practices on the uses of distributed ledger technology across industries, while also boosting the commercialization of these products and services.”
Recent interest in Blockchain and DLT prompted companies in a wide range of industrial verticals to create an organization that can provide guidance in how to deploy this emerging security functionality. “TIoTA has built a community of technology leaders focused on bringing blockchain/DLT solutions to market,” said John Calian, head of Telekom Innovation Laboratories (T-Labs), an enterprise member of TIoTA. “This community is focused on leveraging the power of blockchain/DLT and the concept of decentralized trust to achieve greater security, scalability, and interoperability within existing and future IoT ecosystems. This direction will help the IIC continue its market momentum.”
One of the benefits of bringing TIoTA into the IIC is the use cases that have been developed. “We welcome the trusted systems expertise that TIoTA brings to IIC,” said Soley. “Since IIC began in 2014, the use cases for distributed ledger technology have grown exponentially. The combined organization will offer a single stop for IoT industry guidance and a larger ecosystem for end users looking to improve their bottom line with IoT and DLT. This will enable the IIC to become the center of gravity for the future of industrial IoT systems across industry verticals.”
The Blending of Purpose and Membership
The TIoTA was created as a result of the collaboration among technologists working to leverage blockchain infrastructure to secure and scale IoT ecosystems. TIoTA’s goal was to enable trust in the data produced by such IoT systems in a distributed ledger/blockchain agnostic fashion.
The Industrial Internet Consortium has grown to be the leading membership group involved in accelerating the Industrial Internet of Things. The IIC’s goal is to deliver guidance and best practices that ensure devices are securely connected and controlled to deliver successful outcomes.
Rob Spiegel has covered automation and control for 19 years, 17 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years, he was owner and publisher of the food magazine Chile Pepper.
Why have a plain old boring stationary cylindrical Amazon Alexa when you could have a wedge-shaped Alexa packing 640 horsepower and the ability to rocket to more than 200 mph? That’s what you get with the 2020 Lamborghini Huracan EVO, which adds Alexa integration to its 5.2-liter V10 powerplant, all-wheel drive and dynamic suspension set up.
While other carmakers have already installed Alexa artificial intelligence, this is the first time it will be available in a super sports car. Also, this version will be the first to give drivers control of the car’s systems through Alexa.
Others will let you adjust your connected home thermostat using voice commands while driving, but the Huracan EVO lets you do the same thing with the car’s own climate control system. You can also adjust cabin lighting, seat heaters, and the settings of Lamborghini Dinamica Veicolo Integrata (LDVI), Lamborghini’s dynamic suspension system.
Of course, the usual Alexa capabilities are there too, so you can play music or ask about the weather as with any Alexa-enabled device. But the companies say they have ambitious plans to expand the collaboration, so not only will Alexa’s capabilities be updateable in the Huracan, but they are working on further connectivity and integration with Amazon Web Services for still more features in the future.
Image source: Automobili Lamborghini
“Our vision is for Alexa to become a natural, intuitive part of the driving experience, and Lamborghini has embraced that by integrating Alexa directly into its onboard infotainment systems,” adds Ned Curic, vice president of Alexa Auto at Amazon. “The integration will enable Lamborghini owners to enjoy the convenience of an intelligent voice service while focusing on the joy of the Lamborghini driving experience, and we expect it to set a new standard for in-car voice experiences when it ships this year.”
Fortunately, Lamborghini promised, this doesn’t mean the Huracan is reduced to a mere vessel for the delivery of Alexa services. “The Huracan EVO is an outstanding driver’s car, and connectivity enables our customers to focus on the driving, thus enhancing their Lamborghini experience,” says Stefano Domenicali, Chairman and Chief Executive Officer of Automobili Lamborghini.
Image source: Automobili Lamborghini
Lamborghini has also announced that it will introduce a $208,571 rear-drive version of the Huracan EVO to appeal to purists, so we look forward to putting the Raging Bull’s latest developments to the test soon.
Dan Carney is a Design News senior editor, covering automotive technology, engineering and design, especially emerging electric vehicle and autonomous technologies.
If you google “technology trends,” one of the companies that will appear in the top 10 hits will be Gartner. The research and advisory firm not only analyzes numerous markets in terms of technical innovations but also covers business aspects of technology for C-suite professionals.
For 2020, Gartner has produced a number of predictive reports, including those covering digital and strategic technologies. From those lists, I’ve selected three trends that appear vaguely familiar from the recent past, albeit with new names. Do you agree? Don’t hesitate to ping me with your take on these trends at: [email protected]
Trend: Hyper Automation
Gartner: “Automation uses technology to automate tasks that once required humans. Hyper automation deals with the application of advanced technologies, including artificial intelligence (AI) and machine learning (ML), to increasingly automate processes and augment humans. Hyper automation often results in the creation of a digital twin of the organization. As no single tool can replace humans, hyper automation today involves a combination of tools, including robotic process automation (RPA), intelligent business management software (iBPMS) and AI, with a goal of increasingly AI-driven decision making.”
My Take: Do we really need yet another word or phrase to represent the ongoing digitization process that will eventually enable a complete digital twin? One might just as well say that the creation of a digital twin – from improved modeling, simulations, sensors, etc. – has accelerated the pace of automation, thus creating a new hyperautomation or superautomation reality.
It’s really a chicken and egg perspective. Which came first – the creation of hyper automation systems that eventually result in a digital twin? Or did the creation of a digital twin from a sensor-rich ecosystem lead to improved automation of tasks previously performed by humans?
Regardless of the answer, there seems to be little doubt about the movement toward a more complete digital twin within the next decade. Mordor Intelligence anticipates that the digital twin market will witness a CAGR of 35.0% over the forecast period 2019 – 2024. Growth in IoT and cloud-based platforms, the surge in adoption of 3D printing technology in the manufacturing industry, and the objective of reducing project cost are some of the major factors driving the growth of the digital twin market. Mordor notes that IoT sensors have created a potential space for engineers to test and communicate with sensors integrated with operating products, hence delivering real-time prescriptive insight into system functioning and timely maintenance.
Which came first: Hyper automation or the digital twin? It’s your call.
According to Statista, the Internet of Things (IoT) market was expected to reach $212 billion by the end of 2019 and to grow to an industry market value of $1.6 trillion by 2025. This technology will continue to impact the consumer, industrial, medical, agricultural, and pharmaceutical markets as big data demands drive the IoT industry. With such demand, a variety of innovative wireless technologies are being developed daily. One silicon device that has made inroads into this growing technology field is the ESP8266 WiFi microcontroller. The small microcontroller’s system architecture provides a wealth of circuit peripherals and technologies that appeal to designers, developers, engineers, and makers of WiFi-enabled products.
The ESP8266 WiFi microcontroller. (Image source: Espressif)
The ESP8266 Specifications
The ESP8266 microcontroller developed by Espressif Systems is a 32-pin quad flat no-lead (QFN) WiFi system on a chip (SoC) package. The ESP8266 integrates a radio circuit consisting of antenna switches, a power amplifier, a low-noise receiver amplifier, filters, a power management module, and an RF balun. The RF balun is an electrical component capable of taking a balanced signal and converting it to an unbalanced signal.
A balanced signal has equal line impedances; an unbalanced signal has unequal impedances. The RF balun ensures the antenna works properly with the RF transmit and receive circuits through the internal network switch and proper impedance matching. At the heart of the ESP8266 is a 32-bit microprocessor manufactured by Tensilica. The Tensilica microprocessor manages the binary data and coding language information required to work with the ESP8266’s physical layer electrical interfaces. Such physical layer electrical interfaces include the general-purpose input/output (GPIO), the universal asynchronous receiver transmitter (UART), pulse width modulation (PWM), serial peripheral interface (SPI), and the analog-to-digital converter (ADC). With these physical layer electrical interfaces, the ESP8266 can be wired to external sensors, transistor-driven actuators, discrete LEDs, and displays (LCD, OLED, and LED). In addition, the ESP8266 microcontroller supports the 802.11 b/g/n WiFi specification and the 2.4 GHz operating frequency. Besides supporting a variety of electrical interfaces, the small QFN WiFi SoC microcontroller can be operated from a supply voltage range of 2.5V – 3.6V. The average operating current of the SoC microcontroller is 80 milliamperes (mA).
The System Block Diagram of the ESP8266 WiFi enabled microcontroller. (Image source: Espressif)
Creating an IoT Infrastructure
The design concept behind the ESP8266 WiFi microcontroller is to enable low-cost wireless networks. These networks can assist in obtaining vital data from industrial machines on the plant floor and allow wireless control of mechatronics-based systems found in manufacturing and process facilities. Home automation systems can also easily be deployed using the wireless SoC architecture of the ESP8266. Such a wireless infrastructure allows the creation of low-cost networks that send data to a variety of mobile and laptop devices, assisted by the cloud. Accessing these wireless nodes through the cloud requires a hosted webpage. A unique feature of the ESP8266 is the internal hosted webpage that can be configured using a programming language like MicroPython. Also, the ESP8266 SoC architecture provides a unique internet protocol (IP) address that works in unison with the Media Access Control (MAC) and network interface controller (NIC) subcircuits integrated within the microcontroller.
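As a minimal sketch of that idea (the WiFi credentials and GPIO pin below are placeholders, not values from the article), a MicroPython-flashed ESP8266 can join a network and serve a bare-bones control page at its IP address:

```python
# Minimal MicroPython sketch: join a WiFi network and host a bare-bones
# control page. SSID, password, and pin number are placeholders.
import network
import socket
from machine import Pin

led = Pin(2, Pin.OUT)                          # on-board LED on many ESP8266 boards

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("YOUR_SSID", "YOUR_PASSWORD")
while not wlan.isconnected():
    pass
print("Hosting page at", wlan.ifconfig()[0])   # the board's IP address

server = socket.socket()
server.bind(("", 80))
server.listen(1)
while True:
    client, _ = server.accept()
    request = client.recv(1024)
    if b"GET /on" in request:
        led.value(1)
    elif b"GET /off" in request:
        led.value(0)
    client.send(b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n")
    client.send(b"<html><body><a href='/on'>ON</a> | <a href='/off'>OFF</a></body></html>")
    client.close()
```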
A low cost IoT network using the ESP8266. (Image source: Don Wilcher)
Ease of Prototyping A Wireless PoC
Espressif’s low-cost wireless microcontroller chip has enabled new development platforms that let designers, engineers, makers, and developers rapidly explore IoT product concepts. Popular prototyping platforms being used by designers and makers are the printed circuit board (PCB) module and the development board. Both prototyping platforms allow rapid proof-of-concept (PoC) product, system monitoring, and controller device ideas to come to fruition. The PCB module provides copper circular pads that allow soldering of external circuits and sensors to the WiFi microcontroller pins. The development board is an extension of the PCB, whereby the WiFi module is soldered to the prototyping platform. Also, male header pins are soldered to the development board’s PCB. These male header pins allow insertion into a solderless breadboard for ease of circuit prototyping.
ESP8266 development platforms: PCB module and development board. (Image source: Don Wilcher)
An example PoC using the development board is the creation of a WiFi-enabled Audio Signal Generator. With software written in MicroPython, a hosted webpage serving a simple user interface (UI) allows wireless operation of the audio signal generator. Such a device will aid in remotely injecting signals into malfunctioning amplifier units from a laptop computer, smartphone, or tablet accessing the ESP8266’s web hosting page through an IP address.
Simple UI hosted at the ESP8266’s webpage IP address. (Image source: Don Wilcher)
The ESP8266 WiFi microcontroller’s digital pin enables the audio signal generator upon receiving a wireless command from the simple UI. An LED provides a visual indicator that the device has been enabled by the UI. The range of audible tones, measured with an oscilloscope, is 657 Hz – 1.70 kHz.
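A minimal MicroPython sketch of that enable step (the pin assignments are placeholders, not taken from the article’s schematic) could plug into the request-handling loop of a hosted page like the one sketched earlier:

```python
# Hypothetical MicroPython handler for the UI's enable/disable command; the
# GPIO pins chosen here are placeholders, not the article's schematic.
from machine import Pin

generator_enable = Pin(4, Pin.OUT)   # drives the external audio-oscillator circuit
status_led = Pin(5, Pin.OUT)         # visual indicator that the generator is enabled

def handle_ui_command(request):
    """Inspect the raw HTTP request from the hosted UI and switch the generator."""
    if b"/enable" in request:
        generator_enable.value(1)
        status_led.value(1)
    elif b"/disable" in request:
        generator_enable.value(0)
        status_led.value(0)
```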
A WiFi enabled Audio Signal Generator circuit schematic diagram. (Image source: Don Wilcher)
This WiFi test gear was prototyped using a retro Radio Shack Science Fair 200-in-1 electronics project lab and an ESP8266 development board. This product concept scenario provides benefits for electronics troubleshooting in the field by a technician or service engineer. As illustrated in this example, a variety of IoT products can be rapidly developed using the ESP8266 WiFi microcontroller. Additional technical specifications of the ESP8266 WiFi microcontroller can be found at Espressif’s website.
A prototyped WiFi enabled Audio Signal Generator. (Image source: Don Wilcher)
Don Wilcher is a passionate teacher of electronics technology and an electrical engineer with 26 years of industrial experience. He’s worked on industrial robotics systems, automotive electronic modules/systems, and embedded wireless controls for small consumer appliances. He’s also a book author, writing DIY project books on electronics and robotics technologies.
I’m watching a clip from the movie The Shining. Shelley Duvall is hiding from her crazed husband as he chops down the door with an axe. Jim Carrey sticks his head through the opening and cackles the iconic line: “Here’s Johnny!”
…Jim Carrey is not in The Shining.
What you’re seeing is not a Hollywood special effect. It wasn’t done with After Effects, green screen, or with costuming and makeup. The video is a fake created by deep learning artificial intelligence – a deepfake. And anyone with a powerful computer and enough time can make one.
You might have heard of deepfakes before, or glimpsed headlines discussing the technology. You might even have laughed at various YouTube videos on channels such as Ctrl Shift Face that have swapped the faces of celebrities in iconic roles, with some humorous and sometimes unsettling results (once you’ve seen any of the bizarre deepfakes involving Nicolas Cage, you can never un-see them).
But deepfakes, once confined to darker corners of the internet, are becoming a serious threat. In the US, particularly as the 2020 election season rapidly approaches, AI experts are warning that deepfakes could become a powerful tool for spreading misinformation and manipulating the public. With enough effort a bad actor could create a video of any political candidate saying nearly anything. And in today’s climate of social media outrage and algorithm-driven content distribution, there’s no telling how far it could spread before someone caught it.
It’s time engineers, developers, and technologists all had a serious discussion about deepfakes.
(Image source: Adobe Stock)
The Origin Of Deepfakes
There’s no one particular person that has taken credit for originally developing deepfakes. Their existence owes to a confluence of technologies ranging from ever-more sophisticated computer vision algorithms and neural networks, to increasingly powerful GPU hardware.
The first deepfakes seem to have emerged on the internet in 2017, when an anonymous Reddit user called “Deepfakes” began distributing illicit, altered videos of celebrities online. Other Reddit users followed suit, and it wasn’t long before a community had sprung up around distributing both deepfakes themselves as well as tutorials and software tools to create them.
In an interview with Vice [NSFW link], one of the first outlets to take an extensive look at deepfakes, the Reddit user outlined how comparatively easy the process is:
“I just found a clever way to do face-swap. With hundreds of face images, I can easily generate millions of distorted images to train the network. After that if I feed the network someone else’s face, the network will think it’s just another distorted image and try to make it look like the training face.”
But it wasn’t all fun and games. Far from it. When they first appeared, deepfakes had one particularly popular and disturbing use case – pornography. Much of the early deepfake content available was pornographic films created using the faces of celebrities like Gal Gadot, Scarlett Johansson, and Taylor Swift without their consent.
As the videos proliferated, there was a crackdown, with Reddit shutting down its deepfakes-related communities, pornographic websites removing the content, and sites like GitHub refusing to distribute deepfake software tools.
If private citizens weren’t that concerned yet, it was probably because sites got somewhat ahead of the problem. Left unchecked, it wouldn’t have been long before deepfake pornography spread from celebrities to everyday people. Anyone with enough publicly available photos or video of themselves on a platform like Facebook or Instagram could potentially become a victim of deepfake revenge porn.
In 2018, Rana Ayyub, an investigative journalist from India, fell victim to a deepfakes plot intended to discredit her as a journalist. Ayyub detailed her ordeal in an article for The Huffington Post:
“From the day the video was published, I have not been the same person. I used to be very opinionated, now I’m much more cautious about what I post online. I’ve self-censored quite a bit out of necessity.
“Now I don’t post anything on Facebook. I’m constantly thinking what if someone does something to me again. I’m someone who is very outspoken so to go from that to this person has been a big change.
“I always thought no one could harm me or intimidate me, but this incident really affected me in a way that I would never have anticipated…
“…[Deepfakes] is a very, very dangerous tool and I don’t know where we’re headed with it.”
How Deepfakes Work
On the surface, the process of creating a deepfake is fairly straightforward. First, you need enough images (hundreds or more, ideally) of your target – showing their face in as many orientations as possible (the more images you can get, the better the results – which is why celebrities and public figures are such easy targets). If you think it might be difficult to get hundreds or thousands of images of someone, remember that a single second of video could contain 60 frames of someone’s face.
Then you need a target video. The AI can’t change skin tone or structure, so it helps to pick a target and source with similar features. Once a deep learning algorithm is trained on a person’s facial features, additional software can then superimpose that face onto another person’s in your target video. The results can be spotty at times, as many videos online attest, but done right, and with enough attention to detail, they can be seamless.
In an interview with Digital Trends, the anonymous owner of the Ctrl Shift Face YouTube channel (the channel responsible for the Jim Carrey/The Shining videos, among others) discussed how simple, yet time-consuming the process is:
“I’m not a coder, just a user. I don’t know the details about exactly how the software works. The workflow works like this: You add source and destination videos, then one neural network will detect and extract faces. Some data cleanup and manual extraction is needed. Next, the software analyzes and learns these faces. This step can sometimes take a few days. The more the network learns, the more detailed the result will be. In the final step, you combine these two and the result is your deepfake. There’s sometimes a bit of post-process needed as well.”
On one hand, the relative ease at which this can be done with little to no coding experience is certainly disconcerting. On the other however, deepfakes are an impressive demonstration of the sophistication of AI today.
At the core of deepfakes is a neural network called an autoencoder. Put simply, an autoencoder is designed to learn the important features of a dataset so it can create a representation of it on its own. If you feed a face into an autoencoder its job is then to learn the distinguishing characteristics that make up a face and then construct a lower-dimensional representation of that face – in this case called a latent face.
Deepfakes work by having a single encoder train to create a generalized representation of a face and then having two decoders share that representation. If you have two decoders – one trained on Person A’s face, the other on Person B’s – you can feed the encoder either face and transpose Person A’s face onto Person B’s (or vice versa). If the encoder is trained well enough, and the representation is generalized enough, it can handle facial expressions and orientations in a very convincing way.
Since faces in general are very similar in their overall shape and structure, a latent face created by an encoder using Person A’s face, can be passed to a decoder trained on Person B’s face to good effect. The result at the other end is a video of Person B, but with Person A’s face.
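To make the shared-encoder, two-decoder idea concrete, here is a heavily simplified Keras sketch. The layer sizes, image shape, and random placeholder data are illustrative only; real deepfake tools use convolutional networks, aligned face crops, and far longer training than this toy setup.

```python
# Heavily simplified sketch of the shared-encoder / two-decoder deepfake setup.
# Layer sizes, image shape, and the random "faces" are placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

IMG = (64, 64, 3)
LATENT = 128

def build_encoder():
    inp = keras.Input(shape=IMG)
    x = layers.Flatten()(inp)
    x = layers.Dense(512, activation="relu")(x)
    latent = layers.Dense(LATENT, activation="relu")(x)   # the "latent face"
    return keras.Model(inp, latent, name="shared_encoder")

def build_decoder(name):
    inp = keras.Input(shape=(LATENT,))
    x = layers.Dense(512, activation="relu")(inp)
    x = layers.Dense(int(np.prod(IMG)), activation="sigmoid")(x)
    out = layers.Reshape(IMG)(x)
    return keras.Model(inp, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_person_a")
decoder_b = build_decoder("decoder_person_b")

# Two autoencoders that share one encoder: each learns to reconstruct its own
# person's faces, but both map through the same latent representation.
inp = keras.Input(shape=IMG)
auto_a = keras.Model(inp, decoder_a(encoder(inp)))
auto_b = keras.Model(inp, decoder_b(encoder(inp)))
auto_a.compile(optimizer="adam", loss="mae")
auto_b.compile(optimizer="adam", loss="mae")

# Random arrays stand in for aligned face crops of Person A and Person B.
faces_a = np.random.rand(32, *IMG).astype("float32")
faces_b = np.random.rand(32, *IMG).astype("float32")
auto_a.fit(faces_a, faces_a, epochs=1, verbose=0)
auto_b.fit(faces_b, faces_b, epochs=1, verbose=0)

# The face swap: encode Person A's face, then decode it with Person B's decoder.
swapped = decoder_b.predict(encoder.predict(faces_a[:1], verbose=0), verbose=0)
print(swapped.shape)   # (1, 64, 64, 3)
```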
As long as you have two subjects similar enough and a computer with enough processing power, the rest just takes time. Faceswap – one of the more readily available deepfakes apps – can run on a Windows 10, Linux, or MacOS computer and recommends a newer Nvidia GPU for processing. “Running this on your CPU means it can take weeks to train your model, compared to several hours on a GPU,” according to Faceswap’s documentation.
The first wave of 5G-enabled devices dominated 2019, including the Galaxy S10, OnePlus 7, and Huawei P30 among others. Major infrastructure improvements continued to be rolled-out.
But 5G wasn’t the only new wireless tech hitting the market. Other wireless technologies, like Wi-Fi 6, also began to appear. Below are 12 of the editor’s top picks for 2019.
A key technology trend for automation and control in 2020 and beyond is the emergence of wireless communications including 5G, Wi-Fi 6, LoRaWAN and more. An obvious benefit for factory automation is the use of wireless communication for remote monitoring and remote operation of physical assets but an equally important benefit is an ability to replace cables, unreliable WiFi and the many industrial standards in use today.
Veo Robotics’ FreeMove platform gives standard industrial robots the ability to function as collaborative robots that work alongside human workers. Don’t get rid of your old industrial robot yet. You may be able to upgrade it into a collaborative robot.
Prioritizing security efforts – critical for the IoT to survive – may best be done by considering the value of transactions over the size of the network.
The Internet of Things (IoT) has many defining characteristics, such as tiny, cheap and low power sensors, embedded computers, and connectivity. But one characteristic will rule them all, namely, security. In the very near future, the IoT will probably not exist if it isn’t secure.
Just as the iPhone ushered in a new world of consumer connectivity, ultra-fast 5G networking may jump-start new industrial capabilities.
Speedy 5G networking is beginning to show up; in some places it is already a reality. While we know it will speed up consumer applications such as streaming or downloads – like a movie in seconds – what will it mean for industrial settings? At a session at the Design and Manufacturing Minneapolis show last week, Joshua Ness, senior manager of 5G Labs at Verizon, explained the coming impact of 5G during his session, Prepping for the 5G Factory.
Walk in, get what you need, and walk out. AiFi uses AI to completely automate and streamline the shopping experience.
Imagine walking into a grocery store, picking up all the items you need, and walking right out the door. No need for a cashier or even a self-checkout, the store will automatically charge everything you take to your credit card.
The MQTT transport protocol has emerged as a top choice for implementing IoT connectivity, offering solutions using local gateways or direct to the cloud.
A primary challenge for IIoT automation and control applications is the ability to efficiently and effectively collect the data that becomes the grist for enterprise-level IoT decision making and analytics. And while a wide variety of different types of edge devices have been introduced, a major concern is how to collect data from these devices.
Wireless charging is the best way for automakers to meet consumer demand for better charging performance in their vehicles. But there are significant challenges to getting this innovation to market.
Today’s consumer expects to be connected via their mobile device wherever they go – especially in their vehicles, which have become more than transportation; they are a mobile extension of the owner’s living space.
A guide to LoRa / LoRAWAN, the communications technology emerging as the leader among Low-Power Wide-Area Networks (LPWANs) entering the IoT space.
We’re entering a world in which WiFi and Bluetooth may no longer be the best communication technologies for Internet of Things (IoT) applications. The IoT is gaining more ground each year. Experts project there will be 75 billion connected devices by 2025.
IoT-driven technology standards are creating new possibilities for reducing the cost and complexity of integrating Industrial Ethernet into constrained networks and devices.
John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.