fundamentals-of-signal-integrity

As the name suggests, signal integrity deals with the integrity of an electrical signal. It all stems from the fact that digital signals are not really binary values of “1” or “0” but analog voltage (or current) waveforms. As such, these waveforms are subject to the real-world physical effects of noise, distortion, and loss. At short distances and low bit rates, a simple conductor will transmit a waveform with acceptable fidelity. At high bit rates, over greater distances, or through different media, however, several effects can degrade the electrical signal to the point where errors occur, data is compromised, and devices fail.

In practice, signal integrity consists of a set of measurements that determine the quality of a signal as a way to analyze and mitigate the effects of noise, distortion, and loss. It is a set of design practices and tests that address how the electrical properties of almost any interconnect can degrade the (relatively) pristine signals that come from an integrated circuit chip, and how these problems can be fixed. There are two common signal integrity design concerns, namely the timing and the quality of the signal. Does the signal reach its destination when it is supposed to? Is it in good condition when it gets there?

Electronic and electrical packages are full of interconnects that can affect signal integrity within a chip and throughout a printed circuit board (PCB). For example, consider the changes that a signal may experience when traveling through even a short connector. If there are instantaneous impedance changes, then some of the signal will reflect and the rest will probably have some distortion. In simple terms, there may be ringing in the circuit, often due to multiple reflections between impedance discontinuities at various interface ends.
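The “instantaneous impedance changes” mentioned above can be quantified. The fraction of an incident voltage wave reflected where two impedances meet is the reflection coefficient, Γ = (Z2 − Z1)/(Z2 + Z1). A minimal Python sketch (the 50- and 75-ohm values are illustrative, not from the article):

```python
def reflection_coefficient(z1: float, z2: float) -> float:
    """Fraction of an incident voltage wave reflected where a line of
    impedance z1 meets a line of impedance z2."""
    return (z2 - z1) / (z2 + z1)

# Example: a 50-ohm PCB trace meeting a 75-ohm connector.
gamma = reflection_coefficient(50.0, 75.0)  # 0.2: 20% of the wave reflects

# A matched transition (z1 == z2) reflects nothing, which is why
# controlled-impedance design aims to keep the impedance constant.
matched = reflection_coefficient(50.0, 50.0)  # 0.0
```

Multiple such discontinuities, each reflecting a fraction of the wave back and forth between them, are what produce the ringing described above.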

Image Source: Design News / John Blyler
will-quantum-volume-be-the-next-moore’s-law?

This week at CES, IBM announced that its newest quantum computer, Raleigh, doubled its Quantum Volume (QV). This is important because QV is a measure of the increasing capability of quantum computers to solve complex, real-world problems. But how does an increase in QV relate to existing measures such as semiconductor performance as dictated by Moore’s Law? Before answering that question, it’s necessary to understand what is really meant by Quantum Volume.

QV is a hardware-agnostic metric that IBM defined to measure the performance of quantum computers. It serves as a benchmark of the progress quantum computers are making toward solving real-world problems.

QV takes into account a number of factors affecting quantum computations, including qubits, connectivity, and gate and measurement errors. Material improvements to the underlying physical hardware, such as increases in coherence times, reduction of device crosstalk, and software circuit compiler efficiency, can point to measurable progress in Quantum Volume, as long as all improvements happen at a similar pace, according to the IBM website.

Raleigh reached a Quantum Volume of 32 this year, up from 16 last year. This improvement stems from a hexagonal lattice connectivity structure with better coherence characteristics. According to IBM, the lattice connectivity reduced gate errors and exposure to crosstalk.
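The numbers above follow a simple pattern: QV is reported as a power of two (2^n, where n is the width and depth of the largest “square” test circuit the machine runs successfully), and IBM’s stated goal is to double it each year. A short sketch of that growth; the post-2020 values are a projection under the doubling assumption, not measurements:

```python
def quantum_volume(n: int) -> int:
    """QV is quoted as 2**n, where n is the size (width = depth) of the
    largest 'square' test circuit the machine runs successfully."""
    return 2 ** n

# Raleigh's QV of 32 corresponds to n = 5; last year's 16 to n = 4.
# Projecting a doubling per year from 2019's QV of 16:
projection = {2019 + k: quantum_volume(4 + k) for k in range(5)}
# A yearly doubling of QV outpaces Moore's law, which historically
# doubled transistor counts roughly every two years.
```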

Over the last year, a number of quantum computing achievements have been reached, notes IBM. Among the highlights was the offering of quantum computing services by a number of traditional cloud providers. Naturally, IBM was on that list. Other notables included Amazon, which in December 2019 first offered select enterprise customers the ability to experiment with quantum-computing services over the cloud.

The Amazon platform will let clients explore different ways to benefit from quantum computers by developing and testing quantum algorithms in simulations. For example, quantum computers could be used for simulating climate change, solving optimization problems, and tackling cybersecurity and quantum chemistry, among other applications. Clients will also have access to early-stage quantum-computing hardware from providers including D-Wave Systems Inc., IonQ Inc. and Rigetti Computing.

Now let’s see how the Quantum Volume measurement relates to transistor performance as delineated by Moore’s Law.

Image Source: IBM / Quantum Volume Growth Chart
top-10-electronic-enabled-tech-highlights-from-ces-2020

Not all cool tech involved robots and autonomous cars. Here’s a list of the other electronic tech featured at the show.

  • This year’s Consumer Electronics Show (CES) 2020 featured a range of marvels enabled by electronic technologies, covering application areas from smart cities, AI edge intelligence, body haptics, security systems, real-time accident reports, uncooled thermal cameras, wearables, and more.

    Here are the top 10 products and technologies that piqued the interest of the Design News editorial staff.

  • Smart Cities

    Why do major Japanese car manufacturers like to build smart homes and now cities? Several years ago, Honda built a zero-net-energy smart home in partnership with UC-Davis. At this year’s CES, Toyota announced it will build a smart city to test its AI, robots, and self-driving cars. Toyota’s Woven City will be built at the foothills of Mt. Fuji in Japan. The city will be the world’s first urban incubator dedicated to the advancement of all aspects of mobility, claims Toyota.

    The project is a collaboration between the Japanese carmaker and the Danish architecture firm Bjarke Ingels Group (BIG). Houses in Woven City will have in-home robotics to help with the more mundane tasks of daily life. The homes will have full-connectivity, which will be needed for the sensor-based AI to automate many household chores, like restocking the refrigerator and taking out the trash. Power storage units and water purification systems will be hidden beneath the ground.

  • Intelligence At The Edge

    Blaize is a computing company that optimizes AI at scale wherever data is collected and processed, including at the edge. The company enables a range of existing and new AI use cases in the automotive, smart vision, and enterprise computing segments. The company claims that developers can create new classes of products to bring the benefits of AI and machine learning to broad markets.

    The company has developed a fully programmable GSP architecture that utilizes task-level parallelism and streaming execution processing to deliver very low energy consumption, high performance, and scalability. Blaize claims that, in comparison, existing GPUs and FPGAs exact a much higher energy price, while CPUs cost more and scale poorly, and all are subject to excessive latency due to their sequential execution processing architectures.

  • Full-Body Haptics Suit

    Haptics are all about the sense of touch. Now you can immerse your entire body – or at least 70 tactile points, mainly around your torso – into the world of artificial experiences. The bHaptics TactSuit provides an audio-to-haptic feature that converts sound into haptic feedback felt in real time around your torso. For example, when a bomb explodes or you hear footsteps during a PC/VR game, you’ll feel the experience from the right direction. You’ll even be able to feel Samurai cuts and friendly hugs.

  • Security Comes In Many Forms

    There are many ways to protect your PC data and applications, from hardware-encrypted portable storage devices, backup solutions, file repair software, and data recovery to digital forensics services. SecureData provides both products and services in these areas. At CES, the company demonstrated a secure USB drive, which it claimed was the only hardware-encrypted flash drive in the world with keypad and Bluetooth authentication.

  • Wireless Six-Degrees Of Freedom (6DOF)

    Atraxa’s system tracks 6DOF motion without the need for optical cameras or infrared markers to be placed around the room or mounted externally to the XR headset or controller. No line of sight – or wires – is required between the headset and controllers. Unhindered by wires or line-of-sight constraints, users can move freely in large spaces, even from room to room; no room mapping or controller orienting (or reorienting) is required. Tracking starts immediately and continues without interruption.

    The tech combines electromagnetic (EM) and inertial technologies into a single sensor-fusion tracking platform. The IMU (inertial measurement unit) returns acceleration and angular velocity data. The EM tracker delivers true position and orientation data; it also establishes the tracking volume and local coordinate system. Atraxa comprises two main components: a tracker module and a receiver module. The tracker module houses the IMU and an EM transmitter coil that generates the magnetic field (i.e., the tracking volume). The tracker modules are embedded into the handheld controllers (or other peripherals).

  • Real-Time Accident Report

    Sooner or later, all of us get into an automotive accident. When that occurs, wouldn’t it be great to have a record of what happened? Through the use of embedded acceleration sensors, MDGo generates a real-time report in the case of a car crash, detailing each occupant’s injuries by body region. The company’s technology enables accurate delivery of needed services and support by providing optimal medical care in the case of an emergency and supporting the claims process.

  • Smart Factory

    Could a factory think for itself or autonomously design a better car or aircraft? Can it eliminate waste? All of these questions fit into the realm of manufacturing intelligence. One company with experience in this area is Hexagon, which claims that its technologies are used to produce 85% of smartphones, 75% of cars, and 90% of aircraft.

    Their Smart Factory approach aims for fewer inputs, zero waste, and high quality. All of this is achieved through sensor, software, and autonomous solutions that incorporate data feedback to improve work and boost efficiency, productivity, and quality across industrial and manufacturing operations.

  • A Cool “Uncooled” Methane Gas Detector

    The FLIR GF77 Gas Find IR is the company’s first uncooled thermal camera designed for detecting methane. This handheld camera offers inspection professionals the features they need to find potentially dangerous, invisible methane leaks at natural gas power plants, renewable energy production facilities, industrial plants, and other locations along a natural gas supply chain. The gas detector provides methane gas detection capability at roughly half the price of cooled gas inspection thermal cameras, to empower the oil and gas industry to reduce emissions and ensure a safer work environment.

  • IoT Arduino Adds LoRaWAN Connectivity

    You can now connect your sensors and actuators over long distances via the LoRa wireless protocol or through LoRaWAN networks. The Arduino MKR WAN 1310 board provides a practical and cost-effective solution for adding LoRa connectivity to projects requiring low power. This open-source board can be connected to: the Arduino IoT Cloud, your own LoRa network using the Arduino LoRa PRO Gateway, existing LoRaWAN infrastructure like The Things Network, or even other boards using the direct connectivity mode.

  • Wearables, Ingestibles, Invisibles

    One of the keys to a healthy life is nutrition. But what exactly constitutes ‘healthy’ food for a specific person? To answer that question, you need to measure and analyze the processes inside the complex human digestive system. Imec is working on prototype technology that is up to that task. It’s called ingestible sensors.

    The company also develops wearables for medical and consumer applications that enable reliable, continuous, comfortable, and long-term health monitoring & management. This includes high-accuracy & low-power biomedical sensing technologies sometimes embedded into fabrics.
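The EM-plus-IMU combination described for Atraxa above is a classic sensor-fusion pairing: the IMU is fast but drifts over time, while the EM tracker is drift-free but slower. A toy one-dimensional complementary filter sketches the idea (an illustration only, not Atraxa’s actual algorithm; `alpha`, the update rate, and the sample data are made up):

```python
def fuse(em_position: float, imu_position: float, alpha: float = 0.98) -> float:
    """Complementary filter: lean on the fast IMU estimate moment to
    moment, and let the drift-free EM reading pull it back slowly."""
    return alpha * imu_position + (1.0 - alpha) * em_position

dt = 0.01                 # 100 Hz update rate (illustrative)
pos, vel = 0.0, 0.0
samples = [(0.5, 0.0000), (0.5, 0.0001), (0.0, 0.0002)]  # (accel m/s^2, EM fix m)
for accel, em_fix in samples:
    vel += accel * dt                   # integrate acceleration -> velocity
    pos = fuse(em_fix, pos + vel * dt)  # dead-reckon, then correct with EM fix
```

In a real tracker the same blending happens in three dimensions for both position and orientation, which is why the platform can report full 6DOF without external cameras.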

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

researchers-are-looking-for-ways-to-make-vr-less-painful
Researchers are hoping to develop standards and guidelines for developers to reduce the risk of physical discomfort and injuries among VR users. (Image source: Oregon State University) 

Can the physical risk of using VR be reduced or eliminated altogether?

Ask anyone who has used VR for a prolonged period of time and they’ll tell you a big issue with a lot of VR hardware is comfort level. Bulky headsets and controllers with no ergonomic design can turn a fun immersive experience into a literal pain in no time. This becomes a big concern, not only to developers who want their hardware and software to be used for extended periods, but also to users themselves, who risk long-term injury and health complications ranging from musculoskeletal issues to more commonly reported issues such as eye strain, nausea, and motion sickness.

Hardware developers have put a premium on ensuring comfort with their latest generation headsets, using techniques ranging from better balancing of internal components to using lighter-weight materials. But while other industries have guidelines and standards to fall back on, nothing of the sort exists for virtual and augmented reality.

Researchers from Oregon State University (OSU) and Northern Illinois University have been examining how common movements done in VR are contributing to muscle strain and discomfort in users. Their goal is to establish baselines for optimal object placement and angles in virtual environments so that developers can design games and other applications that cause minimum discomfort in users.

The results of their work are published in a recent study in the journal Applied Ergonomics.

“In computer users, the relationship between awkward postures or repeated movements and musculoskeletal disorders is well known,” researcher Jay Kim of OSU’s College of Public Health and Human Sciences said in a press statement. “We wanted to see how VR compares to conventional computer-human interactions…We wanted to evaluate the effects of the target distances, locations, and sizes so we can better design these interfaces to reduce the risk for potential musculoskeletal injuries.”

According to Kim it can take as little as three minutes for shoulder discomfort to occur as a result of having to extend your arm straight out, as in many VR applications. He said that prolonged use of VR can lead to a range of problems from gorilla arm syndrome, to rotator cuff injuries, to neck strain and cervical spine damage.

For their study, Kim and his team focused primarily on neck and shoulder movements. The researchers had participants wearing Oculus Rift headsets perform tasks that involved pointing to specific dots around a circle or coloring in a designated area with their fingers. The participants’ movements were monitored using motion capture cameras. They were also outfitted with sensors to measure electrical activity in their muscles.

The tests were repeated with the visuals placed at eye level, 15 degrees above and below eye level, and 30 degrees below eye level.

Results showed performance on the coloring task worsened when participants had to tilt their heads down at either 15 or 30 degrees. Researchers noted the greatest degree of muscle activation at 15 degrees above eye level, with the most discomfort occurring during the pointing task at that angle.

“This result indicates that excessive vertical target locations should be avoided to reduce musculoskeletal discomfort and injury risks during VR interactions,” the study reads. “Based on relatively lower biomechanical exposures and trade-off between neck and shoulder postures, vertical target location between eye height and 15-degrees below eye height could be recommended for VR use.”

“Based on this study, we recommend that objects that are being interacted with more often should be closer to the body,” Kim said. “And objects should be located at eye level, rather than up and down.”
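The study’s guidance quoted above reduces to a simple rule of thumb that could be applied as a design-time check: keep frequently used targets between eye height and 15 degrees below it. `within_recommended_zone` is a hypothetical helper written for illustration, not code from the study:

```python
def within_recommended_zone(vertical_angle_deg: float) -> bool:
    """True if a target's vertical angle relative to eye level falls in
    the band the study recommends: from eye height (0 degrees) down to
    15 degrees below it (negative angles are below eye level)."""
    return -15.0 <= vertical_angle_deg <= 0.0

# Targets above eye level or far below it fall outside the band.
ok = within_recommended_zone(-10.0)       # True: slightly below eye level
too_high = within_recommended_zone(15.0)  # False: above eye level
```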

Kim said research like this is going to become increasingly important as VR and AR technologies proliferate into more and more industries. While entertainment remains VR’s largest use case – and the biggest area of concern in terms of safety and discomfort – more and more enterprise deployments of VR in areas such as industrial and medical training as well as in intricate applications such as virtual prototyping and design mean that VR-related injuries could quickly transition into a workplace hazard.

Data released by Statista estimated the number of active VR users worldwide at around 171 million as of 2018.

Kim and his team hope their work can help avoid the same mistakes that occurred as personal computers became more mainstream in the 80s and 90s, when a lack of awareness led to a variety of health issues such as carpal tunnel syndrome and vision problems. “With VR, we’d like to learn now rather than later,” he said.

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

the-12-best-innovations-of-ces-2020

Forget new TVs and smartphones. These are the real game changers introduced at CES 2020.

  • Now that the smoke has cleared from CES 2020, we can take a step back and see which technologies were the real innovations of 2020. Let’s be honest, CES can be a black hole of vaporware, false promises, and concepts intended to remain just that.

    We’ve compiled a list of our favorite technologies introduced at CES 2020 – innovations that we’re sure will have a lasting impact in 2020 and beyond.

  • AerNos AerSIP Gas Sensor

    The AerSIP from AerNos is a 5 x 5-mm, multi-gas sensing module that combines nanotechnology and machine learning algorithms to monitor indoor and outdoor air quality. The system-in-package (SIP) is an embedded plug-and-play solution that can be integrated into wearables, mobile devices, and other IoT devices and is capable of detecting hazardous gases and other dangers at parts-per-billion levels.

    (Image source: AerNos/CES)

  • AMD Ryzen 4000 Series Mobile Processor

    AMD’s Ryzen 4000 could be a literal game changer for high-end laptop users – particularly gamers and designers. AMD says its new Ryzen 4000 series is the world’s first 7-nanometer laptop processor. Designed for ultra-thin laptops, the Ryzen 4000 series features up to 8 cores and 16 threads and a configurable 15W thermal design power. AMD pledges the Ryzen 4000 series offers up to four percent greater single-thread performance and up to 90 percent faster multithreaded performance than its competitors, as well as up to 18 percent faster graphics performance over competing chips.

    (Image source: AMD)

  • Atmosic Technologies M3 Battery-Free Bluetooth 5 SoC

    Atmosic says its M3 Battery-Free Bluetooth 5 SoC uses so little power that it can even eliminate the need for battery power entirely in devices such as wearables, keyboards, mice, asset trackers, beacons, and remotes. The M3 integrates Atmosic’s Lowest Power Radio, On-demand Wake-Up, and Managed Energy Harvesting technologies to deliver what the company says is 10 to 100 times lower power than other SoCs, while still complying with Bluetooth standards. The M3’s radio uses two “ears” – one for listening in a low-power state to perceive incoming commands, and another that only wakes when alerted. The SoC uses energy harvesting technology to gather power from radio frequency, photovoltaic, thermal, and motion.

    (Image source: Atmosic)

  • Bot3 Zen-P VSLAM Deep Learning Module

    Bot3’s Zen-P VSLAM Deep Learning module integrates visual simultaneous localization and mapping (VSLAM) technology (a version of the same technology used in autonomous vehicles) into mobile robots ranging from industrial machines to smart home products. Bot3’s image processing algorithm, Pascal, allows for autonomous navigation without tracks, as well as indoor mapping and positioning (for instance, in warehouse applications).

    (Image source: Bot3)

  • BrainCo BrainRobotics Prosthetic Hand

    Many companies have been developing mind-controlled prosthetics for amputees and other disabled patients. What separates the prosthetic hand developed by BrainRobotics is the integration of AI technology. The BrainRobotics hand utilizes machine learning to allow the hand and its user to learn from each other over time – leading to more lifelike movements. The company is aiming to provide accurate and reliable prosthetics at an affordable price for all patients. BrainRobotics is a subsidiary of BrainCo, a software developer focused on brainwave measuring and monitoring.

    (Image source: BrainCo/BrainRobotics)

  • Fluent.ai MultiWake Word and Voice Control Engine

    Fluent.ai is a technology company focused on AI for voice interface and speech recognition. The company’s Multi-Wake Word and Voice Control Engine is an edge-based, noise robust, and multilingual speech technology that consumes minimal power and storage, allowing it to be embedded in small devices. The solution is Cortex M4-based and supports four separate wake words and 100 multilingual commands, according to Fluent.ai.

    Fluent.ai has recently partnered with semiconductor designer Ambiq Micro to implement Fluent.ai’s software solutions into Ambiq’s ultra-small footprint, low-power microcontrollers. Ambiq’s MCU supports frequencies up to 96 MHz, and Fluent.ai’s solution requires only 16 MHz from the MCU. The new partnership means Fluent.ai and Ambiq will be releasing MCUs for OEMs looking for an easy way to add speech recognition and voice command functionality to their smart home devices and other products.

    (Image source: Fluent.ai / CES)

  • Intel Tiger Lake Chip

    When Intel announces a new chip, the whole world takes notice. The chipmaking giant is launching its latest chip for consumers this year. Dubbed Tiger Lake, the new chip is said to be optimized for AI performance, graphics, and USB 3 throughput. Rather than desktops, the new chips will be focused on mobile devices such as ultra-thin laptops and tablets. The first products featuring Tiger Lake are expected to ship later in 2020.

    (Image source: Intel)

  • Monster MultiLink Bluetooth Technology

    Sometimes it’s the most straightforward ideas that can make the biggest difference. Most of us love our Bluetooth wireless headphones and earbuds. The problem is they don’t create a sharable experience. What if you want to show your friend the video you’re watching without disturbing the people around you? Monster has debuted a new technology called Music Share that uses MultiLink technology to allow devices to send Bluetooth audio to multiple devices in sync. The technology expands how Bluetooth headphones can be used and opens up new use cases ranging from air travel to fitness classes, as well as new avenues for social interaction.

    (Image source: Bluetooth SIG)

  • Murata Coral Accelerator Module

    Working in partnership with Coral and Google, Murata Electronics has developed what it is calling the world’s smallest AI module. The Coral Accelerator Module packages Google’s Edge TPU ASIC into a miniaturized footprint to enable developers to embed edge-based AI into their products and devices. The new module forms an integral part of Coral’s integrated AI platform, which also includes a toolkit of software tools and pre-compiled AI models.

    (Image source: Murata Electronics Americas)

  • Pollen Robotics Reachy Open-Source Robot

    Reachy is a robot developed by Pollen Robotics, in collaboration with the INCIA Neuroscience Institute in France, that is fully open source. The robot, which can be programmed using Python, is modular – employing a variety of 3D-printed grippers – and comes with prepackaged AI algorithms to allow developers to customize it for a variety of applications ranging from customer service to assisting the elderly or disabled.

    Read more about Reachy, and the rise of open-source robotics, here.

    (Image source: Pollen Robotics)

  • VRgineers 8K XTAL Headset

    VRgineers, a maker of premium VR headsets for enterprise applications in industries ranging from automotive to defense and military, has released a major upgrade to its flagship XTAL headset. The latest version of XTAL features 8K resolution (4K per eye), improved lenses with a 180-degree field-of-view, and a new add-on module for augmented reality and mixed reality functionality. The headset also still includes eye tracking as well as integrated Leap Motion sensors to enable controller-free navigation and interactions.

    (Image source: VRgineers)

  • zGlue ChipBuilder

    zGlue is a software company that develops tools for chipmakers and designers. Its latest offering, ChipBuilder 3.0, is a design tool for building custom silicon chips and accelerating time to market. The software suite features an expansive library of chipsets and allows engineers to capture schematics, route and verify designs, and download netlists. The tool allows engineers to create realistic 3D models, code their own chips, and even place orders for physical chips via zGlue’s Shuttle Program.

    (Image source: zGlue / CES)

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

pico-interactive-is-one-of-the-best-kept-secrets-in-vr
The Pico G2 4K (Image source: Design News)

Unless you’ve been deeply entrenched in the VR space for years you might be forgiven for not knowing about Pico Interactive. While big names in VR hardware like HTC and Oculus have only recently shifted their focus to creating standalone headsets (ones that do not need external sensors or a PC to operate), standalone has been Pico’s focus from the very beginning.

The San Francisco-based company made a quiet but significant showing at the 2016 Electronic Entertainment Expo (E3) with a small, standalone headset called the Pico Neo. The Neo was an Android-based gaming headset that boasted some impressive specs for its size. It was based on the same Snapdragon 820 chipset behind the early Oculus Rift and HTC Vive headsets, offered a 90 Hz refresh rate, and had a 110-degree field of view.

When we spoke to Pico in 2016, Ennin Huang, Pico’s VP of Design, said the company’s vision was pretty straightforward – “We really wanted to push VR and make it affordable for everyone without compromising the graphics and experience.”

The way to do that, Huang said, was by offering users a lightweight, untethered experience.

The Pico Neo didn’t make a big splash in the US. But it turned out Huang was right.

In recent years VR hardware makers have pivoted into offering headsets that are not only lighter and more powerful, but, most importantly, standalone. The latest hardware from Oculus, the Oculus Quest, is an all-in-one headset targeted at gaming with six degrees of freedom (6DoF) tracking. The Quest, the less robust Oculus Go, other standalone models such as the HTC Vive Focus, and the (fortunate) death of smartphone-based VR headsets point to one clear trend for the future of VR hardware – wireless, standalone, and powerful.

But Pico Interactive hasn’t stood still. In recent years the company has pivoted into targeting its headsets specifically at engineers, designers, and other enterprise users – with the aim of providing a convenient and mobile experience for applications ranging from virtual prototyping and design, to education, and even medical applications.

Design News had a chance to go hands-on with one of Pico Interactive’s flagship enterprise headsets, the G2 4K, and found it to be one of the best overall user experiences to date. The G2 4K is very light (276 grams according to company specs) and very well-balanced. The 4K resolution comes through crystal clear thanks to LCD displays and the use of fresnel lenses (which also help contribute to the unit’s light weight).

In terms of overall image quality, the G2 4K rivaled high-end PC-based enterprise headsets like HP’s Reverb, despite having a lower resolution (3840 x 2160, or 1920 x 2160 per eye).

“We conducted a lot of human-factors study for the G2 4K,” Huang told Design News in a recent interview. “There are two main strategies for tackling the overall weight issue: First, the material for the G2 4K is carefully selected to achieve the lightest weight possible while still keeping it affordable for everyone. Second is the weight distribution – we want to make sure the overall center of the mass is somewhere close to the center of the user’s head when the user is wearing the headset. To achieve that we have moved some of the components to the back of the head-padding while still keeping the form factor as compact as possible.”

The G2 4K’s fresnel lenses contribute to its light weight and image quality, while its foam face cushion and adjustable Velcro straps support overall comfort. (Image source: Design News)

With a 101-degree field of view and a 75 Hz refresh rate, the G2 4K does fall below the specs of more entertainment-focused headsets. But then again, enterprise software apps don’t usually require the same high frame rates as, say, the latest action game.

The G2 4K is built on the Snapdragon 835, Qualcomm’s first chipset offering targeted at mobile VR and augmented reality. It’s the same chipset behind the highly-praised Oculus Quest.

Aside from the headset having its own online store (the Pico Store), the company also offers an SDK for the G2 4K that we found supports both Unreal and Unity engines. For those who might be put off by the thought of learning a new SDK or having to purchase apps within a proprietary store, the headset is also compatible with the Wave SDK for the HTC Vive line of headsets and supports apps from HTC’s Viveport. We tried running a few enterprise apps from Viveport on the G2 4K and didn’t notice any difference in performance.

Where one might find disappointment with the G2 4K is that it only offers three degrees of freedom (3DoF) tracking for both the controller and the headset, which can significantly limit user mobility depending on the application. Some enterprise users who prefer a more fixed design space won’t notice the difference at all, but someone like an automotive engineer or architect, for example, who might prefer to walk through a virtual space at room scale, might be frustrated at having to use point-and-click navigation to move around.

The G2 4K’s controller is compact and comfortable, but only offers 3DoF tracking. (Image source: Design News)

Asked about the decision to give such a visually powerful headset only 3DoF tracking, Huang said the decision came down to offering a product with the right specs for enterprise users that would also provide a comfortable barrier to entry for new users. “We think 3DoF and 6DoF both have their niche in enterprise solutions,” Huang said. “While 6DoF is great for a little more advanced users, the 3DoF VR training and tutorials experience is great for someone who has never had, or has had very little, VR experience. In fact, many users of our enterprise customers have never used VR before.”

Very serious enterprise users will probably opt for a PC-based setup along the lines of the HP Reverb or HTC’s Vive Pro. But smaller organizations, or those looking to get their feet wet in implementing VR into their workflow and who place high value on an untethered experience, could find the G2 4K a more economical option still capable of delivering in terms of image quality and overall performance.

The G2 4K features two external speakers, an audio jack, a micro SD card slot, a USB-C port, and a built-in microphone. (Image source: Design News)

Enter the Neo 2

At the time we spoke, Pico Interactive was shipping another headset, the Neo 1, which featured 6DoF head tracking and a 3DoF controller, in Japan, China, and parts of Europe. Huang teased that the company’s R&D team was working on prototypes for headsets that offer 6DoF head and controller tracking, but declined to provide any further details.

However, at CES 2020, Pico made another announcement that should please users who demand a 6DoF experience with their VR.

The company’s latest headset is the Neo 2, a standalone enterprise headset with 4K resolution and 6DoF inside-out tracking. A second version, the Neo 2 Eye, features eye tracking and foveated rendering capabilities courtesy of Tobii, the go-to supplier of eye tracking sensors and analytics technologies for VR.

The Pico Neo 2 debuted at CES 2020. (Image source: Pico Interactive)

Based on the Qualcomm Snapdragon 845 platform, the Neo 2 is a bit heavier than the G2 4K (350 grams, according to specs), and features the same resolution, lenses, and refresh rate. Where the headset takes a step up from previous models is in utilizing the 845’s integrated simultaneous localization and mapping (SLAM) technology for room-scale 6DoF tracking. Both models of the Neo 2 also feature two mono fisheye external cameras.

For its part, Tobii says the foveated rendering technology integrated into the Neo 2 can increase framerates by up to 66 percent and reduce shading loads by up to 72 percent, giving the headset improved visual quality without sacrificing performance or draining battery life.

The addition of eye tracking also gives Neo 2 users a novel control scheme (navigating apps and menus via eye movement) and can also be used to capture gaze data to assist with training applications as well as capturing user behavior insights.

It’s a pretty big leap for a company that started with mobile gaming, though Huang said Pico hasn’t left its roots behind. And while enterprises may be the first to take note of what Pico is offering, Huang said the company believes first in serving VR users in any application they desire.

“Mobile gaming and entertainment are still part of our focus, and in fact, we have been doing pretty well in Asia in places like Korea, Japan, and China. In addition to the consumer market, we also see great opportunities where Pico can provide enterprise solutions for many of our business partners,” Huang said.

“The motivation behind all the Pico products has always been the same since we’ve started back in 2016; it has always been user-first design.”

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

want-to-build-an-open-source-hardware-and-software-robot?

The 2020 Consumer Electronics Show (CES) is full of engineering marvels, many of them advances in robots. For example, consider UBTech’s highlights at this year’s show. The company’s intelligent humanoid service robot named “Walker” won Best of CES 2019 and will be back with additional features at the 2020 show. According to the company, Walker will be faster and demonstrate more human-like walking as well as yoga poses that show its huge improvement in motion control. The robot will also demonstrate the ability to push a cart, draw pictures, and write characters, and will show improved static balance with full-body compliance control.

There’s another robot system that the technical community might find equally interesting, if a bit less flashy. France’s Pollen Robotics is displaying its “Reachy” robot at CES 2020. In collaboration with the INCIA Neuroscience Institute in France, the company has developed a 3D-printed robot arm that’s 100% open source. Reachy is billed as an expressive humanoid service robot specializing in interacting with people and manipulating objects. The robot is built with prepackaged AI and modular robotics that should easily accommodate many real-world applications, such as extensions for disabled people (human augmentation), helping out at events (like CES), assisting small businesses, and even serving as a receptionist.

According to the company, Reachy can be easily programmed in Python and offers ready-to-use operational environments for game play, serving coffee, making music, handing out specific objects, and more. The robot is also fully customizable with open hardware, software and data!
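Since the robot’s code had not yet been released at the time of writing, any concrete example is speculative, but the kind of high-level Python control the company describes might look something like the sketch below. The ArmStub class, its goto method, and the joint names are all invented for illustration and are not Pollen Robotics’ actual API.

```python
# Illustrative only: Reachy's SDK was unreleased when this was written,
# so this class and its method names are hypothetical stand-ins meant to
# convey the flavor of scripting a robot arm from Python.

class ArmStub:
    """Stand-in for a robot-arm joint interface."""
    def __init__(self):
        self.joint_angles = {"shoulder": 0.0, "elbow": 0.0, "wrist": 0.0}

    def goto(self, **targets):
        # A real SDK would interpolate motor positions over time;
        # this stub simply records the requested angles (in degrees).
        self.joint_angles.update(targets)
        return self.joint_angles

arm = ArmStub()
pose = arm.goto(shoulder=45.0, elbow=90.0)
print(pose["elbow"])  # 90.0
```

The appeal of this style is that application logic (serving coffee, handing out objects) stays in a few readable lines of Python while the SDK hides motor-level details.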

The company invites developers to join its open source community and participate on GitHub, although it explains that Reachy is currently still under development. The open source hardware, software, and data won’t be released until the robot is ready, toward the end of Q1 2020.

Image source: Pollen Robots / Reachy

But what does it really mean to say a company or platform supports open hardware and software?

deepfakes:-the-looming-threat-of-2020

I’m watching a clip from the movie The Shining. Shelley Duvall is hiding from her crazed husband as he chops down the door with an axe. Jim Carrey sticks his head through the opening and cackles the iconic line: “Here’s Johnny!”

…Jim Carrey is not in The Shining.


What you’re seeing is not a Hollywood special effect. It wasn’t done with After Effects, green screen, or with costuming and makeup. The video is a fake created by deep learning artificial intelligence – a deepfake. And anyone with a powerful computer and enough time can make one.

You might have heard of deepfakes before, or glimpsed headlines discussing the technology. You might even have laughed at various YouTube videos on channels such as Ctrl Shift Face that have swapped the faces of celebrities in iconic roles, with some humorous and sometimes unsettling results (once you’ve seen any of the bizarre deepfakes involving Nicolas Cage, you can never un-see them).

But deepfakes, once confined to darker corners of the internet, are becoming a serious threat. In the US, particularly as the 2020 election season rapidly approaches, AI experts are warning that deepfakes could become a powerful tool for spreading misinformation and manipulating the public. With enough effort a bad actor could create a video of any political candidate saying nearly anything. And in today’s climate of social media outrage and algorithm-driven content distribution, there’s no telling how far it could spread before someone caught it.

It’s time engineers, developers, and technologists all had a serious discussion about deepfakes.

(Image source: Adobe Stock)

The Origin Of Deepfakes

No one person has taken credit for originally developing deepfakes. Their existence owes to a confluence of technologies, ranging from ever-more sophisticated computer vision algorithms and neural networks to increasingly powerful GPU hardware.

The first deepfakes seem to have emerged on the internet in 2017, when an anonymous Reddit user called “Deepfakes” began distributing illicit, altered videos of celebrities online. Other Reddit users followed suit, and it wasn’t long before a community had sprung up around distributing both the deepfakes themselves as well as tutorials and software tools to create them.

In an interview with Vice, [NSFW link] one of the first outlets to take an extensive look at deepfakes, the Reddit user outlined how comparatively easy the process is:

“I just found a clever way to do face-swap. With hundreds of face images, I can easily generate millions of distorted images to train the network. After that if I feed the network someone else’s face, the network will think it’s just another distorted image and try to make it look like the training face.”

But it wasn’t all fun and games. Far from it. When they first appeared, deepfakes had one particularly popular and disturbing use case – pornography. Much of the early deepfake content available was pornographic films created using the faces of celebrities like Gal Gadot, Scarlett Johansson, and Taylor Swift without their consent.

As the videos proliferated, there was a crackdown, with Reddit itself shutting down its deepfakes-related communities, pornographic websites removing the content, and sites like GitHub refusing to distribute deepfake software tools.

If private citizens weren’t that concerned yet, it was probably because sites got somewhat ahead of the problem. Left unchecked, it wouldn’t have been long before deepfake pornography spread from celebrities to everyday people. Anyone with enough publicly available photos or video of themselves on a platform like Facebook or Instagram could potentially become a victim of deepfake revenge porn.

In 2018, Rana Ayyub, an investigative journalist from India, fell victim to a deepfake plot intended to discredit her as a journalist. Ayyub detailed her ordeal in an article for The Huffington Post:

“From the day the video was published, I have not been the same person. I used to be very opinionated, now I’m much more cautious about what I post online. I’ve self-censored quite a bit out of necessity.

“Now I don’t post anything on Facebook. I’m constantly thinking what if someone does something to me again. I’m someone who is very outspoken so to go from that to this person has been a big change.

“I always thought no one could harm me or intimidate me, but this incident really affected me in a way that I would never have anticipated…

“…[Deepfakes] is a very, very dangerous tool and I don’t know where we’re headed with it.”

How Deepfakes Work

On the surface, the process of creating a deepfake is fairly straightforward. First, you need enough images of your target (hundreds or more, ideally) showing their face in as many orientations as possible. The more images you can get, the better the results, which is why celebrities and public figures are an easy target. If you think it might be difficult to get hundreds or thousands of images of someone, remember that a single second of video could contain 60 frames of their face.
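To get a feel for how quickly a training set accumulates, here is a back-of-envelope sketch. The 50 percent usable-frame yield is an assumed allowance for frames where the face is blurred, occluded, or turned away, not a measured figure.

```python
# Rough estimate of face-training images harvestable from video.
# usable_fraction is a made-up discount for blurred or occluded frames.

def frames_available(clip_seconds, fps=60, usable_fraction=0.5):
    """Estimate usable face crops from a single clip."""
    return int(clip_seconds * fps * usable_fraction)

# A two-minute interview clip at 60 fps:
print(frames_available(120))  # 3600 usable frames even at a 50% yield
```

At that rate, a handful of interview clips easily clears the hundreds-to-thousands of images the training process wants.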

Then you need a target video. The AI can’t change skin tone or structure so it helps to pick a target and source with similar features. Once a deep learning algorithm is trained on a person’s facial features, additional software can then superimpose that face onto another person’s in your target video. The results can be spotty at times, as many videos online will attest to, but done right, and with enough attention to detail, the results can be seamless.

In an interview with Digital Trends, the anonymous owner of the Ctrl Shift Face YouTube channel (the channel responsible for the Jim Carrey/The Shining videos, among others) discussed how simple, yet time-consuming the process is:

“I’m not a coder, just a user. I don’t know the details about exactly how the software works. The workflow works like this: You add source and destination videos, then one neural network will detect and extract faces. Some data cleanup and manual extraction is needed. Next, the software analyzes and learns these faces. This step can sometimes take a few days. The more the network learns, the more detailed the result will be. In the final step, you combine these two and the result is your deepfake. There’s sometimes a bit of post-process needed as well.”

On one hand, the relative ease with which this can be done with little to no coding experience is certainly disconcerting. On the other hand, deepfakes are an impressive demonstration of the sophistication of AI today.

At the core of deepfakes is a neural network called an autoencoder. Put simply, an autoencoder is designed to learn the important features of a dataset so it can create a representation of it on its own. If you feed a face into an autoencoder its job is then to learn the distinguishing characteristics that make up a face and then construct a lower-dimensional representation of that face – in this case called a latent face.

Deepfakes work by having a single encoder train to create a generalized representation of a face and then having two decoders share that representation. If you have two decoders – one trained on Person A’s face, the other on Person B’s – then by feeding the encoder either face you can transpose Person A’s face onto Person B’s (or vice versa). If the encoder is trained well enough, and the representation is generalized enough, it can handle facial expressions and orientations in a very convincing way.

Since faces in general are very similar in their overall shape and structure, a latent face created by an encoder using Person A’s face, can be passed to a decoder trained on Person B’s face to good effect. The result at the other end is a video of Person B, but with Person A’s face.
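The encoder/decoder wiring described above can be sketched structurally in a few lines of Python. The toy “layers” below are just random weight matrices with no training loop, so the output is meaningless as imagery; the point is only the architecture: one shared encoder, two per-identity decoders, and a decoder swap at inference time.

```python
# Structural sketch of the deepfake autoencoder trick (not a trained model):
# one shared encoder maps any face to a latent code; each identity gets its
# own decoder. Swapping decoders at inference performs the face swap.
import random

random.seed(0)

def make_layer(n_in, n_out):
    # Stand-in "layer": a random weight matrix, no training shown.
    return [[random.random() for _ in range(n_in)] for _ in range(n_out)]

def apply(layer, vec):
    # Plain matrix-vector product.
    return [sum(w * x for w, x in zip(row, vec)) for row in layer]

LATENT = 4
encoder   = make_layer(16, LATENT)   # shared by both identities
decoder_a = make_layer(LATENT, 16)   # in reality, trained on Person A
decoder_b = make_layer(LATENT, 16)   # in reality, trained on Person B

face_a = [random.random() for _ in range(16)]  # toy "image" of Person A

latent = apply(encoder, face_a)    # generalized latent face
swap   = apply(decoder_b, latent)  # decoded as Person B -> the face swap

print(len(latent), len(swap))  # 4 16
```

In a real system the encoder and both decoders are deep convolutional networks and the latent face is far larger, but the decoder-swap step is exactly this simple.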

As long as you have two subjects similar enough and a computer with enough processing power, the rest just takes time. Faceswap – one of the more readily available deepfakes apps – can run on a Windows 10, Linux, or MacOS computer and recommends a newer Nvidia GPU for processing. “Running this on your CPU means it can take weeks to train your model, compared to several hours on a GPU,” according to Faceswap’s documentation.

19-in-2019:-best-electronic-bits-from-the-design-news-vault

Check out the best curated technical content from the editors at Design News.

For electronic developers, 2019 was another stellar year. In addition to ongoing challenges and successes in areas such as embedded systems, hardware chip and board systems, and systems-of-systems (like satellites), there were new materials, evolving design approaches, and insights to be considered. Here are the best stories that covered these issues from the 2019 Design News archives.

  1. Top 10 2019 engineering, science and technology awards – Each year reveals further advances in the disciplines of technology, engineering and science. This year, luminaries were awarded for their work in cosmology, photonics, GPS systems, video processing, semiconductors, brain neurons and more.
  2. Who’s left to make chip development tools? – Electronic design automation (EDA) refers to the software tools used for designing electronic systems, such as system-on-chip (SoC) integrated circuits and printed circuit boards. The tools work in a design-verification flow that chip designers use to analyze and develop semiconductor chips. Here’s a look at the remaining major EDA tool companies after years of consolidation.
  3. Complete integrated circuits fabricated using printing press – Researchers have for the first time printed complete integrated circuits that have more than 100 organic transistors, a breakthrough in the quest to use printing to create complex next-generation electronic and computing devices. The breakthrough eliminates the need to use multiple manufacturing methods to create an integrated circuit with more than 100 organic transistors.
  4. 2 game-changing trends that will define embedded systems in the 2020s – The last decade has seen an amazing advancement in embedded system development techniques, tools and technologies. The next decade has the potential to dramatically change the way that products and embedded systems are developed.
  5. Developing an embedded software build pipeline – One interesting fact that I’ve noticed about embedded software development is that development processes and techniques tend to lag the general software industry. Developing a more sophisticated build pipeline can have dramatic effects on the embedded software development life cycle.
  6. 8 criteria to evaluate when selecting an RTOS – Real-time operating systems (RTOS) are finding their way into nearly two-thirds of all applications. Cost is a factor. But there are more important things to consider when choosing a real-time operating system.
  7. Old 3G battle shifts to 5G struggle – The old 3G battle between communication and computational industries has been replaced with the 5G struggle between nations and sub-6 vs mmWave global spectrums.
  8. Internet of Space or Space Junk?– When bad movies make good predictions and how to lessen the Kessler Syndrome with everything from AI to space harpoons.
  9. Did Edison Really Lose a Non-Existent ‘Current War?’ – The recent movie, The Current War, dramatizes the struggles between Edison, Westinghouse, and Tesla to bring electrical power to the US. But was the “war” actually fabricated?
  10. 3 Do’s and Don’ts for Medical Device Software Development – Medical devices are one of the fastest growing areas of embedded hardware and software development. Here are some successful strategies – and potential pitfalls – gleaned from real-world medical device development projects.
  11. Microorganisms Provide Solar Energy Independent of Using Solar Cells – The concept of solar energy usually inspires images of long rows of solar panels lined up in a vast field. Researchers in Sweden achieved production-potential amounts of butanol using carbon dioxide and sunlight.
  12. Beware the Hidden Costs of a Free Embedded RTOS – If you’re basing your selection of a real-time operating system (RTOS) solely on initial cost, then you may be in for a rude awakening.
  13. 8 RISC-V Companies to Watch – The open source nature of RISC-V promises to enable companies to create custom chip hardware specifically tailored to their products and devices. These eight companies are developing their own RISC-V technologies and are committing to helping third parties do the same to help push adoption of the open-source chip architecture.
  14. New Material Could Transform How Electronics Are Built – A new family of crystal materials can serve a dual purpose in electron movement in electronic devices, potentially changing how they will be designed in the future.
  15. Biocompatible Transistor Invented for New Devices – Researchers have developed what they said is the first biocompatible ion-driven transistor fast enough to enable real-time signal sensing and stimulation of brain signals.
  16. Efficient Fabrication Method Achieved for Nano-Sized Processors – A new rapid fabrication method for nano-scale semiconductors could help advance the design of next-generation processors.
  17. The Biggest Embedded Software Issue Is … – There are many different problems and challenges that embedded software developers are facing today. One of the biggest and least spoken about issues is that too many developers are writing software code without considering what could go wrong.
  18. Smart Manufacturing Expert Says It’s Time to Embrace Fuzziness – Combining fuzzy sensing technologies with artificial intelligence, manufacturers can learn more about their enterprise for less cost.
  19. 2019 Will Be the Year of Open Source – After decades of being looked at as more of a subculture (or arguably counter-culture) in the larger technology landscape, open source is finally getting its due. From software and even hardware, we saw more activity in open source than ever before in 2018. 

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

top-10-2019-engineering,-science,-and-technology-awards

Luminaries from 2019 were awarded for their work in cosmology, photonics, GPS systems, video processing, semiconductors, brain neurons and more.

  • Each year reveals further advances in the disciplines of technology, engineering and science. In recognition of these yearly advances, key individuals are awarded with honors and prizes from a variety of very different organizations. These organizations include the National Academy of Sciences, the Nobel Prizes, various IEEE societies, the biennial Queen Elizabeth Prize, the Engineering Emmy Awards, and more. Ten of the best of these awards have been collected to highlight the broad range of achievements in 2019.

  • 2019 Nobel Prize in Physics

    “The Royal Swedish Academy of Science Nobel Prize in Physics recognizes both theoretical and experimental contributions to understanding the universe. This year, the prize is awarded to APS Fellow James Peebles (Princeton University), Michel Mayor (University of Geneva), and Didier Queloz (University of Geneva; University of Cambridge). 

    Half of the prize is awarded to Peebles for his theoretical insights into physical cosmology that have impacted the trajectory of cosmology research for the past 50 years and form the basis of the current ideas about the universe. The other half of the prize is awarded jointly to Mayor and Queloz for the first discovery of an exoplanet orbiting a solar-type star in the Milky Way in 1995.”

    Image Source: The Nobel Prize in Physics 2019. NobelPrize.org. Nobel Media AB 2019. https://www.nobelprize.org/prizes/physics/2019/summary/

  • Comstock Prize in Physics

    “Michal Lipson, Columbia University, received the 2019 Comstock Prize in Physics. Her pioneering research established the groundwork for silicon photonics, a growing field in which she remains a pioneer and leader. The technology, which uses optical rays to transfer data among computer chips, is now considered to be one of the most promising directions for solving major bottlenecks in microelectronics.

    Lipson developed techniques to tailor the electro-optic properties of silicon that led to the first advances in silicon photonics, including demonstrating the ability to confine light well beyond the traditional diffraction limit using what she termed “slot waveguides.” These waveguides are being applied for many applications, including telecommunications, bio-sensing, and on-chip transport of nanoparticles.”

  • Queen Elizabeth Prize for Engineering

    “The Queen Elizabeth Prize for Engineering, also known as the QEPrize, is a global engineering prize that rewards and celebrates the engineers responsible for a ground-breaking innovation in engineering that has been of global benefit to humanity. The £1 million prize is awarded biennially in the name of Queen Elizabeth II. The 2019 prize went to four US engineers – Dr Bradford Parkinson, Professor James Spilker, Hugo Fruehauf, and Richard Schwartz for the creation of the first truly global, satellite-based positioning system (GPS). Parkinson won the prize for leading the development, design, and testing of key GPS components. James Spilker, Jr was awarded for developing the L-band GPS civil signal structure using CDMA. Hugo FrueHauf was honored for his instrumental role creating a highly accurate miniaturized atomic clock using a rubidium oscillator. Finally, Richard Schwartz won the prize for leading the design and development of the highly robust, long-lasting Block I satellites.”

  • Engineering Emmy Awards

    The Television Academy awarded Hugo Gaggioni with the Charles F. Jenkins Lifetime Achievement Award for 2019. This award honors a living individual whose ongoing contributions have significantly affected the state of television technology and engineering.

    In his storied 31-year tenure at Sony Electronics, Hugo Gaggioni has achieved many scientific and technical accomplishments and is widely known for his unique ability to present and explain complex technical concepts to professionals and civilians alike. He now serves as chief technology officer of the broadcast and production systems division, pursuing his research interests in digital video image processing, information theory, audio/video bandwidth compression, HDTV devices and systems, digital filter banks, and multidimensional signal processing.  He is a Society of Motion Picture and Television Engineers (SMPTE) fellow and recipient of both the David Sarnoff and the Leitch Medals.

  • IEEE History Committee Honors Jimmy Soni and Rob Goodman

    Jimmy Soni and Rob Goodman have been honored by the IEEE History Committee for their book, “A Mind at Play: How Claude Shannon Invented the Information Age” (Simon and Schuster, 2017). The authors were chosen as the winners of the 2019 IEEE William and Joyce Middleton Electrical Engineering History Award. Established in 2014, this award annually recognizes the author of a book (published within the previous three years) in the history of an IEEE-related technology that both exemplifies exceptional scholarship and reaches beyond academic communities toward a broad public audience. Most of the IEEE History Center’s resources are available online at the Engineering and Technology History Wiki.

    “In their second collaboration, the biographers present the story of Claude Shannon—one of the foremost intellects of the twentieth century and the architect of the Information Age, whose insights stand behind every computer built, email sent, video streamed, and webpage loaded. Claude Shannon was a groundbreaking polymath, a brilliant tinkerer, and a digital pioneer. He constructed the first wearable computer, outfoxed Vegas casinos, and built juggling robots. He also wrote the seminal text of the digital revolution, which has been called “the Magna Carta of the Information Age.” In this elegantly written, exhaustively researched biography, Soni and Goodman reveal Claude Shannon’s full story for the first time.”

  • IEEE Robert Noyce Awards

    The IEEE Robert N. Noyce Medal honors Robert N. Noyce, co-founder of Intel Corporation. He was renowned for his 1959 invention of the integrated circuit and for leadership in the research and development of advanced microelectronic design automation tools. The 2019 recipient of this award was Antun Domic, Chief Technical Officer at Synopsys.

    “The electronic design automation (EDA) tools, methodologies, and flows developed under Antun Domic’s leadership have driven the state of the art in digital microelectronics for almost three decades and have enabled the continued miniaturization of the electronic components that power today’s applications. EDA makes possible the creation of complex electronic systems with computer software that aids in the design, verification, and testing processes and helps detect and eliminate bugs and defects in chips and circuit boards. The EDA tools developed under Domic’s leadership while at Synopsys and other companies have impacted the creation of a large number of the world’s most advanced microelectronic components by enabling the design of chips containing billions of gates from high-level synthesis through physical layout and verification, including timing, power, area, and test optimization, all the way to the final design result.”

  • Global Semiconductor Alliance (GSA) Awards

    Global Semiconductor Alliance (GSA) recognizes semiconductor companies that have demonstrated excellence through their success, vision, strategy and future opportunities in the industry at its annual Awards Dinner Celebration. The Dr. Morris Chang Exemplary Leadership Award, sponsored by the Global Semiconductor Alliance (GSA), recognizes individuals for their exceptional contributions to drive the development, innovation, growth and long-term opportunities for the semiconductor industry. The recipient of this award for 2019 was James (Jim) C. Morgan, Chairman Emeritus of Applied Materials, Inc.

    “Applied Materials develops technology and equipment used to produce virtually every new chip and advanced display in the world. Morgan ran Applied for nearly three decades— one of the longest tenures of a Fortune 500 CEO. He has been acknowledged for his numerous contributions within the technology industry and around the world for his leadership within the community. He advised three U.S. Presidents on matters of trade and competitiveness, serving on the National Advisory Committee on Semiconductors. He is also a recipient of the IEEE Robert N. Noyce Medal for his vision and leadership that transformed Applied Materials into an innovation leader and global partner for advancing microelectronics manufacturing technology.”

  • EDA Phil Kaufman Award

    Presented by the Electronic System Design (ESD) Alliance and the IEEE Council on Electronic Design Automation, this award honors an individual who has had demonstrable impact on electronic design through contributions in the field of EDA. The recipient for 2019 is Dr. Mary Jane Irwin.

    “Dr. Irwin has been honored for her extensive contributions to EDA through her technical efforts, service to the community and leadership. During her tenure at Pennsylvania State University, she mentored countless students and contributed to technology through her substantial research and numerous publications. Her research included creating EDA tools then using them in computer architecture research, an approach that gave Dr. Irwin influence in both academia and industry.”

  • National Academy of Sciences

    Liqun Luo, Stanford University, received the 2019 Pradel Research Award. This award is presented annually to recognize mid-career neuroscientists whose work is making major contributions to our understanding of the nervous system.

    “Luo has conducted pioneering research, often using techniques of his own invention, to improve our understanding of how neural circuits in the brain assemble and how they are organized to allow information processing. One of Luo’s early achievements was the development of Mosaic Analysis with a Repressible Cell Marker (MARCM), a genetics technique for specifically labelling isolated mutant cells within an otherwise normal fruit fly, a model organism for genetics research. This allowed Luo to study single neurons within a complex brain, which enabled him to make fundamental breakthroughs in our understanding of the fly’s brain development. In particular, Luo focused on the olfactory system and has identified numerous genetic elements that control neuronal wiring and made the olfactory system as a model for understanding neural circuit assembly in all species.”

  • DesignCon Engineer-of-the-Year Award

    DesignCon’s “Engineer of the Year” Award is given out each year during the DesignCon event. The award seeks to recognize the best of the best in engineering and new product advancements at the chip, board, or system level. The award winner will be selected based on his or her leadership, creativity, and out-of-the-box thinking brought to design/test of chips, boards, or systems, with particular attention paid to areas of signal and power integrity.

    This year’s award went to Vishram Pandit for his ability to share knowledge with the technical community that will one day create the CPUs for next-generation cars, phones, and servers. He has co-authored a book on Power Integrity for I/O Interfaces, and is co-author of approximately 30 conference and journal publications, out of which 19 were presented at DesignCon. Those papers have received 3 best paper awards and 3 finalist awards. Vishram received the 2018 Albert Nelson Marquis Lifetime Achievement Award for his contributions to the Signal and Power integrity field.
