an-electronic-co-pilot-makes-the-damon-hypersport-the-motorcycle-of-the-future

Image source: Damon Motorcycles

The founders of Vancouver’s Damon Motorcycles sought to use technology to make motorcycling safer. Plan A was to develop a sensor-laden helmet to aid the rider’s situational awareness, reports chief technical officer and Damon co-founder Dom Kwong.

But upon consideration, Kwong and co-founder and CEO Jay Girard concluded that it would be better to build safety tech into the motorcycles themselves rather than into the helmet. That meant they would have to build those bikes themselves, complete with the integrated safety systems they imagined.

The resulting Damon Hypersport debuted to acclaim at the Consumer Electronics Show, a venue made appropriate not only by the presence of the electronic safety systems but also by the fact that the Hypersport is a battery electric motorcycle. The company chose electric drive because it expects electric power to be the standard going forward, Kwong explained.

Image source: Damon Motorcycles

For a machine that could be seen as a delivery mechanism for safety systems, the Hypersport is impressively executed and boasts amazing specifications. To start: 200 horsepower and 200 miles of highway riding range. In urban riding, lower speeds and frequent opportunities to recover energy through regenerative braking extend the range on a charge to 300 miles, according to Kwong. Aerodynamic drag at increased speed kills EV range, and Damon concedes that at a more realistic 70 mph highway speed, the Hypersport’s range drops to a still-healthy 160 miles.
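
The cubic growth of aerodynamic power with speed is what drives that drop-off. Here is a back-of-the-envelope sketch: the 21.5 kWh pack figure comes from the article, but the drag area, usable-capacity fraction, and non-aero consumption are illustrative assumptions, not published Damon numbers. With these guesses, the outputs happen to land near the claimed 200- and 160-mile figures.

```cpp
// Back-of-the-envelope EV range vs. speed. Assumed (not Damon's) parameters.
#include <cstdio>

int main() {
    const double rho = 1.2;           // air density, kg/m^3
    const double CdA = 0.35;          // assumed drag area of a faired sportbike, m^2
    const double packWh = 21500.0;    // 21.5 kWh pack, per the article
    const double usable = 0.9;        // assumed usable fraction of the pack
    const double otherWhPerMi = 40.0; // assumed rolling/drivetrain losses, Wh/mi

    for (double mph : {55.0, 70.0, 85.0}) {
        double v = mph * 0.44704;                    // speed in m/s
        double aeroW = 0.5 * rho * CdA * v * v * v;  // aero drag power, watts
        double aeroWhPerMi = aeroW / mph;            // watts / mph = Wh per mile
        double rangeMi = packWh * usable / (aeroWhPerMi + otherWhPerMi);
        std::printf("%2.0f mph: ~%3.0f Wh/mi aero, ~%3.0f mi range\n",
                    mph, aeroWhPerMi, rangeMi);
    }
}
```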

Dom Kwong. Image source: Damon Motorcycles

Even with a 21.5 kilowatt-hour lithium-ion battery pack providing that range, the Hypersport weighs a reasonable 440 lbs. For comparison, the original Nissan Leaf electric car had a 24 kWh battery pack. Because the Hypersport is an EV, there’s no opportunity to cook the books with a bogus “dry” weight in which the bike is weighed sans fluids. It does have hydraulic brakes, so technically they could shave a few ounces by draining the brake fluid!

As with other EVs, acceleration is a strong point for the Hypersport, with the bike reaching 60 mph in less than 3.0 seconds. The Hypersport’s unique quality as a motorcycle is its Shift system for adjustability, which can raise the handlebars, angle the windshield upward, lower the seat, and raise the footpegs.

This means that the Hypersport can switch from track attack mode to commuting mode on the fly, as all the adjustments are electrically powered. This range of adjustability also lets the bike’s dimensions change to exactly suit the rider, making it a better fit for people throughout the size spectrum.

All of this exists in service to what Damon terms CoPilot. CoPilot is a network of sensors connected to an onboard neural net that scans ahead and behind for hazards and warns the rider through an array of LEDs on the trailing edge of the windscreen as well as through haptic feedback in the handlebars.

In addition to scanning in the vicinity of the bike with 1080p cameras and 77 GHz radar, CoPilot also scans the rider, looking at the grip force on the handgrips, the rider’s position on the seat and the smoothness of control inputs to gauge the rider’s comfort level and expertise. These factors could be used to govern the Hypersport’s power or to apply more aggressive assistance via traction control and antilock braking.

But those kinds of systems are reactive, and CoPilot’s purpose is to proactively help the rider, explained Kwong. Realizing that distracting or confusing riders with feedback from sensors would be even worse than having no information at all, Damon aimed to make the information presented to riders as simple and obvious as possible, he said. That means a solid bar of red LEDs and vibration through the handlebars if a forward collision is imminent. Amber LEDs on the left and right sides of the windscreen provide blind spot warning.
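
For illustration only (this is not Damon’s firmware), the alert scheme Kwong describes boils down to a small, unambiguous mapping from detected hazard to rider-facing output:

```cpp
// Toy mapping of CoPilot-style hazards to alerts, per the article's description.
#include <cstdio>

enum class Hazard { None, BlindSpotLeft, BlindSpotRight, ForwardCollision };

struct Alert {
    const char* leds;  // which windscreen LEDs light, and in what color
    bool haptics;      // vibrate the handlebars?
};

Alert alertFor(Hazard h) {
    switch (h) {
        case Hazard::ForwardCollision: return {"solid red bar", true};
        case Hazard::BlindSpotLeft:    return {"amber, left edge", false};
        case Hazard::BlindSpotRight:   return {"amber, right edge", false};
        default:                       return {"off", false};
    }
}

int main() {
    Alert a = alertFor(Hazard::ForwardCollision);
    std::printf("LEDs: %s, haptics: %s\n", a.leds, a.haptics ? "on" : "off");
}
```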

Image source: Damon Motorcycles

The rear-facing camera replaces the frequently useless rear-view mirrors that adorn sport motorcycles with a wide-angle display on the instrument panel, helping inform the rider of what’s happening behind. The rear radar, meanwhile, will call the rider’s attention to the screen if it detects a fast-closing object that threatens to rear-end the motorcycle.

“Anti-lock brakes and traction control are reactive systems,” said Kwong. “That’s already happened. We’re providing information to the rider so they can avoid the accident.”

This includes when the bike is stationary, because stopped motorcyclists are frequent victims of impacts from drivers who never see them. “Stopped in traffic you’re a sitting duck,” said Kwong. Such riders typically never even realize that a car is bearing down on them, but with the Hypersport’s sensors, they can get an early warning.

Image source: Damon Motorcycles

“Now I have three options,” said Kwong. “I can move the bike, jump out of the way or at least brace for impact. At least you have an awareness that something is going to happen.”
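
The arithmetic behind such a rear-approach warning is straightforward: divide the radar-reported range by the closing speed to get a time to collision. A hypothetical sketch, with an assumed 3-second alert threshold:

```cpp
// Time-to-collision check for a stopped bike; all values are illustrative.
#include <cstdio>

double timeToCollision(double rangeM, double closingMps) {
    if (closingMps <= 0.0) return -1.0;  // not closing: no threat
    return rangeM / closingMps;
}

int main() {
    double range = 40.0;    // meters behind the bike, from the rear radar
    double closing = 15.0;  // closing speed, m/s (~34 mph)
    double ttc = timeToCollision(range, closing);
    if (ttc > 0.0 && ttc < 3.0)  // assumed warning threshold, seconds
        std::printf("Rear warning: impact in %.1f s -- alert the rider\n", ttc);
}
```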

The Hypersport’s simple LED indicators and vibrating hand grips are designed so that riders will process their meaning instantly, Kwong explained. “I don’t want my riders to have to think, ‘What does this alert mean?’”

That’s why he chose the haptic feedback for collision warning. “Vibration is a very visceral experience,” he said.

Image source: Damon Motorcycles

And the color and location of the LED warning lights contribute to the clarity of those alerts too. “These are very simple visual cues. I want the rider not to have to think about what is going on.”

Damon, which draws its name from co-founder Girard’s first name, will start delivering motorcycles to customers from its Vancouver headquarters in 2021, according to Kwong, with a starting base price of $24,995.

A limited-edition model outfitted with premium components from Brembo and Ohlins will be available, and the brake and suspension suppliers for the regular base model have not yet been settled, he said.

In either case, the Hypersport will make a very interesting platform for the delivery of Damon’s CoPilot safety technology.

Image source: Damon Motorcycles

Dan Carney is a Design News senior editor, covering automotive technology, engineering and design, especially emerging electric vehicle and autonomous technologies.

10-new-auto-tech-products-to-watch-in-2020

The year 2020 is bringing in a slew of innovative products set to transform vehicles themselves, as well as the automotive experience. Here are 10 products to watch.

  • Every year brings plenty of new vehicles, but there are even more technologies behind those vehicles. Now more than ever, technology companies are releasing products to make vehicles safer, more connected, and more autonomous.

    Here are some new innovations – from chips, to headlights, and even sensors for infrastructure – that will be transforming vehicles in 2020 and the years to come.

  • Adasky Viper

    More and more engineers are coming to believe that autonomous vehicles should integrate thermal imaging and sensing capabilities into their sensor array. Adasky has released Viper, a long-wave infrared (LWIR) thermal camera system for autonomous vehicles and ADAS that integrates both an automotive-grade image signal processor and edge-based computer vision algorithms – allowing it to recognize vehicles, pedestrians, animals, and other objects on the road on its own.

    The ISO 26262 ASIL-B ready camera consumes less than 750mW of power, according to the company, and captures VGA images at up to 60 frames per second. Viper can also be integrated directly into vehicles’ headlights – reducing their visible footprint for automotive designers.

    (Image source: Adasky)

  • Boréas Technologies BOS1211 Haptic Feedback Chip

    Haptic feedback is looking to become the next frontier in automotive interfacing. Touchscreens, after all, share some of the same disadvantages as mechanical dashboards. Haptics would allow drivers and passengers easy control of dashboard functions with less distraction.

    Haptic technology developer Boréas Technologies has announced the BOS1211, a low-power, high-voltage, piezoelectric driver integrated circuit for enabling high-definition haptic feedback in vehicle interfaces such as infotainment screens and steering wheels. Boréas is partnering with TDK to make the BOS1211 compatible with TDK’s PowerHap family of piezo actuators and to meet the standards of the automotive market.

    The BOS1211 is based on the company’s proprietary CapDrive technology, a scalable piezo driver architecture optimized for energy efficiency, low heat dissipation, and rapid response times. Boréas is planning to launch a plug-and-play development kit for automotive haptic feedback in February 2020.

    (Image source: Boréas Technologies)

  • Bosch 3D Display For Automotive

    Bosch captured a lot of attention at CES 2020 with a handful of new automotive technology announcements. Among the company’s new offerings is a 3D display that uses passive multi-view 3D technology to generate three-dimensional graphics in a vehicle’s cockpit – without the need for 3D glasses or special cameras. Bosch says the 3D effect is visible for multiple people inside the vehicle from multiple angles without shaking or blurring and is adjustable to the user’s preference.

    The company believes its 3D displays can enhance safety by pushing important information and alerts right into a driver’s field of vision and reduce overall driver distraction.

    (Image source: Bosch)

  • Bosch Virtual Visor

    Bosch wants to replace your car’s boring, traditional visor with a transparent LCD that can keep the sun out of your eyes without reducing your ability to see the road. The company’s Virtual Visor uses a camera that tracks the driver’s face and eyes and utilizes computer vision technology to block only the portion of the visor where the sun would be hitting the driver’s eyes – leaving the rest of the visor transparent. The result is a floating dark spot that blocks the light, rather than a chunk of your windshield being completely blocked out.

    (Image source: Bosch)

  • Koito Manufacturing BladeScan ADB

    High beams are an important safety feature. But we all hate that person who pulls up behind us or comes at us head-on with their high beams blazing.

    Koito Manufacturing‘s Adaptive Driving Beam (ADB) technology is a headlight upgrade that selectively dims and brightens areas of the road to improve driver visibility. Using a camera sensor that provides information to the headlight LEDs, the BladeScan ADB can selectively dim the high beams to low beams for oncoming traffic to prevent glare, for example.

    The BladeScan ADB creates what the company calls a “controlled, high-resolution photometry pattern” in front of the vehicle by emitting LED light onto rotating reflectors (“blades”) and then reflecting it at an angle and pulsing it on and off through a plastic lens and onto the roadway. By doing this, the company says, BladeScan minimizes the dimmed area in front of the vehicle and can increase the visibility of other vehicles, pedestrians, and other potential road hazards without causing annoying glare to surrounding vehicles. (A simplified sketch of the idea follows below.)

    BladeScan ADB has already been integrated into the 2020 Lexus RX.

    (Image source: Koito Manufacturing)
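
    A toy version of the adaptive-driving-beam idea (not Koito’s implementation): given the camera-reported bearing of an oncoming vehicle, dim only the LED segments covering that angle and leave the rest on high. The segment count and beam spread below are assumptions.

    ```cpp
    // Selective high-beam dimming, simplified to a 1-D array of segments.
    #include <array>
    #include <cstdio>

    constexpr int kSegments = 16;     // assumed number of addressable segments
    constexpr double kFovDeg = 40.0;  // assumed horizontal beam spread

    std::array<bool, kSegments> beamMask(double oncomingBearingDeg) {
        std::array<bool, kSegments> highOn;
        highOn.fill(true);
        // Map bearing in [-20, +20] degrees to a segment and dim its neighbors.
        int hit = static_cast<int>((oncomingBearingDeg + kFovDeg / 2.0) / kFovDeg * kSegments);
        for (int i = hit - 1; i <= hit + 1; ++i)
            if (i >= 0 && i < kSegments) highOn[i] = false;
        return highOn;
    }

    int main() {
        for (bool on : beamMask(-5.0))  // oncoming car slightly left of center
            std::printf("%c", on ? '#' : '.');
        std::printf("\n");  // '#' = high beam on, '.' = dimmed segment
    }
    ```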

  • Outsight 3D Semantic Camera

    The 3D Semantic Camera from Outsight aims to “bring full situational awareness to smart machines,” according to the company. The Outsight camera is capable of detecting, tracking, and classifying objects with up to centimeter accuracy and relaying that information to other smart devices – including autonomous and connected vehicles. Utilizing a low-power, long-range broadband laser also allows the camera to identify the material composition of objects via hyperspectral analysis under any lighting conditions – adding a new level of confidence to determining what the camera is seeing.

    The camera also uses 3D Simultaneous Localization and Mapping (SLAM) technology for positional data. Outsight says its camera does all of this via edge-based processing through an onboard SoC that does not rely on machine learning. By taking a machine learning-free approach Outsight says it is able to reduce energy consumption and bandwidth needs and also eliminate the need for massive data sets to train the cameras.

    Outsight’s cameras will be deployed at Paris-Charles de Gaulle airport. The company also offers a vehicle-specific version of its cameras.

    (Image source: Outsight)

  • Qualcomm Snapdragon Ride

    Chipmaker Qualcomm has unveiled the first generation of a new SoC targeted at autonomous driving. The Snapdragon Ride platform will come in versions focused on safety and autonomy, respectively, with the aim of providing automakers a scalable solution that supports Level 1 and 2 driver assistance (with features including automatic emergency braking, traffic sign recognition, lane keeping assistance, automated highway driving, and self-parking) as well as Level 4 and 5 full autonomy.

    The Snapdragon Ride SoCs are capable of performing 30 Tera Operations Per Second (TOPS) for Level 1 and 2 applications and more than 700 TOPS for Level 4 and 5 applications, and are designed for functional safety ASIL-D systems.

    Qualcomm says the platform will be available for pre-development to automakers and Tier-1 suppliers in the first half of 2020. The first vehicles to utilize Snapdragon Ride are expected in 2023.

    (Image source: Qualcomm)

  • RoboSense RS-LiDAR-M1 Smart LiDAR

    RoboSense is releasing what it calls the world’s first smart solid-state LiDAR for autonomous vehicles. The company says its RS-LiDAR-M1 line of LiDAR products offers several advantages over mechanical LiDAR systems. The RS-LiDAR-M1 has a 120 x 25-degree field of view, a 15Hz frame rate, and a detection range of up to 150m against a 10%-reflectivity (NIST) target. Its solid-state design also means fewer parts and a more modular design, making it easier for automakers to integrate and scale. In tests conducted by the company, RoboSense reports that the RS-LiDAR-M1 met performance standards in rain and fog and under different light and wind-speed conditions, and can adapt to all climatic and working conditions. The first version, the RS-LiDAR-M1Simple, is currently available.

    (Image source: RoboSense)

  • Siemens PAVE360 Automotive Digital Twin Platform

    Siemens has announced a new digital twin solution for the automotive industry. PAVE360 allows automakers and OEMs to simulate and validate automotive SoCs and other systems in the context of the vehicle, before the vehicle is built. Developed in collaboration with Arm, PAVE360 is able to model sensors and ICs, as well as other systems related to vehicle dynamics and the overall vehicle environment. Engineers can use the solution to create simulations for systems related to safety, ADAS, infotainment, digital cockpits, V2V and V2X, and even autonomous driving applications.

    (Image source: Siemens PLM)

  • Valerann Smart Roads System

    The emergence of smart cities is rapidly making infrastructure technologies as important as those inside of automobiles. Valerann has developed a sensor, the Valerann Stud, that can replace standard road pavement markers, transforming roads into an IoT sensor network. The solar-powered sensors use LoRa communication to relay information to each other and can track road conditions – including accidents and weather – in real time. The company says it can even track the exact driving pattern of every single vehicle on the road, right down to each vehicle’s specific lane location, in real time.

    The sensors also come equipped with LEDs and can change color to alert drivers of hazardous conditions such as ice, let them know to slow down or stop, and even indicate if they are driving in the wrong direction down a one-way road. The Valerann Smart Roads System is currently deployed in various locations in the UK and Europe.

    (Image source: Valerann)

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

top-10-electronic-enabled-tech-highlights-from-ces-2020

Not all cool tech involved robots and autonomous cars. Here’s a list of the other electronic tech featured at the show.

  • This year’s Consumer Electronics Show (CES) 2020 featured a range of marvels enabled by electronic technologies, covering application areas including smart cities, AI edge intelligence, body haptics, security systems, real-time accident reports, uncooled thermal cameras, wearables, and more.

    Here are the top 10 products and technologies that piqued the interest of the Design News editorial staff.

  • Smart Cities

    Why do major Japanese car manufacturers like to build smart homes and now cities? Several years ago, Honda built a zero-net-energy smart home in partnership with UC-Davis. At this year’s CES, Toyota announced it will build a smart city to test its AI, robots and self-driving cars. Toyota’s Woven City will be built at the foothills of Mt. Fuji in Japan. The city will be the world’s first urban incubator dedicated to the advancement of all aspects of mobility, claims Toyota.

    The project is a collaboration between the Japanese carmaker and the Danish architecture firm Bjarke Ingels Group (BIG). Houses in Woven City will have in-home robotics to help with the more mundane tasks of daily life. The homes will have full-connectivity, which will be needed for the sensor-based AI to automate many household chores, like restocking the refrigerator and taking out the trash. Power storage units and water purification systems will be hidden beneath the ground.

  • Intelligence At The Edge

    Blaize is a computing company that optimizes AI at scale wherever data is collected and processed, starting at the edge. The company enables a range of existing and new AI use cases in the automotive, smart vision, and enterprise computing segments. The company claims that developers can create new classes of products to bring the benefits of AI and machine learning to broad markets.

    The company has developed a fully programmable GSP architecture that utilizes task-level parallelism and streaming execution processing to deliver very low energy consumption, high performance and scalability. Blaize claims that, in comparison, existing GPUs and FPGAs exact a much higher energy price, while CPUs cost more and scale poorly, and all are subject to excessive latency due to their sequential execution processing architectures.

  • Full-Body Haptics Suit

    Haptics are all about the sense of touch. Now you can immerse your entire body – or at least 70 tactile points mainly around your torso – into the world of artificial experiences. The bHaptics TactSuit provides an audio-to-haptic feature that converts sound into haptic feedback felt in real time around your torso. For example, when a bomb explodes or you hear footsteps during a PC/VR game, you’ll feel the experience from the right direction. You’ll even be able to feel samurai cuts and friendly hugs.

  • Security Comes In Many Forms

    There are many ways to protect your PC data and applications, from hardware-encrypted portable storage devices, backup solutions, file repair software, and data recovery, to digital forensics services. SecureData provides both products and services in these areas. At CES, the company demonstrated a secure USB drive which it claimed was the only hardware-encrypted flash drive in the world with keypad and Bluetooth authentication.

  • Wireless Six-Degrees Of Freedom (6DOF)

    Atraxa’s system tracks 6DOF motion without the need for optical cameras or infrared markers to be placed around the room, or mounted externally to the XR headset or controller. No line of sight – or wires – is required between the headset and controllers. Unhindered by wires or line-of-sight constraints, users can move freely in large spaces, and can even move from room to room; no room mapping or controller orienting (or reorienting) is required. Tracking starts immediately and lasts without interruption.

    The tech combines electromagnetic (EM) and inertial technologies into a single sensor-fusion tracking platform. The IMU (inertial measurement unit) returns acceleration and angular velocity data. The EM tracker delivers true position and orientation data; it also establishes the tracking volume and local coordinate system. Atraxa comprises two main components: a tracker module and a receiver module. The tracker module houses the IMU and an EM transmitter coil that generates the magnetic field (i.e., the tracking volume). The tracker modules are embedded into the handheld controllers (or other peripherals).
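
    The EM/IMU split is classic sensor fusion: the IMU is fast but drifts, while the EM tracker supplies absolute position. A generic one-dimensional complementary-filter sketch (not Atraxa’s code) shows the principle:

    ```cpp
    // 1-D complementary filter: integrate the IMU, correct with absolute EM fixes.
    #include <cstdio>

    struct FusedPosition {
        double x = 0.0;  // fused position estimate, meters
        double v = 0.0;  // velocity integrated from IMU acceleration, m/s

        // dt: sample period (s); accel: IMU reading (m/s^2); emX: EM position fix (m)
        void update(double dt, double accel, double emX, double blend = 0.02) {
            v += accel * dt;         // fast but drifting inertial path
            x += v * dt;             // dead-reckoned position
            x += blend * (emX - x);  // slow pull toward the drift-free EM fix
        }
    };

    int main() {
        FusedPosition f;
        for (int i = 0; i < 100; ++i)
            f.update(0.01, 0.5, 0.0);  // IMU claims motion; EM says "still at 0"
        std::printf("fused x after 1 s: %.3f m\n", f.x);  // EM fix curbs IMU drift
    }
    ```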

  • Real-Time Accident Report

    Sooner or later, all of us get into an automotive accident. When that occurs, wouldn’t it be great to have a record of what happened? Through the use of embedded acceleration sensors, MDGo generates a real-time report in the case of a car crash, detailing each occupant’s injuries by body region. The company’s technology enables accurate delivery of needed services and support, providing optimal medical care in an emergency and supporting the claims process.
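
    At its simplest, a crash trigger of this kind thresholds the measured acceleration magnitude; a hypothetical sketch follows (MDGo’s actual algorithm, which estimates injuries by body region, is necessarily far more sophisticated):

    ```cpp
    // Illustrative crash trigger: flag an impact above an assumed 8 g threshold.
    #include <cmath>
    #include <cstdio>

    bool crashDetected(double ax, double ay, double az, double thresholdG = 8.0) {
        const double g = 9.81;  // m/s^2 per g
        double magG = std::sqrt(ax * ax + ay * ay + az * az) / g;
        return magG > thresholdG;
    }

    int main() {
        // Example accelerometer sample during a frontal impact, in m/s^2.
        if (crashDetected(-110.0, 12.0, 4.0))
            std::printf("Crash detected -- transmit occupant report\n");
    }
    ```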

  • Smart Factory

    Could a factory think for itself or autonomously design a better car or aircraft? Can it eliminate waste? All of these questions fit into the realm of manufacturing intelligence. One company with experience in this area is Hexagon, which claims that its technologies are used to produce 85% of smartphones, 75% of cars and 90% of aircraft.

    Hexagon’s Smart Factory approach aims for fewer inputs, zero waste and high quality. All this is achieved through sensor, software and autonomous solutions that incorporate data feedback to improve work and boost efficiency, productivity, and quality across industrial and manufacturing operations.

  • A Cool “Uncooled” Methane Gas Detector

    The FLIR GF77 Gas Find IR is the company’s first uncooled thermal camera designed for detecting methane. This handheld camera offers inspection professionals the features they need to find potentially dangerous, invisible methane leaks at natural gas power plants, renewable energy production facilities, industrial plants, and other locations along a natural gas supply chain. The gas detector provides methane gas detection capability at roughly half the price of cooled gas inspection thermal cameras, to empower the oil and gas industry to reduce emissions and ensure a safer work environment.

  • IoT Arduino Adds LoRaWAN Connectivity

    You can now connect your sensors and actuators over long distances via the LoRa wireless protocol or through LoRaWAN networks. The Arduino MKR WAN 1310 board provides a practical and cost-effective solution to add LoRa connectivity to projects requiring low power. This open source board can be connected to: the Arduino IoT Cloud, your own LoRa network using the Arduino LoRa PRO Gateway, existing LoRaWAN infrastructure like The Things Network, or even other boards using the direct connectivity mode.
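
    As a minimal example, here is roughly what joining a LoRaWAN network and sending one uplink looks like on the MKR WAN 1310 with Arduino’s MKRWAN library; the EUI and key below are placeholders for your own network credentials.

    ```cpp
    // Join a LoRaWAN network via OTAA and send a single uplink (MKR WAN 1310).
    #include <MKRWAN.h>

    LoRaModem modem;
    const char appEui[] = "0000000000000000";                  // placeholder
    const char appKey[] = "00000000000000000000000000000000";  // placeholder

    void setup() {
      Serial.begin(9600);
      while (!Serial) {}
      if (!modem.begin(EU868)) {  // choose the band for your region
        Serial.println("Failed to start modem");
        while (true) {}
      }
      if (!modem.joinOTAA(appEui, appKey)) {
        Serial.println("Join failed");
        while (true) {}
      }
      modem.beginPacket();
      modem.print("hello");           // in practice, a sensor reading
      if (modem.endPacket(true) > 0)  // true = confirmed uplink
        Serial.println("Uplink sent");
    }

    void loop() {}
    ```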

  • Wearables, Ingestibles, Invisibles

    One of the keys to a healthy life is nutrition. But what exactly constitutes ‘healthy’ food for a specific person? To answer that question, you need to measure and analyze the processes inside the complex human digestive system. Imec is working on prototype technology that is up to the task: ingestible sensors.

    The company also develops wearables for medical and consumer applications that enable reliable, continuous, comfortable, and long-term health monitoring and management. This includes high-accuracy and low-power biomedical sensing technologies, sometimes embedded into fabrics.

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

researchers-are-looking-for-ways-to-make-vr-less-painful
Researchers are hoping to develop standards and guidelines for developers to reduce the risk of physical discomfort and injuries among VR users. (Image source: Oregon State University) 

Can the physical risk of using VR be reduced or eliminated altogether?

Ask anyone who has used VR for a prolonged period of time and they’ll tell you a big issue with a lot of VR hardware is comfort level. Bulky headsets and controllers with no ergonomic design can turn a fun immersive experience into a literal pain in no time. This becomes a big concern, not only to developers who want their hardware and software to be used for extended periods, but also to users themselves, who risk long-term injury and health complications ranging from musculoskeletal issues to more commonly reported problems such as eye strain, nausea, and motion sickness.

Hardware developers have put a premium on ensuring comfort with their latest generation headsets, using techniques ranging from better balancing of internal components to using lighter-weight materials. But while other industries have guidelines and standards to fall back on, nothing of the sort exists for virtual and augmented reality.

Researchers from Oregon State University (OSU) and Northern Illinois University have been examining how common movements done in VR are contributing to muscle strain and discomfort in users. Their goal is to establish baselines for optimal object placement and angles in virtual environments so that developers can design games and other applications that cause minimum discomfort in users.

The results of their work are published in a recent study in the journal Applied Ergonomics.

“In computer users, the relationship between awkward postures or repeated movements and musculoskeletal disorders is well known,” researcher Jay Kim of OSU’s College of Public Health and Human Sciences said in a press statement. “We wanted to see how the VR compares to conventional computer-human interactions…We wanted to evaluate the effects of the target distances, locations, and sizes so we can better design these interfaces to reduce the risk for potential musculoskeletal injuries.”

According to Kim, it can take as little as three minutes for shoulder discomfort to occur as a result of having to extend your arm straight out, as in many VR applications. He said that prolonged use of VR can lead to a range of problems from gorilla arm syndrome, to rotator cuff injuries, to neck strain and cervical spine damage.

For their study, Kim and his team focused primarily on neck and shoulder movements. The researchers had participants wearing Oculus Rift headsets perform tasks that involved them pointing to specific dots around a circle, or coloring in a designated area with their fingers. The participants’ movements were monitored using motion capture cameras. They were also outfitted with sensors to measure electrical activity in their muscles.

The tests were repeated with the visuals placed at eye level, 15 degrees above and below eye level, and 30 degrees below eye level.

Results showed performance at the coloring task worsened when participants had to tilt their heads down at either 15 or 30 degrees. Researchers noted the greatest degree of muscle activation at 15 degrees above eye level, with the most discomfort occurring during the pointing task at that height.

“This result indicates that excessive vertical target locations should be avoided to reduce musculoskeletal discomfort and injury risks during VR interactions,” the study reads. “Based on relatively lower biomechanical exposures and trade-off between neck and shoulder postures, vertical target location between eye height and 15-degrees below eye height could be recommended for VR use.”

“Based on this study, we recommend that objects that are being interacted with more often should be closer to the body,” Kim said. “And objects should be located at eye level, rather than up and down.”
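
The geometry behind that recommendation is easy to check: for a target at arm’s length, the vertical drop below eye level is the reach times the tangent of the downward angle. A quick sketch with an assumed 0.6 m reach:

```cpp
// Vertical target drop for a given downward gaze angle; reach is an assumption.
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979;
    const double reachM = 0.6;  // assumed arm's-length interaction distance
    for (double deg : {0.0, 7.5, 15.0}) {
        double dropM = reachM * std::tan(deg * pi / 180.0);
        std::printf("%4.1f deg below eye level -> target %.2f m below eyes\n",
                    deg, dropM);
    }
}
```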

Kim said research like this is going to become increasingly important as VR and AR technologies proliferate into more and more industries. While entertainment remains VR’s largest use case – and the biggest area of concern in terms of safety and discomfort – more and more enterprise deployments of VR in areas such as industrial and medical training as well as in intricate applications such as virtual prototyping and design mean that VR-related injuries could quickly transition into a workplace hazard.

Data released by Statista put the number of active VR users worldwide at around 171 million as of 2018.

Kim and his team hope their work can help avoid the same mistakes that occurred as personal computers became more mainstream in the 80s and 90s, when a lack of awareness led to a variety of health issues such as carpal tunnel syndrome and vision problems. “With VR, we’d like to learn now rather than later,” he said.

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

the-12-best-innovations-of-ces-2020

Forget new TVs and smartphones. These are the real game changers introduced at CES 2020.

  • Now that the smoke has cleared from CES 2020, we can take a step back and see which technologies were the real innovations of 2020. Let’s be honest, CES can be a black hole of vaporware, false promises, and concepts intended to be just that.

    We’ve compiled a list of our favorite technologies introduced at CES 2020 – innovations that we’re sure will be having a lasting impact in 2020 and beyond.

  • AerNos AerSIP Gas Sensor

    The AerSIP from AerNos is a 5 x 5-mm, multi-gas sensing module that combines nanotechnology and machine learning algorithms to monitor indoor and outdoor air quality. The system-in-package (SIP) is an embedded plug-and-play solution that can be integrated into wearables, mobile devices, and other IoT devices and is capable of detecting hazardous gases and other dangers at parts per billion levels.

    (Image source: AerNos/CES)

  • AMD Ryzen 4000 Series Mobile Processor

    AMD’s Ryzen 4000 could be a literal game changer for high-end laptop users – particularly gamers and designers. AMD says its new Ryzen 4000 series is the world’s first 7-nanometer laptop processor. Designed for ultra-thin laptops, the Ryzen 4000 series features up to 8 cores and 16 threads and configurable 15W thermal design power. AMD pledges the Ryzen 4000 series offers up to four percent greater single-thread performance and up to 90 percent faster multithreaded performance than its competitors, as well as up to 18 percent faster graphics performance over competing chips.

    (Image source: AMD)

  • Atmosic Technologies M3 Battery-Free Bluetooth 5 SoC

    Atmosic says its M3 Battery-Free Bluetooth 5 SoC uses so little power that it can even eliminate the need for battery power entirely in devices such as wearables, keyboards, mice, asset trackers, beacons, and remotes. The M3 integrates Atmosic’s Lowest Power Radio, On-demand Wake-Up, and Managed Energy Harvesting technologies to deliver what the company says is 10 to 100 times lower power than other SoCs, while still complying with Bluetooth standards. The M3’s radio uses two “ears” – one for listening in a low-power state to perceive incoming commands, and another that only wakes when alerted. The SoC uses energy harvesting technology to gather power from radio frequency, photovoltaic, thermal, and motion.

    (Image source: Atmosic)

  • Bot3 Zen-P VSLAM Deep Learning Module

    Bot3‘s Zen-P VSLAM Deep Learning module integrates visual simultaneous localization and mapping (VSLAM) technology (a version of the same technology used in autonomous vehicles) into mobile robots ranging from industrial machines to smart home products. Bot3’s image processing algorithm, Pascal, allows for autonomous navigation without tracks, as well as indoor mapping and positioning for uses such as warehouse applications.

    (Image source: Bot3)

  • BrainCo BrainRobotics Prosthetic Hand

    Many companies have been developing mind-controlled prosthetics for amputees and other disabled patients. What separates the prosthetic hand developed by BrainRobotics is the integration of AI technology. The BrainRobotics hand utilizes machine learning to allow the hand and its user to learn from each other over time – leading to more lifelike movements. The company is aiming to provide accurate and reliable prosthetics at an affordable price for all patients. BrainRobotics is a subsidiary of BrainCo, a software developer focused on brainwave measuring and monitoring.

    (Image source: BrainCo/BrainRobotics)

  • Fluent.ai MultiWake Word and Voice Control Engine

    Fluent.ai is a technology company focused on AI for voice interface and speech recognition. The company’s Multi-Wake Word and Voice Control Engine is an edge-based, noise robust, and multilingual speech technology that consumes minimal power and storage, allowing it to be embedded in small devices. The solution is Cortex M4-based and supports four separate wake words and 100 multilingual commands, according to Fluent.ai.

    Fluent.ai has recently partnered with semiconductor designer Ambiq Micro to implement Fluent.ai’s software solutions into Ambiq’s ultra-small footprint, low-power microcontrollers. Ambiq’s MCU supports frequencies up to 96 MHz, and Fluent.ai’s solution requires only 16 MHz from the MCU. The new partnership means Fluent.ai and Ambiq will be releasing MCUs for OEMs looking for an easy way to add speech recognition and voice command functionality to their smart home devices and other products.

    (Image source: Fluent.ai / CES)

  • Intel Tiger Lake Chip

    When Intel announces a new chip, the whole world takes notice. The chipmaking giant is launching its latest chip for consumers this year. Dubbed Tiger Lake, the new chip is said to be optimized for AI performance, graphics, and USB 3 throughput. Rather than desktops, the new chips will be focused on mobile devices such as ultra-thin laptops and tablets. The first products featuring Tiger Lake are expected to ship later in 2020.

    (Image source: Intel)

  • Monster MultiLink Bluetooth Technology

    Sometimes it’s the most straightforward ideas that can make the biggest difference. Most of us love our Bluetooth wireless headphones and earbuds. The problem is they don’t create a sharable experience. What if you want to show your friend the video you’re watching without disturbing the people around you? Monster has debuted a new technology called Music Share that uses MultiLink technology to allow devices to send Bluetooth audio to multiple devices in sync. The technology expands how Bluetooth headphones can be used and opens up new use cases ranging from air travel to fitness classes as well as new avenues for social interaction.

    (Image source: Bluetooth SIG)

  • Murata Coral Accelerator Module

    Working in partnership with Coral and Google, Murata Electronics has developed what it is calling the world’s smallest AI module. The Coral Accelerator Module packages Google’s Edge TPU ASIC into a miniaturized footprint to enable developers to embed edge-based AI into their products and devices. The new module forms an integral part of Coral’s integrated AI platform, which also includes a toolkit of software tools and pre-compiled AI models.

    (Image source: Murata Electronics Americas)

  • Pollen Robotics Reachy Open-Source Robot

    Reachy is a robot developed by Pollen Robotics, in collaboration with the INCIA Neuroscience Institute in France, that is fully open source. The robot, which can be programmed using Python, is modular – employing a variety of 3D-printed grippers – and comes with prepackaged AI algorithms to allow developers to customize it for a variety of applications ranging from customer service to assisting the elderly or disabled.

    Read more about Reachy, and the rise of open-source robotics, here.

    (Image source: Pollen Robotics)

  • VRgineers 8K XTAL Headset

    VRgineers, a maker of premium VR headsets for enterprise applications in industries ranging from automotive to defense and military, has released a major upgrade to its flagship XTAL headset. The latest version of XTAL features 8K resolution (4K per eye), improved lenses with a 180-degree field-of-view, and a new add-on module for augmented reality and mixed reality functionality. The headset also still includes eye tracking as well as integrated Leap Motion sensors to enable controller-free navigation and interactions.

    (Image source: VRgineers)

  • zGlue ChipBuilder

    zGlue is a software company that develops tools for chipmakers and designers. Its latest offering, ChipBuilder 3.0, is a design tool for building custom silicon chips and accelerating time to market. The software suite features an expansive library of chipsets and allows engineers to capture schematics, route and verify designs, and download netlists. The tool allows engineers to create realistic 3D models, code their own chips, and even place orders for physical chips via zGlue’s Shuttle Program.

    (Image source: zGlue / CES)

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

what-happened-to-intel’s-early-facial-recognition-platform?

Facial recognition technology is one of the big trends at CES 2020. That’s not surprising, since the facial recognition market is expected to grow from USD 5.07 billion in 2019 to USD 10.19 billion by 2025, according to Mordor Intelligence. The hardware market is segmented into 2D and 3D facial recognition systems, with the latter expected to grow the most in the coming decade.

Image Source: Intel / SID  

One of the early hardware platforms to enable facial recognition was Intel’s RealSense. When the platform was first introduced in 2015, it was positioned as a way for PCs, mobile phones and robotic systems to see beyond two dimensions (2D). The smart-camera-based system was capable of sensing the third dimension, or depth, to better understand objects in its environment. Since that first introduction, the camera-based system has gotten even smaller in size yet better in performance thanks to the scaling benefits of Moore’s Law.

One of the reasons for the early adoption and growth of the system was that software developers had free access to all of the RealSense APIs. These interfaces interacted with the camera to enable motion tracking, facial expressions – from smiles and frowns to winks – and more. Gesture tracking was also provided to create programs for those cases when users could not physically touch the display screen, such as while following a cooking recipe.

“Computers will begin to see the world as we do,” explained Intel’s then CEO Brian Krzanich at the 2015 Society for Information Display conference. “They will focus on key points of a human face instead of the rest of the background. When that happens, the face is no longer a square (2D shape) but part of the application.”  

At the time, one of the early companies adopting the technology was JD.com, a Chinese online retailer. JD.com had replaced its manual tape-ruler measurements with container dimensions captured by the RealSense camera platform. This automation saved almost 3 minutes per container in measurement time.

Image Source: Intel / SID

Back then, the big deal was the move from 2D to 3D computing, where the third dimension really meant adding depth perception. An example of this extra dimension was given by Ascending Technologies, a German company that used the Intel platform to enable a fast-moving drone to maneuver quickly through a forest, including up and down motions. Accomplishing this feat required the use of multiple cameras and an onboard processor.

Now, fast forward to CES 2020, where Intel’s RealSense has further evolved into a platform that supports not only depth perception but also tracking and LiDAR applications. Tracking is accomplished with the addition of two fisheye lens sensors, an inertial measurement unit (IMU) and an Intel Movidius Myriad 2 Visual Processing Unit (VPU). The cameras scan the surrounding areas and the nearby environment. These scans are then used to construct a digital map that can be used to detect surfaces and to support real-world simulations.

One application of depth perception and tracking at CES was a robot that follows its owner and carries things. Gita, the cargo robot from the makers of Vespa, not only followed its owner but also tracked their whereabouts on the CES exhibitor floor.

LiDAR (Light Detection and Ranging) was the newest addition to the RealSense platform. LiDAR cameras allow electronics and robots to “see” and “sense” the environment. Such remote sensing technology measures the distance to a target by illuminating it with laser light and then measuring the reflected light. It is very accurate and is being used in the automotive industry to complement ultrasonic sensors and regular cameras.
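
The underlying measurement reduces to two lines of arithmetic: distance is half the round-trip time of the light pulse multiplied by the speed of light.

```cpp
// Time-of-flight ranging; the echo delay here is just an example value.
#include <cstdio>

int main() {
    const double c = 299792458.0;  // speed of light, m/s
    double roundTripS = 66.7e-9;   // example: 66.7 ns echo delay
    double distanceM = c * roundTripS / 2.0;
    std::printf("target at %.2f m\n", distanceM);  // ~10 m
}
```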

At CES 2020, one of the highlighted LiDAR applications was a full-body, real-time 3D scan of people. Another application of LiDAR was skeletal motion tracking with the Cubemos Skeletal Tracking SDK, which boasted the capability to integrate 2D and 3D skeleton tracking into a program with a mere 20 lines of code. The SDK provides full skeletal body tracking of up to 5 people.

Image Source: Intel / Realsense LiDAR

Since its release over 5 years ago, many competitors to Intel’s RealSense platform have emerged, including Google Scale, Forge, ThingWorx Industrial IoT, and several others. Such healthy competition attests to the market for compact, relatively inexpensive camera platforms that are capable of depth perception, tracking objects, and using LiDAR to scan shapes.

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

fiber-optic-sensor-moves-robot-in-near-real-time

Although not as prominent at CES 2020, fiber optic sensing technology has been a highlight of past shows. Fiber optic sensing measures changes in the backscattered light in a fiber cable, which can happen when the fiber undergoes vibration or strain. When attached to an opto-electrical connection, fiber optic sensing can serve as a hyper-sensitive measurement device for electronic systems.

NASA, among other R&D agencies, began developing Fiber Optic Sensing Systems (FOSS) technologies over 5 years ago. Innovators at NASA’s Armstrong Flight Research Center began using FOSS to monitor the safety of aircraft structures in flight, but quickly found other uses for the technology in civil structures, transportation, oil and gas, medical, and many more spaces.

Image Source: Fraunhofer / SPIE Photonics / John Blyler

Germany’s Fraunhofer, one of Europe’s largest application-oriented research organizations, has been exploring the use of a related technology: fiber optic 3D shape sensing. One application it has been studying is real-time sensing of the fiber’s shape and position at any point along its length. Such sensors provide highly accurate measurements as the fibers twist and bend at every point along the sensor.

A few years back, Fraunhofer showed the value of using fiber optic sensing to accurately control the movements of a robot. The video below provides a convincing demonstration of this technology.

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

pico-interactive-is-one-of-the-best-kept-secrets-in-vr
The Pico G2 4K (Image source: Design News)

Unless you’ve been deeply entrenched in the VR space for years, you might be forgiven for not knowing about Pico Interactive. While big names in VR hardware like HTC and Oculus have only recently shifted their focus to creating standalone headsets (ones that do not need external sensors or a PC to operate), standalone has been Pico’s focus from the very beginning.

The San Francisco-based company made a quiet but significant showing at the 2016 Electronic Entertainment Expo (E3) with a small, standalone headset called the Pico Neo. The Neo was an Android-based gaming headset that boasted some impressive specs for its size. It was based on the same Snapdragon 820 chipset behind the early Oculus Rift and HTC Vive headsets, offered a 90 Hz refresh rate and had a 110-degree field of view.

When we spoke to Pico in 2016, Ennin Huang, Pico’s VP of Design, said the company’s vision was pretty straightforward – “We really wanted to push VR and make it affordable for everyone without compromising the graphics and experience.”

The way to do that, Huang said, was by offering users a lightweight, untethered experience.

The Pico Neo didn’t make a big splash in the US. But it turned out Huang was right.

In recent years VR hardware makers have pivoted into offering headsets that are not only lighter and more powerful, but, most importantly, standalone. The latest hardware from Oculus, the Oculus Quest, is an all-in-one headset targeted at gaming with six degrees of freedom (6DoF) tracking. The Quest, the less robust Oculus Go, other standalone models such as the HTC Vive Focus, and the (fortunate) death of smartphone-based VR headsets point to one clear trend for the future of VR hardware – wireless, standalone, and powerful.

But Pico Interactive hasn’t stood still. In recent years the company has pivoted into targeting its headsets specifically at engineers, designers, and other enterprise users – with the aim of providing a convenient and mobile experience for applications ranging from virtual prototyping and design, to education, and even medical applications.

Design News had a chance to go hands-on with one of Pico Interactive’s flagship enterprise headsets, the G2 4K, and found it to offer one of the best overall user experiences to date. The G2 4K is very light (276 grams according to company specs) and very well-balanced. The 4K resolution comes through crystal clear thanks to LCD displays and the use of fresnel lenses (which also help contribute to the unit’s light weight).

In terms of overall image quality, the G2 4K rivaled high-end PC-based enterprise headsets like HP’s Reverb, despite having a lower resolution (3840 x 2160, or 1920 x 2160 per eye).

“We conducted a lot of human-factors study for the G2 4K,” Huang told Design News in a recent interview. “There are two main strategies for tackling the overall weight issue: First, the material for the G2 4k is carefully selected to achieve the lightest weight possible while still keeping it affordable for everyone. Second is the weight distribution – we want to make sure the overall center of the mass is somewhere close to the center of the user’s head when the user is wearing the headset. To achieve that we have moved some of the components to the back of the head-padding while still keeping the form factor as compact as possible.”

The G2 4K’s fresnel lenses lend to its light weight and image quality, while its foam face cushion and adjustable Velcro straps support overall comfort. (Image source: Design News)

With a 101-degree field of view and a 75 Hz refresh rate, the G2 4K does fall below the specs of more entertainment-focused headsets. But then again, enterprise software apps don’t usually require the same high frame rates as, say, the latest action game.

The G2 4K is built on the Snapdragon 835, Qualcomm’s first chipset offering targeted at mobile VR and augmented reality. It’s the same chipset behind the highly-praised Oculus Quest.

Aside from the headset having its own online store (the Pico Store), the company also offers an SDK for the G2 4K that we found supports both Unreal and Unity engines. For those who might be put off by the thought of learning a new SDK or having to purchase apps within a proprietary store, the headset is also compatible with the Wave SDK for the HTC Vive line of headsets and also supports apps from HTC’s Viveport. We tried running a few enterprise apps from Viveport on the G2 4K and didn’t notice any difference in performance.

Where one might find disappointment with the G2 4K is that it only offers three degrees of freedom (3DoF) tracking for both the controller and the headset, which can significantly limit user mobility depending on the application. Some enterprise users who prefer a more fixed design space won’t notice the difference at all, but someone like an automotive engineer or architect, for example, who might prefer to be able to walk through a virtual space at room scale, might be frustrated at having to use point-and-click navigation to move around.

The G2 4K’s controller is compact and comfortable, but only offers 3DoF tracking. (Image source: Design News)

Asked about the decision to give such a visually powerful headset only 3DoF tracking, Huang said the decision came down to offering a product with the right specs for enterprise users that would also provide a comfortable barrier to entry for new users. “We think 3DoF and 6DoF both have their niche in enterprise solutions,” Huang said. “While 6DoF is great for a little more advanced users, the 3DoF VR training and tutorials experience is great for someone who has had little or no VR experience. In fact, many users of our enterprise customers have never used VR before.”

Very serious enterprise users will probably opt for a PC-based setup along the lines of the HP Reverb or HTC’s Vive Pro. But smaller organizations, or those looking to get their feet wet implementing VR into their workflow and who place high value on an untethered experience, could find the G2 4K a more economical option still capable of delivering in terms of image quality and overall performance.

The G2 4K features two external speakers, an audio jack, a micro SD card slot, a USB-C port, and a built-in microphone. (Image source: Design News)

Enter the Neo 2

At the time we spoke, Pico Interactive was shipping another headset, the Neo 1, which featured 6DoF head tracking and a 3DoF controller, in Japan, China, and parts of Europe. Huang teased that the company’s R&D team was working on prototypes for headsets that offer 6DoF head and controller tracking, but declined to provide any further details.

However at CES 2020, Pico made another announcement that should please users who demand a 6DoF experience with their VR.

The company’s latest headset, announced at CES 2020, is the Neo 2, a standalone enterprise headset with 4K resolution and 6DoF inside-out tracking. A second version, the Neo 2 Eye, features eye tracking and foveated rendering capabilities courtesy of Tobii, the go-to supplier of eye tracking sensors and analytics technologies for VR.

The Pico Neo 2 debuted at CES 2020. (Image source: Pico Interactive)

Based on the Qualcomm Snapdragon 845 platform, the Neo 2 is a bit heavier than the G2 4K (350 grams, according to specs) and features the same resolution, lenses, and refresh rate. Where the headset takes a step up from previous models is in utilizing the 845’s integrated simultaneous localization and mapping (SLAM) technology for room-scale 6DoF tracking. Both models of the Neo 2 also feature two mono fisheye external cameras.

For its part, Tobii says the foveated rendering technology integrated into the Neo 2 allows it to increase framerates by up to 66 percent and reduce shading loads by up to 72 percent, giving the headset improved visual quality without sacrificing performance or draining battery life.

The addition of eye tracking also gives Neo 2 users a novel control scheme (navigating apps and menus via eye movement) and can also be used to capture gaze data to assist with training applications as well as capturing user behavior insights.

It’s a pretty big leap for a company that started with mobile gaming, though Huang said Pico hasn’t left its roots behind. And while enterprises may be the first to take note of what Pico is offering, Huang said the company believes first in serving VR users in any application they desire.

“Mobile gaming and entertainment are still part of our focus, and in fact, we have been doing pretty well in Asia in places like Korea, Japan, and China. In addition to the consumer market, we also see great opportunities where Pico can provide enterprise solutions for many of our business partners,” Huang said.

“The motivation behind all the Pico products has always been the same since we’ve started back in 2016; it has always been user-first design.”

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

don’t-forget-about-standby-power

Standby power refers to the electronic and electrical power consumed by systems when the primary operational functions are waiting to be activated. Standby power needs are often overlooked by systems designers but are crucial considerations to ensure power is available for the smart devices that make up the Internet of Things (IoT).

Consider the design of a smart home, a dwelling that consumes zero net energy. To maintain zero net power consumption, the smart home must be capable of monitoring and controlling the main energy consumers – e.g., HVAC and lighting – as well as interfacing with energy sources such as solar panels/batteries and the power grid. Adding control and monitoring intelligence to the home will itself require energy. The trick is to make sure that the controlling and monitoring electronics don’t consume more power than the devices themselves. One part of this trick is to make sure that the smart systems pay attention to standby loads: those mischievous power-draining loads consumed by electronics and electrical appliances even when they are turned off but still drawing power in standby mode.

In addition to – or often as part of – controlling and monitoring electronics, connectivity transceivers like RF and wireless are another reason why standby power awareness is so important. Most of our modern appliances and control devices constantly consume a trickle of power to be ready to perform updates, connect to edge or cloud servers, listen for our voice commands, and the like.

Numerous studies attest to the amount of energy lost from devices not in use due to standby power consumption. According to a report from the Natural Resources Defense Council (NRDC), an international nonprofit environmental organization, always-on but inactive devices can cost Americans $19B annually. That comes to about $165 per U.S. household on average – and 50 large (500-megawatt) power plants’ worth of electricity.
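
Those figures hold up under rough arithmetic, as the sketch below shows. The electricity rate used is our assumption (about 13 cents/kWh, a typical U.S. retail average), not part of the NRDC report; at realistic plant capacity factors, the implied average draw is consistent with the “50 large power plants” claim.

```cpp
// Sanity-checking the NRDC standby-power figures with rough arithmetic.
#include <cstdio>

int main() {
    const double totalUsd = 19e9;          // $19B annual cost (NRDC)
    const double perHouseholdUsd = 165.0;  // ~$165 per household (NRDC)
    const double usdPerKwh = 0.13;         // assumed average retail rate

    double households = totalUsd / perHouseholdUsd;  // ~115 million
    double twhPerYear = totalUsd / usdPerKwh / 1e9;  // ~146 TWh/year
    double avgGw = twhPerYear * 1000.0 / 8760.0;     // ~16.7 GW average

    std::printf("~%.0fM households, ~%.0f TWh/yr, ~%.1f GW average draw\n",
                households / 1e6, twhPerYear, avgGw);
}
```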

Further, Berkeley Labs notes that standby power is roughly responsible for 1% of global CO2 emissions.

What are the best ways to reduce the impact of standby power? Let’s consider one approach that looked promising but so far has failed and another more integrated approach that has proven to be successful.

Image source: Natural Resources Defense Council (NRDC)
hyper-automation,-multi-experience,-and-securing-ai-(or-baby-yoda)

If you google “technology trends,” one of the companies that will appear in the top 10 hits will be Gartner. The research and advisory firm not only analyzes numerous markets in terms of technical innovations but also covers business aspects of technology for C-suite professionals.

For 2020, Gartner has produced a number of predictive reports, including those covering digital and strategic technologies. From those lists, I’ve selected three trends that appear vaguely familiar from the recent past, albeit with new names. Do you agree? Don’t hesitate to ping me with your take on these trends at: [email protected]

Trend: Hyper Automation

Gartner: “Automation uses technology to automate tasks that once required humans. Hyper automation deals with the application of advanced technologies, including artificial intelligence (AI) and machine learning (ML), to increasingly automate processes and augment humans. Hyper automation often results in the creation of a digital twin of the organization. As no single tool can replace humans, hyper automation today involves a combination of tools, including robotic process automation (RPA), intelligent business management software (iBPMS) and AI, with a goal of increasingly AI-driven decision making.”

My Take: Do we really need yet another word or phrase to represent the ongoing digitization process that will eventually enable a complete digital twin? One might just as well say that the creation of a digital twin – from improved modeling, simulations, sensors, etc. – has accelerated the pace of automation, thus creating a new hyperautomation or superautomation reality.

It’s really a chicken and egg perspective. Which came first – the creation of hyper automation systems that eventually result in a digital twin? Or did the creation of a digital twin from a sensor-rich ecosystem lead to improved automation of tasks previously performed by humans?

Regardless of the answer, there seems to be little doubt about the movement toward a more complete digital twin within the next decade. Mordor Intelligence anticipates that the digital twin market will witness a CAGR of 35.0% over the forecast period 2019 – 2024. Growth in IoT and cloud-based platforms, the surge in adoption of 3D printing technology in the manufacturing industry, and the objective of reducing project costs are some of the major factors driving the growth of the digital twin market. Mordor notes that IoT sensors have created a potential space for engineers to test and communicate with sensors integrated with operating products, delivering real-time prescriptive insights into system functioning and timely maintenance.
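
For context, a 35% CAGR compounds quickly; over the five-year forecast window it implies the market multiplies roughly 4.5x:

```cpp
// Growth multiple implied by a 35% CAGR over 2019-2024.
#include <cmath>
#include <cstdio>

int main() {
    double cagr = 0.35;
    double years = 5.0;  // 2019 through 2024
    std::printf("growth multiple: %.2fx\n", std::pow(1.0 + cagr, years));  // ~4.48x
}
```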

Which came first: Hyper automation or the digital twin? It’s your call.