top-10-electronic-enabled-tech-highlights-from-ces-2020

Not all cool tech involved robots and autonomous cars. Here’s a list of the other electronic tech featured at the show.

  • This year’s Consumer Electronics Show (CES) 2020 featured a range of marvels enabled by electronic technologies, covering application areas ranging from smart cities, AI edge intelligence, body haptics, security systems, real-time accident reports, and uncooled thermal cameras to wearables and more.

    Here are the top 10 products and technologies that piqued the interest of the Design News editorial staff.

  • Smart Cities

    Why do major Japanese car manufacturers like to build smart homes and now cities? Several years ago, Honda built a zero-net energy smart home in partnership with UC-Davis. At this year’s CES, Toyota announced it will build a smart city to test their AI, robots and self-driving cars. Toyota’s Woven City will be built at the foothills of Mt. Fuji in Japan. The city will be the world’s first urban incubator dedicated to the advancement of all aspects of mobility, claims Toyota.

    The project is a collaboration between the Japanese carmaker and the Danish architecture firm Bjarke Ingels Group (BIG). Houses in Woven City will have in-home robotics to help with the more mundane tasks of daily life. The homes will have full-connectivity, which will be needed for the sensor-based AI to automate many household chores, like restocking the refrigerator and taking out the trash. Power storage units and water purification systems will be hidden beneath the ground.

  • Intelligence At The Edge

    Blaize is a computing company that optimizes AI at scale wherever data is collected and processed from the edge. The company enables a range of existing and new AI use cases in the automotive, smart vision, and enterprise computing segments. The company claims that developers can create new classes of products to bring the benefits of AI and machine learning to broad markets.

    The company has developed a fully programmable GSP architecture that utilizes task-level parallelism and streaming execution processing to deliver very low energy consumption, high performance, and scalability. Blaize claims that, in comparison, existing GPUs and FPGAs exact a much higher energy price, while CPUs cost more and scale poorly, and all are subject to excessive latency due to their sequential execution processing architectures.

  • Full-Body Haptics Suit

    Haptics are all about the sense of touch. Now you can immerse your entire body – or at least 70 tactile points mainly around your torso – into the world of artificial experiences. The bHaptics TactSuit provides an audio-to-haptic feature that converts sound into haptic feedback felt in real time around your torso. For example, when a bomb explodes or you hear footsteps during a PC/VR game, you’ll feel the experience from the right direction. You’ll even be able to feel Samurai cuts and friendly hugs.

  • Security Comes In Many Forms

    There are many ways to protect your PC data and applications, from hardware-encrypted portable storage devices, backup solutions, file repair software, and data recovery to digital forensics services. SecureData provides both products and services in these areas. At CES, the company demonstrated a secure USB drive which it claims is the only hardware-encrypted flash drive in the world with keypad and Bluetooth authentication.

  • Wireless Six-Degrees Of Freedom (6DOF)

    Atraxa’s system tracks 6DOF motion without the need for optical cameras or infrared markers to be placed around the room or mounted externally to the XR headset or controller. No line of sight – or wires – is required between the headset and controllers. Unhindered by wires or line-of-sight constraints, users can move freely in large spaces and can even move from room to room; no room mapping or controller orienting (or reorienting) is required. Tracking starts immediately and lasts without interruption.

    The tech combines electromagnetic (EM) and inertial technologies into a single sensor-fusion tracking platform. The IMU (inertial measurement unit) returns acceleration and angular velocity data. The EM tracker delivers true position and orientation data; it also establishes the tracking volume and local coordinate system. Atraxa comprises two main components: a tracker module and a receiver module. The tracker module houses the IMU and an EM transmitter coil that generates the magnetic field (i.e., the tracking volume). The tracker modules are embedded into the handheld controllers (or other peripherals). (A simplified sensor-fusion sketch appears after this list.)

  • Real-Time Accident Report

    Sooner or later, most of us get into an automotive accident. When that occurs, wouldn’t it be great to have a record of what happened? Through the use of embedded acceleration sensors, MDGo generates a real-time report in the case of a car crash, detailing each occupant’s injuries by body region. The company’s technology enables accurate delivery of needed services and support, helping provide optimal medical care in an emergency and supporting the claims process.

  • Smart Factory

    Could a factory think for itself or autonomously design a better car or aircraft? Can it eliminate waste? All of these questions fit into the realm of manufacturing intelligence. One company with experience in this area is Hexagon, claiming that their technologies are used to produce 85% of smartphones, 75% of cars and 90% of aircraft.

    Their Smart Factory approach aims for fewer inputs, zero waste, and high quality. All this is achieved through sensor, software, and autonomous solutions that incorporate data feedback to boost efficiency, productivity, and quality across industrial and manufacturing operations.

  • A Cool “Uncooled” Methane Gas Detector

    The FLIR GF77 Gas Find IR is the company’s first uncooled thermal camera designed for detecting methane. This handheld camera offers inspection professionals the features they need to find potentially dangerous, invisible methane leaks at natural gas power plants, renewable energy production facilities, industrial plants, and other locations along a natural gas supply chain. The gas detector provides methane gas detection capability at roughly half the price of cooled gas inspection thermal cameras, to empower the oil and gas industry to reduce emissions and ensure a safer work environment.

  • IoT Arduino Adds LoRaWAN Connectivity

    You can now connect your sensors and actuators over long distances via the LoRa wireless protocol or through LoRaWAN networks. The Arduino MKR WAN 1310 board provides a practical and cost-effective solution to add LoRa connectivity to projects requiring low power. This open source board can be connected to: the Arduino IoT Cloud, your own LoRa network using the Arduino LoRa PRO Gateway, existing LoRaWAN infrastructure like The Things Network, or even other boards using the direct connectivity mode.

  • Wearables, Ingestibles, Invisibles

    One of the keys to a healthy life is nutrition. But what exactly constitutes ‘healthy’ food for a specific person? To answer that question, you need to measure and analyze the processes inside the complex human digestive system. Imec is working on prototype technology that is up to that task. It’s called ingestible sensors.

    The company also develops wearables for medical and consumer applications that enable reliable, continuous, comfortable, and long-term health monitoring & management. This includes high-accuracy & low-power biomedical sensing technologies sometimes embedded into fabrics.
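
Returning to the Atraxa tracker described above: the write-up says the IMU supplies acceleration and angular velocity while the EM tracker supplies absolute position and orientation. As a rough illustration of how such signals are typically blended – a generic complementary filter, not Atraxa’s actual algorithm – here is a minimal Python sketch. The sample rate, blending weight, and signal values are assumptions.

```python
import numpy as np

DT = 0.01      # assumed 100 Hz sample rate
ALPHA = 0.98   # blending weight: trust the IMU short-term, the EM tracker long-term

def fuse_position(prev_pos, prev_vel, accel, em_pos):
    """One complementary-filter step for position only (orientation handled similarly).

    prev_pos, prev_vel : last fused position / velocity estimates (3-vectors)
    accel              : IMU linear acceleration in the tracking frame
    em_pos             : absolute position reported by the EM tracker
    """
    # Dead-reckon from the IMU: integrate acceleration into velocity and position.
    vel = prev_vel + accel * DT
    imu_pos = prev_pos + vel * DT
    # Blend the fast-but-drifting IMU estimate with the slower, drift-free EM fix.
    fused = ALPHA * imu_pos + (1.0 - ALPHA) * em_pos
    return fused, vel

# Toy usage with made-up samples.
pos, vel = np.zeros(3), np.zeros(3)
for _ in range(100):
    accel = np.array([0.0, 0.1, 0.0])      # pretend IMU reading
    em_pos = np.array([0.0, 0.001, 0.0])   # pretend EM fix
    pos, vel = fuse_position(pos, vel, accel, em_pos)
print(pos)
```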

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

the-12-best-innovations-of-ces-2020

Forget new TVs and smartphones. These are the real game changers introduced at CES 2020.

  • Now that the smoke has cleared from CES 2020, we can take a step back and see which technologies were the real innovations of 2020. Let’s be honest, CES can be a black hole of vaporware, false promises, and concepts intended to be just that.

    We’ve compiled a list of our favorite technologies introduced at CES 2020 – innovations that we’re sure will have a lasting impact in 2020 and beyond.

  • AerNos AerSIP Gas Sensor

    The AerSIP from AerNos is a 5 x 5-mm, multi-gas sensing module that combines nanotechnology and machine learning algorithms to monitor indoor and outdoor air quality. The system-in-package (SIP) is an embedded plug-and-play solution that can be integrated into wearables, mobile devices, and other IoT devices and is capable of detecting hazardous gases and other dangers at parts-per-billion levels.

    (Image source: AerNos/CES)

  • AMD Ryzen 4000 Series Mobile Processor

    AMD’s Ryzen 4000 could be a literal game changer for high-end laptop users – particularly gamers and designers. AMD says its new Ryzen 4000 series is the world’s first 7-nanometer laptop processor. Designed for ultra-thin laptops, the Ryzen 4000 series features up to 8 cores and 16 threads and configurable 15W thermal design power. AMD pledges that the Ryzen 4000 series offers up to four percent greater single-thread performance and up to 90 percent faster multithreaded performance than its competitors, as well as up to 18 percent faster graphics performance over competing chips.

    (Image source: AMD)

  • Atmosic Technologies M3 Battery-Free Bluetooth 5 SoC

    Atmosic says its M3 Battery-Free Bluetooth 5 SoC uses so little power that it can even eliminate the need for battery power entirely in devices such as wearables, keyboards, mice, asset trackers, beacons, and remotes. The M3 integrates Atmosic’s Lowest Power Radio, On-demand Wake-Up, and Managed Energy Harvesting technologies to deliver what the company says is 10 to 100 times lower power than other SoCs, while still complying with Bluetooth standards. The M3’s radio uses two “ears” – one for listening in a low-power state to perceive incoming commands, and another that only wakes when alerted. The SoC uses energy harvesting technology to gather power from radio frequency, photovoltaic, thermal, and motion.

    (Image source: Atmosic)

  • Bot3 Zen-P VSLAM Deep Learning Module

    Bot3’s Zen-P VSLAM Deep Learning module integrates visual simultaneous localization and mapping (VSLAM) technology (a version of the same technology used in autonomous vehicles) into mobile robots ranging from industrial machines to smart home products. Bot3’s image processing algorithm, Pascal, allows for autonomous navigation without tracks as well as indoor mapping and positioning (for instance, in warehouse applications).

    (Image source: Bot3)

  • BrainCo BrainRobotics Prosthetic Hand

    Many companies have been developing mind-controlled prosthetics for amputees and other disabled patients. What separates the prosthetic hand developed by BrainRobotics is the integration of AI technology. The BrainRobotics hand utilizes machine learning to allow the hand and its user to learn from each other over time – leading to more lifelike movements. The company is aiming to provide accurate and reliable prosthetics at an affordable price for all patients. BrainRobotics is a subsidiary of BrainCo, a software developer focused on brainwave measuring and monitoring.

    (Image source: BrainCo/BrainRobotics)

  • Fluent.ai MultiWake Word and Voice Control Engine

    Fluent.ai is a technology company focused on AI for voice interface and speech recognition. The company’s Multi-Wake Word and Voice Control Engine is an edge-based, noise robust, and multilingual speech technology that consumes minimal power and storage, allowing it to be embedded in small devices. The solution is Cortex M4-based and supports four separate wake words and 100 multilingual commands, according to Fluent.ai.

    Fluent.ai has recently partnered with semiconductor designer Ambiq Micro to implement Fluent.ai’s software solutions into Ambiq’s ultra-small footprint, low-power microcontrollers. Ambiq’s MCU supports frequencies up to 96 MHz, and Fluent.ai’s solution requires only 16 MHz from the MCU. The new partnership means Fluent.ai and Ambiq will be releasing MCUs for OEMs looking for an easy way to add speech recognition and voice command functionality to their smart home devices and other products.

    (Image source: Fluent.ai / CES)

  • Intel Tiger Lake Chip

    When Intel announces a new chip, the whole world takes notice. The chipmaking giant is launching its latest chip for consumers this year. Dubbed Tiger Lake, the new chip is said to be optimized for AI performance, graphics, and USB 3 throughput. Rather than desktops, the new chips will be focused on mobile devices such as ultra-thin laptops and tablets. The first products featuring Tiger Lake are expected to ship later in 2020.

    (Image source: Intel)

  • Monster MultiLink Bluetooth Technology

    Sometimes it’s the most straightforward ideas that can make the biggest difference. Most of us love our Bluetooth wireless headphones and earbuds. The problem is they don’t create a sharable experience. What if you want to show your friend the video you’re watching without disturbing the people around you? Monster has debuted a new technology called Music Share that uses MultiLink technology to allow a device to send Bluetooth audio to multiple devices in sync. The technology expands how Bluetooth headphones can be used and opens up new use cases ranging from air travel to fitness classes, as well as new avenues for social interaction.

    (Image source: Bluetooth SIG)

  • Murata Coral Accelerator Module

    Working in partnership with Coral and Google, Murata Electronics has developed what it is calling the world’s smallest AI module. The Coral Accelerator Module packages Google’s Edge TPU ASIC into a miniaturized footprint to enable developers to embed edge-based AI into their products and devices. The new module forms an integral part of Coral’s integrated AI platform, which also includes a toolkit of software tools and pre-compiled AI models. (A minimal Edge TPU inference sketch appears after this list.)

    (Image source: Murata Electronics Americas)

  • Pollen Robotics Reachy Open-Source Robot

    Reachy is a robot developed by Pollen Robotics, in collaboration with the INCIA Neuroscience Institute in France, that is fully open source. The robot, which can be programmed using Python, is modular – employing a variety of 3D-printed grippers – and comes with prepackaged AI algorithms to allow developers to customize it for a variety of applications ranging from customer service to assisting the elderly or disabled.

    Read more about Reachy, and the rise of open-source robotics, here.

    (Image source: Pollen Robotics)

  • VRgineers 8K XTAL Headset

    VRgineers, a maker of premium VR headsets for enterprise applications in industries ranging from automotive to defense and military, has released a major upgrade to its flagship XTAL headset. The latest version of XTAL features 8K resolution (4K per eye), improved lenses with a 180-degree field-of-view, and a new add-on module for augmented reality and mixed reality functionality. The headset also still includes eye tracking as well as integrated Leap Motion sensors to enable controller-free navigation and interactions.

    (Image source: VRgineers)

  • zGlue ChipBuilder

    zGlue is a software company that develops tools for chipmakers and designers. Its latest offering, ChipBuilder 3.0, is a design tool for building custom silicon chips and accelerating time to market. The software suite features an expansive library of chipsets and allows engineers to capture schematics, route and verify designs, and download netlists. The tool allows engineers to create realistic 3D models, code their own chips, and even place orders for physical chips via zGlue’s Shuttle Program.

    (Image source: zGlue / CES)
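
Returning to the Murata Coral Accelerator Module above: it packages Google’s Edge TPU, which developers usually drive from TensorFlow Lite. As a general illustration of that workflow – a generic Coral/Edge TPU sketch, not anything specific to the Murata module – here is a minimal Python example using the tflite_runtime interpreter with the Edge TPU delegate. The model filename is a placeholder and the dummy input stands in for a real image.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Placeholder path: any classification model compiled for the Edge TPU.
MODEL = "mobilenet_v2_1.0_224_quant_edgetpu.tflite"

# Load the model and hand execution to the Edge TPU via its delegate library.
interpreter = Interpreter(
    model_path=MODEL,
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Feed a dummy frame shaped like the model's input (e.g. 1 x 224 x 224 x 3, uint8).
frame = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], frame)
interpreter.invoke()

scores = interpreter.get_tensor(output_details["index"])[0]
print("top class index:", int(np.argmax(scores)))
```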

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

what-happened-to-intel’s-early-facial-recognition-platform?

Facial recognition technology is one of the big trends at CES 2020. That’s not surprising since the facial recognition market is expected to grow from USD 5.07 billion in 2019 to USD 10.19 billion by 2025, according to Mordor Intelligence. The hardware market is segmented into 2D and 3D facial recognition systems, with the latter expected to grow the most in the coming decade.

Image Source: Intel / SID  

One of the early hardware platforms that would enable facial recognition was Intel’s RealSense. When the platform was first introduced in 2015, it was positioned as a way for PCs, mobile phones, and robotic systems to see beyond two dimensions (2D). The smart-camera-based system was capable of sensing the third dimension – depth – to better understand objects in its environment. Since that introduction, the camera-based system has gotten even smaller in size yet better in performance thanks to the scaling benefits of Moore’s Law.

One of the reasons for the early adoption and growth of the system was that software developers had free access to all of the RealSense APIs. These interfaces interacted with the camera to enable motion tracking, facial expressions – from smiles and frowns to winks – and more. Gesture tracking was also provided to create programs for those cases when users could not really touch the display screen, such as while following a cooking recipe.
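
The modern incarnation of those APIs is the open source RealSense SDK (librealsense), which exposes Python bindings. As a small illustration of the kind of depth access developers get today – the stream settings are assumptions and the snippet is generic rather than tied to any particular camera model – a script that reads one depth frame and queries the distance at the image center might look like this:

```python
import pyrealsense2 as rs

# Start a depth stream (640x480 @ 30 fps is an assumption; supported modes vary by camera).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    if depth:
        # Distance (in meters) at the center pixel of the depth image.
        dist = depth.get_distance(320, 240)
        print(f"Distance at image center: {dist:.3f} m")
finally:
    pipeline.stop()
```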

“Computers will begin to see the world as we do,” explained Intel’s then CEO Brian Krzanich at the 2015 Society for Information Display conference. “They will focus on key points of a human face instead of the rest of the background. When that happens, the face is no longer a square (2D shape) but part of the application.”  

At the time, one of the early companies adopting the technology was JD.com, a Chinese online consumer distributor. JD.com had replaced its manual tape ruler measurements with container dimensions captured by the RealSense camera platform. This automation had saved almost 3 minutes per container in measurement time. 

Image Source: Intel / SID

Back then, the big deal was to move from 2D to 3D computing, where the third dimension really meant adding depth perception. An example of this extra dimension was given by Ascending Technologies, a German company that used the Intel platform to enable a fast-moving drone to navigate quickly through a forest, including up and down motions. Accomplishing this feat required the use of multiple cameras and a processor.

Now, fast forward to CES 2020, where Intel’s RealSense has further evolved into a platform that supports not only depth perception but also tracking and LiDAR applications. Tracking is accomplished with the addition of two fisheye lens sensors, an inertial measurement unit (IMU), and an Intel Movidius Myriad 2 Visual Processing Unit (VPU). The cameras scan the surrounding areas and the nearby environment. These scans are then used to construct a digital map that can be used to detect surfaces and for real-world simulations.

One application of depth perception and tracking at CES was for a robot that would follow its owner and carry things. Gita, the cargo robot from the makers of Vespa, not only followed its owner but also tracked their whereabouts on the CES exhibitor floor.

LiDAR (Light Detection and Ranging) was the newest addition to the RealSense platform. LiDAR cameras allow electronics and robots to “see” and “sense” the environment. Such remote sensing technology measures distance to a target by illuminating the target with laser light and then measuring the reflected light. It is very accurate and is being used in the automotive industry to complement ultrasonic and regular cameras.

At CES 2020, one of the highlighted LiDAR applications was a full body, real-time, 3D scan of people. Another application of LiDAR was skeletal motion tracking with the Cubemos Skeletal tracking SDK, which boasted the capability to integrate 2D and 3D skeleton tracking into a program with a mere 20 lines of code. The SDK provided full skeleton body tracking of up to 5 people.

Image Source: Intel / Realsense LiDAR

Since its release over 5 years ago, there have been many competitors to Intel’s RealSense platform, including Google Scale, Forge, ThingWorx Industrial IoT, and several others. Such healthy competition attests to the market for compact, relatively inexpensive camera platforms that are capable of depth perception, tracking objects, and using LiDAR for scanning shapes.

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

pico-interactive-is-one-of-the-best-kept-secrets-in-vr
The Pico G2 4K (Image source: Design News)

Unless you’ve been deeply entrenched in the VR space for years you might be forgiven for not knowing about Pico Interactive. While big names in VR hardware like HTC and Oculus have only recently shifted their focus to creating standalone headsets (ones that do not need external sensors or a PC to operate), standalone has been Pico’s focus from the very beginning.

The San Francisco-based company made a quiet, but significant, showing at the 2016 Electronic Entertainment Expo (E3) with a small, standalone headset called the Pico Neo. The Neo was an Android-based gaming headset that actually boasted some impressive specs for its size. It was based on the same Snapdragon 820 chipset behind the early Oculus Rift and HTC Vive headsets, offered a 90 Hz refresh rate, and had a 110-degree field of view.

When we spoke to Pico in 2016, Ennin Huang, Pico’s VP of Design, said the company’s vision was pretty straightforward – “We really wanted to push VR and make it affordable for everyone without compromising the graphics and experience.”

The way to do that, Huang said, was by offering users a lightweight, untethered experience.

The Pico Neo didn’t make a big splash in the US. But it turned out Huang was right.

In recent years VR hardware makers have pivoted to offering headsets that are not only lighter and more powerful but, most importantly, standalone. The latest hardware from Oculus, the Oculus Quest, is an all-in-one headset targeted at gaming with six degrees of freedom (6DoF) tracking. The Quest, the less robust Oculus Go, other standalone models such as the HTC Vive Focus, and the (fortunate) death of smartphone-based VR headsets point to one clear trend for the future of VR hardware – wireless, standalone, and powerful.

But Pico Interactive hasn’t stood still. In recent years the company has pivoted into targeting its headsets specifically at engineers, designers, and other enterprise users – with the aim of providing a convenient and mobile experience for applications ranging from virtual prototyping and design, to education, and even medical applications.

Design News had a chance to go hands-on with one of Pico Interactive’s flagship enterprise headsets, the G2 4K, and found it to be one of the best overall user experiences to date. The G2 4K is very light (276 grams according to company specs) and very well-balanced. The 4K resolution comes through crystal clear thanks to LCD displays and the use of fresnel lenses (which also help contribute to the unit’s light weight).

In terms of overall image quality, the G2 4K rivaled high-end PC-based enterprise headsets like HP’s Reverb, despite having a lower resolution (3840 x 2160 total, or 1920 x 2160 per eye).

“We conducted a lot of human-factors study for the G2 4K,” Huang told Design News in a recent interview. “There are two main strategies for tackling the overall weight issue: First, the material for the G2 4k is carefully selected to achieve the lightest weight possible while still keeping it affordable for everyone. Second is the weight distribution – we want to make sure the overall center of the mass is somewhere close to the center of the user’s head when the user is wearing the headset. To achieve that we have moved some of the components to the back of the head-padding while still keeping the form factor as compact as possible.”

The G2 4K’s fresnel lenses contribute to its light weight and image quality, while its foam face cushion and adjustable Velcro straps support overall comfort. (Image source: Design News)

With a 101-degree field of view and a 75 Hz refresh rate, the G2 4K does fall below the specs of more entertainment-focused headsets. But then again, enterprise software apps don’t usually require the same high frame rates as, say, the latest action game.

The G2 4K is built on the Snapdragon 835, Qualcomm’s first chipset offering targeted at mobile VR and augmented reality. It’s the same chipset behind the highly-praised Oculus Quest.

Aside from the headset having its own online store (the Pico Store), the company also offers an SDK for the G2 4K that we found supports both Unreal and Unity engines. For those who might be put off by the thought of learning a new SDK or having to purchase apps within a proprietary store, the headset is also compatible with the Wave SDK for the HTC Vive line of headsets and also supports apps from HTC’s Viveport. We tried running a few enterprise apps from Viveport on the G2 4K and didn’t notice any difference in performance.

Where one might find disappointment with the G2 4K is that it only offers three degrees of freedom (3DoF) tracking for both the controller and the headset, which can significantly limit user mobility depending on the application. Some enterprise users who prefer a more fixed design space won’t notice the difference at all, but someone like an automotive engineer or architect, for example, who might prefer to be able to walk through a virtual space at room scale, might be frustrated at having to use point-and-click navigation to move around.

The G2 4K’s controller is compact and comfortable, but only offers 3DoF tracking. (Image source: Design News)

Asked about the decision to give such a visually powerful headset only 3DoF tracking, Huang said the decision came down to offering a product with the right specs for enterprise users that would also provide a comfortable barrier to entry for new users. “We think 3DoF and 6DoF both have their niche in enterprise solutions,” Huang said. “While 6DoF is great for a little more advanced users, the 3DoF VR training and tutorials experience is great for someone who has never had or has had very little VR experience. In fact, many users of our enterprise customers have never used VR before.”

Very serious enterprise users will probably opt for a PC-based setup along the lines of the HP Reverb or HTC’s Vive Pro. But smaller organizations, or those looking to get their feet wet in implementing VR into their workflow and who place high value on an untethered experience, could find the G2 4K a more economical option that still delivers in terms of image quality and overall performance.

The G2 4K features two external speakers, audio jack, a micro SD card slot, USB-C port, and a built-in microphone. (Image source: Design News) 

Enter the Neo 2

At the time we spoke, Pico Interactive was shipping another headset, the Neo 1, which featured 6DoF head tracking and a 3DoF controller, in Japan, China, and parts of Europe. Huang teased that the company’s R&D team was working on prototypes for headsets that offer 6DoF head and controller tracking, but declined to provide any further details.

However at CES 2020, Pico made another announcement that should please users who demand a 6DoF experience with their VR.

The company’s latest headset, announced at CES 2020, is the Neo 2, a standalone enterprise headset with 4K resolution and 6DoF inside-out tracking. A second version, the Neo 2 Eye, features eye tracking and foveated rendering capabilities courtesy of Tobii, the go-to supplier of eye tracking sensors and analytics technologies for VR.

The Pico Neo 2 debuted at CES 2020. (Image source: Pico Interactive)

Based on the Qualcomm Snapdragon 845 platform, the Neo 2 is a bit heavier than the G2 4K (350 grams, according to specs) and features the same resolution, lenses, and refresh rate. Where the headset takes a step up from previous models is in utilizing the 845’s integrated simultaneous localization and mapping (SLAM) technology for room-scale 6DoF tracking. Both models of the Neo 2 also feature two mono fisheye external cameras.

For its part, Tobii says the foveated rendering technology integrated into the Neo 2 allows it to increase framerates by up to 66 percent and reduce shading loads by up to 72 percent, giving the headset improved visual quality without sacrificing performance or draining battery life.

The addition of eye tracking also gives Neo 2 users a novel control scheme (navigating apps and menus via eye movement) and can also be used to capture gaze data to assist with training applications as well as capturing user behavior insights.

It’s a pretty big leap for a company that started with mobile gaming, though Huang said Pico hasn’t left its roots behind. And while enterprises may be the first to take note of what Pico is offering, Huang said the company believes first in serving VR users in any application they desire.

“Mobile gaming and entertainment are still part of our focus, and in fact, we have been doing pretty well in Asia in places like Korea, Japan, and China. In addition to the consumer market, we also see great opportunities where Pico can provide enterprise solutions for many of our business partners,” Huang said.

“The motivation behind all the Pico products has always been the same since we’ve started back in 2016; it has always been user-first design.”

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

want-to-build-an-open-source-hardware-and-software-robot?

The 2020 Consumer Electronics Show (CES) is full of engineering marvels. Many of these marvels are manifested as advances in robots. For example, consider UBTech’s highlights at this year’s show. The company’s intelligent humanoid service robot named “Walker” won a Best of CES 2019 award and will be back with additional features at the 2020 show. According to the company, Walker will be faster and demonstrate more human-like walking as well as yoga poses that show its huge improvement in motion control. The robot will also demonstrate the ability to push a cart, draw pictures, and write characters, plus show improved static balance with full-body compliance control.

There’s another robot system that the technical community might find equally interesting, if a bit less flashy. France’s Pollen Robotics is displaying its “Reachy” robot at CES 2020. In collaboration with the INCIA Neuroscience Institute in France, the company has developed a 3D-printed robot arm that’s 100% open source. Reachy is billed as an expressive humanoid service robot specializing in interacting with people and manipulating objects. This robot is built with prepackaged AI and modular robotics that should easily accommodate many real-world applications, such as extensions for disabled people (human augmentation), helping out at events (like CES), assisting small businesses, and even working as a receptionist.

According to the company, Reachy can be easily programmed in Python and offers ready-to-use operational environments for game play, serving coffee, making music, handing out specific objects, and more. The robot is also fully customizable with open hardware, software and data!
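
Since the SDK had not yet been publicly released at the time of writing, the snippet below is purely illustrative: the Reachy class and its method names are hypothetical placeholders, not Pollen Robotics’ actual Python API. It only sketches what a simple “hand over an object” routine of the kind described above might look like.

```python
# Hypothetical sketch only: the class and method names below are placeholders,
# since Pollen Robotics had not yet released Reachy's open source SDK.

class Reachy:
    """Stand-in for whatever connection object the real SDK provides."""

    def goto(self, joint, angle_deg, duration_s):
        print(f"moving {joint} to {angle_deg} deg over {duration_s} s")

    def close_gripper(self):
        print("closing gripper")


def hand_over_object(robot):
    # Reach toward a preset position, grasp, and lift - the kind of
    # "handing out specific objects" task described above.
    robot.goto("right_arm.shoulder_pitch", -45, duration_s=2.0)
    robot.goto("right_arm.elbow_pitch", -60, duration_s=2.0)
    robot.close_gripper()
    robot.goto("right_arm.shoulder_pitch", 0, duration_s=2.0)


if __name__ == "__main__":
    hand_over_object(Reachy())
```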

The company invites developers to join its open source community and participate on Github, although the company explains that Reachy is currently still under development. The open source hardware, software, and data won’t be released until the robot is ready, toward the end of Q1 2020.

Image source: Pollen Robots / Reachy

But what does it really mean to say a company or platform supports open hardware and software?

the-bosch-virtual-visor-dynamically-blocks-the-sun-from-your-eyes

Bosch engineers are prepared to deliver us from the heartbreak of intrusive sun visors, with an LCD panel that dynamically shades only the driver’s eyes from sun glare while remaining otherwise transparent.

Though it seems that we struggle mainly to see traffic signals while waiting at a red light with the visor deployed to block the sun, a pair of University of Toronto researchers say that the risk of life-threatening crashes is 16 percent higher when the sun is bright, so the Bosch Virtual Visor has potential as a life-saving technology.

The visor itself is a single transparent LCD panel fitted with a driver-facing camera and backed by artificial intelligence facial detection and analysis software. The AI locates the landmarks on the driver’s face, identifying the eyes so that it can darken the sections of the visor that cast a shadow on the eyes. 
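
Bosch has not published its algorithm, but the core idea – find the driver’s eyes in the camera image, then darken only the visor segments that shadow them – can be sketched generically. The Python example below uses OpenCV’s stock Haar eye detector and assumes a direct mapping from camera coordinates onto a 12 x 6 grid of LCD segments; both the detector choice and the mapping are simplifications for illustration, not Bosch’s implementation (which must also account for the sun’s direction).

```python
import cv2

# Stock OpenCV Haar cascade for eyes (ships with the opencv-python package).
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

GRID_COLS, GRID_ROWS = 12, 6  # assumed LCD segment grid

def segments_to_darken(frame_bgr):
    """Return the set of (col, row) visor segments covering the detected eyes.

    Assumes image coordinates map directly onto the segment grid - a big
    simplification of the real camera/visor/sun geometry.
    """
    h, w = frame_bgr.shape[:2]
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cells = set()
    for (x, y, ew, eh) in eye_cascade.detectMultiScale(gray, 1.3, 5):
        cx, cy = x + ew // 2, y + eh // 2              # eye center in pixels
        cells.add((int(cx / w * GRID_COLS), int(cy / h * GRID_ROWS)))
    return cells

# Example usage with a webcam standing in for the driver-facing camera:
# cap = cv2.VideoCapture(0); ok, frame = cap.read(); print(segments_to_darken(frame))
```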

“We discovered early in the development that users adjust their traditional sun visors to always cast a shadow on their own eyes,” said Jason Zink, technical expert for Bosch in North America and one of the co-creators of the Virtual Visor. “This realization was profound in helping simplify the product concept and fuel the design of the technology.” 

Bosch proudly points to the ability of its employees to come up with an idea and gain corporate backing to develop it to this stage as evidence of what the company calls an “innovation culture.”

Image source: Bosch

“We’ve built a culture around empowering our associates by putting them in the driver’s seat,” said Mike Mansuetti, president of Bosch in North America. The Virtual Visor was developed by a team in North America as part of Bosch internal innovation activities. “As a leading global technology provider, we understand that innovation can come from any level of an organization, and we want to see that grow.” 

Zink and his colleagues Andy Woodrich, Arun Biyani, and Ryan Todd toiled to win budget approval to work on the idea for an active sun visor. “It was an inspiring idea,” recalled Zink. “The only part of the sun visor that needs to do any blocking is where the sun hits your eyes. The rest of it can be totally transparent.”

The team of engineers, who work in Bosch’s powertrain department, pursued this idea far outside their own area with creativity. “Like many early-stage ideas, we were working with limited capital and resources,” said Zink. “The original prototype, we used to first pitch the concept, was made from an old LCD monitor we recovered from a recycling bin.” 

The Virtual Visor has since been moved to the Bosch Car Multimedia division, which demonstrates that it has graduated from an engineer’s crazy notion to a production-ready device.

Dan Carney is a Design News senior editor, covering automotive technology, engineering and design, especially emerging electric vehicle and autonomous technologies.

top-10-tech-failures-from-2019-that-hint-at-2020-trends
  • As the last year of the last decade, 2019 had a lot to live up to. Within the span of 10 short years, service apps like Uber, Lyft, AirBnB and others on mobile phones became big business. Mobile phone companies introduced amazing personal features like voice assistance (e.g., Siri and Alexa), iCloud connections for fast video streaming, and very high-resolution HD cameras. Not to be outdone, the automobile was transformed with automation tech and electrification. A Tesla electric vehicle even made it into space.

    Space technology flourished in the last decade with the commercialization of space rockets, the launch of hundreds upon hundreds of communication satellites and the increasing popularity of Cubesats. Back on earth, homes and buildings became smarter while alternative forms of energy continued to improve in efficiency. And the list goes on.

    But there were several notable failures in the last decade, many seeming to culminate in 2019. Here is the short list of the 10 tech failures most worthy of mention, in no particular order.

  • #1 Glitchy Spacecraft Launch

    Boeing suffered several major setbacks this year. The first one was an incomplete demonstration flight of its new astronaut capsule. The mission of Boeing’s CST-100 Starliner spacecraft began successfully but suffered technical problems that prevented it from reaching the International Space Station (ISS). Many observers believe that the Starliner capsule on top of an Atlas rocket simply burned too much fuel as it climbed into space, leaving an insufficient amount to reach the ISS. Some have suggested the failure was from a glitchy timer system that turned off the rocket thrusters too soon.

    The demonstration test wasn’t a complete failure as the Starliner did land successfully in the deserts of New Mexico.

  • #2 Andromeda Strain revisited?

    Remember the Andromeda Strain? It was a techno-thriller novel from 1969 written by Michael Crichton that centered around the efforts of a team of scientists investigating the outbreak of a deadly extraterrestrial microorganism in Arizona.

    Fast forward to 2019. A company in Israel launched its first lunar lander, which unfortunately crash-landed on the moon. The small robotic spacecraft, called Beresheet, was created by SpaceIL and Israel Aerospace Industries (IAI). It failed just moments before landing on the moon.

    This was an unmanned operation, but not one devoid of life. A US-based nonprofit had added tardigrades, or water bears, to the capsule. These microscopic, eight-legged creatures could survive in a dormant state through harsh conditions, and maybe even on the moon.

    In other words, earth-based lifeforms have now been introduced to the moon’s ecosystem. Without some water, the tardigrades aren’t likely to revive and spread. But this failure highlights the need for planetary protections – both on the moon and earth.

    It should be noted that the goal of the Arch Mission Foundation was not to contaminate the moon but rather to, “create multiple redundant repositories of human knowledge around the Solar System.” The foundation tests out technologies for long-lasting archives, like securing information in DNA strands or encapsulating insects in artificial amber. In addition to water bears, the Arch’s payload included nickel sheets nanopatterned with thousands of pages of Wikipedia and other texts.

    One of Arch’s first missions was launched by SpaceX on the Falcon Heavy rocket and is now entering an orbit around the Sun for millions of years.  The first books in the Solar Library were Isaac Asimov’s Foundation Trilogy. Can you guess where they are located? The books containing Asimov’s Foundation Trilogy were placed in the glovebox of the Cherry Red Tesla Roadster that will soon be orbiting the Sun.

  • #3 Communication Failures (again)

    Both Boeing and the FAA have been cited for oversight breakdowns that contributed to the 737 Max failure. But the actual cause of the tragedy that resulted in the crash of two Boeing 737 Max aircraft seems to be broad failures in the automated system that controls the new planes. The report by the Joint Authorities Technical Review panel said that assumptions about critical aspects of the plane’s design were “not adequately reviewed, updated, or validated.”

    This lack of communication and incorporation of warnings from the engineering teams is a common problem with very complex, modern systems, e.g., the Challenger Space Shuttle and others.

  • #4 Disappearing Bitcoin Miners

    While 2019 was overall a profitable year for the semiconductor chip development market, there were a few noticeable declines. One was in system-on-chip (SoC) devices made specifically for bitcoin mining. The cost of mining bitcoins dramatically increased in 2019, leading to a drop in demand for SoC-based mining hardware.

    In essence, it took much more effort for bitcoin miners to solve the equations required to validate transactions on the Bitcoin network. This increase in mining difficulty reflects the increased competition. (A toy proof-of-work sketch appears after this list.)

    Another slowdown was in the market for automotive chips and electronics, as companies and drivers realized that autonomous car technology won’t really be ready for several more years. This corresponds well to Gartner’s famous “trough of disappointment” portion in its hype cycle for emerging technologies.

  • #5 Cloud Buckets

    A new type of cybersecurity issue has emerged in which millions of people have had their personal information exposed through file storage systems known as cloud buckets. Such storage areas typically consist of public resources that are easily accessed by a variety of web service applications. Cloud buckets are like public file folders which contain user information.

    Placing sensitive user data information in the cloud offers companies the capability to offload their security to big firms like Google, Apple, Amazon or Microsoft. The problem is that the buckets are not configured by these firms but rather by the companies who use their cloud networks.

    Not all of these companies are storing their customer information properly. This lack of security is easy pickings for identity thieves. It is an example of readily available information that doesn’t require any hacking.

  • #6 Hacks of the Year

    Speaking of hacks, this year saw even more cybersecurity breaches. In 2018, there were 500 million personal records stolen, according to the Identity Theft Resource Center. But that number was minuscule compared to the 7.9 billion records exposed in 2019 by over 5,000 breaches, as reported by Risk Based Security. Compared to the 2018 Q3 report, the total number of 2019 breaches was up 33.3 percent and the total number of records exposed more than doubled, up 112 percent. Here’s just a small sampling of the more infamous breaches (more details here):

    > ElasticSearch Server Breach

    > Canva Data Breach

    > Facebook App Data Exposure 

    > Orvibo Leaked Database

    > Social Media Profiles Data Leak

    Sadly, the common theme in many of these data exposures is that data aggregators obtained and used personal information in ways the owners never imagined or consented to. This is a legal problem as much as a technical one.

  • #7 Google Glass

    In 2019, Google announced a new $999 Glass augmented reality headset that looked suspiciously like the failed Google Glass of the past.

    Early in 2012, Google co-founder Sergey Brin debuted Google Glass. A year later, the founder and head of the Google Glass Project, Babak Parviz, delivered a keynote about the technology at the IEEE Hot Chips event at Stanford.

    One of the ongoing leading smartphone trends is ever-improving screen resolution and larger screen size. During his keynote, Parviz argued that there was a physical limit to this trend, but that Glass offered the next evolution in display form factor, i.e., immersion with one’s surroundings. This will be especially important in augmented reality applications.

    Originally, Google Glass was a standalone unit (not yet cloud-based) that included internet access, voice controls, and a camera for pictures and videos. It accomplished all of this with dual-core processors running at more than 1 GHz. Five MEMS sensors captured all the environmental data. It had a two-dimensional touch panel on the side of the glasses.

    Why was this technology a failure? It wasn’t because of the technology, but rather because it wasn’t clear to the customer what problem it solved or why they needed it. Additionally, many felt it was intrusive as a user of the device could take pictures and short film snippets of people without their knowledge.

    In January 2015, Google announced that they would no longer be developing Google Glass. But that wasn’t the end of the project. Instead, Google pivoted to the business sector by launching Glass Enterprise Edition for workplaces like factories in 2017. This year, Google announced the Glass augmented reality headset.

  • #8 Folding Phone

    Samsung’s Galaxy folding phone was billed as a new dawn in display technology. The phone unfolded into a 7.3-inch Dynamic AMOLED display.

    Unfortunately, the company had to postpone the launch of the folding phone after early review models broke, delaminated, and got filled with gunk. The problems seemed to be potential defects with a weak hinge as well as substances getting inside the device.

    As with many new technologies, the price tag also presented a barrier to anyone but early adopters. A reengineered and improved version is now on sale for nearly $2,000.

  • #9 Machine-Bias or Garbage-in, Garbage-out

    The challenge of machine bias came clearly into focus in 2019. Similar to human bias, machine bias occurs when the learning process for a silicon-based machine makes erroneous assumptions due to the limitations of a data set and pre-programming criteria. One example of machine bias was recently revealed in Apple’s new credit card, which contained an algorithm to decide how trustworthy (or risky) a user might be. This evaluation used to be done by trained humans but is now often performed by AI-based algorithms.

    Apple’s credit card was shown to have a gender bias. Males are more likely to get a higher credit line limit than females. This bias was highlighted when a male entrepreneur was assigned a spending limit 10 times higher than that of his wife, even though they share a joint account.

    How does a machine get a bias? A report from IBM Research outlines two main ways AI systems can inherit biases. First, the AI software might contain errors and dependencies. Second, the data set from which AI learns its task may have flaws and bias. These data points come from the real world, which contains many biases, e.g., favoring white men to the exclusion of women and minorities. Algorithms are only as smart as the data you feed them. This is a modern update of the old computer data expression, “garbage-in, garbage-out.” (A simple group-level bias check appears after this list.)

  • #10 Software App Failures

    No list of tech failures would be complete without mention of the apps that didn’t make it. The range of the applications that failed is wide.

    Consider first the British Airways (BA) glitch, in which the airline’s computer system completely went down during a peak travel season. More than a hundred BA flights were cancelled and nearly 300 delayed. Thousands of passengers were affected. Sadly, this wasn’t the first time the system had failed, which suggests a systemic problem that has not been properly addressed by management.

    Or how about the Facebook 2019 failure that prevented users from viewing or loading images from the newsfeed? Several other social media apps had a similar problem, including Instagram, WhatsApp, and Messenger. In each case, users were prevented from sending messages, media files, and the like. Facebook claimed the problem was the result of an accident during routine maintenance.

    Other app failures or hacks from 2019 include Apple’s FaceTime bug and the Ring security camera intrusions. The latter may have been more of a customer problem, as Ring notes that the intrusion was likely the result of hackers gaining access to the family’s account through weak or stolen login credentials.
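
To make the mining-difficulty point in #4 concrete: Bitcoin miners search for a nonce whose block hash falls below a network-set target, and every extra leading zero bit in that target roughly doubles the expected work. The toy Python sketch below hashes a dummy header rather than a real Bitcoin block, so it illustrates the scaling only.

```python
import hashlib

def leading_zero_bits(digest):
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(len(digest) * 8)
    return len(bits) - len(bits.lstrip("0"))

def mine(header, difficulty_bits):
    """Find a nonce whose double-SHA-256 hash has >= difficulty_bits leading zeros.

    Expected attempts grow as 2**difficulty_bits, which is why rising network
    difficulty squeezed the market for SoC-based mining hardware.
    """
    nonce = 0
    while True:
        payload = header + nonce.to_bytes(8, "big")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if leading_zero_bits(digest) >= difficulty_bits:
            return nonce
        nonce += 1

for bits in (8, 12, 16):
    nonce = mine(b"dummy block header", bits)
    print(f"{bits} leading zero bits -> found after {nonce + 1} attempts")
```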
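
And to ground the “garbage-in, garbage-out” point in #9, the first sanity check for this kind of bias is simply comparing outcomes across groups in the decision data. Here is a minimal sketch with made-up numbers (pandas assumed available; the records are invented, not real credit data):

```python
import pandas as pd

# Invented decision records - purely illustrative, not real credit data.
df = pd.DataFrame({
    "gender":   ["M", "M", "F", "F", "M", "F", "M", "F"],
    "approved": [1,    1,   0,   1,   1,   0,   1,   0],
    "limit":    [20000, 15000, 2000, 5000, 18000, 1500, 22000, 3000],
})

# Approval rate and average credit limit per group: large unexplained gaps
# are the signal that the model (or its training data) deserves scrutiny.
summary = df.groupby("gender").agg(
    approval_rate=("approved", "mean"),
    avg_limit=("limit", "mean"),
)
print(summary)
```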

19-in-2019:-best-electronic-bits-from-the-design-news-vault

Check out the best curated technical content from the editors at Design News.

For electronic developers, 2019 was another stellar year. In addition to ongoing challenges and successes in areas such as embedded systems, hardware chip-board systems, and systems-of-systems (like satellites), there were new materials, evolving design approaches, and insights to be considered. Here are the best stories that covered these issues from the 2019 Design News archives.

  1. Top 10 2019 engineering, science and technology awards – Each year reveals further advances in the disciplines of technology, engineering and science. This year, luminaries were awarded for their work in cosmology, photonics, GPS systems, video processing, semiconductors, brain neurons and more.
  2. Who’s left to make chip development tools? – Electronic design automation (EDA) tools are the software used for designing electronic systems, such as system-on-chip (SoC) integrated circuits and printed circuit boards. The tools work in a design-verification flow that chip designers use to analyze and develop semiconductor chips. Here’s a look at the remaining major EDA tool companies after years of consolidation.
  3. Complete integrated circuits fabricated using printing press – Researchers have for the first time printed complete integrated circuits that have more than 100 organic transistors, a breakthrough in the quest to use printing to create complex next-generation electronic and computing devices. The breakthrough eliminates the need to use multiple manufacturing methods to create an integrated circuit with more than 100 organic transistors.
  4. 2 game-changing trends that will define embedded systems in the 2020s – The last decade has seen an amazing advancement in embedded system development techniques, tools and technologies. The next decade has the potential to dramatically change the way that products and embedded systems are developed.
  5. Developing an embedded software build pipeline – One interesting fact that I’ve noticed about embedded software development is that development processes and techniques tend to lag the general software industry. Developing a more sophisticated build pipeline can have dramatic effects on the embedded software development life cycle.
  6. 8 criteria to evaluate when selecting an RTOS – Real-time operating systems (RTOS) are finding their way into nearly two-thirds of all applications. Cost is a factor. But there are more important things to consider when choosing a real-time operating system.
  7. Old 3G battle shifts to 5G struggle – The old 3G battle between communication and computational industries has been replaced with the 5G struggle between nations and sub-6 vs mmWave global spectrums.
  8. Internet of Space or Space Junk? – When bad movies make good predictions and how to lessen the Kessler Syndrome with everything from AI to space harpoons.
  9. Did Edison Really Lose a Non-Existent ‘Current War?’ – The recent movie, The Current War, dramatizes the struggles between Edison, Westinghouse, and Tesla to bring electrical power to the US. But was the “war” actually fabricated?
  10. 3 Do’s and Don’ts for Medical Device Software Development – Medical devices are one of the fastest growing areas of embedded hardware and software development. Here are some successful strategies – and potential pitfalls – gleaned from real-world medical device development projects.
  11. Microorganisms Provide Solar Energy Independent of Using Solar Cells – The concept of solar energy usually inspires images of long rows of solar panels lined up in a vast field. Researchers in Sweden achieved production-potential amounts of butanol using carbon dioxide and sunlight
  12. Beware the Hidden Costs of a Free Embedded RTOS – If you’re basing your selection of a real-time operating system (RTOS) solely on initial cost, then you may be in for a rude awakening.
  13. 8 RISC-V Companies to Watch – The open source nature of RISC-V promises to enable companies to create custom chip hardware specifically tailored to their products and devices. These eight companies are developing their own RISC-V technologies and are committing to helping third parties do the same to help push adoption of the open-source chip architecture.
  14. New Material Could Transform How Electronics Are Built – A new family of crystal materials can serve a dual purpose in electron movement in electronic devices, potentially changing how they will be designed in the future.
  15. Biocompatible Transistor Invented for New Devices – Researchers have developed what they said is the first biocompatible ion-driven transistor fast enough to enable real-time signal sensing and stimulation of brain signals.
  16. Efficient Fabrication Method Achieved for Nano-Sized Processors – A new rapid fabrication method for nano-scale semiconductors could help advance the design of next-generation processors.
  17. The Biggest Embedded Software Issue Is … – There are many different problems and challenges that embedded software developers are facing today. One of the biggest and least spoken about issues is that too many developers are writing software code without considering what could go wrong.
  18. Smart Manufacturing Expert Says It’s Time to Embrace Fuzziness – Combining fuzzy sensing technologies with artificial intelligence, manufacturers can learn more about their enterprise for less cost.
  19. 2019 Will Be the Year of Open Source – After decades of being looked at as more of a subculture (or arguably counter-culture) in the larger technology landscape, open source is finally getting its due. From software and even hardware, we saw more activity in open source than ever before in 2018. 

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

the-history-and-future-of-the-bci

Mindflex, from Mattel and NeuroSky, used brainwaves to control a series of fans to navigate a floating ball through an obstacle course. (Image source: NeuroSky)

Remember about five years ago when several consumer headsets from NeuroSky and Emotiv were pitched as computer game controllers? For example, NeuroSky promoted its EEG headset as a training aid for young padawans to focus their minds through various exercises so they could control the movement of tablet-projected holograms. Based upon the wearer’s focus and concentration, the brainwave sensors on the EEG headset would allow them to interact with an application on a tablet.

NeuroSky and others (most notably, Emotiv) have created EEG headsets for gaming, neurological rehabilitation, and other applications. These headsets are examples of brain computer interface (BCI) technology – a computer-based system that acquires brain signals, analyzes them, and translates them into commands that are relayed to an output device like a computer, game console, medical prosthetic, or similar devices.

A BCI doesn’t actually read thoughts but rather detects the smallest of changes in the energy radiated by the brain when you think in a certain way. A BCI recognizes specific energy/frequency patterns in the brain. Some believe that the combination of humans and technology could be more powerful than artificial intelligence (AI) alone. For example, instead of relying on human perception and reasoning, one could use neurotechnologies to improve our perception, as in the case of interpreting a blurry security camera image before deciding whether to take action.
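
Since the paragraph above says a BCI “recognizes specific energy/frequency patterns in the brain,” here is a minimal illustration of that single step: estimating the power in the classic alpha band (8–12 Hz) from one EEG channel with a plain FFT. The sample rate and the synthetic signal are assumptions; a real BCI adds filtering, artifact rejection, and a classifier on top.

```python
import numpy as np

FS = 250  # assumed EEG sample rate in Hz

def band_power(signal, fs, low, high):
    """Rough power estimate in the [low, high] Hz band from an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].sum()

# Synthetic one-second "EEG" trace: a 10 Hz alpha rhythm buried in noise.
t = np.arange(0, 1.0, 1.0 / FS)
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(len(t))

alpha = band_power(eeg, FS, 8, 12)
total = band_power(eeg, FS, 1, 40)
print(f"alpha fraction of 1-40 Hz power: {alpha / total:.2f}")
```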

In the future, perhaps the carbon-based mind will connect directly with silicon-based artificial intelligence, robots, and other minds through BCI technologies to greatly extend our existing senses or maybe add new ones.

In order to appreciate what BCI technologies have to offer, it helps to understand a bit of their history.

Infographic – History of Brain Computer Interface (BCI) Tech

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier

building-a-digital-timer-with-the-bbc-micro:bit

Timed activities such as sports, gaming, and cooking are monitored, and alerts issued, with digital timers. A digital timer uses an electronic counter circuit to keep track of timed events or activities based on a preset value and an accumulated value being equal. The preset value can be set using either a rotary-based control, a keypad, or discrete tactile pushbuttons. After the preset value has been set, the electronic counter will then accumulate time, thereby creating a predetermined delay in the timer.

Upon the accumulated value reaching the preset time value, the digital timer will trigger an output circuit. Traditionally, an open-collector (OC) transistor output is used to drive an electromechanical relay. The electromechanical relay can control an electrical load wired to its switching contacts. Also, a digital display such as a seven-segment LED or an LCD can be used to show the digital counter’s pulses.

A digital timer circuit using discrete Transistor-Transistor-Logic (TTL) integrated circuits (ICs) and electrical-electronic components. (Image source: Don Wilcher)

A typical digital timer

Besides being deployed in consumer activities, these programmable devices are found in manufacturing facilities. For small industrial control applications, such as operating electropneumatic or electrohydraulic circuits or running parts conveyors, digital timers can be implemented quite easily. The preset value is traditionally set using a pair of discrete tactile pushbutton switches, which allow the operator to change the preset value. The time can be adjusted for an up- or down-count trigger setting. Once set, the internal electronic counter will proceed to count based on the time adjusted by the operator using the tactile pushbutton switches.

A digital timer can be easily constructed using a few off the shelf components. A typical digital timer has several digital and analog circuits for setting the preset value and displaying the time.

This timer’s timing, output control, and display functions can be implemented using a BBC micro:bit and an electromechanical relay.

The BBC micro:bit Specs

The BBC micro:bit is a small embedded platform that allows a variety of smart electronic devices to be created by makers, educators, hobbyists, and engineers. The electronics prototyping platform is engineered to be programmed in a variety of coding languages like Javascript, MicroPython, CircuitPython, or Google Blockly code.

Several external electrical and electronic devices and circuits can be attached to the micro:bit using an etched printed circuit board (PCB) edge connector. At the heart of the micro:bit is a Nordic Semiconductor nRF51822 system-on-a-chip (SoC) integrated circuit (IC). The nRF51822 SoC IC is a low-power device operating at a 3.3V direct current (DC) supply voltage. Included within the SoC are a 2.4GHz transceiver and a 32-bit ARM Cortex-M0 central processing unit (CPU) with an integrated temperature sensor. Also, flash memory, Bluetooth low energy (BLE), an accelerometer, and analog and digital circuit peripherals support the micro:bit. In addition, the micro:bit uses a 5×5 array of discrete LEDs to display text messages, images, and numerical values.

The BBC micro:bit is populated with an nRF51822 SoC and supported by analog and digital electronic circuits. (Image source: BBC micro:bit)

The micro:bit Digital Timer Build

To build a digital timer using the micro:bit basically requires four building block components: the digital timer code, a micro:bit, a transistor relay driver circuit, and an electromechanical relay.

A system block diagram for a micro:bit Digital Timer. (Image source: Don Wilcher)

A solderless breadboard can be used to wire three of the four building block components together. Also, the discrete electromechanical relay and transistor relay driver component circuits can be wired on a solderless breadboard. Although this method of construction is okay, another approach is to insert the micro:bit into an interface board which provides four input/output connectors, a 3.3VDC supply, and ground. The transistor relay driver and electromechanical relay can be purchased as a single module, with both circuit components conveniently populated on one PCB. A small jumper harness is attached from the interface board to the transistor-driven relay board, making the electrical interface between the micro:bit and the electromechanical relay circuit conveniently easy.

The micro:bit attached to the transistor-driven relay board with a mini jumper harness. (Image source: Don Wilcher)

The two tactile pushbuttons allow the digital timer’s preset time to be adjusted using button A on the micro:bit. Locking in the preset time is accomplished using the B button. Pressing both the A and B buttons together will start the count-down timing sequence.

Setting the preset time on the micro:bit Digital Timer. (Image source: Don Wilcher)

The digital timer function can be coded using Javascript or Google’s Blockly Code. Also, prior to downloading the code to the physical micro:bit the software can be tested using the Microsoft Makecode micro:bit simulator. Such a debug feature will capture software function glitches and errors prior to deploying the code on the physical device.

The micro:bit Digital Timer Blockly Code. (Image source: Don Wilcher)

The Javascript micro:bit Digital Timer code. (Image source: Don Wilcher)

Code testing on the Microsoft micro:bit Makecode simulator. (Image source: Don Wilcher)

After the displayed count-down value reaches zero, the message Device ON will scroll across the discrete LED display.
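
Since the micro:bit also runs MicroPython, a rough text-based equivalent of the timer logic can help readers who prefer code to blocks. The sketch below is an approximation, not a transcript of the Blockly program in the figures: the relay-driver pin (pin0) and the one-second tick are assumptions.

```python
# Rough MicroPython sketch of the timer logic (assumptions: relay driver on
# pin0, one-second resolution; not a copy of the pictured Blockly program).
from microbit import display, button_a, button_b, pin0, sleep

preset = 0  # preset time in seconds

while True:
    if button_a.is_pressed() and button_b.is_pressed():    # A + B: start the countdown
        remaining = preset
        while remaining > 0:
            display.show(remaining)
            sleep(1000)                                     # one-second tick
            remaining -= 1
        pin0.write_digital(1)                               # energize the relay driver
        display.scroll("Device ON")
    elif button_a.was_pressed():                            # A: bump the preset value
        preset += 1
        display.show(preset)
    elif button_b.was_pressed():                            # B: lock in / confirm the preset
        display.scroll(str(preset))
    sleep(100)
```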

Various electrical devices, such as small DC motors and other electronic devices, can be operated using this simple but effective digital timer. Additional information on the micro:bit can be found on its website, and the Microsoft Makecode programming environment can be found on its website as well.

Don is a passionate teacher of electronics technology and an electrical engineer with 26 years of industrial experience. He has worked on industrial robotics systems, automotive electronic modules and systems, and embedded wireless controls for small consumer appliances. He’s currently developing 21st century educational products focusing on the Internet of Things for makers, engineers, technicians, and educators. He is also a Certified Electronics Technician with ETA International and a book author.