Robots Reach Out and Touch CES Attendees

Robot makers put their new offerings on display at CES, showing how these mechanical devices interact with humans, from making pizza to reading emotions.


    The new robots at the Consumer Electronics Show (CES) came in a variety of forms and served a variety of uses, but a common theme throughout was the touch-worthiness of these machines.

  • Cute Little ChuangChuang

    ChuangChuang, an intelligent service robot developed in-house by Chuangze Intelligent Robot Group, a high-tech enterprise from China, was one of the cutest robots at CES. In the Chuangze showroom, visitors could see the company's latest intelligent commercial service robots, intelligent companion robots, intelligent large-screen robots, and intelligent medical robots. (Image source: Chuangze Intelligent Robot Group)

  • BONA’s Sweeping Robot

    BONA Robots launched its own brand, Coayu, along with the BLNE01 commercial sweeping robot. The robot is equipped with a global navigation and mapping system and was designed for complex and diverse indoor business environments, whether the lighting and surface texture are strong or weak. (Image source: BONA Robots)

  • Food Delivery Robots

    PuduBot and BellaBot serve as food delivery robots. Guided by positioning and navigation from the PuduSLAM algorithm, the robots reach their designated tables once the waiter selects the correct table number for each tray. (A minimal sketch of this tray-to-table dispatch flow appears after this list.) PuduBots are currently working in over 2,000 restaurants of different categories in 200-plus cities in more than 20 countries. In a year, millions of trays of food are delivered to customers, equivalent to the work of 3,000 waiters over a whole year. (Image source: PuduBots)

  • The Guardian XO by Delta Air Lines and Sarcos Robotics

    Delta Air Lines has partnered with Sarcos Robotics to create employee technology fit for a superhero – a mobile, dexterous exoskeleton designed to boost employees’ physical capabilities and bolster their safety. Delta employees have worked directly with Sarcos to determine potential operational uses for the Guardian XO. (Image source: Sarcos)

  • Play Misty For Me

    Misty Robotics, creator of the Misty II platform robot, has launched a concierge application template that gives developers a robust starting point for building robot skills and quickly putting Misty II to work. The Misty II application templates are open-source code that developers can build upon and customize for a specific assignment or task. (Image source: Misty Robotics)

  • Cruzr Comes to CES

    UBTECH showed off its newest robots, including the latest updates to Walker, its intelligent humanoid service robot; AIMBOT, an autonomous indoor monitoring robot; Cruzr, an enterprise service robot; and its award-winning JIMU Robot kits for kids. (Image source: UBTECH)

  • Pizza Making Robot

    Seattle-based Picnic, a food production technology and Robotics-as-a-Service (RaaS) provider, displayed its automated food assembly system. The robotics system served pizza to attendees at CES. (Image source: Picnic)

  • A Smart Arm And Emotion-Reading AI

    Taiwan’s ITRI demonstrated AI and robotics technologies that included the Mobile Arm Robot System, a smart integrated service robot platform combining mobility, sensing, manipulation, and human-machine interaction functions; and GenkiCam, an AI camera that can identify a baby’s emotions, monitor its heartbeat and breathing, and immediately inform parents of any abnormality. (Image source: ITRI)

  • OMRON’s i4 Line Of Robots

    OMRON introduced a new line of SCARA robots with a sleek design and enhanced performance. Named the i4, the new generation of SCARA robots is designed to save space during installation and to allow easier configuration into existing production lines. (Image source: OMRON)

  • FANUC Let Its Cobots Touch Attendees

    FANUC let CES attendees interact with its robots in a wide range of demonstrations and contests including a selfie station, a voice-activated gift selection, hand-guided robot programming, and speed and dexterity challenges. (Image source: FANUC)
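As a rough illustration of the tray-to-table workflow described in the food delivery slide above, the sketch below maps waiter-selected table numbers to navigation goals on a restaurant map. Everything here is hypothetical (the table poses, the DeliveryTask structure, and the print stand-in for the navigation call); it is not Pudu's software, only the shape of the dispatch step.

```python
# Hypothetical sketch of a tray-to-table dispatch flow.
# Names (TABLE_POSES, DeliveryTask) are illustrative, not Pudu's actual API.
from dataclasses import dataclass

# Table number -> (x, y, heading) goal pose on the restaurant map,
# which a SLAM system such as PuduSLAM would maintain in practice.
TABLE_POSES = {
    1: (2.0, 1.5, 0.0),
    2: (4.5, 1.5, 1.57),
    3: (4.5, 3.0, 3.14),
}

@dataclass
class DeliveryTask:
    tray_slot: int      # which tray on the robot holds the order
    table_number: int   # destination chosen by the waiter

def dispatch(tasks):
    """Send the robot to each table in the order the trays were loaded."""
    for task in tasks:
        goal = TABLE_POSES.get(task.table_number)
        if goal is None:
            print(f"Unknown table {task.table_number}; skipping tray {task.tray_slot}")
            continue
        # A real robot would call its navigation stack here; we just log the goal.
        print(f"Tray {task.tray_slot}: navigating to table {task.table_number} at pose {goal}")

if __name__ == "__main__":
    # The waiter loads two trays and picks their table numbers.
    dispatch([DeliveryTask(tray_slot=1, table_number=2),
              DeliveryTask(tray_slot=2, table_number=3)])
```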

Rob Spiegel has covered automation and control for 19 years, 17 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years, he was owner and publisher of the food magazine Chile Pepper.


10 New Auto Tech Products to Watch in 2020

The year 2020 is bringing in a slew of innovative products set to transform vehicles themselves, as well as the automotive experience. Here are 10 products to watch.

  • Every year brings plenty of new vehicles, but there are even more technologies behind those vehicles. Now more than ever, technology companies are releasing products to make vehicles safer, more connected, and more autonomous.

    Here are some new innovations – from chips to headlights to sensors for infrastructure – that will be transforming vehicles in 2020 and the years to come.

  • Adasky Viper

    More and more engineers are coming to believe that autonomous vehicles should integrate thermal imaging and sensing capabilities into their sensor arrays. Adasky has released Viper, a long-wave infrared (LWIR) thermal camera system for autonomous vehicles and ADAS that integrates both an automotive-grade image signal processor and edge-based computer vision algorithms – allowing it to recognize vehicles, pedestrians, animals, and other objects on the road on its own.

    The ISO 26262 ASIL-B ready camera consumes less than 750mW of power, according to the company, and captures VGA images at up to 60 frames per second. Viper can also be integrated directly into vehicles’ headlights – reducing their visible footprint for automotive designers.

    (Image source: Adasky)

  • Boréas Technologies BOS1211 Haptic Feedback Chip

    Haptic feedback is looking to become the next frontier in automotive interfacing. Touchscreens, after all, have some of the same disadvantages as a mechanical dashboard. Haptics would allow drivers and passengers to control dashboard functions easily and with less distraction.

    Haptic technology developer Boréas Technologies has announced the BOS1211, a low-power, high-voltage, piezoelectric driver integrated circuit for enabling high-definition haptic feedback in vehicle interfaces such as infotainment screens and steering wheels. Boréas is partnering with TDK to make the BOS1211 compatible with TDK’s PowerHap family of piezo actuators and to meet the standards of the automotive market.

    The BOS1211 is based on the company’s proprietary CapDrive technology, a scalable piezo driver architecture optimized for energy efficiency, low heat dissipation, and rapid response times. Boréas is planning to launch a plug-and-play development kit for automotive haptic feedback in February 2020.

    (Image source: Boréas Technologies)

  • Bosch 3D Display For Automotive

    Bosch captured a lot of attention at CES 2020 with a handful of new automotive technology announcements. Among the company’s new offerings is a 3D display that uses passive multi-view 3D technology to generate three-dimensional graphics in a vehicle’s cockpit – without the need for 3D glasses or special cameras. Bosch says the 3D effect is visible to multiple people inside the vehicle from multiple angles without shaking or blurring and is adjustable to the user’s preference.

    The company believes its 3D displays can enhance safety by pushing important information and alerts right into a driver’s field of vision and reduce overall driver distraction.

    (Image source: Bosch)

  • Bosch Virtual Visor

    Bosch wants to replace your car’s boring, traditional visor with a transparent LCD that can keep the sun out of your eyes without reducing your ability to see the road. The company’s Virtual Visor uses a camera that tracks the driver’s face and eyes and applies computer vision to darken only the portion of the visor where the sun would be hitting the driver’s eyes – leaving the rest of the visor transparent. The result is a small floating patch that blocks the light, rather than a chunk of your windshield being completely blocked out. (A minimal geometric sketch of this selective-shading idea appears after this list.)

    (Image source: Bosch)

  • Koito Manufacturing  BladeScan ADB

    High beams are an important safety feature. But we all hate that person who pulls up behind us or comes at us head-on with their high beams blazing.

    Koito Manufacturing‘s Adaptive Driving Beam (ADB) technology is a headlight upgrade that selectively dims and brightens areas of the road to improve driver visibility. Using a camera sensor that provides information to the headlight LEDs, the BladeScan ADB can selectively dim the high beams to low beams for oncoming traffic to prevent glare, for example.

    The BladeScan ADB creates what the company calls a “controlled, high-resolution photometry pattern” in front of the vehicle by emitting LED light onto rotating reflectors (“blades”) and then reflecting it at an angle and pulsing it on and off through a plastic lens and onto the roadway. By doing this, the company says, BladeScan minimizes the dimmed area in front of the vehicle and can increase the visibility of other vehicles, pedestrians, and other potential road hazards without causing annoying glare for surrounding vehicles.

    BladeScan ADB has already been integrated into the 2020 Lexus RX from Toyota.

    (Image source: Koito Manufacturing)

  • Outsight 3D Semantic Camera

    The 3D Semantic Camera from Outsight aims to “bring full situational awareness to smart machines,” according to the company. The Outsight camera is capable of detecting, tracking, and classifying objects with up to centimeter accuracy and relaying that information to other smart devices – including autonomous and connected vehicles. Utilizing a low-power, long-range broadband laser also allows the camera to identify material composition of objects via hyperspectral analysis under any lighting conditions – adding a new level of confidence to determining what the camera is seeing.

    The camera also uses 3D Simultaneous Localization and Mapping (SLAM) technology for positional data. Outsight says its camera does all of this via edge-based processing through an onboard SoC that does not rely on machine learning. By taking a machine learning-free approach Outsight says it is able to reduce energy consumption and bandwidth needs and also eliminate the need for massive data sets to train the cameras.

    Outsight’s cameras will be deployed at Paris-Charles de Gaulle airport. The company also offers a vehicle-specific version of its cameras.

    (Image source: Outsight)

  • Qualcomm Snapdragon Ride

    Chipmaker Qualcomm has unveiled the first generation of a new SoC targeted at autonomous driving. The Snapdragon Ride platform will come in versions focused on safety and autonomy respectively, with the aim of giving automakers a scalable solution that supports Level 1 and 2 autonomy – with features including automatic emergency braking, traffic sign recognition, lane keeping assistance, automated highway driving, and self-parking – as well as Level 4 and 5 full autonomy.

    The Snapdragon Ride SoCs are capable of performing 30 Tera Operations Per Second (TOPS) for Level 1 and 2 applications and more than 700 TOPS for Level 4 and 5 applications, and are designed for ASIL-D functional safety systems.

    Qualcomm says the platform will be available for pre-development to automakers and Tier-1 suppliers in the first half of 2020. The first vehicles to utilize Snapdragon Ride are expected in 2023.

    (Image source: Qualcomm)

  • RoboSense RS-LiDAR-M1 Smart LiDAR

    RoboSense is releasing what it calls the world’s first smart solid-state LiDAR for autonomous vehicles. The company says its RS-LiDAR-M1 line of LiDAR products offers several advantages over mechanical LiDAR systems. The RS-LiDAR-M1 has a 120 x 25-degree field of view, a 15-Hz frame rate, and a detection range of up to 150 m on a 10%-reflectivity NIST target. Its solid-state design also means fewer parts and a more modular design, making it easier for automakers to integrate and scale. In tests conducted by the company, RoboSense reports, the RS-LiDAR-M1 met its performance standards in rain and fog and under different lighting and wind-speed conditions, and can adapt to all climatic and working conditions. The first version, the RS-LiDAR-M1Simple, is currently available.

    (Image source: RoboSense)

  • Siemens PAVE360 Automotive Digital Twin Platform

    Siemens has announced a new digital twin solution for the automotive industry. PAVE360 allows automakers and OEMs to simulate and validate automotive SoCs and other systems in the context of the vehicle, before the vehicle is built. Developed in collaboration with Arm, PAVE360 can model sensors and ICs as well as other systems related to vehicle dynamics and the overall vehicle environment. Engineers can use the solution to create simulations for systems related to safety, ADAS, infotainment, digital cockpits, V2V and V2X, and even autonomous driving applications.

    (Image source: Siemens PLM)

  • Valerann Smart Roads System

    The emergence of smart cities is rapidly making infrastructure technologies as important as those inside automobiles. Valerann has developed a sensor, the Valerann Stud, that can replace standard road pavement markers, transforming roads into an IoT sensor network. The solar-powered sensors use LoRa communication to relay information to each other and can track road conditions – including accidents and weather – in real time. The company says it can even track the exact driving pattern of every single vehicle on the road, right down to each vehicle’s specific lane location, in real time.

    The sensors also come equipped with LEDs and can change color to alert drivers to hazardous conditions such as ice, let them know to slow down or stop, and even indicate if they are driving the wrong way down a one-way road. The Valerann Smart Roads System is currently deployed at various locations in the UK and Europe.

    (Image source: Valerann)
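The Virtual Visor description earlier in this list boils down to a simple geometric idea: find where the line from the sun through the driver's eyes crosses the visor plane, and darken only the LCD cells around that point. The sketch below illustrates that idea in Python under stated assumptions (a fixed visor plane, a uniform cell grid, and an eye position and sun direction already supplied by the face-tracking camera); it is not Bosch's implementation, and all constants are illustrative.

```python
import numpy as np

# Minimal sketch (not Bosch's algorithm): darken only the visor cells where
# the line from the sun through the driver's eyes crosses the visor plane.
# Simple car frame: x right, y up, z forward, units in meters.

VISOR_Z = 0.5            # assumed distance from the eyes to the visor plane
CELL_SIZE = 0.02         # assumed LCD cell pitch (2 cm)
GRID_W, GRID_H = 30, 10  # assumed cell grid spanning the visor

def shade_mask(eye_pos, sun_dir, spot_radius=0.03):
    """Return a boolean grid: True means darken this cell."""
    sun_dir = np.asarray(sun_dir, dtype=float)
    sun_dir /= np.linalg.norm(sun_dir)
    eye_pos = np.asarray(eye_pos, dtype=float)

    # Intersect the eye-to-sun ray with the visor plane in front of the eyes.
    t = VISOR_Z / sun_dir[2]
    hit = eye_pos + t * sun_dir  # point where glare would pass through the visor

    mask = np.zeros((GRID_H, GRID_W), dtype=bool)
    for row in range(GRID_H):
        for col in range(GRID_W):
            # Center of this cell, with the grid centered on the eye axis.
            cx = eye_pos[0] + (col - GRID_W / 2 + 0.5) * CELL_SIZE
            cy = eye_pos[1] + (row - GRID_H / 2 + 0.5) * CELL_SIZE
            if (cx - hit[0]) ** 2 + (cy - hit[1]) ** 2 <= spot_radius ** 2:
                mask[row, col] = True
    return mask

# Example: sun slightly up and to the left of straight ahead.
mask = shade_mask(eye_pos=[0.0, 1.2, 0.0], sun_dir=[-0.2, 0.1, 1.0])
print(f"Darkened {mask.sum()} of {mask.size} cells")
```

In a real system the eye position and sun direction would be re-estimated every frame, so the darkened patch follows the driver's head as it moves.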

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

Top 10 Electronic-Enabled Tech Highlights From CES 2020

Not all cool tech involved robots and autonomous cars. Here’s a list of the other electronic tech featured at the show.

  • This year’s Consumer Electronics Show (CES) 2020 featured a range of marvels enabled by electronic technologies, covering application areas from smart cities, AI edge intelligence, body haptics, security systems, real-time accident reports, and uncooled thermal cameras to wearables and more.

    Here are the top 10 products and technologies that piqued the interest of the Design News editorial staff.

  • Smart Cities

    Why do major Japanese car manufacturers like to build smart homes and now cities? Several years ago, Honda built a zero-net-energy smart home in partnership with UC Davis. At this year’s CES, Toyota announced it will build a smart city to test its AI, robots, and self-driving cars. Toyota’s Woven City will be built at the foothills of Mt. Fuji in Japan. The city will be the world’s first urban incubator dedicated to the advancement of all aspects of mobility, claims Toyota.

    The project is a collaboration between the Japanese carmaker and the Danish architecture firm Bjarke Ingels Group (BIG). Houses in Woven City will have in-home robotics to help with the more mundane tasks of daily life. The homes will have full-connectivity, which will be needed for the sensor-based AI to automate many household chores, like restocking the refrigerator and taking out the trash. Power storage units and water purification systems will be hidden beneath the ground.

  • Intelligence At The Edge

    Blaize is a computing company that optimizes AI at scale wherever data is collected and processed from the edge. The company enables a range of existing and new AI use cases in the automotive, smart vision, and enterprise computing segments. The company claims that developers can create new classes of products to bring the benefits of AI and machine learning to broad markets.

    The company has developed a fully programmable GSP architecture that utilizes task-level parallelism and streaming execution processing to deliver very low energy consumption, high performance, and scalability. Blaize claims that, in comparison, existing GPUs and FPGAs carry a much higher energy cost, while CPUs cost more and scale poorly, and all are subject to excessive latency due to their sequential execution processing architectures.

  • Full-Body Haptics Suit

    Haptics are all about the sense of touch. Now you can immerse your entire body – or at least 70 tactile points mainly around your torso – in the world of artificial experiences. The bHaptics TactSuit provides an audio-to-haptic feature that converts sound into haptic feedback felt in real time around your torso. For example, when a bomb explodes or you hear footsteps during a PC/VR game, you’ll feel the experience from the right direction. You’ll even be able to feel samurai cuts and friendly hugs.

  • Security Comes In Many Forms

    There are many ways to protect your PC data and applications, from hardware-encrypted portable storage devices, backup solutions, file repair software, and data recovery to digital forensics services. SecureData provides both products and services in these areas. At CES, the company demonstrated a secure USB drive that it claimed was the only hardware-encrypted flash drive in the world with keypad and Bluetooth authentication.

  • Wireless Six-Degrees Of Freedom (6DOF)

    Atraxa’s system tracks 6DOF motion without the need for optical cameras or infrared markers to be placed around the room or mounted externally to the XR headset or controller. No line of sight – or wires – is required between the headset and controllers. Unhindered by wires or line-of-sight constraints, users can move freely in large spaces, and can even move from room to room without any room mapping or controller orienting (or reorienting). Tracking starts immediately and lasts without interruption.

    The tech combines electromagnetic (EM) and inertial technologies into a single sensor-fusion tracking platform. The IMU (inertial measurement unit) returns acceleration and angular velocity data. The EM tracker delivers true position and orientation data; it also establishes the tracking volume and local coordinate system. (A generic sensor-fusion sketch along these lines appears after this list.) Atraxa consists of two main components: a tracker module and a receiver module. The tracker module houses the IMU and an EM transmitter coil that generates the magnetic field (i.e., the tracking volume). The tracker modules are embedded into the handheld controllers (or other peripherals).

  • Real-Time Accident Report

    Sooner or later, all of us get into an automotive accident. When that occurs, wouldn’t it be great to have a record of what happened? Through the use of embedded acceleration sensors, MDGo generates a real-time report in the case of a car crash, detailing each occupant’s injuries by body region. The company’s technology enables accurate delivery of needed services and support by providing optimal medical care in the case of an emergency and supporting the claims process.

  • Smart Factory

    Could a factory think for itself or autonomously design a better car or aircraft? Can it eliminate waste? All of these questions fit into the realm of manufacturing intelligence. One company with experience in this area is Hexagon, which claims that its technologies are used to produce 85% of smartphones, 75% of cars, and 90% of aircraft.

    The company’s Smart Factory approach aims for fewer inputs, zero waste, and high quality. All of this is achieved through sensor, software, and autonomous solutions that incorporate data feedback to boost efficiency, productivity, and quality across industrial and manufacturing operations.

  • A Cool “Uncooled” Methane Gas Detector

    The FLIR GF77 Gas Find IR is the company’s first uncooled thermal camera designed for detecting methane. This handheld camera offers inspection professionals the features they need to find potentially dangerous, invisible methane leaks at natural gas power plants, renewable energy production facilities, industrial plants, and other locations along a natural gas supply chain. The gas detector provides methane gas detection capability at roughly half the price of cooled gas inspection thermal cameras, to empower the oil and gas industry to reduce emissions and ensure a safer work environment.

  • IoT Arduino Adds LoRaWAN Connectivity

    You can now connect your sensors and actuators over long distances via the LoRa wireless protocol or through LoRaWAN networks. The Arduino MKR WAN 1310 board provides a practical and cost-effective solution for adding LoRa connectivity to projects requiring low power. This open-source board can be connected to the Arduino IoT Cloud, your own LoRa network using the Arduino LoRa PRO Gateway, existing LoRaWAN infrastructure like The Things Network, or even other boards using the direct connectivity mode.

  • Wearables, Ingestibles, Invisibles

    One of the keys to a healthy life is nutrition. But what exactly constitutes ‘healthy’ food for a specific person? To answer that question, you need to measure and analyze the processes inside the complex human digestive system. Imec is working on prototype technology that is up to that task. It’s called ingestible sensors.

    The company also develops wearables for medical and consumer applications that enable reliable, continuous, comfortable, and long-term health monitoring & management. This includes high-accuracy & low-power biomedical sensing technologies sometimes embedded into fabrics.
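The Atraxa description earlier in this list is, at its core, a classic sensor-fusion problem: a fast IMU that drifts, corrected by a slower absolute EM fix. The alpha-beta style sketch below shows that pattern on a single axis in Python. It is purely illustrative and is not Atraxa's proprietary algorithm; the gains, rates, and simulated data are assumptions.

```python
import numpy as np

# Generic alpha-beta (g-h) tracking sketch of EM + IMU fusion on one axis.
# Illustrative only; not Atraxa's proprietary sensor-fusion algorithm.
# The IMU integrates acceleration at a high rate (fast but drifts); the EM
# tracker's absolute position arrives less often and corrects that drift.

def fuse(imu_accel, em_pos, dt=0.01, g=0.6, h=0.1):
    """imu_accel: acceleration per IMU tick [m/s^2].
    em_pos: absolute EM position per tick, or None when no EM sample arrived.
    g, h: position and velocity correction gains."""
    pos, vel = 0.0, 0.0
    since_fix = 0.0          # time elapsed since the previous EM fix
    history = []
    for a, z in zip(imu_accel, em_pos):
        # Predict with the IMU (dead reckoning).
        vel += a * dt
        pos += vel * dt
        since_fix += dt
        # Correct with the EM tracker when a sample is available.
        if z is not None:
            residual = z - pos
            pos += g * residual
            vel += h * residual / since_fix
            since_fix = 0.0
        history.append(pos)
    return history

# Simulated controller holding still at x = 1.0 m with a biased accelerometer;
# the EM tracker reports every 10th IMU tick.
n = 500
accel = np.full(n, 0.05)
em = [1.0 if i % 10 == 9 else None for i in range(n)]
print(f"final fused position: {fuse(accel, em)[-1]:.3f} m (truth: 1.000 m)")
```

The same structure generalizes to full 6DOF by tracking position and orientation vectors instead of a single scalar.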

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

Manufacturing 2020: 5G, AI, IoT, and Cloud-Based Systems Will Take Over

Technology vendors expect that 2020 will be a big year for manufacturing plants to onboard digital systems. But will it happen? While digital systems – IoT, machine learning, 5G, cloud-based systems – have proven themselves as worthwhile investments, they may not get deployed widely.

For insight on what to expect in 2020, we turned to Rajeev Gollarahalli, chief business officer at 42Q, a cloud-based MES software division of Sanmina. Gollarahalli sees a manufacturing world that will take solid steps toward digitalization in 2020, but those steps are likely to be incremental rather than revolutionary.

We’re going to see manufacturers make progress on digitizing their factories in 2020, but it won’t happen as quickly as vendors would like. (Image source: digital.gov)

5G On The Plant Floor

Design News: Will 5G increase the pace of digital factory transformation, and where it will have the most impact?

Rajeev Gollarahalli: We’ve started to see a little of 5G popping up in the factory, but it’s limited. It’s mostly still in the proof-of-concept stage. It will be some time before we see more, probably around the end of 2020.

DN: Will 5G increase the pace of digital transformation?

Gollarahalli: Undoubtedly. Yet one limit is that in order to make accurate decisions, you need to be able to ingest high volumes of data in real-time. That’s been one of the limitations in infrastructure. When you can use 5G across the factory, you’ll have considerable infrastructure. That challenge with data is solved by 5G.

DN: What still needs to be done in order to deploy 5G?

Gollarahalli: You have the 5G service providers and 5G equipment manufacturers working together. Both are developing capabilities in their own silos. What has not yet matured is putting these together, whether it’s in health, discrete manufacturing, telecom, or aerospace. The use cases haven’t matured, but we are seeing more use cases piling up.

DN: What could spur equipment vendors and telecom to work together?

Gollarahalli: I think we’ll see an industry consortium. That doesn’t exist now. There are partners that are starting to talk. Verizon is working with network providers. You’re going to see two or three different groups emerge and come together to do standards. With the advent of 5G, and the emergence of IIoT, they are all going to come together. One of the limitations is the volume. We generate about a terabyte of data with IoT. The timing will be perfect for getting 5G utilized for IoT and get it widely adopted.

The Emerging Workforce Skilled In Digital Systems

DN: What changes in the plant workforce can we expect in the coming year?

Gollarahalli: The workforce will need a completely different set of skills to drive automation on the factory floor, and industry has to learn how to attract those workers. People are saying manufacturing is contracting, but I’m not seeing it. Manufacturing seems to be stable. As for skills for the factory of the future, we need to be re-tooling our employees. The employees today don’t have the technical skills, but they have the domain skills. We need to get them the technical skills they need.

DN: Will the move to a workforce with greater technology skills be disruptive?

Gollarahalli: You’re not going to see mass layoffs, but you’re going to see retooling the skills of the employees. We can’t get them trained at the speed that technology is increasing. We’re going to see more employees getting ready in trade schools and with degrees. What you’re seeing is a convergence of data skills with AI and domain skills. An ideal skillset is someone who understands manufacturing and knows the data. For several years kids were moving away from STEM, wanting to learn the sexier stuff. But I think STEM is coming back.

Cloud-Based Systems For Security

DN: Will cloud-based systems be the go-to for manufacturing security versus on-premises security?

Gollarahalli: Five years ago, when I talked about cloud with customers, they asked whether it was real-time. That was when the infrastructure was not as secure. I have a network at home. That was unheard of 10 years ago in factories. Now that the infrastructure issue has been solved, the next step is security. I have always countered that you can’t secure data on premises as well as you can in a cloud. A lot of money has poured into cloud-based security. No single company can match that. It’s almost impossible to do it on premises.

AI, Machine Learning and Big Data Analytics

DN: Will we see advances in AI, machine learning, and analytics?

Gollarahalli: We’re seeing AI and ML (machine learning) in some areas. We’re seeing it implemented in some areas at 42Q. Most use cases are around asset management and quality. It’s used to predict the quality of a product and to take preventive actions in asset maintenance. AI and ML are also popping up in supply chain management. 2020 will be the year of AI and ML. It’s getting embedded into medical products. You’ll see it pop up everywhere, showing up on the factory floor as well as in our consumer products.

DN: Is AI and machine learning going mainstream yet or is it mostly getting deployed by large manufacturers who are typically the early users?

Gollarahalli: You’re going to see it move down the supply chain to tier 2 and tier 3 suppliers. I don’t think it’s just for the elite any more. It’s getting adopted quickly, but it is not happening as quickly as I thought it would.

The Role Of IoT In Manufacturing

DN: Will we see growth in IoT’s role in measuring and providing closed loop controls?

Gollarahalli: We’re going to see it in manufacturing, regulating the humidity in the room or the temperature on the floor. They need closed loop from IoT. They’re measuring with IoT, but the closed loop has not been adopted as quickly. We don’t have the right standards. How do you do closed loop with a system that is throwing off data in milliseconds? You must be able to use the IoT and those algorithms. If you can make them more efficient for closed-loop control, you’ll see a lot more of it going forward.
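To make the closed-loop idea concrete, here is a minimal sketch of a control loop driven by an IoT measurement stream: a simple proportional-integral controller holds room humidity at a setpoint by adjusting a humidifier. All names, gains, and the one-line room model are hypothetical; a production system would run behind an MES/SCADA layer with proper actuator interfaces and timing.

```python
# Minimal sketch of closing the loop on an IoT measurement stream.
# Hypothetical names and constants throughout; illustrative only.

class PIController:
    """Simple proportional-integral controller."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

def run(setpoint=45.0, hours=2, dt=1.0):
    """Regulate room humidity (%RH) from a simulated sensor stream."""
    controller = PIController(kp=0.8, ki=0.05, dt=dt)
    humidity = 30.0                            # initial IoT sensor reading
    for minute in range(int(hours * 60)):
        output = controller.update(setpoint, humidity)
        output = max(0.0, min(output, 10.0))   # actuator limits (humidifier %)
        # Crude room model: humidifier raises RH, air handling pulls it down.
        humidity += 0.2 * output - 0.05 * (humidity - 25.0)
        if minute % 30 == 0:
            print(f"t={minute:4d} min  humidity={humidity:5.1f} %RH  actuator={output:4.1f}")

if __name__ == "__main__":
    run()
```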

Rob Spiegel has covered automation and control for 19 years, 17 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years, he was owner and publisher of the food magazine Chile Pepper.


Nanowire Network Could Advance Design of AI

An international group of researchers has made a breakthrough in the development of artificial intelligence (AI) with the design of a nanowire network capable of complex brain-like functions.

The team of scientists at Japan’s National Institute of Materials Science (NIMS) created what is called a “neuromorphic network” by integrating numerous silver nanowires covered with a polymer insulating layer about 1 nanometer in thickness. The network functions because a junction between two nanowires forms a variable resistive element, or a synaptic element, that behaves like a neuronal synapse.

Diagram A shows a micrograph of the neuromorphic network fabricated by a research team at Japan’s National Institute of Materials Science (NIMS). The network contains numerous junctions between nanowires, which operate as synaptic elements. Diagram B shows a human brain and one of its neuronal networks. (Image source: NIMS)

The network, then, comprises a number of these interacting synaptic elements. When a voltage is applied, the network is triggered to find optimal current pathways – that is, the most electrically efficient pathways.

Using this network, the team was able to generate electrical characteristics similar to those associated with higher order brain functions unique to humans, such as memorization, learning, forgetting, becoming alert, and returning to calm.

The network is part of a larger scientific aim to improve artificial intelligence through what’s called neuromorphic computing, said Tomonobu Nakayama, research leader and deputy director of the International Center for Materials Nanoarchitectonics. “Neuromorphic computing has been attracting much interest,” he told Design News, citing recent developments in the area such as neural chips like Intel’s Nervana and IBM’s TrueNorth. “These challenges are based on emulating biological signals in the brain using electric circuits, and beautifully simulate neural network modeling developed so far.”

Creating More Creative AI

The NIMS team has seen limitations in some of the neuromorphic computing work being conducted, however, with “fundamental features of the brain missing,” Nakayama told us. “The problem here is all AI technology at the moment needs massive sets of data connected to computer systems, even for the neural chip-based systems,” he explained. “The complexity of wiring is one of the issues to be solved, since the brain allows a single nerve cell to communicate with 10,000 other cells via 10,000 synapses, while present semiconductor technology cannot realize such complex wiring using lithographic techniques.”

Also, the human brain can engage in self-organization and the creation of functions, “so we have associative thinking, creativity, intuition, and so on,” Nakayama said, something that is currently lacking in AI technology.

While the network developed by Nakayama’s team does not solve all of these problems, it does provide a starting point for developing more complex AI, and it introduces a “design-less” approach to neuromorphic nanowire networks that are “naturally emerging without software programs,” he said. “So now we hope our work might be able to contribute to construct better hypothesis on the origin of our ‘intelligence’ and ‘creativity’ and to find a way to open a new type of information processors.”

Brain-Like Technology

Researchers published a paper on their work in the journal Nature Scientific Reports.

The team is currently developing a brain-like memory device using the neuromorphic network material that they hope will operate using fundamentally different principles than those used in current computers. While computers are currently designed to spend as much time and electricity as necessary in pursuit of the most optimal solutions, the new memory device is intended to make a quick decision within particular limits, even though the solution generated may not be the absolute optimum.

“For example, after we store the shapes of characters in a network, then by providing only a part of the shape of a character, the device would suggest possible ones without any programs,” Nakayama explained. “It’s a sort of associative or cognitive information processor; however, since it does not require any programs or computers, it would be an energy-efficient processor.”

The network also makes it possible to develop energy-efficient autonomous systems that dynamically make decisions in response to environmental changes, something AI currently does not do. Ultimately, the researchers hope their work provides a deeper understanding of how the brain works to inform future technology and applications.

Elizabeth Montalbano is a freelance writer who has written about technology and culture for more than 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco and New York City. In her free time she enjoys surfing, traveling, music, yoga and cooking. She currently resides in a village on the southwest coast of Portugal.


The 12 Best Innovations of CES 2020

Forget new TVs and smartphones. These are the real game changers introduced at CES 2020.

  • Now that the smoke has cleared from CES 2020, we can take a step back and see which technologies were the real innovations of 2020. Let’s be honest, CES can be a black hole of vaporware, false promises, and concepts intended to be just that.

    We’ve compiled a list of our favorite technologies introduced at CES 2020 – innovations that we’re sure will have a lasting impact in 2020 and beyond.

  • AerNos AerSIP Gas Sensor

    The AerSIP from AerNos is a 5 x 5-mm, multi-gas sensing module that combines nanotechnology and machine learning algorithms to monitor indoor and outdoor air quality. The system-in-package (SIP) is an embedded plug-and-play solution that can be integrated into wearables, mobile devices, and other IoT devices and is capable of detecting hazardous gases and other dangers at parts-per-billion levels.

    (Image source: AerNos/CES)

  • AMD Ryzen 4000 Series Mobile Processor

    AMD’s Ryzen 4000 could be a literal game changer for high-end laptop users – particularly gamers and designers. AMD says its new Ryzen 4000 series is the world’s first 7-nanometer laptop processor. Designed for ultra-thin laptops, the Ryzen 4000 series features up to 8 cores and 16 threads and configurable 15W thermal design power. AMD pledges the Ryzen 4000 series offers up to four percent greater single-thread performance and up to 90 percent faster multithreaded performance than its competitors, as well as up to 18 percent faster graphics performance over competing chips.

    (Image source: AMD)

  • Atmosic Technologies M3 Battery-Free Bluetooth 5 SoC

    Atmosic says its M3 Battery-Free Bluetooth 5 SoC uses so little power that it can even eliminate the need for battery power entirely in devices such as wearables, keyboards, mice, asset trackers, beacons, and remotes. The M3 integrates Atmosic’s Lowest Power Radio, On-demand Wake-Up, and Managed Energy Harvesting technologies to deliver what the company says is 10 to 100 times lower power than other SoCs, while still complying with Bluetooth standards. The M3’s radio uses two “ears” – one for listening in a low-power state to perceive incoming commands, and another that only wakes when alerted. The SoC uses energy harvesting technology to gather power from radio frequency, photovoltaic, thermal, and motion.

    (Image source: Atmosic)

  • Bot3 Zen-P VSLAM Deep Learning Module

    Bot3’s Zen-P VSLAM Deep Learning module integrates visual simultaneous localization and mapping (VSLAM) technology (a version of the same technology used in autonomous vehicles) into mobile robots ranging from industrial machines to smart home products. Bot3’s image processing algorithm, Pascal, allows for autonomous navigation without tracks, as well as indoor mapping and positioning (for instances such as warehouse applications).

    (Image source: Bot3)

  • BrainCo BrainRobotics Prosthetic Hand

    Many companies have been developing mind-controlled prosthetics for amputees and other disabled patients. What separates the prosthetic hand developed by BrainRobotics is the integration of AI technology. The BrainRobotics hand utilizes machine learning to allow the hand and its user to learn from each other over time – leading to more lifelike movements. The company is aiming to provide accurate and reliable prosthetics at an affordable price for all patients. BrainRobotics is a subsidiary of BrainCo, a software developer focused on brainwave measuring and monitoring.

    (Image source: BrainCo/BrainRobotics)

  • Fluent.ai MultiWake Word and Voice Control Engine

    Fluent.ai is a technology company focused on AI for voice interface and speech recognition. The company’s Multi-Wake Word and Voice Control Engine is an edge-based, noise robust, and multilingual speech technology that consumes minimal power and storage, allowing it to be embedded in small devices. The solution is Cortex M4-based and supports four separate wake words and 100 multilingual commands, according to Fluent.ai.

    Fluent.ai has recently partnered with semiconductor designer Ambiq Micro to implement Fluent.ai’s software solutions into Ambiq’s ultra-small footprint, low-power microcontrollers. Ambiq’s MCU supports frequencies up to 96 MHz, and Fluent.ai’s solution requires only 16 MHz from the MCU. The new partnership means Fluent.ai and Ambiq will be releasing MCUs for OEMs looking for an easy way to add speech recognition and voice command functionality to their smart home devices and other products.

    (Image source: Fluent.ai / CES)

  • Intel Tiger Lake Chip

    When Intel announces a new chip, the whole world takes notice. The chipmaking giant is launching its latest chip for consumers this year. Dubbed Tiger Lake, the new chip is said to be optimized for AI performance, graphics, and USB 3 throughput. Rather than desktops, the new chips will be focused on mobile devices such as ultra-thin laptops and tablets. The first products featuring Tiger Lake are expected to ship later in 2020.

    (Image source: Intel)

  • Monster MultiLink Bluetooth Technology

    Sometimes it’s the most straightforward ideas that can make the biggest difference. Most of us love our Bluetooth wireless headphones and earbuds. The problem is they don’t create a sharable experience. What if you want to show your friend the video you’re watching without disturbing the people around you? Monster has debuted a new technology called Music Share that uses MultiLink technology to send Bluetooth audio to multiple devices in sync. The technology expands how Bluetooth headphones can be used and opens up new use cases ranging from air travel to fitness classes, as well as new avenues for social interaction.

    (Image source: Bluetooth SIG)

  • Murata Coral Accelerator Module

    Working in partnership with Coral and Google, Murata Electronics has developed what it is calling the world’s smallest AI module. The Coral Accelerator Module packages Google’s Edge TPU ASIC into a miniaturized footprint to enable developers to embed edge-based AI into their products and devices. The new module forms an integral part of Coral’s integrated AI platform, which also includes a toolkit of software tools and pre-compiled AI models. (A minimal Edge TPU inference sketch appears after this list.)

    (Image source: Murata Electronics Americas)

  • Pollen Robotics Reachy Open-Source Robot

    Reachy is a robot developed by Pollen Robotics, in collaboration with the INCIA Neuroscience Institute in France, that is fully open source. The robot, which can be programmed using Python, is modular – employing a variety of 3D-printed grippers – and comes with prepackaged AI algorithms that allow developers to customize it for a variety of applications ranging from customer service to assisting the elderly or disabled.

    Read more about Reachy, and the rise of open-source robotics, here.

    (Image source: Pollen Robotics)

  • VRgineers 8K XTAL Headset

    VRgineers, a maker of premium VR headsets for enterprise applications in industries ranging from automotive to defense and military, has released a major upgrade to its flagship XTAL headset. The latest version of XTAL features 8K resolution (4K per eye), improved lenses with a 180-degree field-of-view, and a new add-on module for augmented reality and mixed reality functionality. The headset also still includes eye tracking as well as integrated Leap Motion sensors to enable controller-free navigation and interactions.

    (Image source: VRgineers)

  • zGlue ChipBuilder

    zGlue is a software company that develops tools for chipmakers and designers. Its latest offering, ChipBuilder 3.0, is a design tool for building custom silicon chips and accelerating time to market. The software suite features an expansive library of chipsets and allows engineers to capture schematics, route and verify designs, and download netlists. The tool lets engineers create realistic 3D models, code their own chips, and even place orders for physical chips via zGlue’s Shuttle Program.

    (Image source: zGlue / CES)
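For context on what embedding edge-based AI with a module like the Coral Accelerator looks like in practice, the sketch below runs a compiled TensorFlow Lite model on an Edge TPU from Python using the tflite_runtime interpreter with the Edge TPU delegate. The model filename and input size are placeholders; this is a generic Edge TPU inference pattern, not Murata-specific code.

```python
# Sketch of running a compiled .tflite model on an Edge TPU (such as the one in
# Murata's Coral module) from Python. Assumes the Edge TPU runtime is installed;
# the model path is a placeholder, not something shipped with the module.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

MODEL = "model_edgetpu.tflite"   # a model compiled for the Edge TPU

def classify(image_array):
    """image_array: uint8 array already resized to the model's input shape."""
    interpreter = Interpreter(
        model_path=MODEL,
        experimental_delegates=[load_delegate("libedgetpu.so.1")])
    interpreter.allocate_tensors()

    input_detail = interpreter.get_input_details()[0]
    interpreter.set_tensor(input_detail["index"], image_array[np.newaxis, ...])
    interpreter.invoke()                      # inference runs on the Edge TPU

    output_detail = interpreter.get_output_details()[0]
    scores = interpreter.get_tensor(output_detail["index"])[0]
    return int(np.argmax(scores)), scores.max()

if __name__ == "__main__":
    # Dummy 224x224 RGB frame just to exercise the pipeline.
    frame = np.zeros((224, 224, 3), dtype=np.uint8)
    label, score = classify(frame)
    print(f"top class index: {label}, score: {score}")
```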

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

Want to Build an Open Source Hardware and Software Robot?

The 2020 Consumer Electronics Show (CES) is full of engineering marvels. Many of these marvels are manifested as advances in robots. For example, consider UBTECH's highlights at this year's show. The company's intelligent humanoid service robot named "Walker" won a Best of CES 2019 award and will be back with additional features at the 2020 show. According to the company, Walker will be faster and demonstrate more human-like walking, as well as yoga poses that show its huge improvement in motion control. The robot will also demonstrate the ability to push a cart, draw pictures, and write characters, plus improved static balance with full-body compliance control.

There’s another robot system that the technical community might find equally interesting, if a bit less flashy. France’s Pollen Robotics is displaying its “Reachy” robot at CES 2020. In collaboration with the INCIA Neuroscience Institute in France, the company has developed a 3D-printed robot arm that is 100% open source. Reachy is billed as an expressive humanoid service robot specializing in interacting with people and manipulating objects. The robot is built with prepackaged AI and modular robotics that should easily accommodate many real-world applications, such as extensions for disabled people (human augmentation), helping out at events (like CES), assisting small businesses, and even serving as a receptionist.

According to the company, Reachy can be easily programmed in Python and offers ready-to-use operational environments for game play, serving coffee, making music, handing out specific objects, and more. The robot is also fully customizable with open hardware, software and data!
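As a rough illustration of what "easily programmed in Python" can look like for an arm like Reachy, here is a short sketch of scripting a joint-level gesture. The Arm class and joint names are hypothetical placeholders standing in for the robot's real client library; consult Pollen Robotics' GitHub for the actual API once it is released.

```python
# Hypothetical sketch of scripting a Python-programmable arm like Reachy.
# The class and method names below are illustrative placeholders, not the
# actual Pollen Robotics API.
import time

class Arm:
    """Stand-in for a robot-arm client exposing named joints (degrees)."""
    def __init__(self, joints):
        self.positions = {name: 0.0 for name in joints}

    def goto(self, targets, duration=1.0):
        # A real client would interpolate on the robot; we just store and wait.
        self.positions.update(targets)
        time.sleep(duration)
        print(f"moved to {self.positions}")

def wave(arm, times=2):
    """Simple greeting gesture: raise the arm, wave the wrist, return home."""
    arm.goto({"shoulder_pitch": -60.0, "elbow_pitch": -45.0}, duration=1.5)
    for _ in range(times):
        arm.goto({"wrist_roll": 25.0}, duration=0.5)
        arm.goto({"wrist_roll": -25.0}, duration=0.5)
    arm.goto({"shoulder_pitch": 0.0, "elbow_pitch": 0.0, "wrist_roll": 0.0})

if __name__ == "__main__":
    right_arm = Arm(["shoulder_pitch", "elbow_pitch", "wrist_roll"])
    wave(right_arm)
```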

The company invites developers to join its open source community and participate on GitHub, although the company explains that Reachy is currently still under development. The open source hardware, software, and data won’t be released until the robot is ready, toward the end of Q1 2020.

(Image source: Pollen Robotics / Reachy)

But what does it really mean to say a company or platform supports open hardware and software?

Mobile Robot Companies You Need to Know

Here’s a broad look at the commercial mobile robot market. Most of these robots are designed specifically for manufacturing and warehouse tasks, while others provide specialized tasks, such as cleaning or carrying medical care items.

  • McKinsey describes mobile robots as autonomous guided vehicles (AGVs) and autonomous mobile robots (AMRs). AGVs and AMRs are not fixedly installed but mobile. Navigation is either onboard (camera- or laser-based) or external (path-based, using magnetic tape, wire, or rails on the ground). In application, mobile robots are used for logistics and delivery as well as for moving items, such as boxes, pallets, or tools, between machinery, transfer points, and storage areas in industrial settings.

    According to IDTechEx, automated guided carts and vehicles (AGCs and AGVs) have been in use for a long time. They are reliable and trusted to handle all manner of payloads. Their installation is, however, time-consuming, and their workflow can be difficult to adapt.

    The landscape of mobile robots was set on fire when Amazon acquired Kiva Systems in 2012. The event set off a wave of start-ups, and Amazon has continued to develop mobile robot technology and acquire mobile robot companies. IDTechEx forecasts that between 2020 and 2030, more than 1 million mobile robots will be sold.

  • Clearpath Robotics (Otto Motors)

    The OTTO 100 is a small, powerful self-driving vehicle designed to move boxes, carts, bins, and other human-scale payloads through dynamic environments. OTTO navigates spaces just like a person does. It maintains a map of the environment in its memory and uses visual reference points to always know its position. No guides, infrastructure, or predefined paths required. (Image source: Otto Motors)

  • 6 River Systems

    The Chuck robot from 6 River Systems is built from the same technology and sensors as autonomous vehicles. Chuck uses machine learning and artificial intelligence to navigate. The robot leads users through their work zones to help them minimize walking, stay on task, and work more efficiently. Chuck integrates with warehouse management systems so it can be used in all put-away, picking, counting, replenishment, and sorting tasks. (Image source: 6 River Systems)

  • NextShift Robotics

    The TM-100 from NextShift Robotics is designed to recognize humans and other obstacles in its path. It is designed to stop and give way to people and is smart enough to find a way around obstacles. The TM-100 is built to handle normal warehouse conditions: obstacles, dirt, dust, and temperature extremes. With its rugged industrial design, it can navigate uneven floors, bumps, and dropped items. (Image source: NextShift Robotics)

  • GreyOrange

    The Butler from GreyOrange helps users handle the volume and mix of orders common in warehouses. For many companies, flexible automation is the only viable solution, and the autonomous mobile robots from GreyOrange are designed to meet these needs. (Image source: GreyOrange)

  • Swisslog

    Swisslog uses customized KUKA robots to provide traditional high-bay warehouse robot-based material handling solutions. Swisslog offers a range of traditional and out-of-the-box technologies for automated warehousing. The company offers modular, flexible, software-driven material handling technologies, and its warehouse solutions are customized for optimal flow of goods at a low cost-per-pick. (Image source: Swisslog)

  • Seegrid

    The Seegrid Smart Platform combines self-driving vehicles and fleet management software for a connected materials handling solution. On a Seegrid AGV, a series of stereo cameras works in unison to continuously capture and build a three-dimensional, computer-generated view of the work environment. That means that when something in the environment changes, Seegrid AGVs compute thousands of up-to-the-moment reference points to continue navigating successfully, uninterrupted. (Image source: Seegrid)

  • Diligent Robotics

    The Moxi from Diligent Robotics is a hospital robot assistant that helps clinical staff with non-patient-facing tasks like gathering supplies and bringing them to patient rooms, delivering lab samples, fetching items from central supply, and removing soiled linen bags. Automation helps hospitals maintain consistent care workflows and gives staff more time for patient care. (Image source: Diligent Robotics)

  • Locus Robotics

    LocusBots are designed to increase productivity. The Locus solution works with any type of tote, box, bin, or container needed. The robots can use multiple tote types at the same time to meet changing needs, products, or order profiles. The LocusBots are designed to make it easy to consistently increase units-per-hour and lines-per-hour rates, fulfill more orders, and scale on demand, compared to traditional cart or motorized cart systems. (Image source: Locus Robotics)

  • MiR (Mobile Industrial Robots Corp.)

    MiR develops and markets a line of autonomous mobile robots that manage internal logistics. Founded and run by Danish robotics industry professionals, MiR is headquartered in Odense, Denmark. In April 2018, it was acquired by the American company Teradyne. (Image source: MiR)

  • AutoGuide Mobile Robots

    AutoGuide Mobile Robots designs, develops and manufactures high-payload industrial autonomous mobile robots for assembly, manufacturing, warehousing and distribution operations. AutoGuide’s Max N10 modular mobile robot platform is a natural feature guidance platform with a number of application-specific configurations available, including tugger, conveyor deck, car mover and pallet stacker. AutoGuide was acquired by Teradyne in late 2019. (Image source: AutoGuide Mobile Robots)

  • KUKA

    KUKA mobile robots navigate autonomously, act in swarms and offer flexibility for industrial manufacturing. This is especially important for internal logistics. KUKA offers a mobility portfolio, from manually movable to autonomously navigating solutions. The fully autonomous variants work without any induction loops, floor markings, or magnets.  (Image source: KUKA)

  • Omron

    Omron mobile robots are fundamentally built to serve human workers. Designed to meet industry requirements, Omron mobile robots interact with people to promote a collaborative, safe working environment. Safety lasers and sonar allow the robots to detect obstacles in their path and prevent collisions. (Image source: Omron)

  • iRobot Corp.

    iRobot is the company that produces the Roomba. Roomba robots use Dual Multi-Surface Brushes to help thoroughly clean your floors. One brush loosens and agitates dirt, and the other moves in the opposite direction to extract and pull it in. (Image source: iRobot)

  • IAM Robotics

    IAM Robotics designs robots that act autonomously without the help of humans. This requires tight integration of acute perception, autonomous navigation, manipulation, and artificial intelligence. The company designs operations that are optimized for both humans and robots. (Image source: IAM Robotics)

  • Fetch Robotics

    Fetch Robotics provides a cloud-driven AMR solution that addresses material handling and data collection for warehousing and intralogistics environments. The Fetch Robotics AMRs are designed to reduce costs and improve throughput, efficiency, and productivity, while working alongside people. (Image source: Fetch Robotics)

  • Mobile robots, Boston Dynamics, Aethon Tug

    Aethon (TUG)

    Aethon’s TUGs can generate digital maps, routes, and delivery points, and the charging stations simply plug into the wall. AGVs often require fixed guidance infrastructure such as tracks, wires, tape, or reflectors to navigate. AMRs like the TUG instead rely on camera- and laser-based navigational systems that allow safe operation in indoor environments. (Image source: Aethon)

  • Mobile robots, Boston Dynamics, Aethon Tug, Clearpath Robotics, Otto, 6 River Systems, Chuck, Nextshift Robotics, GreyOrange, Balyo Robotics

    Balyo Robotics

    Balyo robotic trucks are developed in conjunction with the material handling company Hyster-Yale Group. Balyo’s range of robots is designed to autonomously perform tasks such as load transfer to the floor, pick-up and placement of pallets on machines (conveyors, wrapping machines, etc.), medium- and full-height storage, logistics trains, barcode scanning, and storage in very narrow aisles. (Image source: Balyo Robotics)

  • Mobile robots, Boston Dynamics, Aethon Tug, Clearpath Robotics, Otto, 6 River Systems, Chuck, Nextshift Robotics, GreyOrange, Waypoint Robotics

    Waypoint Robotics

    Waypoint Robotics offers a lineup of industrial-strength autonomous mobile robots (AMRs) designed to be set up and operated by the workforce already on the job today. The company can also build customized mobile robots for a wide range of applications. (Image source: Waypoint Robotics)

  • Mobile robots, Boston Dynamics, Aethon Tug, Clearpath Robotics, Otto, 6 River Systems, Chuck, Nextshift Robotics, GreyOrange, Canvas Technology

    Canvas Technology

    Canvas Technology was acquired by Amazon last year, one of a number of mobile robot acquisitions Amazon has made, beginning with the company’s acquisition of Kiva Systems in 2012. Kiva was rebranded as Amazon Robotics. The online retailer had rolled out more than 100,000 robots internally. (Image source: Canvas Technology)

  • Mobile robots, Boston Dynamics, Aethon Tug, Clearpath Robotics, Otto, 6 River Systems, Chuck, Nextshift Robotics, Material Handling Systems, MHS

    Material Handling Systems (MHS)

    MHS provides operational expertise and systems integration experience to put mobile robots to work as effective parts of complete systems. The company’s technology is designed to enable robots to build their own maps of the operating environment and use onboard sensors and cameras to process their surroundings, self-locate, and navigate based on real-time conditions. (Image source: MHS)
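
    As a rough illustration of the "build your own map" idea, the toy sketch below marks occupancy-grid cells where simulated range readings terminate. Real systems fuse lidar and camera data with probabilistic SLAM; the grid size and helper names here are assumptions made for the example, not MHS's software.

    ```python
    # Toy occupancy-grid sketch of map building (illustrative only). Real AMRs
    # use probabilistic SLAM; here we simply mark the cells hit by simulated
    # range readings taken from a known robot position.

    import math

    GRID_SIZE = 20          # 20 x 20 cells
    CELL_METERS = 0.5       # each cell covers 0.5 m

    def mark_hit(grid, robot_xy, bearing_rad, range_m):
        """Mark the cell where a range reading terminates as occupied."""
        x = robot_xy[0] + range_m * math.cos(bearing_rad)
        y = robot_xy[1] + range_m * math.sin(bearing_rad)
        col, row = int(x / CELL_METERS), int(y / CELL_METERS)
        if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
            grid[row][col] = 1

    if __name__ == "__main__":
        grid = [[0] * GRID_SIZE for _ in range(GRID_SIZE)]
        robot = (5.0, 5.0)  # robot position in meters (heading omitted)
        # Simulated laser sweep: (bearing, measured range) pairs.
        for bearing, rng in [(0.0, 2.0), (math.pi / 4, 3.0), (math.pi / 2, 1.5)]:
            mark_hit(grid, robot, bearing, rng)
        print("occupied cells:", sum(sum(row) for row in grid))   # -> 3
    ```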

  • Mobile robots, Boston Dynamics, Aethon Tug, Clearpath Robotics, Otto, 6 River Systems, Chuck, Nextshift Robotics, GreyOrange, Brain Corp. Whiz

    Brain Corp.

    Brain Corp. has created intelligent, self-driving technology. Its BrainOS platform powers commercial cleaning machines that work seamlessly alongside human teammates, autonomously navigating complex and dynamic environments with the goal of safety and cleaning performance. (Image source: Brain Corp.)

  • Mobile robots, Boston Dynamics, Aethon Tug, Clearpath Robotics, Otto

    Boston Dynamics

    Boston Dynamics offers a wide range of mobile robots. Some can walk, while others roll. The company combines the principles of dynamic control and balance with mechanical designs, electronics, and software for high-performance robots equipped with perception, navigation, and intelligence. (Image source: Boston Dynamics)

  • Mobile robots, Boston Dynamics, Aethon Tug, Clearpath Robotics, Otto, 6 River Systems, Chuck, Nextshift Robotics, GreyOrange, Realtime Robotics

    Realtime Robotics

    Realtime Robotics’ initial invention was a proprietary computer processor that quickly solves how to get a robot or vehicle to its desired target without collisions. The goal was to overcome the limitation of conventional motion planning, which has been too slow for robot and autonomous vehicle (AV) applications in dynamic environments. The company is now working on applying its robotics autonomy technology to autonomous vehicles. (Image source: Realtime Robotics)
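
    The underlying planning problem can be shown in miniature with a generic, textbook breadth-first grid search that rejects colliding cells. It is offered only as an illustration of collision-free motion planning and is not Realtime Robotics' proprietary method.

    ```python
    # Minimal collision-free grid planner (textbook breadth-first search),
    # shown only to illustrate the motion-planning problem.

    from collections import deque

    def plan_path(grid, start, goal):
        """Return a list of (row, col) cells from start to goal that avoids
        obstacle cells (marked 1), or None if no collision-free path exists."""
        rows, cols = len(grid), len(grid[0])
        frontier = deque([start])
        came_from = {start: None}
        while frontier:
            cell = frontier.popleft()
            if cell == goal:
                path = []
                while cell is not None:       # walk back to the start
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                    came_from[(nr, nc)] = cell
                    frontier.append((nr, nc))
        return None

    if __name__ == "__main__":
        workspace = [[0, 0, 0, 0],
                     [0, 1, 1, 0],   # 1 = obstacle
                     [0, 0, 1, 0],
                     [0, 0, 0, 0]]
        print(plan_path(workspace, (0, 0), (3, 3)))
    ```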

Rob Spiegel has covered automation and control for 19 years, 17 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years, he was owner and publisher of the food magazine Chile Pepper.


the-bosch-virtual-visor-dynamically-blocks-the-sun-from-your-eyes

Bosch engineers are prepared to deliver us from the heartbreak of intrusive sun visors, with an LCD panel that dynamically shades only the driver’s eyes from sun glare while remaining otherwise transparent.

Sun visors may seem like little more than an annoyance that blocks our view of traffic signals while we wait at a red light, but a pair of University of Toronto researchers say the risk of life-threatening crashes is 16 percent higher when the sun is bright, so the Bosch Virtual Visor has potential as a life-saving technology.

The visor itself is a single transparent LCD panel fitted with a driver-facing camera and backed by artificial intelligence facial detection and analysis software. The AI locates the landmarks on the driver’s face, identifying the eyes so that it can darken the sections of the visor that cast a shadow on the eyes. 
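
In outline, that selective-darkening step amounts to mapping the detected eye positions onto the visor's LCD grid and switching only those cells dark. The sketch below is a heavily simplified, hypothetical version: the cell counts, the straight-through eye-to-cell mapping, and the function names are assumptions made for illustration, not Bosch's implementation.

```python
# Hypothetical sketch of the Virtual Visor's selective-darkening idea: given
# eye positions from a face-landmark detector (in normalized camera
# coordinates), darken only the LCD cells shading the eyes. Cell counts and
# the direct eye-to-cell mapping are simplifying assumptions, not Bosch's code.

VISOR_COLS, VISOR_ROWS = 12, 6   # assumed LCD segmentation

def cells_to_darken(eye_points, halo=1):
    """Map normalized (x, y) eye coordinates to visor cells, plus a small
    halo of neighboring cells so the shadow fully covers both eyes."""
    dark = set()
    for x, y in eye_points:                      # x, y in [0, 1]
        col = min(int(x * VISOR_COLS), VISOR_COLS - 1)
        row = min(int(y * VISOR_ROWS), VISOR_ROWS - 1)
        for dc in range(-halo, halo + 1):
            for dr in range(-halo, halo + 1):
                c, r = col + dc, row + dr
                if 0 <= c < VISOR_COLS and 0 <= r < VISOR_ROWS:
                    dark.add((r, c))
    return dark

if __name__ == "__main__":
    # Two detected eyes near the upper-middle of the camera frame.
    print(sorted(cells_to_darken([(0.45, 0.3), (0.55, 0.3)])))
```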

“We discovered early in the development that users adjust their traditional sun visors to always cast a shadow on their own eyes,” said Jason Zink, technical expert for Bosch in North America and one of the co-creators of the Virtual Visor. “This realization was profound in helping simplify the product concept and fuel the design of the technology.” 

Bosch proudly points to the ability of its employees to come up with an idea and gain corporate backing to develop it to this stage as evidence of what the company calls an “innovation culture.”

Image source: Bosch

“We’ve built a culture around empowering our associates by putting them in the driver’s seat,” said Mike Mansuetti, president of Bosch in North America. The Virtual Visor was developed by a team in North America as part of Bosch internal innovation activities. “As a leading global technology provider, we understand that innovation can come from any level of an organization, and we want to see that grow.” 

Zink and his colleagues Andy Woodrich, Arun Biyani, and Ryan Todd toiled to win budget approval to work on the idea for an active sun visor. “It was an inspiring idea,” recalled Zink. “The only part of the sun visor that needs to do any blocking is where the sun hits your eyes. The rest of it can be totally transparent.”

The team of engineers, who work in Bosch’s powertrain department, pursued this idea far outside their own area with creativity. “Like many early-stage ideas, we were working with limited capital and resources,” said Zink. “The original prototype, we used to first pitch the concept, was made from an old LCD monitor we recovered from a recycling bin.” 

The Virtual Visor has since been moved to the Bosch Car Multimedia division, which demonstrates that it has graduated from an engineer’s crazy notion to a production-ready device.

Dan Carney is a Design News senior editor, covering automotive technology, engineering and design, especially emerging electric vehicle and autonomous technologies.

lamborghini-says,-"alexa,-go-200-miles-per-hour"

Image source: Automobili Lamborghini 

Why have a plain old boring stationary cylindrical Amazon Alexa when you could have a wedge-shaped Alexa packing 640 horsepower and the ability to rocket to more than 200 mph? That’s what you get with the 2020 Lamborghini Huracan EVO, which adds Alexa integration to its 5.2-liter V10 powerplant, all-wheel drive, and dynamic suspension setup.

While other carmakers have already installed Alexa artificial intelligence, this is the first time it will be available in a super sports car. This version will also be the first to give drivers control of the car’s systems through Alexa.

Others will let you adjust your connected home thermostat using voice commands while driving, but the Huracan EVO lets you do the same thing with the car’s own climate control system. You can also adjust cabin lighting, seat heaters, and the settings of Lamborghini Dinamica Veicolo Integrata (LDVI), Lamborghini’s dynamic suspension system.
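
A voice integration of this sort generally comes down to mapping a recognized intent and its slot values onto a vehicle-control call. The fragment below is a generic, hypothetical handler: the intent name, slot, and set_cabin_temperature() call are invented for illustration and are not Lamborghini’s or Amazon’s actual interface.

```python
# Hypothetical illustration of voice-controlled cabin settings: an intent
# handler that maps a spoken temperature request onto a vehicle-control call.
# The intent name, slot, and set_cabin_temperature() function are invented
# for this sketch; they are not Lamborghini's or Amazon's API.

def set_cabin_temperature(celsius):
    """Stand-in for the car's climate-control interface."""
    print(f"Climate control set to {celsius:.0f} C")

def handle_intent(intent):
    """Dispatch a parsed voice intent to the matching vehicle function."""
    if intent["name"] == "SetCabinTemperature":
        requested = float(intent["slots"]["temperature"])
        # Clamp to a plausible comfort range before touching the hardware.
        set_cabin_temperature(max(16.0, min(requested, 28.0)))
    else:
        print("Sorry, I can't do that yet.")

if __name__ == "__main__":
    # "Alexa, set the temperature to 21 degrees."
    handle_intent({"name": "SetCabinTemperature",
                   "slots": {"temperature": "21"}})
```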

Of course, the usual Alexa capabilities are there too, so you can play music or ask about the weather as with any Alexa-enabled device. But the companies say they have ambitious plans to expand the collaboration, so not only will Alexa’s capabilities be updateable in the Huracan, but they are working on further connectivity and integration with Amazon Web Services for still more features in the future.

Image source: Automobili Lamborghini

“Our vision is for Alexa to become a natural, intuitive part of the driving experience, and Lamborghini has embraced that by integrating Alexa directly into its onboard infotainment systems,” adds Ned Curic, vice president of Alexa Auto at Amazon. “The integration will enable Lamborghini owners to enjoy the convenience of an intelligent voice service while focusing on the joy of the Lamborghini driving experience, and we expect it to set a new standard for in-car voice experiences when it ships this year.” 

Fortunately, Lamborghini promised, this doesn’t mean the Huracan is reduced to a mere vessel for the delivery of Alexa services. “The Huracan EVO is an outstanding driver’s car, and connectivity enables our customers to focus on the driving, thus enhancing their Lamborghini experience,” says Stefano Domenicali, Chairman and Chief Executive Officer of Automobili Lamborghini.

Image source: Automobili Lamborghini

Lamborghini has also announced that it will introduce a $208,571 rear-drive version of the Huracan EVO to appeal to purists, so we look forward to putting the Raging Bull’s latest developments to the test soon.

Dan Carney is a Design News senior editor, covering automotive technology, engineering and design, especially emerging electric vehicle and autonomous technologies.