Back in 2013, Honda worked with UC-Davis to launch a smart home project that would consume zero net energy. It was a bold experiment and a technical IoT marvel. Human dwellers occupied the home along with over 230 built-in sensors. Both provided a wealth of data and feedback that yielded several surprising results. Foremost was the importance of collecting data in a real-life environment, analyzing it and then acting on that analysis to try out new conditions and improved technologies. Six years later, the data and details of this project have been compiled into 5 key lessons learned, which will be reviewed shortly.

First, a bit of background on the beginnings of this project is needed. Before the smart house could be built, all aspects of its design, operation and sustainability had to be understood and balanced. Even the home’s site was chosen to ensure the best exposure for the rooftop solar panels. Every detail of the overall design was similarly reviewed with a collaborative team consisting of an architect, HVAC designer, electrical/electronic and mechanical engineers, construction certification members, and Honda experts. Heating, cooling, lighting, operation of appliances, and water reuse activities were designed together to support zero net energy consumption while allowing the occupants to live comfortably.

From the human occupant perspective, the goal was not to significantly change specific behavior patterns. For example, if the occupants had to wash dishes, shower or run laundry, then the home had to respond immediately. If the timing of these activities required excessive use of the energy grid, then the Honda-designed Home Energy Management System (HEMS) would intervene to allow them to continue their daily routine, as well as return extra power to the grid if possible.  

The HEMS, located in the smart home’s garage, was a hardware and software system that monitored, controlled and optimized electrical generation and consumption throughout the home’s microgrid. It stored solar energy during the day and was capable of “listening” to the grid to ensure power was only drawn at the most carbon-efficient times.

Image Source: Honda Smart Home System – HEMS and EV in Garage

The project has proven to be a success. Located on the West Village campus of the University of California, Davis, the home has annually produced more energy from renewable sources than it consumes, including enough energy to power a smart car (e.g., a Honda Fit EV) for daily commuting. Energy management systems were essential to maintaining efficient heating, cooling and lighting systems within the house.

Other sustainability factors, such as water use, were also managed and controlled. The result was a home three times more water-efficient than a typical U.S. home.

Proof of the zero net energy consumption is available from yearly data accessible to everyone on the download tab of the Honda Smart Home site.

Image Source: Honda Smart Home – Data

Professor Ernst Bauer in his lab at Vienna University of Technology, where he led the development of a new material with unprecedented effectiveness for converting heat to electricity. (Source: Vienna University of Technology)

Researchers in Austria have developed a material that they say is the most effective to date for converting heat into electrical energy. This unprecedented ability means the material could be used to provide an autonomous and renewable source of energy for a range of technologies, such as sensors or even small computer processors, by allowing them to generate their own power from temperature differences.

A material’s ZT value measures the amount of electrical energy that can be generated at a given temperature difference; the higher the value, the better the thermoelectric properties. This new material, created by researchers at Vienna University of Technology, consists of iron, vanadium, tungsten, and aluminum applied to a silicon crystal, and has a ZT value of five to six, the highest ever measured for thermoelectric materials. Modern thermoelectric materials max out at values of about 2.5 to 2.8.
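The figure of merit mentioned here is conventionally written ZT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity, and T the absolute temperature. For readers who want to experiment, the relation can be sketched in a few lines of Python; the input values below are illustrative placeholders, not measurements from the Vienna study:

```python
def figure_of_merit(seebeck_v_per_k, elec_conductivity_s_per_m,
                    thermal_conductivity_w_per_mk, temperature_k):
    """Dimensionless thermoelectric figure of merit: ZT = S^2 * sigma * T / kappa."""
    return (seebeck_v_per_k ** 2 * elec_conductivity_s_per_m * temperature_k
            / thermal_conductivity_w_per_mk)

# Hypothetical example values: S = 200 uV/K, sigma = 1e5 S/m,
# kappa = 1.5 W/(m*K), at room temperature (300 K)
zt = figure_of_merit(200e-6, 1e5, 1.5, 300)
print(round(zt, 2))  # 0.8
```

The formula makes the trade-off clear: a good thermoelectric needs high electrical conductivity but low thermal conductivity, which is exactly the combination the disordered thin film described below provides.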

“The difference is a much better performance of this material, about two times larger than the best reported so far in literature,” Ernst Bauer, the team lead, and a professor in the Institute of Solid State Physics at the university, told Design News.

Composition informs behavior

Key to the material’s high thermoelectric performance is a “combination of several physical properties and parameters,” Bauer told Design News.

The atoms in the material are arranged in what’s called a face-centered cubic lattice, he said. The distance between two iron atoms in the material is always the same, while the same is true for the atoms that comprise the other elements found in the material. On its own, this structure is regular.

When a thin layer of the material is applied to silicon, however, there is a dramatic change in its structure. The atoms still form a cubic pattern, but in a way that the distribution of the different types of atoms becomes completely random.

“Two iron atoms may sit next to each other, the places next to them may be occupied by vanadium or aluminum, and there is no longer any rule that dictates where the next iron atom is to be found in the crystal,” Bauer said.

This change in the arrangement of the atoms also changes the material’s electronic structure, which protects the moving electrical charges (portions of which are called Weyl fermions) from scattering processes as they travel through the material, Bauer said. This results in a very low electrical resistance.

The Vienna University of Technology researchers published a paper on their work in the journal Nature.

While a thin layer of the material itself can’t generate enough energy to power even small devices, “it has the advantage of being extremely compact and adaptable,” Bauer said. He and his team aim to use the material as a component of small-scale energy generators to provide power for sensors and other small electronic devices.

Bauer and his colleagues also will continue their work by seeking new materials with similar properties and taking a deeper look into the one they developed “to understand on a microscopic basis all relevant phenomena occurring in this material,” he told Design News.

Elizabeth Montalbano is a freelance writer who has written about technology and culture for more than 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco and New York City. In her free time she enjoys surfing, traveling, music, yoga and cooking. She currently resides in a village on the southwest coast of Portugal.


Most second-year university engineering students can easily explain the differences between force, torque and pressure. The reason for their confident answers is that engineering schools typically require a term of study in both static and dynamic forces by a student’s sophomore year. However, from that point on, further studies in these areas are usually confined to the aerospace, civil and mechanical engineering disciplines. Few electronic engineers need or will take advanced force mechanics courses.

But modern advances in material properties and device miniaturization as in micro-electro-mechanical systems (MEMS) and sensors mean that force, torque and pressure are relevant across all of the major disciplines. A quick technical review will help remind everyone of these basic concepts.


Simply put, a force is a push or a pull upon an object. A force can cause an object with mass to change its velocity, i.e., to accelerate. Since a force has both magnitude and direction, it is a vector quantity.

The unit of force in the International System of Units (SI) is the newton. One newton is defined as the force that gives a mass of one kilogram an acceleration of one meter per second, per second. In terms of an equation, force equals mass times acceleration (F = ma).

Actually, Newton’s Second Law of Motion defines force as the change in momentum over time, not mass times acceleration. But the momentum equation reduces to F = ma for basic engineering calculations.
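The equivalence of the two forms for constant mass can be checked numerically. This short sketch computes the same force both ways, as F = ma and as the change in momentum over time:

```python
def force_newton(mass_kg, accel_m_s2):
    """F = m * a (constant-mass form of Newton's second law)."""
    return mass_kg * accel_m_s2

def force_from_momentum(mass_kg, v1_m_s, v2_m_s, dt_s):
    """F = dp/dt, approximated as (m*v2 - m*v1) / dt for constant mass."""
    return (mass_kg * v2_m_s - mass_kg * v1_m_s) / dt_s

# A 1 kg mass accelerated at 1 m/s^2 feels exactly one newton:
print(force_newton(1.0, 1.0))                    # 1.0
# Same answer from the momentum form: velocity rises 1 m/s over 1 s
print(force_from_momentum(1.0, 0.0, 1.0, 1.0))   # 1.0
```

For variable-mass systems like the rocket discussed below, only the momentum form tells the full story; F = ma is the convenient engineering shortcut.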

Sometimes the word “load” is used instead of force. Civil and mechanical engineers tend to make calculations based on the load a system (e.g., a bridge) must resist: the force of gravity from both the weight of the bridge and the vehicles driving over it.

Newton’s Laws have been called the basis for space flight. According to NASA, understanding how space travel is possible requires an understanding of the concepts of mass, force, and acceleration as described in Newton’s Three Laws of Motion. Consider a space rocket in which the pressure created by the controlled explosion inside the rocket’s engines results in a tremendous force known as thrust. The gas from the explosion escapes through the engine’s nozzles, propelling the rocket in the opposite direction (Law #3) and following F = ma (Law #2), which lifts the rocket into space. Assuming the rocket travels beyond Earth’s atmosphere, it will continue to move through space even after the propellant gas is gone (Law #1).

Newton’s Three Laws of Motion


Every object in a state of uniform motion will remain in that state of motion unless an external force acts on it.


Force equals mass times acceleration [F = ma]


For every action there is an equal and opposite reaction.


The first university course in static forces is usually followed by a course in dynamic forces, in which the idea of rotational force, or torque, is introduced. Torque is the tendency of a force to rotate or twist an object about an axis, fulcrum, or pivot. It is the rotational equivalent of linear force.

Formally, torque (or the moment of force) is the product of the magnitude of the force and the perpendicular distance of the line of action of force from the axis of rotation.  The SI unit for torque is the newton metre (N•m). 
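In the general case, only the component of the force perpendicular to the lever arm contributes, so the magnitude is τ = r·F·sin θ. A quick sketch of that definition:

```python
import math

def torque_magnitude(force_n, lever_arm_m, angle_deg=90.0):
    """|tau| = r * F * sin(theta), where theta is the angle between
    the lever arm and the applied force. Result is in newton metres."""
    return lever_arm_m * force_n * math.sin(math.radians(angle_deg))

# 10 N applied perpendicular to a 0.5 m wrench handle:
print(round(torque_magnitude(10.0, 0.5), 2))        # 5.0 N*m
# The same force applied at 30 degrees delivers only half the torque:
print(round(torque_magnitude(10.0, 0.5, 30.0), 2))  # 2.5 N*m
```

This is why a longer wrench, or a force applied squarely at right angles, loosens a stubborn bolt more easily.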

Image Source: Wikipedia by Yawe (Public Domain)

Deriving the equation for torque is often done from a purely force-based perspective. But it can also be accomplished by looking at the amount of work required to rotate an object. This was the approach Richard Feynman used in one of his lectures on rotation in two dimensions.

“We shall get to the theory of torques quantitatively by studying the work done in turning an object, for one very nice way of defining a force is to say how much work it does when it acts through a given displacement,” explained Feynman.

Feynman was able to show that, just as force times distance is work, torque times angle equals work. This point is highlighted in several avionic and aeronautical examples from NASA’s Glenn Research Center, where NASA designs and develops technologies for aeronautics and space exploration. Force, torque and pressure concepts continue to exert their influence far beyond the Earth’s atmosphere. Consider the release of a large satellite like the Cygnus cargo craft from the International Space Station (ISS). The satellite is connected to a large robotic arm that removes it from the ISS prior to release into space. The robotic arm acts like a huge moment arm, subject to the forces and torques acting in space.
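Feynman’s parallel can be made concrete in a few lines: W = F·d for linear motion and W = τ·Δθ for rotation (with the angle in radians) produce the same joules for numerically matching inputs:

```python
import math

def work_linear(force_n, distance_m):
    """W = F * d, in joules."""
    return force_n * distance_m

def work_rotational(torque_n_m, angle_rad):
    """W = tau * theta, in joules (theta in radians)."""
    return torque_n_m * angle_rad

# A 2 N*m torque turned through one full revolution (2*pi radians)
# does the same work as a 2 N force pushed through 2*pi meters:
print(round(work_rotational(2.0, 2 * math.pi), 3))  # 12.566
print(round(work_linear(2.0, 2 * math.pi), 3))      # 12.566
```

The symmetry is exactly the point of Feynman’s derivation: torque is to angle what force is to distance.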

Image Source: NASA Glenn Research Center


Pressure is the force per unit area applied in a direction perpendicular to the surface of an object. Many of us are familiar with gauge pressure from measuring tire pressures. Gauge pressure is the pressure relative to the local atmospheric or ambient pressure. This is in contrast to absolute pressure, the actual value of the pressure at any point. This will make more sense shortly.

Pressure is the amount of force acting per unit area. The SI unit for pressure is the pascal (Pa), equal to one newton per square meter (N/m2). Pressure is also measured in non-SI units such as bar and psi.

In his lecture on The Kinetic Theory of Gases, Feynman introduced the concept of pressure by thinking about the force needed for a piston plunger to contain a certain volume of gas inside a box. The force needed to hold a plunger or lid of area A in place is a measure of the force per unit area, or pressure. In other words, pressure is equal to the force that must be applied on a piston, divided by the area of the piston (P = F/A).
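The pressure relations in this section reduce to three one-line formulas: P = F/A, gauge = absolute − ambient, and vacuum = ambient − absolute. A minimal sketch, with illustrative numbers:

```python
def pressure_pa(force_n, area_m2):
    """P = F / A, in pascals (N/m^2)."""
    return force_n / area_m2

def gauge_pressure(absolute_pa, ambient_pa=101325.0):
    """Gauge pressure is measured relative to local atmospheric pressure
    (standard sea-level atmosphere assumed by default)."""
    return absolute_pa - ambient_pa

def vacuum_pressure(absolute_pa, ambient_pa=101325.0):
    """Vacuum is how far below local atmospheric pressure a point sits."""
    return ambient_pa - absolute_pa

# 500 N pressing on a 0.005 m^2 piston face:
print(pressure_pa(500.0, 0.005))   # 100000.0 Pa
# A tire at 303975 Pa absolute reads 202650 Pa (about 29 psi) on a gauge:
print(gauge_pressure(303975.0))    # 202650.0
```

The same gauge that reads ~29 psi on Earth would read the full absolute pressure in the vacuum of space, which foreshadows the homework question below.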

Image Source: CalTech – Feynman Lectures

Applications for pressure technologies exist both on and off the planet. In space, however, pressure is so low that it may almost be considered non-existent. That’s why engineers often talk about vacuum rather than pressure in space applications. A vacuum is any pressure less than the local atmospheric pressure. It is defined as the difference between the local atmospheric pressure and the pressure at the point of measurement.

While space has a very low pressure, it is not a perfect vacuum. It is an approximation, a place where the gaseous pressure is much, MUCH less than the Earth’s atmospheric pressure.

The extremely low pressure in the vacuum of space is why humans need space suits to provide a pressurized environment. A space suit provides air pressure to keep the fluids in our body in a liquid state, i.e., to prevent our bodily fluids from boiling due to low pressure (via PV = nRT). Like a tire, a space suit is essentially an inflated balloon that is restricted by some rubberized fabric.

Homework question: Why didn’t the wheels on the Space Shuttle burst while in space, i.e., in the presence of a vacuum? Look for the answer in the comments section.

In summary, force, torque, pressure and vacuum are important physical concepts that – thanks to advances in material sciences and MEMS devices – cross all of the major disciplines. Further, these fundamental concepts continue to have relevance in applications like space systems among many others.


From breakthroughs and new innovations to established technologies, these are the inventions, gadgets, and trends that shaped the last decade.

  • It’s been a busy decade in the tech space. New innovations emerged and older ones finally matured in ways that have had a major impact. The 2010s brought us the rise of 3D printing, the rebirth of VR, and an explosion in AI technologies. The health industry was all about wearables. And a digital currency gold rush made us rethink encryption.

    As we prepare to enter the 2020s, let’s take a look back at how far we’ve come.

    Here are the 15 technologies, gadgets, and trends that had the biggest impact on the industry, and our lives, in the last decade.

    (Image source: Pete Linforth from Pixabay  )

  • 3D Printing

    A technology first developed in the 1980s has become as common a phrase in manufacturing as injection molding or CNC machining. 3D printing has grown from a novel way to create tchotchkes and plastic parts into a serious technology with applications ranging from automotive and aerospace to even medical. 3D printing has become a serious option for prototyping and small-scale production. And the rise of new materials and even metal 3D printing has expanded its applications. We may only be a generation or two away from seeing patients with 3D-printed organs in their bodies.

    (Image source: Airwolf 3D)

  • Artificial Intelligence

    You couldn’t open a newspaper in the 2010s without some sort of AI-related headlines. Whether it was IBM Watson winning at Jeopardy, fears of robots taking jobs, or the rise of autonomous vehicles, the last 10 years have put AI on everyone’s mind like never before. AI has potential to transform nearly every industry on the planet and already has in many cases. And the growing ethical and moral concerns around the technology only further demonstrate that it’s here to stay.

    (Image source: Gordon Johnson from Pixabay  )

  • Blockchain

    Bitcoin went from the currency of choice for Internet drug dealers to sparking a full-on gold rush as investors looked to cash in on Bitcoin’s skyrocketing value. But the best thing Bitcoin did this decade was bring new attention to the technology underneath it – blockchain. Increased interest in blockchain has seen the technology find implementations in cybersecurity, manufacturing, fintech, and even video games. Blockchain made us rethink security, automation, and accountability and is going to be a key component in the ever-expanding Internet of Things going forward.

    (Image source: Pixabay)

  • Collaborative Robots

    Robots have worked alongside humans for a long time, but never like they have in recent years. The rise of collaborative robots (cobots) brought machines into factories that can work right next to human workers without the need for safety cages. The now defunct Rethink Robotics created arguably the most memorable cobot with Baxter (shown), but several major robotics companies including Boston Dynamics, Fanuc, and Universal Robots have all gotten into the game.

    Cobots also sparked a lot of debate as to their impact on jobs and the economy. But concerns haven’t slowed their growth. You’d be hard pressed to find an industrial robotics company today without at least one cobot offering in its portfolio.

    (Image source: Rethink Robotics)

  • Digital Twins

    The rise of the Internet of Things and Industry 4.0 has brought with it new ways of thinking of the design and manufacturing process. None of these has been more praised than the digital twin. Consumer electronics, automobiles, even factories themselves can be modeled in virtual space, providing real-time insights into design and production workflows without the costly expense of physical prototyping. Add VR and AR to the mix and engineers get an added layer of immersion and visualization.

     (Image source: B&R)

  • GPUs

    Chip technology overall has come a long way in the last decade, but none further than the GPU. Spearheaded by chipmakers including Nvidia (especially Nvidia), AMD, and Intel, GPUs grew from their specialized role as graphics processors into a key enabler behind the high-end computing needed for AI. Even autonomous cars have leveraged GPUs to handle their computing needs.

    It used to be that only serious video gamers cared about the quality of their GPU. Now any company, engineer, or even hobbyist developing hardware that leverages AI has to take a serious look at GPUs as a solution.

    (Image source: Nvidia)

  • The Internet of Things / Industry 4.0

    There was a time when going on about how, “everything is connected,” might have made you sound like a conspiracy theorist. Now, it makes you sound more like an IT professional. From factory automation; to devices in our homes like thermostats, locks, and cameras; even to our cars – pretty much anything that could have wireless or Internet connectivity added to it got it.

    Sure, some use cases were certainly more valuable than others, but the rapid growth of the IoT made one thing certain – the future is connected. And whether you prefer cloud-based solutions or handling things on the edge, no device is going to be an island ever again. As staggering as it may sound, the march toward 1 trillion connected devices is far from an exaggeration.

    (Image source: jeferrb from Pixabay )

  • LiDAR

    You need a lot of technologies to create an autonomous vehicle – AI, radar, even thermal sensors – but LiDAR is what really put self-driving cars on the road. It’s not enough on its own, and needs to work alongside other sensors, but engineers have found the technology – traditionally used in meteorology and GPS – to be absolutely crucial in allowing autonomous vehicles to recognize their surroundings – including humans and animals in the road.

    (Image source: Innoviz)

  • Lithium-Ion Batteries

    The key innovators behind lithium-ion batteries received a long-overdue Nobel Prize in 2019. That’s likely because there’s no avoiding just how significant an impact lithium-ion has had – particularly in recent years. New battery technologies have made electric vehicles an attractive option for any consumer, and new battery chemistries and configurations are making our devices lighter and thinner with every generation. Researchers are always looking for better alternatives, but lithium-ion established itself as the heavyweight king of batteries in the last 10 years and it doesn’t look ready to relinquish that title anytime soon.

    (Image source: Johan Jarnestad/The Royal Swedish Academy of Sciences)

  • The Mars Rovers

    We learned more about the Red Planet than ever before thanks to NASA’s Mars exploration rovers. The rovers, Spirit and Opportunity (shown), first landed on Mars in 2004 and since then have brought scientists incredible insights about our neighboring planet – including that Mars was once wetter and had conditions that could have sustained microbial life. The knowledge gained from both will surely be carried on as NASA continues to plot a manned mission to Mars in the coming decades. Spirit ended its mission in 2011, while Opportunity operated for an unprecedented 15 years, finally ending its mission in 2019. And we’ll always remember Opportunity’s last communication to NASA – poetically interpreted as, “My battery is low and it’s getting dark.”

    (Image source: NASA)

  • Open Source

    Open source used to be a dirty word for developers and consumers. The perception was that anything open source was likely to be insecure, shoddily put together, and lacking any long-term support. But open source has proven to be a viable option for developers, and a valuable tool. Microsoft and IBM both made big investments in open source with the acquisitions of GitHub and Red Hat, respectively.

    We’ve even seen the growth of open-source hardware for the first time. The open-source RISC-V chip architecture has seen an ever-growing ecosystem of companies emerge around it in recent years – all aimed at changing the way we build and use processors.

    (Image source: Markus Spiske on Unsplash)

  • Raspberry Pi

    You can’t mention DIY electronics without thinking of the Raspberry Pi. Since its introduction in 2012, the single board computer has gone from a go-to platform for hobbyists and makers to a serious development platform for engineers working in IoT and even AI. Even if you use another single board computer, or even a microcontroller like the Arduino, for your projects, we all owe a debt to Raspberry Pi for bringing electrical engineering a bit closer to home.

    (Image source: Raspberry Pi Foundation)

  • Smartphones

    It doesn’t matter whether you prefer iOS, Android, or another option, there’s no denying the enormous impact smartphones have had on our lives. Smartphones have grown into full-fledged computing platforms – enabling entirely new business models ranging from digital health to mobile VR. The gaming market in particular has enjoyed huge returns thanks to the computing power offered by today’s smartphones.

    (Image source: Apple)

  • VR, AR, MR, and XR (The new realities)

    Virtual reality has had a lot of starts and stops over the decades. But thanks to the Oculus Rift and other headsets such as the HTC Vive – VR is finally delivering on its promise. Ten years ago if you had asked anyone if they used VR in their workflow they might have laughed. Today, it’s become more and more commonplace.

    The rise of augmented reality (AR), mixed reality (MR), and extended reality (XR) have sparked even more use cases in both the consumer and enterprise space. Pokemon Go showed us consumers will value AR for entertainment, but plenty of big names including Microsoft, Google, and HP brought the technology into the enterprise space as well.

    (Image source: HP)

  • Wearables

    The 2010s saw technology grow from something we carry to an actual accessory that we can wear. From consumer-focused products like the Apple Watch, Samsung Galaxy Gear, and even the Fitbit, to serious medical devices like the AliveCor ECG, intended to track and help diagnose diseases, wearables found their way onto millions of bodies. There was certainly a wearables bubble that has since burst, but the digital health sector owes much of its success to wearables. And Google’s recent major acquisition of Fitbit shows that the tech industry believes there’s more to wearables than being a high-tech fashion statement.

    (Image source: Fitbit)

Chris Wiltz is a Senior Editor at   Design News  covering emerging technologies including AI, VR/AR, blockchain, and robotics.

The Snapdragon 865 (shown) can handle 5G and boasts an AI engine twice as powerful as the previous Snapdragon model and the ability to support 8K video and up to a 200-megapixel camera. (Image source: Qualcomm) 

Qualcomm’s latest Snapdragon platforms are aimed squarely at bringing 5G devices to consumers next year.

This week, at its annual Snapdragon Tech Summit, the chipmaker unveiled two new mobile computing platforms – the Snapdragon 765 and 865 – both targeted at 5G speeds and artificial intelligence processing for Android-based devices.

 “We need systems that put 5G and AI together,” Alex Katouzian, senior vice president and general manager, mobile at Qualcomm, told the Tech Summit audience. He outlined Qualcomm’s roadmap for 2020, where the company plans to be a part of 5G devices released at all tiers, with AI also ubiquitously integrated into them.

The Snapdragon 765 looks to be the more consumer-focused platform. With Qualcomm’s X52 5G modem integrated, the 765 supports both millimeter wave (mmWave) and sub-6 frequencies for 5G and is capable of download speeds of up to 3.7 gigabits per second (Gbit/s), according to Qualcomm. It also supports 5G SA and NSA modes, TDD and FDD with dynamic spectrum sharing (DSS), global 5G roaming, and support for multi-SIM.

Katouzian said the 765 is targeted at serving three major pillars – photo and video, AI, and multiplayer gaming. The platform is equipped with the fifth generation of Qualcomm’s proprietary AI Engine for handling various tasks such as creating better photos. The engine itself is capable of speeds of up to 5 tera (trillion) operations per second (TOPS). The ISP can capture 4K video and can support up to a 192-megapixel camera. Another version of the 765 – the 765G – will be specially optimized for online gaming experiences (the “g” stands for “gaming”). Snapdragon 765G offers a bit more performance. It’s capable of up to 5.5 TOPS and has a boosted GPU for faster graphics rendering.

On the higher end, the Snapdragon 865, which will be packaged with Qualcomm’s X55 modem-RF system (the X55 modem is not integrated in the 865 as the X52 is with the 765), kicks things up in terms of horsepower. Aimed at more premium applications, the 865 is targeting download speeds exceeding 5 Gbit/s, again using mmWave and sub-6. The 865’s AI Engine’s processing speed reaches up to 15 TOPS – double the performance of the previous Snapdragon, the 855.

Qualcomm is touting the 865 as the “world’s first 2-gigapixel-per-second camera capable ISP.” The platform can capture 8K video at 30 frames per second. And when filming 4K video, each frame can be captured at 64 megapixels. It also supports up to a 200-megapixel camera.

Overall, Katouzian said the 865 offers a 25% increase in graphics performance over the 855 – meaning desktop features, like high-quality gaming, can be brought into the mobile space.

The 865 and 765/G will also be available as modular platforms.

5G hardware is coming

Qualcomm has already actively secured partnerships around the 765 and 865 and has been very active in pushing for 5G-enabled hardware for both mobile and desktop applications. Earlier this year Qualcomm and long-time collaborator Lenovo unveiled Project Limitless – a concept for a 5G-enabled PC. Based on Qualcomm’s 7-nanometer 8cx 5G compute platform, Project Limitless demonstrated the idea of an “always on, always connected” PC that draws on 5G connectivity for cloud-based applications and storage as well as distributed computing.

According to Sergio Buniac, president of Lenovo subsidiary Motorola, the next generation of the newly released Razr will be based on Qualcomm’s latest platforms. Motorola turned heads (and sparked some heavy early 2000’s nostalgia) earlier this year when it announced the 2020 re-release of its once popular flagship phone, the Razr.

The new Razr is a foldable, clamshell phone based on the Snapdragon 710 platform. It’ll be the latest device in a new wave of foldable screen devices coming to market such as Samsung’s new Galaxy Fold, and Microsoft’s Surface Neo foldable, dual-screen laptop.

Chinese electronics company Xiaomi has also made a big commitment to 5G. Speaking at the Snapdragon Tech Summit, Xiaomi’s president, Bin Lin, said the company is committed to launching more than 10 5G smartphones in 2020. Among these will be the Mi 10, a Snapdragon 865-based phone that will feature a 108-megapixel camera.

Lin said Xiaomi also believes 5G will usher in new form factors for phones as well – enabling concept phones like Xiaomi’s Mi Mix Alpha, a smartphone with a 180-degree wraparound touchscreen display, to come to reality.

Qualcomm expects the first 765- and 865-based devices to hit markets as soon as the first quarter of 2020.

Chris Wiltz is a Senior Editor at  Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

DesignCon 2020 25th anniversary Logo

January 28-30: North America’s largest chip, board, and systems event, DesignCon, returns to Silicon Valley for its 25th year! The premier educational conference and technology exhibition, this three-day event brings together the brightest minds across the high-speed communications and semiconductor industries, who are looking to engineer the technology of tomorrow. DesignCon is your rocket to the future. Ready to come aboard? Register to attend!


Tiny satellites have made space accessible to a new generation of university students and private companies, and have even helped cash-strapped government agencies like NASA. Generally known as nano-satellites (nanosats) or cube-satellites (cubesats), this technology has been made possible by the semiconductor-driven miniaturization of electronic and electro-mechanical systems. In recognition of the trend, the IEEE has even launched a new journal, “Miniaturization for Air and Space Systems (J-MASS).”

Mass is a premium consideration when placing anything into space. That’s why the names of tiny satellites depend upon their mass. Nanosats are the general category for any satellite with a mass from 1 kg to 10 kg. Nanosats include the categories of well-known cubesats and the perhaps less well-known PocketQubes, TubeSats, SunCubes, ThinSats and non-standard picosatellites. Chipsats – cracker-size, gram-scale wafer miniprobes – are not considered nanosats but have been called attosats by some.
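The mass boundaries above can be captured in a simple classifier. Note that only the 1 kg to 10 kg nanosat range is stated here; the neighboring class limits below follow common industry convention and are assumptions for illustration:

```python
def satellite_class(mass_kg):
    """Rough mass-based satellite classes. Only the 1-10 kg nanosat range
    is given in the article; the other boundaries follow common convention."""
    if mass_kg < 0.1:
        return "chipsat/attosat (gram-scale)"
    if mass_kg < 1:
        return "picosat"
    if mass_kg <= 10:
        return "nanosat (includes most cubesats)"
    if mass_kg <= 100:
        return "microsat"
    return "larger satellite"

print(satellite_class(4))     # nanosat (includes most cubesats)
print(satellite_class(0.01))  # chipsat/attosat (gram-scale)
```

A standard 1U cubesat, at roughly 1.3 kg, lands squarely in the nanosat range, which is why the two terms so often travel together.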

Cubesats (cube satellites) are a type of nanosatellite defined by the CubeSat Design Specification (CDS), unofficially called the CubeSat standard.

The original goal of all these tiny, miniature satellites was to provide affordable access to space for the university science community. Many major universities now have a space program, as do several private company startups and even government agencies like NASA and the DoD.

This slideshow highlights nanosat technologies, from the carriers and launch mechanisms to several NASA cubesats performing a variety of missions. We’ll end with an example of a chipsat. Let’s begin!

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.


There’s a scene in the 2019 Child’s Play remake in which an elderly woman has her first experience in an autonomous car. Naturally, being a horror movie and all, everything goes terribly wrong – the car gets hacked by a killer Bluetooth-enabled doll (yes, that really happens!) and the woman winds up dead.

As farcical as that scene is, it does play into a very real concern around self-driving cars – consumer fear. It won’t matter if we reach full Level 5 autonomy if people are afraid to get into the cars. Even now, years – perhaps decades – before we’ll actually see a fully autonomous car on a public road, people are already concerned about how safe the vehicles are.

AAA has been surveying consumers about autonomous vehicles since 2016, and has never found the majority of people to be completely comfortable with the idea of a self-driving car.

In its latest survey, AAA found that 71 percent of US drivers are afraid of getting into a fully autonomous vehicle. That’s an eight-percentage-point increase from 2018 (63 percent).

Greg Brannon, AAA’s director of Automotive Engineering and Industry Relations, believes the sustained level of fear can be attributed to several highly publicized accidents involving autonomous vehicles, including an incident in Tempe, Arizona in which a self-driving Uber killed a pedestrian.

“Also, it could simply be due to a fear of the unknown. AAA’s research has found that the more exposure drivers have to similar types of technology such as advanced driver assistance systems, comfort level improves,” Brannon wrote in a press statement.

Speaking as part of a panel on autonomous vehicle trust at the recent Automobility LA conference, Steve Koenig, VP of research at the Consumer Technology Association (CTA), said the future of consumer/autonomous vehicle relations lies in “two Es – education and experience.” He said this will be particularly important in the next two to three years as more innovations come to the market and commercial deployments take place.

Fellow panelist, Amitai Bin-Nun, VP of autonomous vehicles and mobility innovation at Securing America’s Future Energy (SAFE), agreed:

“The first major grant to develop autonomous vehicles happened in 1983. What matters is not whether it hits the road in 2023 or 2030, but that it’s done in a responsible manner,” Bin-Nun said.

AAA’s annual automated vehicle survey has consistently shown the majority of US consumers have a fear of self-driving cars. (Image source: AAA)

The big ‘why’ of Level 5

But is reaching Level 5 autonomy even necessary, particularly given the fears and concerns around it? “What happens at Level 4 [autonomy] is that you start opening things up for the disabled, elderly, and others who can’t drive,” Bin-Nun said. “[AVs] need to be implemented in a way that solves problems and creates value. AVs by themselves don’t solve [traffic] congestion, for example, but they do things that are very helpful.”

“We have the ability to eliminate human driver error,” panelist Bert Kaufman, head of corporate and regulatory affairs at Zoox, a developer of AV technologies, said. “To get to trust you have to have safety. And there is an incredible safety opportunity with this technology.”

“AVs have a potential to change society in ways we haven’t seen,” panelist Steven Surhigh, vice president and general manager, cloud services for Harman International, said. “There will be no need for parking lots and garages, for example. It allows you to change the dynamics of green space – creating businesses and jobs that we can’t even imagine yet.”

For Surhigh the real question is, “How do we make it so consumers don’t think about whether they trust the vehicle or not?” Harman International, a subsidiary of Samsung, develops connected electronics technologies for the automotive industry. In his work Surhigh says he encourages companies to focus on “experiences per mile” when examining consumer experience with AVs. The idea is to provide consumers with such an immersive, entertaining, or productive experience in the vehicle that issues of trust fade into the background.

“There has to be a benefit to the rider in terms of productivity or entertainment,” Surhigh said. “In a fully autonomous vehicle we can leverage things like augmented reality to create unique experiences. We can create unique and personalized experiences in vehicles beyond just something you can do on your couch.”

Cases like what Surhigh is talking about are already happening with traditional autos. This past Halloween, Universal Studios theme parks partnered with holoride, an immersive entertainment company, to create an attraction that combined VR with a 2020 Ford Explorer to create a virtual haunted hayride-like experience.

holoride turns vehicles into moving theme parks using VR. Could attractions like this ease consumer worries about autonomous cars?

Riding the curve

If there is a silver lining to AAA’s survey, it’s that it does show consumers aren’t opposed to being better educated about autonomous vehicles.

“Despite fears still running high, AAA’s study also shows that Americans are willing to take baby steps toward incorporating this type of technology into their lives,” AAA’s Brannon said in a press statement. “Hands-on exposure in more controlled, low-risk environments coupled with stronger education will play a key role in easing fears about self-driving cars.”

AAA’s findings fall in line with similar discoveries made by the CTA. “At the CTA we do a lot of research and a lot of this is grounded in sentiments,” Koenig said. “Typically, the adoption curve is S-shaped – where there’s some inflection point that causes the market to grow. Where are we now with AVs? We’re in this very beginning stage.”

Koenig said CTA’s research has shown that, more than any particular technology, it will be helping consumers understand the “why” that really pushes them toward AVs. “There are key ingredients like 5G, but the consumer doesn’t care about a lot of these nitty-gritty details. When you talk about trust it really comes around to educating people about the benefits of autonomous vehicles.”

He continued: “We ask consumers, ‘Would you ride in a self-driving vehicle?’ and they say no. Why? Because they don’t understand what it is or the use case. But when we talk about it in terms of the benefits – what if it could save time or reduce accidents? – we see a vast majority endorsement of these benefits.”

The technologies behind autonomous vehicles, from artificial intelligence to 5G connectivity, are going to continue to roll out and improve. What’s going to be crucial for consumers in many experts’ minds is how those initial rollouts are handled – even if they are small and limited in scale.

“We talk about getting autonomy on the road; it’s about doing it in a judicious manner. OEMs are looking to make sure it’s done right and done safely,” Harman’s Surhigh said. “Getting from Level 3 to 4 means the whole system has to come together. And how the vehicle communicates with the occupants is an important factor.”

“We can’t let fear be the enemy of good,” Koenig added. “The AI will improve; services will get better. Very soon, as 5G networks start to build out and more in-car experiences happen, these solutions will start to take their place in that multimodal mosaic of transportation options.

“It’s going to be baby steps…We need to build trust around these initial experiences. It is still true today that word of mouth is the real gold in marketing.”

Chris Wiltz is a Senior Editor at  Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

DesignCon 2020 25th anniversary Logo

January 28-30: North America’s largest chip, board, and systems event, DesignCon, returns to Silicon Valley for its 25th year! The premier educational conference and technology exhibition, this three-day event brings together the brightest minds across the high-speed communications and semiconductor industries, who are looking to engineer the technology of tomorrow. DesignCon is your rocket to the future. Ready to come aboard? Register to attend!


From robotics, to blockchain transactions, to VR and AR-enhanced experiences, and more – meet 10 companies transforming the way we shop.

  • Despite the explosion in online shopping, physical retail still reigns supreme. According to the Commerce Department, online shopping only accounts for about 12% of sales, leaving the rest to be made up by physical stores. And though online shopping is gradually gaining on traditional shopping, there are many companies devoted to enhancing and streamlining the brick-and-mortar experience. Whether it’s a Black Friday rush or just running an errand, technology is bringing some big changes to retail in the coming years.

    Here are 10 companies whose technologies will transform the way we shop.

  • AiFi

    Santa Clara, Calif. startup AiFi is using artificial intelligence to fully automate the retail experience. Using off-the-shelf cameras, the company’s Open Autonomous Store Infrastructure and Services (OASIS) technology is able to track individual consumers through a store as well as the items they pick up off the shelf. Simply walk into the store, grab what you want, walk out, and the store will charge you for what you’ve taken via a mobile app. AiFi is looking to automate larger stores and is also creating “nano-stores” – automated kiosks that sit in airports and neighborhoods.

    For more on AiFi and its technology read our feature on the company.

    (Image source: AiFi)

  • Bossa Nova Robotics

    Bossa Nova is a combination robotics and AI analytics company. First spun off from Carnegie Mellon University’s Robotics Institute in 2005, the company made headlines when it began supplying autonomous shelf-scanning robots to Walmart stores in the US in 2017. The company’s robot, the Auto-S, roams store aisles scanning and tracking inventory – all while avoiding obstacles and customers. In 2019 Walmart expanded its partnership with Bossa Nova, pledging to place Auto-S robots in 350 stores. Bossa Nova says its newest robot, which will be released in 2020, will be slimmer and feature advanced cameras and sensors for more accurate tracking and performance.

    (Image source: Bossa Nova Robotics)

  • Brain Corp

    San Diego’s Brain Corp is introducing autonomous floor cleaning robots into retail stores. Similar to the Roomba, these robots are able to navigate retail spaces and work alongside store employees. The company’s proprietary operating system, BrainOS, is a cloud-based OS for controlling autonomous cleaning robots, and for adding autonomy to existing machines. The company has over 3,000 industrial cleaning robots currently deployed in retail. Robots that use BrainOS include: Whiz, a robotic vacuum from Softbank Robotics; the ICE RS26 industrial floor care robot (shown above) manufactured by Intelligent Cleaning Equipment (ICE); and the Minutemen Roboscrub 20.

    (Image source: Intelligent Cleaning Equipment / Brain Corp)

  • Cybera

    Cybera works behind the scenes of retail and offers IoT solutions to smaller businesses that may not be able to take advantage of more expensive technologies such as robotics or augmented reality. The company says that its software-defined wide area network (SD-WAN) platform can be a great equalizer for smaller retailers looking to compete with major e-commerce stores. By separating a network’s hardware from its control mechanism, Cybera’s SD-WAN platform lowers time-to-market and decreases networking costs – enabling smaller retailers to implement new IT technologies on a tighter budget.

    (Image source: Cybera)

  • DLT Labs

    With the added security benefits along with the use of digital currency, blockchain and retail seem like a match made in heaven. DLT Labs is the developer of DL Asset Track, a blockchain-based application for automating freight and payments. The company recently announced a partnership with Walmart Canada wherein Walmart stores will use DL Asset Track to track deliveries, verify transactions, automate payments, and reconcile inventory among stores. Using blockchain’s distributed ledger will allow Walmart to securely and easily automate various aspects of its supply chain, as well as create a secure and verifiable system of accountability through its network.

    (Image source: Walmart Canada)

  • FlowFound

    No self-respecting consumer would ever purchase a vehicle without a test drive. But test drives can also be time consuming and difficult to coordinate with a busy schedule. Atlanta-based FlowFound is betting that virtual reality (VR) and extended reality (XR) can bridge this gap. The company uses 360-degree video footage to deliver virtual test drives to consumers to allow them to remotely experience what it’s like behind the wheel using either a phone, tablet, or VR headset. The advantage for consumers is being able to at least get a preview of a vehicle before heading down to the dealership. Dealerships like the idea of being able to provide a remote service for potential customers and capture data analytics on the consumer experience. FlowFound is currently working with Ford, BMW, Hyundai, and Lexus among other major automakers.

    (Image source: FlowFound)

  • Pixvana

    Human employees are still a valuable part of the retail experience – whether it’s in warehouses or customer-facing roles. Pixvana is a creator of XR (extended reality) platforms targeted at corporate learning and training. The company creates virtual environments and training guides to help warehouse, retail, and other employees achieve better outcomes in their job training. By the time workers show up to work for the first time they’re already familiar with the layout and processes. Pixvana recently announced a partnership with sports and lifestyle gear retailer evo, wherein Pixvana will create an XR platform to train evo’s warehouse employees in anticipation of the Black Friday rush.

    (Image source: Pixvana)

  • RocketFuel

    Las Vegas-based RocketFuel develops blockchain technologies for ecommerce checkout systems. The company’s main focus is leveraging blockchain to protect consumer data and privacy as more and more retail payments are done via apps and smartphones. Since RocketFuel’s payment rails are purely blockchain-based, it means full checkout platforms can be hosted on third-party websites and even embedded into ads, without the need for a merchant server to capture payment and shipping information.

    (Image source: mohamed Hassan from Pixabay )

  • Simbe Robotics

    San Francisco’s Simbe Robotics is the manufacturer of Tally, an autonomous shelf-scanning robot for store inventory tracking and analytics. Tally works on a cloud-based system and doesn’t require infrastructure changes to be implemented into stores. The latest version of Tally also incorporates RFID tracking and machine learning in order to track tagged merchandise in stores in real time. Simbe Robotics’ machines are deployed in retail stores including the Decathlon chain of sporting goods stores (shown above). Tally is also built on the open-source Robot Operating System.

    (Image source: Simbe Robotics)

  • Ultrahaptics


    The recent merger between Ultrahaptics and Leap Motion holds promise to transform interactive retail experiences in areas including kiosks, self-checkouts, and more. Ultrahaptics creates a technology that provides mid-air haptic feedback using ultrasonic waves – allowing devices to return a touch sensation to users without the need for gloves or other wearable hardware. With the acquisition of Leap Motion, the company can now integrate its haptic feedback solution with Leap Motion’s hand tracking technology. Imagine, for example, touch-free self-checkouts in stores that allowed you to navigate menus by moving your bare hands through the air and provided haptic feedback for button presses and other actions for improved accuracy.

    (Image source: Ultraleap)


Now that the holiday season is upon us, 3D printers, both personal and commercial, are busy churning out festive decorations. But did you know that this technology can also create food, from desserts and simple side dishes to (in the near future) more complex, layered foods like mashed potatoes? Still, you don’t need a 3D printer to have a high-tech Thanksgiving with your friends and family. There are plenty of commercial IoT devices to aid the traditional cook – and even the sous chef of the future.

3D Printing 

First, let’s consider the decorations. A wide variety of colorful 3D printed Thanksgiving-themed prints are available to enliven the holiday table. If a nice centerpiece is needed, then download the CAD file for a simple turkey puzzle from MakerBot’s open 3D printing community, Thingiverse, load up the appropriate filaments, print and assemble.

Image Source: Simple Turkey Puzzle, Thingiverse, by corben33  

If you’re a bit more adventuresome, you might try downloading and printing a slightly more complex Thanksgiving turkey light box. Be careful though to use tealight LED candles, otherwise the 3D printed lightbox will probably melt. That would be a nailed-it failed-it!

Now let’s consider the Thanksgiving meal. The same technology that can create almost anything – from an entire car to a tiny microchip – can also be used to produce edible food. In fact, the global 3D food printing market is a growth industry according to a recent Research and Markets report: Global $525 Million 3D Food Printing Market Analysis & Forecast 2018-2023.

3D food printing uses different pastes and materials to recreate food by relying on technology similar to fused deposition modeling (FDM) but with a dough instead of a plastic filament. This dough may consist of chocolate, sugar, chewing gum, tomato sauce and more.

While you can’t yet 3D print an entire Thanksgiving-size edible turkey, you may soon be able to reproduce many of the side dishes, like mashed potatoes. The challenge for mashed potatoes is that current food printers use only one printhead to extrude a single material or a mixture of materials. Such a printhead cannot control the distribution of materials on a plate, whereas a multi-extruder printer could create more visually appealing layering and texturing of foods such as mashed potatoes. While still in the prototyping stages, a team of researchers at Jiangnan University in China has recently applied for a US patent on the 3D printing of mashed potatoes.
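The single-versus-multi-extruder distinction is easy to see in a toy layer planner. Everything below is hypothetical — the layer height, the materials, and the alternating pattern are illustrative, not drawn from the Jiangnan work:

```python
# A toy sketch of multi-extruder layer planning: a single printhead can
# only deposit one material (or premixed blend) per print, while a
# multi-extruder machine can vary what goes into each layer.

LAYER_HEIGHT_MM = 1.5  # hypothetical layer height for a soft paste

def plan_layers(total_height_mm, pattern):
    """Assign an extruder/material to each deposited layer, cycling
    through `pattern` (e.g. alternating potato and gravy layers)."""
    n_layers = int(total_height_mm / LAYER_HEIGHT_MM)
    return [pattern[i % len(pattern)] for i in range(n_layers)]

# Alternate two pastes across a 6 mm tall print:
print(plan_layers(6.0, ["extruder 1: potato", "extruder 2: gravy"]))
```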

Holiday desserts are a bit easier to make as they typically require a single print head. Check out the aesthetically pleasing shape of a chocolate dome from 3DByFlow. I haven’t yet sampled their wares, but the chocolate sure looks good. ByFlow, founded in the Netherlands as a family business in 2015, is one of the companies in the growing market of 3D food printing.

IoT Eases Traditional Cooking Chores

Don’t yet have a 3D printer but still want to have a high-tech Thanksgiving? The IoT is here to help. Consider June, a connected oven that lets you control your baking and view your food from a smartphone. Or how about cooking that turkey (or Tofurky for your vegetarian friends and family members) in the “Crock-Pot Smart Slow Cooker.” It comes with a companion application for your smartphone to adjust time, temperature, and other factors.

When it comes to cooking, one of the more futuristic gadgets will soon be the solid-state RF cooker. The advantages of RF technology over traditional magnetron-based microwave ovens are significant. A magnetron oven generates one power level at a time. In contrast, a solid-state cooker based on RF technology uses both power-level control and frequency tuning to adjust the cooking conditions throughout the oven. In other words, you can cook a variety of different foods at the same time, e.g., turkey slices, potatoes and gravy, vegetables, and the like. Such precise control of cooking temperatures and locations on the plate is made possible by a number of solid-state power amplifiers and antennas with closed-loop control of the RF system.
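The frequency-tuning idea can be sketched in a few lines. This is a toy illustration only: the reflected-power model below is invented, and a real oven would measure reflection through its amplifier hardware rather than compute it:

```python
# A highly simplified sketch of the closed-loop idea behind solid-state
# RF cooking: sweep candidate frequencies, check how much power comes
# back unabsorbed, and heat at the frequency where absorption is best.

def reflected_power_fraction(freq_mhz):
    """Stand-in for a real reflected-power measurement; in hardware this
    would come from a directional coupler on the power amplifier."""
    # Pretend the load absorbs best near 915 MHz in this example.
    return min(1.0, abs(freq_mhz - 915) / 100)

def pick_heating_frequency(candidates_mhz):
    """Choose the candidate frequency with the least reflected power."""
    return min(candidates_mhz, key=reflected_power_fraction)

print(pick_heating_frequency([902, 915, 928]))  # settles on 915
```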

While this technology has been available for a decade or so, it’s only recently been ready for prime time. Foremost in this effort has been Goji Food Solutions, an Israeli company that developed an oven using solid-state power chips, RF energy devices and proprietary software. Together, these technologies and software allow the Goji-based oven to cook a variety of foods on a single plate even in the presence of utensils and metal cups.

For now, the first RF cooking appliances that use Goji’s technology will appear in the industrial market, but competition in the commercial arena is already emerging. For example, Chinese appliance manufacturer Midea, in partnership with NXP Semiconductors, is developing the Semiconductor Heating Magic Cube, built around NXP’s LDMOS RF power transistors that drive the RF cooking module.

Image Source: IEEE IMS Show 2015 (Freescale-NXP demonstration, JB)

Whether you have access to a futuristic RF cooker, an IoT-enabled traditional oven or crock-pot, or a 3D food printer, technology can make this year a high-tech cooking adventure. Just be sure to include a few festive decorations and you will have nailed it.

AEye’s iDAR combines LiDAR with HD cameras and AI to give vehicles a sense of human-like perception. (Image source: AEye)

AEye’s president, Blair LaCorte, thinks his company’s sensor technology is not unlike the Indoraptor from Jurassic World: Fallen Kingdom. If you’ve seen the movie, you’ll recall the genetically engineered dinosaur, which attacked on command using a laser guidance system.

LaCorte showed a clip from the film to an audience at Automobility LA, where he discussed how AEye’s fusion of LiDAR and HD cameras – intelligent detection and ranging (iDAR) – is going to take sensing in autonomous vehicles to a level that exceeds even human capabilities.

LaCorte said he liked to show that clip – first to wake people up – but also because it demonstrates a key tenet behind AEye’s technology development. “It’s not enough to detect, you have to identify a target and acquire a target,” he said.

The team behind AEye comes from a defense background and previously worked on creating missile defense systems. Working in the defense industry taught them a lot about operating in scenarios “where you can’t afford to miss anything entering your scene and you have to determine what objects are quickly,” LaCorte said.

But AEye’s latest mission hits closer to home. The company wants to bring military-grade perception to autonomous vehicles – and it says iDAR can do just that. Rather than creating a new sensing solution, LaCorte said AEye has taken cues from the biology of the human visual cortex to develop a system that not only senses objects, but perceives them – “a perception system that can out-perceive the human eye,” he said. “We’re using human biomimicry and, instead of finding things to kill, we’re finding out how not to kill things.”

Current LiDAR systems are great for identifying the presence of objects in the road. The problem is that they do not have an understanding of context. They don’t take environmental conditions, or how they may change, into account. They also can’t balance competing priorities. As far as LiDAR is concerned, a trash bag rolling across the road is no more important than a human or animal.

“Systems today bring in data passively,” LaCorte said. “They spend as much time on the sky as on the leaves on a tree, or the girl crossing the street.”

iDAR overcomes this limitation by giving systems an understanding of the importance of objects. By combining a camera and LiDAR, iDAR is able to acquire additional information such as color and depth as it senses. With the additional data it captures, combined with computer vision algorithms, iDAR is able to classify an object and estimate the object’s center of mass, width, height, and depth, as well as its speed and direction of travel.
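Conceptually, attaching camera data to a LiDAR return at acquisition time might look like the sketch below. The field names are hypothetical — this is not AEye’s API, just an illustration of pairing depth with color:

```python
# Illustrative fusion of a LiDAR return with a co-registered camera
# pixel, echoing the idea of enriching each sensed point with color.

from dataclasses import dataclass

@dataclass
class FusedPoint:
    x_m: float   # LiDAR position, meters
    y_m: float
    z_m: float
    rgb: tuple   # color sampled from the matching camera pixel

def fuse(lidar_point, camera_pixel_rgb):
    """Attach camera color to an (x, y, z) LiDAR return."""
    x, y, z = lidar_point
    return FusedPoint(x, y, z, camera_pixel_rgb)

p = fuse((12.0, -1.5, 0.8), (64, 128, 32))
print(p.rgb)  # the point now carries color as well as depth
```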

And AEye emphasizes this is all done on the edge within the sensor, not via sensor fusion or on the cloud. “You’re actually getting information at the point of acquisition,” LaCorte said. Doing this also cuts down on the processing time and compute power requirements of traditional LiDAR, he added. “By looking at how we comprehend things we’ve been able to teach a computer to comprehend things better than a human does.”

Since its system needs less data than traditional LiDAR and requires less processing, AEye is promising that iDAR can increase the speed of a car’s artificial perception up to 10 times, and also reduce power consumption by five to 10 times.

But the secret that changes everything, LaCorte said, is in iDAR’s motion forecasting capability. The system will not only recognize where an object is, but also where it is headed. Is that other car about to drift into your lane? Is that ball bouncing down the sidewalk heading into the street? Maybe that dog is about to dart across traffic. With iDAR equipped, AEye says it’s possible for a self-driving car to detect and respond to these sorts of things.
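The simplest form of such motion forecasting is a constant-velocity extrapolation — a minimal sketch, nothing like the richer models a production system would use:

```python
# Minimal constant-velocity motion forecast: the basic version of the
# "where is this object headed?" prediction described above.

def forecast_position(pos, vel, dt_s):
    """Predict an object's (x, y) position dt_s seconds ahead, assuming
    it keeps its current velocity (meters per second)."""
    x, y = pos
    vx, vy = vel
    return (x + vx * dt_s, y + vy * dt_s)

# A ball at (2 m, 0 m) rolling toward the road at 1.5 m/s in y:
print(forecast_position((2.0, 0.0), (0.0, 1.5), 2.0))  # (2.0, 3.0)
```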

The AE110 is AEye’s first hardware product for autonomous vehicle applications. (Image source: AEye)

“No one has ever been able to do this at the edge of the network,” LaCorte said. “With this capability you give [engineers] the tools they need to match heterogenous nodes so they can decide where the decision is made, whether that be in the sensor, the trunk [of the vehicle], or in the cloud. This is about data being turned into information.”

AEye currently offers one iDAR hardware product, the AE110, which fuses 1550-nanometer, solid-state agile MOEMS LiDAR with a low-light HD camera and embedded AI for autonomous vehicle applications. In 2020 the company is planning to roll out another sensor, the AE200, targeted at Level 3 advanced driver assistance system (ADAS) applications. The company says the AE200 will be a modular system that provides long-range sensing up to 200 meters and a short-range performance configuration of 50 meters, both at 10% reflectivity.

“We believe the power and intelligence of the iDAR platform transforms how companies can create and evolve business models around autonomy without having to wait for the creation of full level 5 robotaxis,” LaCorte said. “Automakers are now seeing autonomy as a continuum, and have identified the opportunity to leverage technology across this continuum. As the assets get smarter, OEMs can decide when to upgrade and leverage this intelligence. Technology companies that provide software-definable and modular hardware platforms now can support this automotive industry trend.”

