Hyundai Debuts a Vertical Take-Off Air Taxi

The full-size S-A1 mock-up displayed at CES. Image source: Hyundai Motor Co.

Carmaker Hyundai Motor Co. revealed plans at the Consumer Electronics Show in Las Vegas to manufacture electric Vertical Take-Off and Landing (eVTOL) tilt rotor aircraft to serve as air taxis for a planned Uber passenger service called Uber Elevate.

The S-A1 looks like a cross between a radio-controlled drone and the Bell V-280 Valor military tilt rotor aircraft. Like a drone, the Hyundai S-A1 has numerous rotors powered by electric motors. Like the V-280, the S-A1 can be piloted by a human, carries passengers, and tilts its rotors forward for high-speed flight between take-off and landing.

Bell V-280 Valor tilt rotor aircraft. Image source: Bell Textron Inc.

The S-A1 has a cruising speed of 180 mph, an operating ceiling of 2,000 feet, and enough battery capacity for a 60-mile range. The company says it will be able to recharge in just five to seven minutes. According to Hyundai, the four main rotors provide redundancy in case of a failure while producing less noise than fewer, larger rotors would. In addition to the four tilting rotors, there are two pairs of what appear to be counter-rotating rotors fixed in the horizontal position.

As with the V-280, and the U.S. Marines’ V-22 Osprey, the S-A1 tilt rotor pivots the rotors to face forward in flight, relying on wings for lift while the rotors serve as propellers. With the rotors in the upward-facing position, the S-A1 can take off and land like a helicopter or a recreational drone.

Image source: Hyundai Motor Co.

The S-A1 will employ a human pilot initially, but Hyundai plans for the aircraft to become autonomous eventually. It has seats for four passengers, so it is much smaller than the V-280, which carries a dozen passengers at speeds as high as 320 mph. Hyundai has provided no technical details on the S-A1’s battery capacity or the power of the motors.

Unlike aircraft manufacturers such as Bell Textron, Inc., maker of the V-280, Hyundai is experienced at building vehicles in volume and at low cost. Hyundai has also been active in developing electric vehicles, which gives it a technology foundation to apply to this air taxi concept.

Image source: Hyundai Motor Co.

“Hyundai is our first vehicle partner with experience of manufacturing passenger cars on a global scale,” said Eric Allison, head of Uber Elevate. “We believe Hyundai has the potential to build Uber Air vehicles at rates unseen in the current aerospace industry, producing high quality, reliable aircraft at high volumes to drive down passenger costs per trip.” 

Though the partners announced no timetable, Hyundai expects that Uber will develop a transportation network that will make the S-A1 a viable product. “We are looking at the dawn of a completely new era that will open the skies above our cities. Urban Air Mobility will liberate people from grid-lock and reclaim time for people to invest in activities they care about and enjoy,” said Jaiwon Shin, Executive Vice President and Head of Urban Air Mobility Division at Hyundai Motor Company.

Dan Carney is a Design News senior editor, covering automotive technology, engineering and design, especially emerging electric vehicle and autonomous technologies.

Top 10 Tech Failures From 2019 That Hint at 2020 Trends
  • As the last year of the last decade, 2019 had a lot to live up to. Within the span of 10 short years, service apps like Uber, Lyft, Airbnb and others on mobile phones became big business. Mobile phone companies introduced amazing personal features like voice assistants (e.g., Siri and Alexa), cloud connections for fast video streaming, and very high-resolution HD cameras. Not to be outdone, the automobile was transformed with automation tech and electrification. A Tesla electric vehicle even made it into space.

    Space technology flourished in the last decade with the commercialization of space rockets, the launch of hundreds upon hundreds of communication satellites and the increasing popularity of Cubesats. Back on earth, homes and buildings became smarter while alternative forms of energy continued to improve in efficiency. And the list goes on.

    But there were several notable failures in the last decade, many seeming to culminate in 2019. Here is the short list of the 10 tech failures most worthy of mention, in no particular order.

  • #1 Glitchy Spacecraft Launch

    Boeing suffered several major setbacks this year. The first one was an incomplete demonstration flight of its new astronaut capsule. The mission of Boeing’s CST-100 Starliner spacecraft began successfully but suffered technical problems that prevented it from reaching the International Space Station (ISS). Many observers believe that the Starliner capsule on top of an Atlas rocket simply burned too much fuel as it climbed into space, leaving an insufficient amount to reach the ISS. Some have suggested the failure was from a glitchy timer system that turned off the rocket thrusters too soon.

    The demonstration test wasn’t a complete failure as the Starliner did land successfully in the deserts of New Mexico.

  • #2 Andromeda Strain revisited?

    Remember The Andromeda Strain? It was a 1969 techno-thriller novel by Michael Crichton centered on the efforts of a team of scientists investigating the outbreak of a deadly extraterrestrial microorganism in Arizona.

    Fast forward to 2019. A company in Israel launched its first lunar lander, which unfortunately crash-landed on the moon. The small robotic spacecraft, called Beresheet, was created by SpaceIL and Israel Aerospace Industries (IAI). It failed just moments before landing on the moon.

    This was an unmanned operation, but not one devoid of life. A US-based nonprofit had added tardigrades, or water bears, to the capsule. These microscopic, eight-legged creatures could survive in a dormant state through harsh conditions, and maybe even on the moon.

    In other words, earth-based lifeforms have now been introduced to the moon’s ecosystem. Without some water, the tardigrades aren’t likely to revive and spread. But this failure highlights the need for planetary protections – both on the moon and earth.

    It should be noted that the goal of the Arch Mission Foundation was not to contaminate the moon but rather to “create multiple redundant repositories of human knowledge around the Solar System.” The foundation tests out technologies for long-lasting archives, like securing information in DNA strands or encapsulating insects in artificial amber. In addition to water bears, the Arch’s payload included nickel sheets nanopatterned with thousands of pages of Wikipedia and other texts.

    One of Arch’s first missions was launched by SpaceX on the Falcon Heavy rocket and is now in an orbit around the Sun that should last for millions of years. The first books in the Solar Library were Isaac Asimov’s Foundation trilogy. Can you guess where they are located? They were placed in the glovebox of the cherry-red Tesla Roadster now orbiting the Sun.

  • #3 Communication Failures (again)

    Both Boeing and the FAA have been cited for oversight breakdowns that contributed to the 737 MAX failures. But the actual cause of the tragedy that resulted in the crash of two Boeing 737 MAX aircraft appears to have been broad failures in the automated flight-control system on the new planes. The report by the Joint Authorities Technical Review panel said that assumptions about critical aspects of the plane’s design were “not adequately reviewed, updated, or validated.”

    This failure to communicate and incorporate warnings from the engineering teams is a common problem with very complex, modern systems, e.g., the Space Shuttle Challenger and others.

  • #4 Disappearing Bitcoin Miners

    While 2019 was overall a profitable year for the semiconductor chip development market, there were a few noticeable declines. One was in the system-on-chip (SoC) devices made specifically for bitcoin mining. The cost of mining for bitcoins dramatically increased in 2019, leading to a drop in demand for SoC-based mining hardware.

    In essence, it took much more effort for bitcoin miners to solve the equations required to validate transactions on the Bitcoin network. This increase in mining difficulty reflects the increased competition.
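
    To see what “more effort” means in practice, here is a minimal proof-of-work sketch in Python (illustrative only – real Bitcoin mining hashes block headers against a numeric target, not hex-zero prefixes): each additional leading zero multiplies the expected number of hash attempts by roughly 16.

    ```python
    import hashlib
    import itertools

    def mine(block_data: str, difficulty: int) -> int:
        """Find a nonce whose SHA-256 hash of (data + nonce) starts
        with `difficulty` hex zeros. More zeros = exponentially more work."""
        target = "0" * difficulty
        for nonce in itertools.count():
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce  # proof of work found

    # Each extra zero multiplies the expected attempts by ~16.
    for difficulty in range(1, 5):
        nonce = mine("example transactions", difficulty)
        print(f"difficulty {difficulty}: found after {nonce + 1} attempts")
    ```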

    Another slowdown was in the market for automotive chips and electronics, as companies and drivers realized that autonomous car technology won’t really be ready for several more years. This corresponds well to Gartner’s famous “trough of disappointment” portion in its hype cycle for emerging technologies.

  • #5 Cloud Buckets

    A new type of cybersecurity issue has emerged in which millions of people have had their personal information exposed through cloud file-storage systems known as buckets. These storage areas are accessed by a variety of web service applications, and when they are misconfigured they behave like public file folders, exposing whatever user information they contain.

    Placing sensitive user data in the cloud lets companies offload their security to big firms like Google, Apple, Amazon, or Microsoft. The problem is that the buckets are not configured by these firms but rather by the companies that use their cloud networks.

    Not all of these companies are storing their customer information properly. This lack of security is easy pickings for identity thieves. It is an example of readily available information that doesn’t require any hacking.
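
    As an illustration of how little “hacking” is involved, the sketch below (assuming the AWS boto3 library, valid credentials, and a hypothetical bucket name) checks whether a bucket’s access control list grants read access to everyone on the internet:

    ```python
    import boto3  # AWS SDK for Python

    # "AllUsers" is the ACL group that means "anyone on the internet."
    ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

    def bucket_is_public(bucket_name: str) -> bool:
        """Return True if the bucket's ACL grants access to everyone."""
        s3 = boto3.client("s3")
        acl = s3.get_bucket_acl(Bucket=bucket_name)
        return any(
            grant["Grantee"].get("URI") == ALL_USERS_URI
            for grant in acl["Grants"]
        )

    # Hypothetical bucket name, for illustration only:
    if bucket_is_public("example-customer-data"):
        print("WARNING: bucket contents are readable by anyone")
    ```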

  • #6 Hacks of the Year

    Speaking of hacks, this year saw even more cybersecurity breaches. In 2018, there were 500 million personal records stolen, according to the Identity Theft Resource Center. But that number was minuscule compared to the 7.9 billion records exposed in 2019 by over 5,000 breaches, as reported by Risk Based Security. Compared to the 2018 Q3 report, the total number of 2019 breaches was up 33.3 percent and the total number of records exposed more than doubled, up 112 percent. Here’s just a small sampling of the more infamous breaches (more details here):

    > ElasticSearch Server Breach

    > Canva Data Breach

    > Facebook App Data Exposure 

    > Orvibo Leaked Database

    > Social Media Profiles Data Leak

    Sadly, the common theme in many of these data exposures is that data aggregators obtained and used personal information in a way the owners never imagined or consented to. This is a legal problem as much as a technical one.

  • #7 Google Glass

    In 2019, Google announced a new $999 Glass augmented reality headset that looked suspiciously like the failed Google Glass of the past.

    Early in 2012, Google co-founder Sergey Brin debuted Google Glass. A year later, the founder and head of the Google Glass Project, Babak Parviz, delivered a keynote about the technology at the IEEE Hot Chips event at Stanford.

    One of the ongoing leading smartphone trends is ever-improving screen resolution and ever-larger screen size. During his keynote, Parviz argued that there was a physical limit to this trend, but that Glass offered the next evolution in display form factor, i.e., immersion in one’s surroundings. This would be especially important in augmented reality applications.

    Originally, Google Glass was a standalone unit (not yet cloud-based) that included internet access, voice controls, and a camera for pictures and videos. It accomplished all of this with dual-core processors running at more than 1 GHz. Five MEMS sensors captured all the environmental data, and a two-dimensional touch panel sat on the side of the glasses.

    Why was this technology a failure? It wasn’t because of the technology, but rather because it wasn’t clear to customers what problem it solved or why they needed it. Additionally, many felt it was intrusive, as a user of the device could take pictures and short videos of people without their knowledge.

    In January 2015, Google announced that it would no longer be developing Google Glass. But that wasn’t the end of the project. Instead, Google pivoted to the business sector, launching Glass Enterprise Edition for workplaces like factories in 2017. This year, Google announced the new Glass augmented reality headset mentioned above.

  • #8 Folding Phone

    Samsung’s Galaxy Fold was billed as a new dawn in display technology. The phone opened up into a 7.3-inch Dynamic AMOLED display.

    Unfortunately, the company had to postpone the launch of the folding phone after early review models broke, delaminated, and got filled with gunk. The problems seemed to stem from a weak hinge as well as debris that found its way inside the device.

    As with many new technologies, the price tag also presented a barrier to anyone but early adopters. A reengineered and improved version is now on sale for nearly $2,000.

  • #9 Machine-Bias or Garbage-in, Garbage-out

    The challenge of machine bias came clearly into focus in 2019. Similar to human bias, machine bias occurs when the learning process for a silicon-based machine makes erroneous assumptions due to the limitations of a data set and pre-programming criteria. One example of machine bias was recently revealed in Apple’s new credit card, which contained an algorithm to decide how trustworthy (or risky) a user might be. This evaluation used to be done by trained humans but now is often performed by AI-based algorithms.

    Apple’s credit card was shown to have a gender bias: males were more likely to get a higher credit limit than females. This bias was highlighted when a male entrepreneur was assigned a spending limit 10 times higher than that of his wife, even though they share a joint account.

    How does a machine get a bias? A report from IBM Research outlines two main ways AI systems could inherit biases. First, the AI software might contain errors and dependencies. Second, the data set from which AI learns its task may have flaws and bias. These data points come from the real world which contains many biases, e.g., favoring white men to the exclusion of women and minorities. Algorithms are only as smart as the data you feed them. This is a modern update of the old computer data expression, “garbage-in, garbage-out.”
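
    A toy example makes the point concrete. The sketch below (fabricated numbers, purely illustrative) “trains” the simplest possible model – per-group approval rates – on historical decisions that were themselves biased; the model faithfully replays the bias:

    ```python
    # Toy illustration of "garbage-in, garbage-out" (fabricated data).
    # Historical credit decisions, biased against group B:
    history = [
        ("A", True), ("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False), ("B", False),
    ]

    # "Training": learn the approval rate per group from past decisions.
    approval_rate = {}
    for group in ("A", "B"):
        decisions = [ok for g, ok in history if g == group]
        approval_rate[group] = sum(decisions) / len(decisions)

    # "Inference": a model built on group membership alone simply
    # replays the historical bias it was fed.
    for group, rate in sorted(approval_rate.items()):
        print(f"group {group}: learned approval rate = {rate:.0%}")
    # group A: learned approval rate = 75%
    # group B: learned approval rate = 25%
    ```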

  • #10 Software App Failures

    No list of tech failures would be complete without mention of the apps that didn’t make it. The range of the applications that failed is wide.

    Consider first the British Airways (BA) glitch, in which the airline’s computer system went down completely during a peak travel season. Over a hundred BA flights were cancelled and nearly 300 delayed. Thousands of passengers were affected. Sadly, this wasn’t the first time the system had failed, which suggests a systemic problem that has not been properly addressed by management.

    Or how about the Facebook 2019 failure that prevented users from viewing or loading images from the newsfeed? Several other social media apps had a similar problem, including Instagram, WhatsApp, and Messenger. In each case, users were prevented from sending messages, media files, and the like. Facebook claimed the problem was the result of an accident during routine maintenance.

    Other app failures or hacks from 2019 included Apple’s FaceTime bug and the Ring security camera intrusions. The latter may have been more of a customer problem, as Ring notes that the system invasion was likely the result of hackers gaining access to family accounts through weak or stolen login credentials.

What Is Middle-Out Systems Engineering?

In school, systems engineering is taught as a top-down process, but in actual practice it also involves bottom-up techniques. In the former, the desired system is broken down or partitioned into smaller subsystem parts so that requirements, functions, and architectures can be decomposed to a point where engineers can begin to build hardware, software, networks, etc.

Conversely, the bottom-up approach begins with the integration of lower-level hardware, software, network, and other components. These subsystems are tested and built up until the original desired system is created. Almost all of the traditional engineering disciplines (like electronic, mechanical, software, and network engineering) follow a subsystem- or component-based bottom-up approach to design and test.

Most engineers and managers in the real world follow a middle-out or inside-out approach. As the name implies, the “middle-out” systems engineering method consists of concurrent bottom-up and top-down systems engineering activities. The bottom-up tasks are built on a detailed knowledge of component parts and subsystems.  The concurrent top-down activities will preserve the customer-focused, requirements-driven emphasis that keeps the system development in a functional domain.

One of the key benefits of the middle-out approach is the traceability afforded by combining the top-level requirements-function-synthesis process with the known requirements and functions from bottom-level implemented system elements. Both executive level and component/subsystem engineers are brought together in this activity to ensure the traceability of requirements. Critical members from both groups will then be involved in the design and integration decisions.
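
A minimal sketch of that traceability bookkeeping (a hypothetical data model, not any specific tool): top-down requirements decompose into functions, bottom-up components declare the functions they implement, and the middle-out check is that the two meet with nothing left dangling.

```python
# Top-down: requirements decomposed into functions.
requirement_to_functions = {
    "REQ-1 Carry four passengers": ["seating", "payload-structure"],
    "REQ-2 60-mile range":         ["battery", "propulsion"],
}

# Bottom-up: existing components and the functions they implement.
component_to_functions = {
    "cabin-module": ["seating", "payload-structure"],
    "battery-pack": ["battery"],
}

required = {f for funcs in requirement_to_functions.values() for f in funcs}
implemented = {f for funcs in component_to_functions.values() for f in funcs}

# Middle-out check: every required function traces to a component.
for gap in sorted(required - implemented):
    print(f"Unimplemented function: {gap}")   # -> propulsion
```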

Image Source: Wiley – JB Systems

Several experts and practitioners agree – to varying degrees – that most real world systems engineering projects follow a middle-out approach.

“I agree that many projects ‘should’ take middle-out approaches since so few projects today are creating new systems from complete scratch,” observed Cary Bryczek, Principal Solutions Architect for aerospace and defense for Jama Software. “Things like modernization efforts, developing product variants, and the Internet of Things all require consideration of a future environment that is itself uncertain. But I also still see many projects in safety-critical spaces – like defense and automotive – taking traditional top-down systems engineering approaches. I suspect a lot of this is driven by contract vehicles.”

Mark Sampson, product manager at Siemens, agrees that a majority of projects involve changes to existing products. However, he prefers the phrase inside-out over middle-out as the former focuses on understanding the impact of a change (e.g., add, remove, or update).

“Today that development process relies on knowledge, talking with experienced people, etc., rather than models to understand the impacts,” explains Sampson. “Of course, it all gets much easier if you’ve designed your product for evolving changes by considering up front what the architecture of the product would be and where the possible areas of change are over time.”

Regardless of the name, most systems engineers must meet both top-down corporate objectives and bottom-up product requirements. Fortunately, the growth of the Model-Based Systems Engineering (MBSE) paradigm supports a middle-out approach. Models can be used in both the top-down, multiple-domain architectural and requirements design as well as the bottom-up simulation and prototyping of preliminary system, subsystem, and component evaluation and verification. Together, these models provide a platform that combines high-level system models with specific component- and subsystem-oriented executable models.

The middle-out approach is familiar in the electronics space. Consider the PCB design tool space, where vendors are now being driven both from the top down and from the middle out, notes Paul Dempsey, co-founder of the Tech Design Forum. “For example, Altium community beta members have explicitly reached out to the maker community for many middle-out activities.”

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

Keynotes Worth Seeing at DesignCon 2020

What do these topics have in common?

  1. The Future of Fiber Optic Communications: Datacenter and Mobile
  2. Design for Security: The Next Frontier of Smart Silicon
  3. Microchips in Space: How Device Design Enables Amazing Astronomy

The answer is that all of them use microchips and microsystems, but in very different ways and for differing motivations.

In the first one, complex systems-on-chip (SoCs) are integrated with fiber optics to enable dizzyingly fast high-speed connections between processors, memory storage, and interfaces in data centers and mobile devices across the world.

With so much going on in the world of fiber optic communications, it’s important for designers to keep up to date with the basic engineering issues. The catalyst for this interest is that the global fiber optics market is predicted to grow from 5 billion USD in 2018 to 9 billion USD by the end of 2025.

In his upcoming keynote at DesignCon 2020, Chris Cole, VP of Advanced Development at II-VI, will discuss past trends and new developments in fiber optics for datacenter and mobile applications. Two ongoing trends are the replacement of copper wires by fiber optics in the data center and the replacement of direct detection by coherent detection in optical systems.

Cole will also explain the major limitations of power and density in communications, and new technologies like Silicon Photonics (SiPh) and co-packaging. Silicon photonics involves the study of optical properties of the group-IV semiconductor and how it can be used to generate, manipulate and detect light. Silicon is prevalent in photodetectors and solar cells, among other technologies.

To learn more, visit: The Future of Fiber Optic Communications: Datacenter and Mobile

Image Source: Imec
Tutorial: What Are the Differences Between Force, Torque, Pressure and Vacuum?

Most second-year university engineering students can easily explain the differences between force, torque, and pressure. The reason for their confident answers is that engineering schools typically require a term of study in both static and dynamic forces by a student’s sophomore year. From that point on, however, further studies in these areas are usually confined to the aerospace, civil, and mechanical engineering disciplines. Few electronics engineers need or will take advanced mechanics courses.

But modern advances in material properties and device miniaturization, as in micro-electro-mechanical systems (MEMS) and sensors, mean that force, torque, and pressure are relevant across all of the major disciplines. A quick technical review will help remind everyone of these basic concepts.

Force

Simply put, a force is a push or a pull upon an object. A force can cause an object with mass to change its velocity, i.e., to accelerate. Since a force has both magnitude and direction, it is a vector quantity.

The unit of force in the International System of Units (SI) is the newton. One newton is defined as the force that gives a mass of one kilogram an acceleration of one meter per second per second. In terms of an equation, force equals mass times acceleration (F = ma).

Strictly speaking, Newton’s Second Law of Motion defines force as the change in momentum over time, not mass times acceleration. But the momentum equation reduces to F = ma when mass is constant, which covers most basic engineering calculations.
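
In symbols, the one-line reduction from the momentum form to F = ma (standard calculus, assuming constant mass) is:

```latex
\vec{F} \;=\; \frac{d\vec{p}}{dt} \;=\; \frac{d(m\vec{v})}{dt}
\;=\; m\,\frac{d\vec{v}}{dt} \;=\; m\vec{a}
\qquad \text{(for constant } m\text{)}
```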

Sometimes the word “load” is used instead of force. Civil and mechanical engineers tend to make calculations based on the load a system (e.g., a bridge) must resist, i.e., the force of gravity from both the weight of the bridge and the vehicles driving over it.

Newton’s Laws have been called the basis for space flight. According to NASA, understanding how space travel is possible requires an understanding of the concepts of mass, force, and acceleration as described in Newton’s Three Laws of Motion. Consider a space rocket, in which the pressure created by the controlled explosion inside the rocket’s engines results in a tremendous force known as thrust. The gas from the explosion escapes through the engine’s nozzles, which propels the rocket in the opposite direction (Law #3) with a force following F = ma (Law #2) that lifts the rocket into space. Assuming the rocket travels beyond Earth’s atmosphere, it will continue to move through space even after the propellant gas is gone (Law #1).

Newton’s Three Laws of Motion

1. Every object in a state of uniform motion will remain in that state of motion unless an external force acts on it.
2. Force equals mass times acceleration (F = ma).
3. For every action there is an equal and opposite reaction.

Torque

The first university course in static forces is usually followed by a course in dynamic forces, in which the idea of rotational force, or torque, is introduced. Torque is the tendency of a force to rotate or twist an object about an axis, fulcrum, or pivot. It is the rotational equivalent of linear force.

Formally, torque (or the moment of force) is the product of the magnitude of the force and the perpendicular distance of the line of action of force from the axis of rotation.  The SI unit for torque is the newton metre (N•m). 
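
In standard vector notation (a conventional restatement, not taken from the article), torque is the cross product of the lever arm and the force, so only the force component perpendicular to the arm contributes:

```latex
\vec{\tau} = \vec{r} \times \vec{F},
\qquad |\vec{\tau}| = r\,F\sin\theta
```

For example, a 10 N force applied perpendicular (θ = 90°) at the end of a 0.5 m wrench produces a torque of 5 N·m.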

Image Source: Wikipedia by Yawe (Public Domain)

Deriving the equation for torque is often done from a purely force-based perspective. But it can also be accomplished by looking at the amount of work required to rotate an object. This was the approach Richard Feynman used in one of his lectures on rotation in two dimensions.

“We shall get to the theory of torques quantitatively by studying the work done in turning an object, for one very nice way of defining a force is to say how much work it does when it acts through a given displacement,” explained Feynman.
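
In symbols (a standard restatement of the argument, paraphrased here): a tangential force F_t applied at radius r moves through an arc ds = r dθ during a small rotation dθ, so the work done is

```latex
dW = F_t\,ds = F_t\,r\,d\theta = \tau\,d\theta
```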

Feynman was able to show that, just as force times distance is work, torque times angle equals work. This point is highlighted in several avionic and aeronautical examples from NASA’s Glenn Research Center, where NASA designs and develops technologies for aeronautics and space exploration. Force, torque, and pressure concepts continue to exert their influence far beyond the earth’s atmosphere. Consider the release of a large satellite like the Cygnus cargo craft from the International Space Station (ISS). The satellite is connected to a large robotic arm that removes it from the ISS prior to release into space. The robotic arm acts like a huge moment arm, subject to the forces and torques acting in space.

Image Source: NASA Glenn Research Center

Pressure

Pressure is the force per unit area applied in a direction perpendicular to the surface of an object. Many of us are familiar with gauge pressure from measuring tire pressures. Gauge pressure is the pressure relative to the local atmospheric or ambient pressure, in contrast to absolute pressure, the actual value of the pressure at any point. This will make more sense shortly.

The SI unit for pressure is the pascal (Pa), equal to one newton per square meter (N/m²). Pressure is also measured in non-SI units such as the bar and psi.

In his lecture on the kinetic theory of gases, Feynman introduced the concept of pressure by thinking about the force needed for a piston plunger to contain a certain volume of gas inside a box. The force required to hold a plunger or lid of area A in place, divided by that area, is the pressure. In other words, pressure is equal to the force that must be applied on a piston divided by the area of the piston (P = F/A).
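
A quick worked example in Python (numbers chosen only for illustration): how much force one atmosphere of pressure exerts on a modest-sized piston lid.

```python
ATMOSPHERE_PA = 101_325     # standard atmospheric pressure in pascals (N/m^2)
piston_area_m2 = 0.01       # a 10 cm x 10 cm piston lid

# P = F / A  =>  F = P * A
force_n = ATMOSPHERE_PA * piston_area_m2
print(f"Force on the lid: {force_n:.0f} N")  # ~1013 N, the weight of ~103 kg
```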

Image Source: CalTech – Feynman Lectures

Applications for pressure technologies exist both on and off the planet. In space, however, pressure is so low that it may almost be considered non-existent. That’s why engineers often talk about vacuum rather than pressure in space applications. A vacuum is any pressure less than the local atmospheric pressure. It is defined as the difference between the local atmospheric pressure and the pressure at the point of measurement.

While space has a very low pressure, it is not a perfect vacuum. It is an approximation, a place where the gaseous pressure is much, MUCH less than the Earth’s atmospheric pressure.

The extremely low pressure in the vacuum of space is why humans need space suits to provide a pressurized environment. A space suit provides air pressure to keep the fluids in our body in a liquid state, i.e., to prevent our bodily fluids from boiling due to low pressure (via PV = nRT). Like a tire, a space suit is essentially an inflated balloon that is restricted by some rubberized fabric.

Homework question: Why didn’t the tires on the Space Shuttle burst while in space, i.e., in the presence of a vacuum? Look for the answer in the comments section.

In summary, force, torque, pressure and vacuum are important physical concepts that – thanks to advances in material sciences and MEMS devices – cross all of the major disciplines. Further, these fundamental concepts continue to have relevance in applications like space systems among many others.

The 15 Most Influential Technologies of the Decade

From breakthroughs and new innovations to established technologies, these are the inventions, gadgets, and trends that shaped the last decade.

  • It’s been a busy decade in the tech space. New innovations emerged and older ones finally matured in ways that have had a major impact. The 2010s brought us the rise of 3D printing, the rebirth of VR, and an explosion in AI technologies. The health industry was all about wearables. And a digital currency gold rush made us rethink encryption.

    As we prepare to enter the 2020s, let’s take a look back at how far we’ve come.

    Here are the 15 technologies, gadgets, and trends that had the biggest impact on the industry, and our lives, in the last decade.

    (Image source: Pete Linforth from Pixabay  )

  • 3D Printing

    A technology first developed decades ago has become as common a phrase in manufacturing as injection molding or CNC machining. 3D printing has grown from a novel way to create tchotchkes and plastic parts into a serious technology with applications ranging from automotive and aerospace to medicine. 3D printing has become a serious option for prototyping and small-scale production, and the rise of new materials and even metal 3D printing has expanded its applications. We may only be a generation or two away from seeing patients with 3D-printed organs in their bodies.

    (Image source: Airwolf 3D)

  • Artificial Intelligence

    You couldn’t open a newspaper in the 2010s without some sort of AI-related headlines. Whether it was IBM Watson winning at Jeopardy, fears of robots taking jobs, or the rise of autonomous vehicles, the last 10 years have put AI on everyone’s mind like never before. AI has potential to transform nearly every industry on the planet and already has in many cases. And the growing ethical and moral concerns around the technology only further demonstrate that it’s here to stay.

    (Image source: Gordon Johnson from Pixabay  )

  • Blockchain

    Bitcoin went from the currency of choice for Internet drug dealers to sparking a full-on gold rush as investors looked to cash in on Bitcoin’s skyrocketing value. But the best thing Bitcoin did this decade was bring new attention to the technology underneath it – blockchain. Increased interest has led to blockchain finding implementations in cybersecurity, manufacturing, fintech, and even video games. Blockchain made us rethink security, automation, and accountability, and it is going to be a key component in the ever-expanding Internet of Things going forward.

    (Image source: Pixabay)

  • Collaborative Robots

    Robots have worked alongside humans for a long time, but never like they have in recent years. The rise of collaborative robots (cobots) brought into factories machines that can work right next to human workers without the need for safety cages. The now-defunct Rethink Robotics created arguably the most memorable cobot with Baxter (shown), but several major robotics companies including Boston Dynamics, Fanuc, and Universal Robots have gotten into the game.

    Cobots also sparked a lot of debate as to their impact on jobs and the economy. But concerns haven’t slowed their growth. You’d be hard pressed to find an industrial robotics company today without at least one cobot offering in its portfolio.

    (Image source: Rethink Robotics)

  • Digital Twins

    The rise of the Internet of Things and Industry 4.0 has brought with it new ways of thinking of the design and manufacturing process. None of these has been more praised than the digital twin. Consumer electronics, automobiles, even factories themselves can be modeled in virtual space, providing real-time insights into design and production workflows without the costly expense of physical prototyping. Add VR and AR to the mix and engineers get an added layer of immersion and visualization.

     (Image source: B&R)

  • GPUs

    Chip technology overall has come a long way in the last decade, but none further than the GPU. Spearheaded by chipmakers such as Nvidia (especially Nvidia), AMD, and Intel, GPUs grew from their specialized role as graphics processors into a key enabler behind the high-end computing needed for AI. Even autonomous cars have leveraged GPUs to handle their computing needs.

    It used to be that only serious video gamers cared about the quality of their GPU. Now any company, engineer, or even hobbyist developing hardware that leverages AI has to take a serious look at GPUs as a solution.

    (Image source: Nvidia)

  • The Internet of Things / Industry 4.0

    There was a time when going on about how “everything is connected” might have made you sound like a conspiracy theorist. Now, it makes you sound more like an IT professional. From factory automation; to devices in our homes like thermostats, locks, and cameras; even to our cars – pretty much anything that could have wireless or Internet connectivity added to it got it.

    Sure, some use cases were certainly more valuable than others, but the rapid growth of the IoT made one thing certain: the future is connected. And whether you prefer cloud-based solutions or handling things on the edge, no device is going to be an island ever again. As staggering as it may sound, the march toward 1 trillion connected devices is far from an exaggeration.

    (Image source: jeferrb from Pixabay )

  • LiDAR

    You need a lot of technologies to create an autonomous vehicle – AI, radar, even thermal sensors – but LiDAR is what really put self-driving cars on the road. It’s not enough on its own, and needs to work alongside other sensors, but engineers have found the technology – traditionally used in meteorology and GPS – to be absolutely crucial in allowing autonomous vehicles to recognize their surroundings – including humans and animals in the road.

    (Image source: Innoviz)

  • Lithium-Ion Batteries

    The key innovators behind lithium-ion batteries received a long-overdue Nobel Prize in 2019. That’s likely because there’s no avoiding just how significant an impact lithium-ion has had – particularly in recent years. New battery technologies have made electric vehicles an attractive option for any consumer, and new battery chemistries and configurations are making our devices lighter and thinner with every generation. Researchers are always looking for better alternatives, but lithium-ion established itself as the heavyweight king of batteries in the last 10 years and it doesn’t look ready to relinquish that title anytime soon.

    (Image source: Johan Jarnestad/The Royal Swedish Academy of Sciences)

  • The Mars Rovers

    We learned more about the Red Planet than ever before thanks to NASA’s Mars exploration rovers. The rovers, Spirit and Opportunity (shown), first landed on Mars in 2004 and have since given scientists incredible insights about our neighboring planet – including that Mars was once wetter and had conditions that could have sustained microbial life. The knowledge gained from both will surely be carried forward as NASA continues to plot a manned mission to Mars in the coming decades. Spirit ended its mission in 2011, while Opportunity operated for an unprecedented 15 years, with its mission finally declared complete in 2019. And we’ll always remember Opportunity’s last communication to NASA – poetically interpreted as, “My battery is low and it’s getting dark.”

    (Image source: NASA)

  • Open Source

    Open source used to be a dirty word for developers and consumers. The perception was that anything open source was likely to be insecure, shoddily put together, and lacking any long-term support. But open source has proven to be a viable option for developers, and a valuable tool. Microsoft and IBM both made big investments in open source with the acquisitions of GitHub and Red Hat, respectively.

    We’ve even seen the growth of open-source hardware for the first time. The open-source RISC-V chip architecture has seen an ever-growing ecosystem of companies emerge around it in recent years – all aimed at changing the way we build and use processors.

    (Image source: Markus Spiske on Unsplash)

  • Raspberry Pi

    You can’t mention DIY electronics without thinking of the Raspberry Pi. Since its introduction in 2012, the single-board computer has gone from a go-to platform for hobbyists and makers to a serious development platform for engineers working in IoT and even AI. Even if you use another single-board computer, or a microcontroller like the Arduino, for your projects, we all owe a debt to the Raspberry Pi for bringing electrical engineering a bit closer to home.

    (Image source: Raspberry Pi Foundation)

  • Smartphones

    It doesn’t matter whether you prefer iOS, Android, or another option, there’s no denying the enormous impact smartphones have had on our lives. Smartphones have grown into full-fledged computing platforms – enabling entirely new business models ranging from digital health to mobile VR. The gaming market in particular has enjoyed huge returns thanks to the computing power offered by today’s smartphones.

    (Image source: Apple)

  • VR, AR, MR, and XR (The new realities)

    Virtual reality has had a lot of starts and stops over the decades. But thanks to the Oculus Rift and other headsets such as the HTC Vive, VR is finally delivering on its promise. Ten years ago, if you had asked anyone whether they used VR in their workflow, they might have laughed. Today, it’s become more and more commonplace.

    The rise of augmented reality (AR), mixed reality (MR), and extended reality (XR) has sparked even more use cases in both the consumer and enterprise spaces. Pokémon Go showed us consumers will value AR for entertainment, but plenty of big names including Microsoft, Google, and HP brought the technology into the enterprise space as well.

    (Image source: HP)

  • Wearables

    The 2010s saw technology grow from something we carry into an actual accessory we can wear. From consumer-focused products like the Apple Watch, Samsung Galaxy Gear, and the Fitbit, to serious medical devices like the AliveCor ECG, intended to track and help diagnose diseases, wearables found their way onto millions of bodies. There was certainly a wearables bubble that has since burst, but the digital health sector owes much of its success to wearables. And Google’s recent major acquisition of Fitbit shows the tech industry believes there’s more to wearables than being a high-tech fashion statement.

    (Image source: Fitbit)

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

10 Tiny Satellite Technologies

Tiny satellites have made space accessible to a new generation of university students and private companies, and have even helped cash-strapped government agencies like NASA. Generally known as nano-satellites (nanosats) or cube-satellites (cubesats), this technology has been made possible by the semiconductor-driven miniaturization of electronic and electro-mechanical systems. In recognition of the trend, the IEEE has even launched a new journal, “Miniaturization for Air and Space Systems” (J-MASS).

Mass is a premium consideration when placing anything into space. That’s why the names of tiny satellites depend upon their mass. Nanosat is the general category for any satellite with a mass from 1 kg to 10 kg. Nanosats include the well-known cubesats and the perhaps less well-known PocketQubes, TubeSats, SunCubes, ThinSats, and non-standard picosatellites. Chipsats – cracker-size, gram-scale wafer miniprobes – are not considered nanosats but have been called attosats by some.
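
Because the taxonomy is driven entirely by mass, it can be captured in a few lines of Python (the 1 kg to 10 kg nanosat band is from the article; the neighboring bands are commonly cited conventions and should be treated as assumptions):

```python
def satellite_class(mass_kg: float) -> str:
    """Classify a small satellite by mass.

    The 1-10 kg nanosat band is from the article; the other bands
    are commonly cited conventions, used here for illustration.
    """
    if mass_kg < 0.1:
        return "gram-scale miniprobe (e.g., chipsats)"
    if mass_kg < 1:
        return "picosatellite"
    if mass_kg <= 10:
        return "nanosatellite (includes most cubesats)"
    if mass_kg <= 100:
        return "microsatellite"
    return "larger small-sat or conventional satellite"

print(satellite_class(1.3))   # a typical 1U cubesat -> nanosatellite
```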

Cubesats (cube satellites) are a type of nanosatellite defined by the CubeSat Design Specification (CDS), unofficially called the cubesat standard.

The original goal of all these tiny, miniature satellites was to provide affordable access to space for the university science community. Many major universities now have a space program, as do several private company startups and even government agencies like NASA and the DoD.

This slideshow focuses on nanosat technologies, from the carriers and launch mechanisms to several NASA cubesats performing a variety of missions. We’ll end with an example of a chipsat. Let’s begin!

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

Can you name the leading private companies and nations that have the launch capability to maintain a presence in space?

  • It seems like every few months news of yet another successful rocket launch into space is announced. Most – but not all – of these announcements come from private companies launching communication satellite payloads. The rocket companies are hoping to capitalize on the coming Internet of Space (IoS), which promises global broadband communications.

    The rise of private spaceflight companies – primarily in the US – means more rockets are launching into space than ever before. Most of these rockets launch satellites into low orbit but some carry astronauts into much higher orbits. In the near future, some launch vehicles will even carry tourists, e.g., SpaceX’s Dragon, Boeing’s CST-100 Starliner, Virgin Galactic, and Blue Origin. (The latter two seem primarily focused on the nascent space tourist industry).

    Here are 10 major space launch vehicle companies and national organizations.  

  • SpaceX

    Space Exploration Technologies, also known as SpaceX, was founded by Elon Musk with the aim of reducing space transportation costs to enable the colonization of Mars. SpaceX has developed the Falcon launch vehicle family and the Dragon spacecraft family, among others. The company made history in 2012 when its Dragon spacecraft became the first commercial spacecraft to deliver cargo to and from the International Space Station (ISS). The company’s Starlink mission is a satellite constellation designed to provide satellite Internet access. The constellation will consist of thousands of mass-produced small satellites.

  • United Launch Alliance

    ULA was formed by the union of Boeing and Lockheed Martin in 2005, using the Delta IV Heavy launcher to get large payloads into space. But the price tag for that rocket is high, several times more than SpaceX’s Falcon Heavy platform. That’s why ULA is working on a more powerful and semi-reusable launcher known as the Vulcan. The recyclable portion of the Vulcan system will be the engines.

  • ArianeGroup

    The ArianeGroup is billed as the guarantor of Europe’s autonomous, reliable access to space. In 2015, the ArianeGroup was founded as a joint venture of the European aerospace company Airbus and the French group Safran. The ArianeGroup is the primary contractor for manufacturing the Ariane 5 launch vehicle and provides commercial launch services through its subsidiary Arianespace. In 2018, Ariane 5 celebrated its 100th launch. The company is working on the Ariane 6, which will carry heavier payloads and may launch in 2020.

  • Amazon/Blue Origin

    Amazon billionaire Jeff Bezos founded Blue Origin in 2000, initially focused on suborbital spaceflight. Several of its suborbital New Shepard vehicles have been built and flown, although only a handful of missions have been performed. The main focus of the company seems to be launching people to space aboard the New Shepard. In September 2016, Blue Origin announced its plans for the enormous, reusable, and orbit-capable New Glenn rocket system.

  • Virgin Orbit

    The company was formed in 2017 to develop an air-launched rocket carried by the Cosmic Girl aircraft – a previous project of Sir Richard Branson’s Virgin Galactic. Virgin Orbit (part of the Virgin Group) plans to provide launch services for small satellites. In July 2019, the company announced that it had completed a key drop test of its LauncherOne vehicle, the last major step in the development program of the launch service. More recently, Virgin Galactic opened a ‘Gateway to Space’ in New Mexico for its space tourism program using the VSS Unity spaceplane.

  • Stratolaunch Systems

    Stratolaunch Systems, founded by Paul G. Allen, consists of a carrier aircraft called the Stratolaunch and a multi-stage payload launch vehicle (still being built). The payload vehicle would be launched into space at high altitude from under the carrier aircraft. In April 2019, the Stratolaunch aircraft completed its first full flight. In October 2019, the company announced continuing regular operations and a change of ownership, without naming the new owner.

  • National Rockets

    US – NASA and Military

    The US government and military still rely heavily on Delta IV, Atlas V, and more recently SpaceX Falcon 9 rockets to launch most of their satellite and other payloads. Meanwhile, NASA has designed and is testing the Space Launch System (SLS) as the foundation for a generation of human exploration missions to deep space, including missions to the Moon and Mars. The SLS will send the Orion spacecraft, its astronaut crew, and cargo to deep space.

  • China

    The China National Space Administration (CNSA) is the national agency that coordinates the country’s space activities. In 2019, China launched 27 orbital missions – more than either Russia or the US over the same period. One of the recent launches was a Beidou navigation satellite lofted by a Long March 3B carrier rocket from the Xichang Satellite Launch Center.

  • India

    The Indian Space Research Organisation (ISRO) is the space agency of the Government of India. The Polar Satellite Launch Vehicle (PSLV) is the workhorse and third-generation launch vehicle of India. The Geosynchronous Satellite Launch Vehicle Mark III (GSLV Mk III) is currently the largest Indian launch vehicle. The next major event is the launch of PSLV-C47 carrying Cartosat-3, scheduled for November 25, 2019.

  • Russia

    The agency that coordinates space activities for Russia is known as Roscosmos. It performs numerous civilian activities, including Earth monitoring and the cosmonaut program. Roscosmos launch vehicles include the R-7 (commonly known as the Soyuz rocket) and the Proton.

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

How to Improve Your CAD Productivity by Measuring True Workstation Performance

Cars all have the same basic components that function in similar ways, but you wouldn’t test-drive a sedan to make a buying decision on an SUV.

The same rationale should apply to CAD workstation performance. Benchmarks that use generic CAD models and datasets to characterize workstation performance will never give you a true picture of your real-world experience. That vague picture might be fine if your work isn’t critical to the operation of your company, or if productivity and efficiency don’t matter. But who has that luxury?

Chart from “The Economic Value of Rapid Response Time”

Faster response time = greater productivity

“When a computer and its users interact at a pace that ensures that neither has to wait on the other, productivity soars, the cost of the work done on the computer tumbles, employees get more satisfaction from their work, and its quality tends to improve. Few online computer systems are this well balanced; few executives are aware that such a balance is economically and technically feasible.” 

The quote above is from a paper titled “The Economic Value of Rapid Response Time,” and it’s just as true today as it was when it was published in 1982. It seems basic, but it is worth reiterating: the more computing speed you can pack into an engineer’s day, the more benefits you reap, including:

  • Cost savings

  • Improved individual productivity

  • Shortened production schedules

  • Faster time to market

  • Ability to do more testing and prototyping

  • Increased product quality

Beyond economic issues, there is the mission-critical nature of CAD/CAM work. When events such as an airplane crash or automotive recall happen, CAD applications are placed in environments that demand the best in terms of people, process, and products. There is no room for poor or mediocre workstation performance.

Differences that matter

CAD workstations are all doing the same basic things, but it is the way they do those things – based on the packages they run, the optimizations they provide, the specific models they are building, and the way they render those models – that adds up to performance differences.

Aside from the obvious differences in the objects being designed and engineered, there are many other differentiating factors affecting CAD performance, including:

Unique industry requirements – Product design is far from one approach fits all. Architecture, engineering, and construction (AEC) have requirements such as building information modeling (BIM) processes and construction documentation that are unique to the industry. Aerospace and medical device products need to account for certification requirements. Different industries require different levels of detail, accuracy, and technical specifications.

Different approaches to engaging the CPU and GPU – One of the fastest-moving areas of CAD innovation is how different packages handle rendering in order to deliver a finished photorealistic model in the least time possible. There is a delicate balancing act in managing the work performed by the CPU with that of the GPU, enabling their respective architectures to complement one another through careful partitioning of the different methods and stages of rendering. The approaches to rendering differ not just among CAD packages, but also when newer versions of the same package are introduced.

Certification and optimizations – Graphics card vendors and workstation OEMs probably work more closely with CAD software vendors than with any other type of ISV. There are typically more certification requirements than with other applications, along with intense competition to make CAD packages run faster and more intuitively. Change is a constant.

The new SPECapc for Solidworks 2019 benchmark exercises a full range of graphics and CPU functionality used within real-world operations. (Image source: SPECapc group)

Implementation of external referencing – Many CAD packages rely on externally referenced geometry to fully populate a model and streamline the modeling process. If your model has 100 sockets that are identical, why model or duplicate each individually? Instead, a part is saved to a file outside of the working assembly and a placeholder is imported into the main workspace that references that file. How a CAD package implements this functionality can make a difference in performance.
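
As an aside, a minimal sketch of the external-referencing idea (hypothetical classes, not any particular CAD package’s API): the geometry is defined once in an external file, while the working assembly holds only lightweight placeholders that point to it. The performance question for a real CAD package is how cheaply it can resolve and display those references.

```python
from dataclasses import dataclass

@dataclass
class ExternalPart:
    """Geometry stored once, in its own file outside the assembly."""
    filename: str
    triangle_count: int  # stand-in for the real geometry data

@dataclass
class PlaceholderInstance:
    """Lightweight reference in the working assembly: position only."""
    part: ExternalPart
    x: float
    y: float
    z: float

socket = ExternalPart("socket.prt", triangle_count=12_000)

# 100 identical sockets: one geometry definition, 100 tiny references.
assembly = [PlaceholderInstance(socket, x=i * 5.0, y=0.0, z=0.0)
            for i in range(100)]

geometry_files = {inst.part.filename for inst in assembly}
print(f"Geometry files resolved: {len(geometry_files)}")  # 1, not 100
```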

Support for different types of modeling – CAD packages typically go down different routes to the same destination in the type of modeling they support. Parametric modeling – a step-by-step process where an entire history of the model is recorded and can be adjusted at any time in the product development process – is implemented differently in Catia, Solidworks, NX, and other CAD packages. Direct modeling – which forgoes the object’s history and allows the user to adjust faces, vertices, or sections of the model directly – also works differently depending on the package. Then there’s the hybrid approach, using a combination of parametric and direct modeling. Same basic functionality; different implementations.

What happens after design is done – Many of the major performance differences among CAD packages come after the modeling is done. This includes functionality such as PLM/PDM integration, photorealistic rendering, management of different design iterations, and storage and retrieval of models.

The perpetual need to do more – In CAD/CAM/CAE, the prevailing trend is always to do more with larger and larger models. Benchmarks that measure basic functionality on small models are out of step with the innovation driving the industry toward integrating mechanical, software, electrical, and other elements in the same huge model. This leads to larger datasets that place pressure on the critical need for increased productivity and reliability.

Fortunately, there are choices

The complexity of measuring workstation performance based on professional CAD applications might be enough to have users throwing their hands in the air in despair. Fortunately, there are good options besides the generic benchmarks that are the equivalent of a plain brown wrapper.

The Standard Performance Evaluation Corporation’s Graphics and Workstation Performance Group (SPEC/GWPG), a non-profit organization that’s been around for more than 30 years, develops graphics and workstation benchmarks based on actual applications and real-world workloads, including ones for CAD/CAM. The really good news is that these benchmarks are free as long as your company is not a vendor of computer products or services.

Benchmark models should be representative of those used in a variety of day-to-day CAD work. Models within the SPECapc for Solidworks 2019 benchmark range in size from 392 MB in memory to this large model of a NASA Crawler Transporter, which takes up 2.3 GB in memory. (Image source: Jay Patterson)

Covering all the bases

If you want the best representation of total performance for a specific application, go for SPEC application performance characterization (SPECapc) benchmarks. These benchmarks require installing a specific version of the application you want to benchmark. They use models and workloads that represent those used by CAD/CAM professionals and provide a comprehensive picture of CPU, GPU and I/O performance.

Current SPECapc benchmarks for CAD/CAM cover Creo 3.0, Solidworks 2019, and NX 9.0/10.0. There are also SPECapc benchmarks for Autodesk Maya and 3ds Max.

Measuring graphics performance

If you don’t have easy access to the application you wish to benchmark and you’re interested primarily in graphics performance, you can run SPECviewperf. SPECviewperf measures the 3D graphics performance of systems running under the OpenGL and DirectX application programming interfaces. There is also a version for Linux.

SPECviewperf workloads, called viewsets, represent graphics content and behavior extracted from professional applications, without the need to install the applications themselves. SPECviewperf 13, the current version, includes viewsets for Catia, Creo, NX, Solidworks, and Autodesk Showcase.

A model from the Catia viewset in SPECviewperf 13. (Image source: SPECapc group)

Total workstation performance

If you are interested in total workstation system performance – CPU, GPU, I/O, and memory bandwidth – you can download SPECworkstation 3, which includes all of the CAD/CAM viewsets listed above, plus additional product development, media and entertainment, financial services, energy, life sciences, and general operations workloads.

Like SPECviewperf, SPECworkstation is a self-contained software package that doesn’t require you to install or run the applications themselves.

Speed saves when applied correctly

There’s a saying that speed kills, but it can also save – time and money. But in order for speed to deliver on its promise it must be applied in the proper way.

ISVs, workstation OEMs, graphics card manufacturers, and other component makers are in a perpetual race to improve the productivity of CAD engineers. Good CAD workstation benchmarking is a way to show how vendor innovations translate into real-world performance for very specific products and functionality, so you can make improvements in the areas that reap the greatest ROI.

Trey Morton is chair of the SPECapc subcommittee and a performance engineer in the Dell Client Solutions Group.


January 28-30: North America’s largest chip, board, and systems event, DesignCon, returns to Silicon Valley for its 25th year! The premier educational conference and technology exhibition, this three-day event brings together the brightest minds across the high-speed communications and semiconductor industries, who are looking to engineer the technology of tomorrow. DesignCon is your rocket to the future. Ready to come aboard? Register to attend!

Nominate Someone Outstanding to Be the 2020 DesignCon Engineer of the Year

Each year during DesignCon, we recognize an engineer who is deemed by their peers to be the best of the best in engineering and new product advancements at the chip, board, or system level. The winner will be selected based on his or her leadership, creativity, and out-of-the-box thinking brought to design/test of chips, boards, or systems, with particular attention paid to areas of signal and power integrity. 

Click here to go directly to the Nomination Form. 

Vishram Pandit, 2019 Engineer of the Year, accepted the award from Naomi Price, Conference Content Director for DesignCon.

Last year we presented the award to Vishram Pandit. His goal is to share knowledge with the technical community that will one day create the CPUs for next-generation cars, phones, and servers. He is well on his way to making that happen: to date he has co-authored the book Power Integrity for I/O Interfaces and is co-author of approximately 30 conference and journal publications, 19 of which were presented at DesignCon. Those papers have received three best paper awards and three finalist awards. Other past award winners have been industry greats Dr. Mike Li in 2018, Heidi Barnes in 2017, Eric Bogatin in 2016, and Michael Steinberger in 2015.

Nominations are open from now until Tuesday, December 3, 2019. To be considered for this award, nominees must be active members of the DesignCon community and cannot be employed by the same company as the previous year’s winner.

Members of the Design News editorial staff will choose finalists from the nominees, and then the DesignCon and Design News communities will have the opportunity to vote for the engineer who will receive the 2020 Award. Watch DesignNews.com for the announcement of the finalists and voting.

The winner of DesignCon’s Engineer of the Year Award will be provided with a $1,000 grant or scholarship to present to the educational institution of his or her choice.

You may nominate multiple people for the award, but please nominate each person only once. Multiple people nominating the same engineer is encouraged. Feel free to nominate yourself or another engineer.

Click here to go to the Nomination Form. 

Click here to see the Official Rules and Regulations of the Award. 

Click here to learn more about DesignCon and register for the event. 

Contact Naomi Price with questions.