8 Criteria to Evaluate When Selecting an RTOS

Real-time operating systems (RTOSes) are finding their way into nearly two-thirds of all embedded applications. The timing requirements for a modern embedded system have become so complex that attempting to build one without an RTOS, while possible, is fraught with additional complexity that can sink a product before it ever launches.

While RTOSes have become very popular, I’m often concerned with the way teams go about selecting one. For most teams I talk to, the only criterion they consider when evaluating an RTOS is cost. But as engineers we should be weighing several different criteria, not just using data but also following a process that ensures success and repeatability.

In this post, we are going to examine the eight criteria that I consider when selecting an RTOS for an application.

1.) Security

For any device that is going to be connected to the internet, selecting an RTOS that has gone through security certifications and was built for security is a must. Security certifications for an RTOS are relatively new but will be an important criterion for any team that wants to make sure every component in their system is designed with security in mind. One immediate example is verifying that the RTOS has Arm’s Platform Security Architecture (PSA) certification at least to level one.

2.) Ecosystem

Having a strong ecosystem around an RTOS is critical to ensuring success. For example, developers should be looking at and asking questions such as:

  • Is this RTOS highly adopted in the industry?

  • Does it support the major hardware architectures?

  • Does it have a vibrant community?

A strong ecosystem will ensure that the RTOS is not only used and supported far into the future, but may also determine whether the team can get fast support from the community.

3.) Features

The features available in an RTOS can make a big difference, not only in the amount of time spent debugging but also in the software architecture itself. Developers need to look at the features that are available in their RTOS and verify that it:

  • Supports static memory allocation

  • Has real-time tracing features

  • Integrates with the target memory protection unit (MPU)

These are just a few RTOS features that can dramatically improve application robustness, yet I rarely see teams give them full consideration. A minimal example of the first one, static allocation, is sketched below.
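
The sketch below shows what this looks like in practice, assuming FreeRTOS with configSUPPORT_STATIC_ALLOCATION enabled in FreeRTOSConfig.h; the task name, stack depth, and priority are placeholder values, and other RTOSes expose equivalent APIs.

    #include "FreeRTOS.h"
    #include "task.h"

    /* Buffers are allocated at build time, so the map file shows the true RAM
       footprint and there is no risk of heap fragmentation at run time. */
    #define BLINK_STACK_WORDS 128u                      /* placeholder stack depth */
    static StackType_t  blinkStack[BLINK_STACK_WORDS];  /* task stack storage      */
    static StaticTask_t blinkTcb;                       /* task control block      */

    static void vBlinkTask(void *pvParameters)
    {
        (void)pvParameters;
        for (;;)
        {
            /* Toggle an LED, poll a sensor, etc. */
            vTaskDelay(pdMS_TO_TICKS(500));
        }
    }

    void app_create_tasks(void)
    {
        /* xTaskCreateStatic() never touches the heap, so the call cannot fail
           at run time because of memory exhaustion. */
        (void)xTaskCreateStatic(vBlinkTask, "blink", BLINK_STACK_WORDS, NULL,
                                tskIDLE_PRIORITY + 1, blinkStack, &blinkTcb);
    }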

4.) Vendor

The vendor, the company that created and maintains the RTOS, should be carefully considered as well. A quick web search will reveal that there are over 100 RTOSes in existence, many of which are no longer supported or used. Teams need to carefully look at the RTOS vendor and ask themselves:

  • How long has this company been in business, and will they still be in business five years from now?

  • Will they respond quickly to support tickets and questions?

  • Do they provide quality code and documentation?

There is nothing worse than selecting an RTOS and then being left to your own devices to work through problems or issues. The RTOS vendor should be looked at as a strategic partner that is critical to the product’s success.

5.) Middleware

An RTOS to some degree provides a foundation for the application software. That foundation is one piece of the product development puzzle. That puzzle also includes other components such as low-level drivers, file systems, graphical user interfaces, TCP/IP stacks, encryption engines, and much more. Developers should evaluate the middleware that is directly compatible with their RTOS.

Now, you might be thinking that you can use any middleware with any RTOS and compatibility is not an issue. This is true, but if you select an RTOS with proven, supported middleware and readily available examples, development will go much faster and smoother. You may even get lucky and find that the RTOS and the middleware are written to the same coding standards and have a consistent look and feel.

6.) Performance

I rank performance a little bit lower on the list because I think this is one of the obvious criteria that developers will look at. Performance includes criteria like:

  • RAM footprint

  • ROM footprint

  • Reliability and determinism

To some degree, performance between RTOSes is starting to become a moot point. If you are running a modern processor at 200 MHz and up, losing a few clock cycles here or there is not going to be critical. However, if you are working with a more traditional microcontroller running below 100 MHz, every clock cycle will count. It’s important to take your application into account and determine whether performance is even a concern; one simple way to measure it is sketched below.
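
If you do need hard numbers before deciding, a cycle-count measurement is usually enough to settle the question. The sketch below assumes a Cortex-M3 or later target with CMSIS headers available; the device header name and the region under test are hypothetical placeholders.

    #include <stdint.h>
    #include "stm32f4xx.h"  /* placeholder: include the CMSIS device header for your MCU */

    /* Enable the DWT cycle counter (available on Cortex-M3 and later).
       Call this once at startup before taking any measurements. */
    static void cycle_counter_init(void)
    {
        CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable the trace block */
        DWT->CYCCNT       = 0u;                          /* reset the counter      */
        DWT->CTRL        |= DWT_CTRL_CYCCNTENA_Msk;      /* start counting         */
    }

    /* Count the CPU cycles spent in a region of interest, e.g. a queue send that
       forces a context switch. Unsigned subtraction handles counter wrap-around. */
    static uint32_t measure_region(void (*region)(void))
    {
        uint32_t start = DWT->CYCCNT;
        region();
        return DWT->CYCCNT - start;
    }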

7.) Cost

Yes, cost! The one factor that is at the top of every manager’s and bean counter’s list! As I mentioned earlier, the cost of an RTOS is important, but when you compare it to the labor for development and maintenance, even most commercial RTOSes that charge a royalty barely move the decimal point.

I don’t want to undervalue cost, but I’ve noticed we give this one criterion far too much weight when making engineering decisions. (Yes, software and tool vendors are probably dancing and giving a loud amen to that sentence).

8.) Engineering team

The last criterion that I often consider, although it does not necessarily carry a large weight in the decision, is the engineering team. There are factors to consider about the engineers when selecting an RTOS, such as:

  • Minimizing developer stress

  • Minimizing the effort required to learn the RTOS

  • Favoring the shallowest learning curve

The engineers’ backgrounds, skillsets, and years of experience can all come into play when selecting the RTOS. Sometimes I’ve seen teams select a particular RTOS simply because the engineers were interested in using that RTOS. The choice was made purely on personal preference and not on any metric or application requirement. I think the engineers are important to consider and that they should not be overlooked.

Conclusions

As we have seen in today’s post, there is a lot more to consider when selecting an RTOS than just cost. Cost is certainly an important factor, but when weighed against the overall costs of developing and manufacturing a product, even a commercial RTOS typically amounts to less than a percent of the total.

What’s more important is evaluating all these criteria and selecting an RTOS that best meets the application and development team’s needs. It may be that the cost for the RTOS is free, but then again it may not be. Only by carefully evaluating all the criteria can a development team make the right choice.

Jacob Beningo is an embedded software consultant who currently works with clients in more than a dozen countries to dramatically transform their businesses by improving product quality, cost, and time to market. He has published more than 200 articles on embedded software development techniques, is a sought-after speaker and technical trainer, and holds three degrees, including a Master of Engineering from the University of Michigan. Feel free to contact him at [email protected] or at his website, and sign up for his monthly Embedded Bytes Newsletter.

Augmented Reality Won’t Advance Until Its Optical Issues Are Solved

Augmented reality (AR) is the gateway to the most immersive phase in computing history. It promises to radically transform the way we work, but it will never realize its potential unless fundamental performance and usability issues are addressed. A recent report from specialist AR/VR industry analysts Greenlight Insights has shown that solving two fundamental optical issues would unlock an additional $10 billion in spending on enterprise AR applications by 2026.

Currently, all AR experiences suffer from vergence-accommodation conflict (VAC) and focal rivalry. These terms may be relatively unknown, but anyone who has worn an AR headset for any length of time will recognize them as eye fatigue, an inability to read text up close, and the struggle to complete precision tasks because virtual content is not well integrated with the real world. Greenlight’s analysts estimate that 95% of current enterprise AR applications would see an immediate benefit if these issues were solved.

What are vergence-accommodation conflict and focal rivalry?

AR headsets like Microsoft’s Hololens suffer from some key optical issues that are keeping the technology from becoming fully immersive and useful for enterprise users. (Image source: Microsoft)

VAC breaks the natural way our eyes focus. In the real world, as an object gets closer, our eyes turn inward to triangulate on it, stimulating them to focus at the right distance. This doesn’t happen in AR (or VR, for that matter) because the lenses in headsets are set at a fixed focal distance. Our eyes are simply not fooled by clever software manipulation of virtual content.

The other major challenge facing AR is focal rivalry, a phenomenon that occurs when we view real and virtual content together. Our eyes cannot naturally integrate real and virtual content into a genuine mixed reality unless they are in the same focal plane. So if you want to place virtual content at different distances and interact with it in a natural way, you have to deal with this issue.

Why is this a $10 billion issue for the augmented reality industry?

Microsoft’s Hololens is one of the most advanced AR headsets available today, and Microsoft acknowledges the problem of VAC. The Hololens developer guide advises content developers to place virtual content beyond arm’s reach to avoid an uncomfortable experience. Microsoft suggests that designers create content scenes that encourage users to interact one meter or farther from the content. It also recommends not requiring users to rapidly switch from near-focus to far-focus, as this can cause visual discomfort and fatigue.

Meanwhile, a research team at the University of Pisa conducted a study exploring how focal rivalry affects people’s performance when using AR to complete precision tasks. It found that accomplishing an AR-assisted task where content is within two meters of the person and requires a high level of precision (such as AR-assisted surgery or repairs) may not be feasible with existing technology.

The inability to bring content convincingly within arm’s reach and realistically place content at any focal point is holding AR back. The eye fatigue and discomfort of VAC will limit the amount of time we can spend in AR, preventing it from being an all-day wearable. Meanwhile, focal rivalry will limit its usefulness in enterprise applications that require any significant degree of hands-on accuracy, such as manufacturing, engineering, or surgery.

The optical interface is the key to unlocking the true potential of augmented reality

If we are to unlock the true potential of augmented reality, we will need to find a solution that is able to address both VAC and focal rivalry – not just one or the other. The solution lies in a dynamic optical interface that is able to re-engage the visual system to allow users to position virtual content convincingly anywhere in the 3D space and integrate real and virtual content without any visual limitations.

Both the HoloLens 2 and Magic Leap One promised to bring some form of multiplane focus that would address focal rivalry and VAC, but no solution has yet come to market. However, it is encouraging to see that in the VR space, Facebook’s Reality Labs Chief Scientist Michael Abrash announced that the company is exploring the use of a liquid crystal technology in a prototype for Half Dome – Facebook’s VR headset that integrates mechanical varifocal displays.

Several solutions are currently in development, ranging from visual pass-through (VPT) systems and light field displays to liquid crystal lenses and adaptive lenses. The Greenlight Insights report reviews all the key dynamic focus solution technologies, comparing their relative performance and likely time to market. A detailed comparison concludes that adaptive lenses offer the best combination for addressing all the key issues while also offering HMD display manufacturers a quicker time to market.

For us to advance augmented reality it is necessary that we integrate some form of dynamic optical interface. Once we do this, we will be able to drive a step change in both the experiences and applications that can be created.

John Kennedy is CEO of Adlens, the company pioneering the development of lenses that change focus like the human eye – enhancing vision in AR/XR, VR, and eyewear. Adlens believes that optical technologies will play a key role in helping to create a truly breakthrough mixed reality product that will have comparable consumer impact to the iPhone. An engineer by training, he’s built his experience in technology and fast-growth consumer businesses.

It’s not just slide rules and pocket protectors!

  • It is the time of year when we gather to contemplate the things for which we are all thankful. Engineers may appreciate things that are overlooked by the general public, so we asked around to find out which things make some automotive engineers’ lives enough better that they hold a special appreciation for them. Would any of these items make your list?

    Image source: Jill Wellington from Pixabay

  • Brembo’s Ben Pohl’s thankfulness is elemental. Literally. He tips his hat to aluminum.

    “Aluminum is a wonderfully versatile material—it’s light, stiff, relatively inexpensive, recyclable, attractive and corrosion-resistant all while being fairly easy to work with in a manufacturing environment.  It is used in everything from cars to airplanes to bicycles to beverage cans (all things I’m interested in).  And, of course, it’s the core material of the signature Brembo product, our brake calipers.  Where would we be without it?”

    Image source: Brembo

  • As Ford Motor Co. Global Product Development Director for battery electric vehicles, Darren Palmer is always on the go. So he appreciates something that makes it easier for him to work from the road: Bose noise-cancelling headphones.

    “I travel around the world a lot. We work with no office desks. So I use portable wireless Bose noise-cancelling headphones. I can work in the middle of a plant even, anywhere in the world.”

    Image source: Bose

  • BMW’s manager of connected e-mobility, Jan Freimann, is thinking even more fundamentally than Pohl! Rather than an element, he’s looking at a force of nature, as described by Charles-Augustin de Coulomb: he named Coulomb’s Law, which describes the electrostatic force of attraction and repulsion, as illustrated by like-charged hairs repelling one another. It is also the force that lets electric motors propel EVs like BMW’s i8 sports car.

    Image source: Dan Carney

  • Insurance Institute for Highway Safety senior research engineer Becky Mueller recognizes the importance of communicating our work to a wider audience, so she has appreciation for something so ubiquitous it could be easily overlooked: Microsoft PowerPoint.

    “I am thankful for Powerpoint because it allows me to take my engineering observations and communicate in a way to reach various audiences. Whether it is diagrams or adding photos and videos together. The greatest development means nothing if you can’t communicate what it is and what it can be used for. Having that bridge and being able to explain it to your audience is important. It is something I use every day.”

    Image source: Microsoft Corp.

  • Toyota senior engineer for fuel cells, Jackie Birdsall (center, in the cobalt blue shoes), is thinking a little more traditionally and less technically than the others, so we’ll close with her thanks to her colleagues: “I don’t want to be too cheesy – but I am thankful for my colleagues. Our team perfectly complements one another to balance our strengths and weaknesses. I believe this team could solve any engineering problem. Happy Thanksgiving!”

Dan Carney is a Design News senior editor, covering automotive technology, engineering and design, especially emerging electric vehicle and autonomous technologies.

How to Improve Your CAD Productivity by Measuring True Workstation Performance

Cars all have the same basic components that function in similar ways, but you wouldn’t test-drive a sedan to make a buying decision on an SUV.

The same rationale should apply to CAD workstation performance. Benchmarks that use generic CAD models and datasets to characterize workstation performance will never give you a true picture of your real-world experience. That vague picture might be fine if your work isn’t critical to the operation of your company, or if productivity and efficiency don’t matter. But who has that luxury?

Chart from “The Economic Value of Rapid Response Time”

Faster response time = greater productivity

“When a computer and its users interact at a pace that ensures that neither has to wait on the other, productivity soars, the cost of the work done on the computer tumbles, employees get more satisfaction from their work, and its quality tends to improve. Few online computer systems are this well balanced; few executives are aware that such a balance is economically and technically feasible.” 

The quote above is from a paper titled “The Economic Value of Rapid Response Time,” and it’s just as true today as it was when it was published in 1982. It seems basic, but it is worth reiterating: The more computing speed you can pack into an engineer’s day, the more benefits it reaps, including:

  • Cost savings

  • Improved individual productivity

  • Shortened production schedules

  • Faster time to market

  • Ability to do more testing and prototyping

  • Increased product quality

Beyond economic issues, there is the mission-critical nature of CAD/CAM work. When events such as an airplane crash or automotive recall happen, CAD applications are placed in environments that demand the best in terms of people, process, and products. There is no room for poor or mediocre workstation performance.

Differences that matter

CAD workstations are all doing the same basic things, but it is the way they do those things – based on the packages they run, the optimizations they provide, the specific models they are building, and the way they render those models – that adds up to performance differences.

Aside from the obvious differences in the objects being designed and engineered, there are many other differentiating factors affecting CAD performance, including:

Unique industry requirements – Product design is far from one approach fits all. Architecture, engineering, and construction (AEC) have requirements such as building information modeling (BIM) processes and construction documentation that are unique to the industry. Aerospace and medical device products need to account for certification requirements. Different industries require different levels of detail, accuracy, and technical specifications.

Different approaches to engaging the CPU and GPU – One of the fastest-moving areas of CAD innovation is how different packages handle rendering in order to deliver a finished photorealistic model in the least time possible. There is a delicate balancing act in managing the work performed by the CPU with that of the GPU, enabling their respective architectures to complement one another through careful partitioning of the different methods and stages of rendering. The approaches to rendering differ not just among CAD packages, but also when newer versions of the same package are introduced.

Certification and optimizations – Graphics card vendors and workstation OEMs probably work more closely with CAD software vendors than with any other type of ISV. There are typically more certification requirements than with other applications, along with intense competition to make CAD packages run faster and more intuitively. Change is a constant.

The new SPECapc for Solidworks 2019 benchmark exercises a full range of graphics and CPU functionality used within real-world operations. (Image source: SPECapc group)

Implementation of external referencing – Many CAD packages rely on externally referenced geometry to fully populate a model and streamline the modeling process. If your model has 100 sockets that are identical, why model or duplicate each individually? Instead, a part is saved to a file outside of the working assembly and a placeholder is imported into the main workspace that references that file. How a CAD package implements this functionality can make a difference in performance.

Support for different types of modeling – CAD packages typically go down different routes to the same destination in the type of modeling they support. Parametric modeling – a step-by-step process where an entire history of the model is recorded and can be adjusted at any time in the product development process – is implemented differently in Catia, Solidworks, NX, and other CAD packages. Direct modeling – which forgoes the object’s history and allows the user to adjust faces, vertices, or sections of the model directly – also works differently depending on the package. Then there’s the hybrid approach, using a combination of parametric and direct modeling. Same basic functionality; different implementations.

What happens after design is done – A lot of the major performance differences among CAD packages comes after the modeling is done. This includes functionality such as PLM/PDM integration, photorealistic rendering, management of different design iterations, and storage and retrieval of models.

The perpetual need to do more – In CAD/CAM/CAE, the prevailing trend is always to do more with larger and larger models. Benchmarks that measure basic functionality on small models are out of step with the innovation driving the industry toward integrating mechanical, software, electrical, and other elements in the same huge model. This leads to larger datasets that place pressure on the critical need for increased productivity and reliability.

Fortunately, there are choices

The complexity of measuring workstation performance based on professional CAD applications might be enough to have users throwing their hands in the air in despair. Fortunately, there are good options besides the generic benchmarks that are the equivalent of a plain brown wrapper.

The Standard Performance Evaluation Corporation’s Graphics and Workstation Performance Group (SPEC/GWPG), a non-profit organization that’s been around for more than 30 years, develops graphics and workstation benchmarks based on actual applications and real-world workloads, including ones for CAD/CAM. The really good news is that these benchmarks are free as long as your company is not a vendor of computer products or services.

Benchmark models should be representative of those used in a variety of day-to-day CAD work. Models within the SPECapc for Solidworks 2019 benchmark range in size from 392 MB in memory to this large model of a NASA Crawler Transporter, which takes up 2.3 GB in memory. (Image source: Jay Patterson)

Covering all the bases

If you want the best representation of total performance for a specific application, go for SPEC application performance characterization (SPECapc) benchmarks. These benchmarks require installing a specific version of the application you want to benchmark. They use models and workloads that represent those used by CAD/CAM professionals and provide a comprehensive picture of CPU, GPU and I/O performance.

Current SPECapc benchmarks for CAD/CAM cover Creo 3.0, Solidworks 2019, and NX 9.0/10.0. There are also SPECapc benchmarks for Autodesk Maya and 3ds Max.

Measuring graphics performance

If you don’t have easy access to the application you wish to benchmark and you’re interested primarily in graphics performance, you can run SPECviewperf. SPECviewperf measures the 3D graphics performance of systems running under the OpenGL and DirectX application programming interfaces. There is also a version for Linux.

SPECviewperf workloads, called viewsets, represent graphics content and behavior extracted from professional applications, without the need to install the applications themselves. SPECviewperf 13, the current version, includes viewsets for Catia, Creo, NX, Solidworks, and Autodesk Showcase.

A model from the Catia viewset in SPECviewperf 13. (Image source: SPECapc group)

Total workstation performance

If you are interested in total workstation system performance – CPU, GPU, I/O, and memory bandwidth – you can download SPECworkstation 3, which includes all of the CAD/CAM viewsets listed above, plus additional product development, media and entertainment, financial services, energy, life sciences, and general operations workloads.

Like SPECviewperf, SPECworkstation is a self-contained software package that doesn’t require you to install or run the applications themselves.

Speed saves when applied correctly

There’s a saying that speed kills, but it can also save – time and money. But in order for speed to deliver on its promise it must be applied in the proper way.

ISVs, workstation OEMs, graphics card manufacturers, and other component makers are in a perpetual race to improve the productivity of CAD engineers. Good CAD workstation benchmarking is a way to show how vendor innovations translate into real-world performance for very specific products and functionality, so you can make improvements in the areas that reap the greatest ROI.

Trey Morton is chair of the SPECapc subcommittee and a performance engineer in the Dell Client Solutions Group.

What Are Beckstrom’s Laws of Cyber Security?

The Internet of Things (IoT) has many defining characteristics, such as tiny, cheap, low-power sensors, embedded computers, and connectivity. But one characteristic will rule them all: security. In the very near future, the IoT will probably not exist if it isn’t secure.

Beckstrom’s Laws of Cyber Security sums it up nicely:

  1. Everything that is connected to the Internet can be hacked
  2. Everything is being connected to the Internet
  3. Everything else follows from the first two laws.

Perhaps this should be called a corollary to Beckstrom’s Law, as it provides a short proof of the existing law. Originally, Beckstrom’s Law (or theorem) was formulated to determine the real value of a given network. Postulated by Rod Beckstrom, former director of the National Cybersecurity Center, the law states that the value of a network “equals the net value added to each user’s transactions conducted through that network, summed over all the users.”

Image Source: Beckstrom.com

According to Beckstrom, his law can be used to value any network, be it a social network, a computer network, or even the Internet as a whole. In his model, the value of the network is determined by looking at all of the transactions conducted and the value added by each transaction.

To determine the value of a network, Beckstrom used an economic point of view that considers what the additional transaction cost or loss would be if the existing network were turned off. For example, if a goods delivery service is shut down, customers will either go without those goods or obtain them in a different manner (e.g., driving to the store).

This focus on transactions is what distinguishes Beckstrom’s Law from its more famous cousin, Metcalfe’s Law. For Metcalfe, the value of a network was based purely on the size of the network, specifically the number of nodes. Conversely, Beckstrom’s Law focuses on transactions, which makes it more applicable to current experiences on the Internet. This means that Metcalfe’s Law doesn’t account for a decrease in a network’s value caused by an increasing number of users or by hackers who steal value.

Focusing on transactions makes Beckstrom’s Law of immediate value to the cyber security industry, i.e., the number of desired transactions versus the number of undesired transactions. To illustrate this point, consider a simplified equation:

V = B – C – SI – L

Where:

V = value of the network

B = benefit of the network

C = remaining costs outside of the security investments and losses

SI = security investment that a company or person spends to avoid losses

L = actual losses due to poor security

With this equation, cyber security professionals can prioritize their efforts by focusing on minimizing the costs of computer security, “SI” and “L”. Conversely, law enforcement can focus on raising the security costs of the bad actors and hackers. A small worked example of the equation follows.
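
Here is that worked example, a minimal sketch of the simplified equation above; the dollar figures are invented purely for illustration and do not come from Beckstrom’s work.

    #include <stdio.h>

    /* Simplified Beckstrom-style network value: V = B - C - SI - L */
    static double network_value(double benefit, double other_costs,
                                double security_investment, double losses)
    {
        return benefit - other_costs - security_investment - losses;
    }

    int main(void)
    {
        /* Hypothetical annual figures, in dollars, for a single network. */
        const double B  = 1000000.0;  /* benefit of all transactions         */
        const double C  =  300000.0;  /* costs outside security and losses   */
        const double SI =   50000.0;  /* security investment to avoid losses */
        const double L  =   20000.0;  /* actual losses due to poor security  */

        printf("Network value V = $%.2f\n", network_value(B, C, SI, L));  /* $630000.00 */
        return 0;
    }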

Regardless of your point of view, the costs of cyber-attacks are staggering. Indeed, one is tempted to ask: what hasn’t been hacked? Here are but a few examples of seriously hacked networks:

1. IoT Botnet Devices Hack

Back in October of 2016, the largest DDoS attack seen to that point was launched on service provider Dyn using an IoT botnet, i.e., a collection of connected devices coordinated to perform a task. Unfortunately, those IoT devices were easily infected by malware called Mirai. Once infected, the compromised machines continually search the internet for vulnerable IoT devices (e.g., digital cameras, DVD players, etc.) and then use known default usernames and passwords to log in, infecting them with the malware. This attack took down huge portions of the internet, including Twitter, the Guardian, Netflix, Reddit, and CNN.

2. The Hackable Cardiac Devices from St. Jude

Early this year, CNN wrote, “The FDA confirmed that St. Jude Medical’s implantable cardiac devices have vulnerabilities that could allow a hacker to access a device. Once in, they could deplete the battery or administer incorrect pacing or shocks, the FDA said.” The article continued to say, “The vulnerability occurred in the transmitter that reads the device’s data and remotely shares it with physicians. The FDA said hackers could control a device by accessing its transmitter.”

3. The Jeep Hack

The IBM SecurityIntelligence website reported the Jeep hack a few years ago, saying, “It was just one, but it was enough. In July [2015], a team of researchers was able to take total control of a Jeep SUV using the vehicle’s CAN bus.

By exploiting a firmware update vulnerability, they hijacked the vehicle over a cellular network and discovered they could make the vehicle speed up or down and even veer off the road.”

In conclusion, the IoT has ushered in a need for even more robust network security. Beckstrom’s Law will help cyber security managers and law enforcement prioritize their efforts by focusing on the value of transactions.

5 Technology Takeaways from Autodesk University

At Autodesk University in Las Vegas this week, company executives introduced a number of concepts that reveal just how far technology is progressing. This includes bringing gaming software into design tools or using these tools to improve the environment. During a meeting with journalists on Tuesday, Autodesk president and CEO Andrew Anagnost and a handful of his senior executives explained the near future of design technology.

Bringing gaming software into design tools

Rather than simply presenting classes that teach new software, Autodesk is beginning to use game software to entice users into adopting advanced tools. “We’re bringing gaming tools into our software to engage with our highly skilled users. It’s a positive attention grabber. It drags you into something you really want to accomplish,” said Anagnost. “It guides you to choose this new path, and if you do, you’ll quickly get better at the new tools.”

Andrew Anagnost kicks off Autodesk University on Tuesday by showing off an EV VW Microbus partly built using generative design. (Image source: Autodesk)

Rather than incrementally improving existing tools, Autodesk aims to bring these tools to its users by making it fun. “Our users ask us to improve features on the tools they already know. We want to take people into areas they don’t know so they don’t have to go off somewhere else to learn, so they can learn while doing what they’re already doing,” said Anagnost. “It’s a gaming engine. Eventually it will be everywhere our applications are. It will also assess how well our users are doing. We think there’s massive value there. People are going to get more and more technology in front of them, and they need guides to what tech is best for them to use.”

The concept is to mimic the unstructured way users learn new tools: by accident and by having fun. “With this gaming software, we realize that most of us discover new ways of doing things by accident,” said Lisa Campbell, CMO of Autodesk. “We want to make it deliberate, so you can learn the skill while doing your job.”

Construction sites will become automated factories

Construction is one of the last industries to adopt advanced software tools. That’s about to change. There’s a quiet-but-steady move toward bringing automation to construction. “People are trying to do stand-up automation at the construction sites. In 10 years, we’re going to see plenty of these stand-up factories,” said Anagnost. “The biggest thing that is restricting this is regulations. The technology is moving faster than those regulations. It’s not like they’re trying to hold us back.”

Anagnost noted that Autodesk is already pushing to help construction companies shift away from age-old practices. “We’re working on the policy side as well as the relations with unions,” said Anagnost. “That’s where the heavy lifting will be in moving automation to the construction site.”

Some of those construction processes are already changing. This is reflected in a Marriott hotel going up in a tight spot in Midtown Manhattan. The hotel is getting put together in modules that are manufactured offsite and brought to Midtown by boat, then put together like large Lego pieces. “At some point we will get into a model where the modular work is done close to the site. Right now they’re bringing these things in by boat – which in itself is an efficient way to move modular items,” said Anagnost. “But the modular manufacturing will move closer to the sites as modular building factories become more common. What we don’t want to do is create buildings based on modules that need to fit on a railway car.”

PLM doesn’t matter in a world of multiple digital tools

As digital tools get adopted and used in manufacturing processes, the need for PLM will diminish. The data created and collected by the tools will fill PLM needs. “PLM doesn’t matter as much in this new digital world. Once you build the right data model, the concept of PLM falls behind,” said Anagnost. “Basically, we’re already including PLM when we build the data layer that fits underneath in Autodesk. That’s the beauty of the cloud. It contains all of your processes, so you don’t need another layer of data.”

Part of the diminishing need for PLM is that the new tools free up design to move in unexpected directions. “Traditional PLM was sold to solve data problems, but it became a bureaucracy,” said Scott Bordun, CTO of Autodesk. “It was there to help you enforce your rules. Now we’re trying to break restrictions. In many ways, the construction side of the world is more open to the idea of cloud and data. They’re going to skip a whole generation of technology.”

These tools improve the environment

Much of the new way of doing business and creating products has intrinsic value environmentally. “Our solutions impact the environment. Construction is responsible for 30% to 40% of the materials in landfill. The more you can automate construction, the more you can reduce that waste,” said Anagnost. “We believe you can eliminate at least 30% of construction waste through better planning and better modeling. Just the act of modeling reduces waste.”

Anagnost insisted that bringing manufacturing values to construction will result in less wasted materials. “Planning is intrinsic to manufacturing. It’s dominated by modeling. That’s not done much in construction,” said Anagnost. “When it is done, you’ll reduce waste. If you add modular construction, you’ll reduce even more. You enter into a virtuous cycle.”

Generative design will take on a larger and larger role

Generative design is beginning to get used for solving problems far beyond 3D printing shapes. Airbus recently used generative tools to design a plant in Hamburg, Germany. “Generative design is the engine that we use to bridge design and manufacturing. With generative design we’re consolidating the parts,” said Anagnost. “The whole idea of generative design is to become a partner with the designer in creating an outcome. Generative design is really a machine-generated problem solver.”

Part of the value of generative design is the ability to see multiple iterations of an object – or even a plant – in a very short period with little cost. “In generative design, you’ve seen a ton of examples for solving problems. You can consolidate eight parts into one part. That’s great for folks who are applying generative design to their products,” said Scott Reese, SVP for manufacturing, cloud, and production at Autodesk. “The challenge is gaining awareness of generative design. In order to make people more familiar with it, we are making it free until the end of the year.”

Rob Spiegel has covered automation and control for 19 years, 17 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years, he was owner and publisher of the food magazine Chile Pepper.

Nominate Someone Outstanding to Be the 2020 DesignCon Engineer of the Year

Each year during DesignCon, we recognize an engineer who is deemed by their peers to be the best of the best in engineering and new product advancements at the chip, board, or system level. The winner will be selected based on his or her leadership, creativity, and out-of-the-box thinking brought to design/test of chips, boards, or systems, with particular attention paid to areas of signal and power integrity. 

Click here to go directly to the Nomination Form. 

Vishram Pandit, 2019 Engineer of the Year, accepted the award from Naomi Price, Conference Content Director for DesignCon.

Last year we presented the award to Vishram Pandit. His goal is to share knowledge with the technical community that will one day create the CPUs for next-generation cars, phones, and servers. He is well on his way to making that happen; to date he has co-authored a book on Power Integrity for I/O Interfaces and is co-author of approximately 30 conference and journal publications, out of which 19 were presented at DesignCon. Those papers have received 3 best paper awards and 3 finalist awards. Other past award winners have been industry greats Dr. Mike Li in 2018, Heidi Barnes in 2017, Eric Bogatin in 2016, and Michael Steinberger in 2015.

Nominations are open from now until Tuesday, December 3, 2019. To be considered for this award, nominees must be active members of the DesignCon community and cannot be employed by the same company as the winner from the previous year.

Members of the Design News editorial staff will choose finalists from the nominees, and then the DesignCon and Design News communities will have the opportunity to vote for the engineer who will receive the 2020 Award. Watch DesignNews.com for the announcement of the finalists and voting.

The winner of DesignCon’s Engineer of the Year Award will be provided with a $1,000 grant or scholarship to present to the educational institution of his or her choice.

You may nominate multiple people for the award, but please only nominate each person once. Multiple people nominating the same engineer is encouraged. Feel free to nominate yourself or another engineer.

Click here to go to the Nomination Form. 

Click here to see the Official Rules and Regulations of the Award. 

Click here to learn more about DesignCon and register for the event. 

Contact Naomi Price with questions.

Invention: Coming Up With an Idea and Making It Work

At the core of every patent application are its inventors. For both large and small companies, differing opinions on input into the invention and internal politics frequently lead to questions about inventorship. The key point that companies must remember is that invention is based on two elements: coming up with the idea and envisioning how that idea will be made to work.

The mere possibility of an invention as an idea is not enough.  Conception is established when the invention is sufficiently clear in the mind of the inventor to enable one skilled in the art to put it into practice. (Image source: Wolf Greenfield)

While many people may contribute to an invention, only those who originate the idea itself are the true inventors. This notion is referred to as conception, and any inventor listed on the patent application must have contributed to some aspect of the idea itself to be considered an inventor. However, an inventor need not actually build or make the invention operable, so long as the invention can be made from the idea by someone with the relevant knowledge in the field.

Coming Up With the Idea – the “Conception”

Conception has been defined as “the complete performance of the mental part of the inventive act” and it is “the formation in the mind of the inventor of a definite and permanent idea of the complete and operative invention as it is thereafter to be applied in practice….” Townsend v. Smith, 36 F.2d 292, 295, 4 USPQ 269, 271 (CCPA 1930). The mere possibility of an invention as an idea is not enough.  Conception is established when the invention is sufficiently clear in the mind of the inventor to enable one skilled in the art to put it into practice—or, in the language of patents, “reduce it to practice”—without the exercise of extensive experimentation or inventive skill. A classic example of this is the time machine. Many people have desired time travel, but there has yet to be an invented time travel machine. No one has had a “definite and permanent understanding” as to whether or how, or a reasonable expectation that, a time machine would result in time travel.

The vast majority of inventions patented today are incremental improvements on existing technologies. In a single groundbreaking product, there may be tens of these incremental inventions which are worthy of patent protection. Here, the question of conception becomes more nuanced, because there are frequently overlapping incremental inventions produced by a team of designers. Each designer may be an inventor of the improvement or component they helped to conceive. If the design team conceived of each invention together, they are all joint inventors. If different members of the design team conceived different improvements or components of a product, careful attention should be paid to which designers should be included on the patent application. A person is an inventor if they helped to conceive of any invention and is not an inventor if they did not participate in the conception of the invention. Failure to recognize this last point, in particular, is what usually results in incorrect inventorship.

Bringing the Invention to Life – “Reduction to Practice”

A common misconception is that inventorship is based on constructing or developing an operable product based on the conceived invention, also known as reduction to practice. While it is true that the invention is completed on reduction to practice, inventorship only requires conception. An invention may be reduced to practice in one of two ways: the invention can be actually reduced to practice by building a working invention, or the invention can be constructively reduced to practice by describing it well enough in a patent application such that a person of skill in that field can make it. Many people may be involved in the reduction to practice phase, as prototyping, testing, and manufacturing all fall within this phase of invention.

Where companies are often tripped up is determining inventorship through contributions at this stage. An engineer who builds a part to specification, follows the direction of a superior, or otherwise merely takes another’s conceived invention and puts it into real-life practice is not an inventor. Core to inventorship is the original idea; the manufacture of the invention is a secondary component. This means that the engineer who may have spent countless hours building and testing a product is not actually an inventor if they did not conceive of the invention to begin with.

As an example of the differences between the elements of conception and reduction to practice, take a company executive, an engineer, and a manufacturer. The executive approaches the engineer and says, “I want something that will quadruple the battery life of my phone.” The engineer goes to work and determines a solution to the problem laid out by the executive, finally producing a new battery capable of meeting the executive’s goal after a long development process. The engineer then sends the specification for the new battery to the manufacturer, who builds the first operable batteries.

Where does inventorship lie? The executive had the original desire for increased battery life, but his contribution goes no further than expressing a wished-for result. The manufacturer actually built the invention, but they were merely following the specification from the engineer. In this case, the engineer is the inventor. While the engineer did not think of the goal first, they (1) came up with the definite idea of the operable battery and (2) demonstrated knowledge of the invention sufficient to enable the manufacturer to build it without undue experimentation. In effect, the reduction to practice by the person who conceived of the invention is non-essential to inventorship; it may demonstrate knowledge of the means of making the invention, but it is conception that ultimately defines inventorship.

Doing the Inventorship Analysis

Practically speaking, each prospective inventor should be asked about their contribution to the conception of the invention. If they came up with the idea of at least one claim of the patent application, alone or with others, and their idea was definite enough to enable someone with the relevant knowledge in the field to make the invention, they are an inventor. Those who merely expressed a wished-for result, or those who only reduced the idea to practice, are not inventors unless they also contributed to the conception of the invention.

Determining inventorship can be complex and stressful, especially because the consequences of a mistake are at minimum expensive and time consuming, and at worst can invalidate a patent. With the considerations of conception and reduction to practice in mind, it is essential to perform an educated analysis of inventorship when preparing and filing patent applications in order to get it right the first time.

Trevor Day is a technology specialist and Neil Ferraro is a shareholder at intellectual property law firm Wolf Greenfield.

Fiat Chrysler Automobiles Is Simplifying Vehicle Modeling for Better Results

Improved high-strength steels and mind-bogglingly complex computer models of the vehicle body structures built with those new kinds of steel sound like a recipe for better new cars, but in fact they have the potential to make cars needlessly heavy, with a harsh ride or sloppy handling.

The solution is to run an extremely simple model, very early in the vehicle design process, said Fiat Chrysler Automobiles technical fellow for advanced body structure James Truskin, in a presentation at the Altair Technology Conference in Detroit, where he described the company’s new Simplified Loadpath Model.

A conventional complex CAD image of a body-in-white. (Image source: Fiat Chrysler Automobiles)

While today’s powerful computers can quickly crunch through a simulation of a vehicle structure with 5 million degrees of freedom, computers in the early 1990s worked with BEAM-SPRING models with just 480 degrees of freedom to calculate load paths and joints in vehicle structures. “We’ve lost some learnings and understandings of what made a body structure great,” Truskin asserted.

The Simplified Loadpath Model. (Image source: Fiat Chrysler Automobiles)

It is a simple model like the ones from the ‘90s, applied at the very beginning of the process of designing a structure for a planned new vehicle platform, that can steer engineers toward an optimized design that simultaneously reduces weight and improves noise, vibration and harshness characteristics, said Truskin.

“We wanted a new model to understand how to design vehicles with these new materials to get optimized NVH performance,” he said. “It is not possible to do this with regular body structure models later in the design process.”

New high-strength steels can let engineers whittle away material in the structure’s joints, reduce the gauge of the material used, and adjust the height and width of load paths such as rocker panels. “To be efficient, we don’t need joints to be infinitely stiff in every direction, but they do need to be stiff in a few directions,” Truskin explained. The SLM software lets engineers understand that, so they can eliminate excess material that provides stiffness in directions that don’t contribute to the goal of creating necessary strength.

By doing this very early in the process, Truskin says that SLM lets engineers account for expected future variants on the platform, so that when it is time to develop those, the body structure doesn’t require as much change or reinforcement, because the requirements for the different configurations are baked into the fundamental design.

“We do it very early, before we have any CAD at all,” he said. “We only need to know where the powertrain is and where the hard points are for the load paths.”

Using the SLM has provided two key benefits, according to Truskin. The first is an accelerated development process. “It is speeding up the process to understand where the load paths need to be so we can speed up our tradeoff studies,” he said.

Tradeoff studies are the next step in the process, when body structure designers weigh various and sometimes conflicting requirements for space, weight, crash structure, cost and other variables to determine the best combination, Truskin explained.

The second benefit is that it lets the engineering team focus quickly on where they need to do additional work to yield improved NVH. “Generally, NVH standards are increasing in passenger vehicles,” he said.

Optimized stiffness also lets engineers specify stiffer suspension bushings, because the improved body structure does not require as much help isolating vibration from the passengers, and these stiffer bushings improve the vehicle’s steering responsiveness.

And, of course, eliminating excess mass that provides unnecessary stiffness lets the resulting vehicle be lighter and more fuel efficient.

So far, no new Fiat Chrysler models have hit showrooms that benefited from the SLM, so we will have to wait to see the results for ourselves.

Dan Carney is a Design News senior editor, covering automotive technology, engineering and design, especially emerging electric vehicle and autonomous technologies.