Lithium-Ion Battery Design Inspires Fast, Low-Power Memory Device

Researchers have taken inspiration from the design of solid-state lithium-ion batteries to create a new low-power device that could enable faster memory components for next-generation computing systems.

The device, created by a team led by Tokyo Institute of Technology (Tokyo Tech) Professor Taro Hitosugi and his student Yuki Watanabe, has two key design elements that make it relevant to dynamic random-access memory, or DRAM, in computers.

 A diagram of the stacked layers of a new memory device designed by researchers in Japan. The devices can be quickly and efficiently switched between three different voltage states. (Source: Tokyo Institute of Technology)

One is that the device consumes less power than typical devices of this kind, which could also help computing devices conserve battery power, researchers said. The other is that the device has the potential to improve the speed of DRAM, which has a significant impact on overall computer performance. Scientists at the University of Tokyo also worked on the project.

Hitosugi and Watanabe previously developed a similar memory device, also inspired by solid-state lithium-ion batteries, comprising three solid layers made of lithium, lithium phosphate, and gold. The new device is essentially an iteration of that design, with nickel replacing the gold in the stack. “This system can be viewed as an extremely low-capacity thin-film lithium battery with three charged states,” said Hitosugi.

Seeking better performance

Indeed, the three-layer stack forms a mini-battery that can switch between three voltages (0.95 V, 1.35 V, and 1.80 V), but instead of providing energy it acts as a memory cell, the most basic unit in a memory chip.

Each cell in a conventional chip typically stores a single bit by adopting and holding one of two possible voltage values, corresponding to a stored value of either “0” or “1.” Because the individual cells determine the overall performance of the chip, cells that switch quickly without drawing much power are the key to building computers with faster, more efficient memory.
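
To make the contrast with a binary cell concrete, here is a minimal sketch of how a readout routine might map a measured cell voltage onto the device’s three reported states. It is an illustration only: the decoding scheme, thresholds, and example reading are assumptions, not taken from the Tokyo Tech paper.

```python
# Illustrative only: map a measured cell voltage to the nearest of the three
# charged states reported for the device (0.95 V, 1.35 V, 1.80 V). The read
# logic and example reading below are hypothetical, not from the published work.
import math

STATE_VOLTAGES = (0.95, 1.35, 1.80)  # nominal voltages of the three states

def decode_state(measured_voltage: float) -> int:
    """Return the index (0, 1, or 2) of the nominal state closest to the reading."""
    return min(range(len(STATE_VOLTAGES)),
               key=lambda i: abs(STATE_VOLTAGES[i] - measured_voltage))

print(decode_state(1.32))              # -> 1 (closest to 1.35 V)
print(math.log2(len(STATE_VOLTAGES)))  # three levels hold log2(3) ≈ 1.58 bits per cell
```

A real sense circuit does this in hardware with noise margins and error handling; the point is simply that three distinguishable voltage levels carry more information per cell than two.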

Achieving better performance for the cell is indeed why the team switched from gold to nickel in one layer of the stack, they said. The problem with using gold in the device is that the precious metal combines with lithium to form a thick alloy layer within the device, which increases the amount of energy required to switch from one state to another.

Nickel, however, does not easily form alloys with lithium, which allows the device to consume less energy when switching between states, providing “the potential for extremely low energy consumption [that] is the most noteworthy advantage of the device,” said Hitosugi.

Nickel also helps here because the metal forms a very thin nickel oxide layer between the nickel and lithium-phosphate layers, researchers said. This layer is much thinner than the gold–lithium alloy that formed in the team’s previous device, leaving the new cell with a very low capacity that switches quickly and easily between states using only very small currents.

The team published a paper on its research in the journal ACS Applied Materials & Interfaces.

Elizabeth Montalbano is a freelance writer who has written about technology and culture for more than 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco and New York City. In her free time she enjoys surfing, traveling, music, yoga and cooking. She currently resides in a village on the southwest coast of Portugal.


January 28-30: North America’s largest chip, board, and systems event, DesignCon, returns to Silicon Valley for its 25th year! The premier educational conference and technology exhibition, this three-day event brings together the brightest minds across the high-speed communications and semiconductor industries, who are looking to engineer the technology of tomorrow. DesignCon is your rocket to the future. Ready to come aboard? 

Register to attend!

5 Trends That Will Guide Automotive Technology in 2020

Here are five trends that will be playing a key role in making cars safer and more efficient in the years to come.

  • Auto manufacturers have no option other than to realign their strategies to accommodate the looming revolution. Connected and electric cars are already on our roads, and the reality of fully autonomous cars is coming closer and closer. Technology is helping auto companies not only modernize their manufacturing processes but also gather, manage, and analyze data. There’s also a huge amount of data being generated by the vehicles themselves. All of this data will soon be the guiding factor for the automotive industry going forward.

    Here are five trends that will be playing a key role in making rides smoother, safer, and more efficient.

  • 1.) Vehicle VPNs and automotive cybersecurity

    We might not quite be there yet, but we are certainly on the verge of fully adopting autonomous vehicles. There has been a lot of talk surrounding self-driving vehicles, especially in regard to their safety and security. But the promise of connected and autonomous vehicles, and vehicle-to-everything (V2X) communication, also opens up new avenues for hackers to attack our cars.

    Virtual Private Networks (VPNs), which allow users to create secure and private connections across even public networks, have been around for some time now. They even allow you to appear online as if you’re in another country. They have been successfully deployed by consumers and businesses as well as in many high-risk cybersecurity situations, including safeguarding government data.

    With the rise of connected vehicles, it is now clear that car owners and manufacturers are going to be adopting VPNs and other cybersecurity solutions to protect their connected and autonomous cars from cybersecurity threats.

    (Image source: Microchip Technology)

  • 2.) Multimodal mobility

    New options like ridesharing, e-scooters, and electric bikes are transforming the way we think about transportation. Powerful tools have made Big Data collection and analysis seamless. When this data is harnessed under a public-private partnership, it starts to bring flexible, multimodal mobility solutions to life. We are already witnessing this partnership change the travel and tourism industry through white-label journey planning apps. Going forward, urban transportation will get more efficient, streamlined, and, in the long run, sustainable thanks to the adoption of multimodal mobility.

    (Image source: VeoRide)

  • 3.) AI that understands drivers and passengers

    Real-time Big Data analysis enables vehicles to recognize user preferences and automatically adjust their settings in order to make rides more comfortable and customized. Image recognition and processing technologies are also being integrated into cars as a way of training vehicles to identify their owners and users without the need for car keys. Systems like the one being developed by Affectiva can even recognize the emotional states of drivers and passengers. Deep learning is already helping fleet operators monitor drivers remotely. Farther into the future, AI and brain-to-vehicle technologies will also be instrumental in the actualization of driverless car technology.

    (Image source: Affectiva)

  • 4.) Vehicle-to-everything (V2X) communication

    Decision making on our roads is now based on real-time, accurate, and well-analyzed data thanks to the Internet of Things (IoT). V2X technology is bringing connected cars to our roads that will have the ability to capture and digest data from other vehicles and infrastructure, and then act upon that data in order to make our roads safer and more efficient. IoT connectivity will allow vehicles to assess the effectiveness of features such as their braking and steering systems, perform predictive maintenance, and even update their firmware and software without human intervention. Experts agree that V2X will also get a big boost from the emergence of 5G.

    (Image source: NXP Semiconductors)

  • 5.) More sensors on the road

    Cars are already packed with sensors, and more and more advanced sensors, such as LiDAR and even thermal imaging, are being built into autonomous cars. But more sensors will also be coming to our roads. Road scanning will use sensors and cameras to scan the road ahead, identifying any possible imperfections or hitches. Smart vehicles will then use that information to adjust their routes accordingly. WaveSense, a Boston-based sensor company, for example, is using ground-penetrating radar to help vehicles map topography.

    (Image source: WaveSense)

As a child, Ariana Merrill loved to figure out how cars worked, and this has translated into her love and passion for mechanical engineering. For the past 12 years, Ariana has been helping communities thrive through careful monitoring and innovation of electrical and mechanical systems. A tech enthusiast living in New Jersey, she is a computer science and engineering graduate specializing in artificial intelligence, and she loves to write about how AI is transforming industries.

10 Semi/Electronic Device Tech Reveals from IEEE IEDM 2019

2019 IEEE IEDM event reveals latest node chips, chiplets, memories for AI, densest thin-film batteries, 400Gbits/s silicon photonics, quantum computing tools and much more.

  • The theme for this year’s 65th IEEE International Electron Devices Meeting (IEDM) was “Innovative Devices for an Era of Connected Intelligence.” As in previous years, major semiconductor players and international research organizations (e.g., imec, CEA-Leti, UC universities, and others) presented their latest detailed technology for processors, memories, interfaces, and power devices. Additionally, the event covered quantum computing advances, medical uses, and other newer areas of application.

    Here are 10 of the major semiconductor “reveals” at the show for 2019.

  • Leading Edge 5nm Chip with Super Dense Memory

    Moore’s Law may be hitting the wall but it’s not dead yet. TSMC unveiled a complete 5nm technology platform that advanced silicon chip scaling (miniaturization) to the next process node. Reaching the 5nm node milestone was due in part to advances in lithography and improvements in process and packaging techniques.

    TSMC researchers described a 5nm CMOS process optimized for both mobile and high-performance computing. It offered nearly twice the logic density and a 15% speed gain or 30% power reduction over the company’s 7nm process. The process optimization incorporated extensive use of EUV lithography to replace immersion lithography at key points in the manufacturing process.

    TSMC’s 5nm platform also featured FinFETs and high-density SRAM cells. The SRAM could be optimized for low-power or high-performance applications, and the researchers say the high-density version was the highest-density SRAM ever reported. The researchers say high-volume production was targeted for 1H20.

  • Quantum computing 

    Great strides have been made in quantum computing. At the Semicon West/Electronic System Design (ESD) 2019 conference, IBM displayed its IBM Q Experience, a cloud-based quantum computer available for free to anyone with a web browser and an internet connection.

    Creating a quantum computer has been an amazing technological achievement, but like any computer it needs software. Imec – the international Flemish R&D nanoelectronics organization – presented the first step toward developing a systematic approach to the design of quantum computing devices.

    EDA chip design software such as TCAD is necessary to produce highly accurate models of semiconductor devices and their operation. To date, no analogous tools exist to model qubits, the basis of quantum computing, because the field is so new and complex. If these design tools did exist, the development of quantum computers could take place much more quickly.

    The Imec team has taken a step to create such a software framework using multiphysics simulation methods to develop a comprehensive design methodology for qubits built in silicon. They modeled device electrostatics, stress, micro-magnetics, band structure and spin dynamics. Based on the results of these studies, they say that single-electron qubits in quantum dots can be induced and optimized in silicon MOSFETs with thin (<20nm) gate oxides. The researchers discussed critical aspects of their methodology, the parameters they modeled, and next steps.

  • 3D Chiplets

    Intel presented a novel 3D heterogeneous integration process for chiplet creation. It is seen as an evolution of Moore’s Law, a way to keep the scaling, size and cost benefits continuing into the foreseeable future.

    Chiplets are a type of advanced packaging that offers a different way to integrate multiple dies into a package or system. There are a number of ways to make chiplets, but all use a library of modular chips – like Lego building blocks. These modular chips are assembled in a package that connects them using a die-to-die interconnect scheme.

    There are many other approaches to combining chip dies, such as 2.5D designs in which dies are stacked on top of an interposer. But the hope with a chiplet approach is that it is a faster and less expensive way to assemble various types of third-party chips, like processors, memory, interfaces and the like.

    Here are the details: Intel believes that heterogeneous 3D integration will drive scaling. CMOS technology requires both NMOS and PMOS devices. Intel researchers used 3D sequential stacking architecture to combine these different devices. They first built Si FinFET NMOS transistors on a silicon wafer. On a separate Si wafer they fabricated a single-crystalline Ge film for use as a buffer layer. They flipped the second wafer, bonded it to the first, annealed them both to produce a void-free interface, cleaved the second wafer away except for the Ge layer, and then built gate-all-around (GAA) Ge-channel PMOS devices on top of it. The researchers say these results show that heterogeneous 3D integration is promising for CMOS logic in highly scaled technology nodes.

    This image shows a schematic and a cross-section of a fully processed 3D CMOS transistor structure achieved by this process; in the middle is a thickness contour map of the Ge transfer layer, showing good uniformity; and at right is a 3D cross-sectional view of the completed 3D CMOS chip showing Ge-channel GAA transistors on top of Si FinFET NMOS transistors.

  • AI That Doesn’t Forget

    Embedded STT-MRAM and other non-volatile memories (NVMs) are getting a lot of attention lately. NVM devices retain their memory even after the power is removed. Embedded STT-MRAM is one NVM that shows particular promise in the embedded memory space for cache memory in IoT and AI applications.

    At IEDM 2019, TSMC described a versatile 22nm STT-MRAM technology for AI while Intel talked about STT-MRAMs for use in L4 cache applications.

    In STT-MRAM writing, an electric current is polarized by aligning the spin direction of the electrons flowing through a magnetic tunnel junction (MTJ) element. Data writing is performed by using the spin-polarized current to change the magnetic orientation of the information storage layer in the MTJ element. Intel improved the process and stack for L4 cache applications. STT-MRAM technology for L4 cache requires tighter bitcell pitches, which translate into smaller MTJ sizes and reduced available write current.
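
    As a purely conceptual illustration of the write and read behavior described above, the toy model below uses the sign of the write current to set the free layer parallel or antiparallel to the fixed layer and reads the bit back as a low or high tunnel resistance. It is a software sketch only; the resistance values, sense threshold, and current convention are assumptions, and it does not represent Intel’s or TSMC’s actual bitcell designs.

    ```python
    # Toy STT-MRAM bitcell model, for illustration only. Real devices involve
    # write-current thresholds, retention, and error rates not modeled here.

    class ToyMTJ:
        R_PARALLEL = 5_000       # ohms; low-resistance (parallel) state, hypothetical value
        R_ANTIPARALLEL = 10_000  # ohms; high-resistance (antiparallel) state, hypothetical value

        def __init__(self):
            self.parallel = True  # free layer starts aligned with the fixed layer

        def write(self, current_ma: float) -> None:
            """The direction of the spin-polarized current sets the free layer's orientation."""
            self.parallel = current_ma > 0  # sign convention chosen arbitrarily for this sketch

        def read(self) -> int:
            """Read the stored bit back from the junction resistance (TMR effect)."""
            resistance = self.R_PARALLEL if self.parallel else self.R_ANTIPARALLEL
            return 0 if resistance < 7_500 else 1  # hypothetical sense threshold

    cell = ToyMTJ()
    cell.write(-0.05)   # negative current -> antiparallel -> logic 1
    print(cell.read())  # -> 1
    ```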

  • Organ Forceps With a Special Touch

    Our internal organs are slippery because they’re covered with blood and other body fluids, so grasping and pulling them with forceps can be challenging. Although contact-force sensors have been placed on the tips of forceps used in diagnostic laparoscopic and robotic surgeries, there currently is no way to know if they are slipping, other than visually via a monitor, which has limited usefulness. A Kagawa University team described a highly sensitive slip-sensing imager (sub-mm resolution) and novel algorithm that can, in effect, give forceps a sense of touch. The idea is to use the device to visualize the spatial distribution of the grasping force across the organ’s surface. The center of that distributed load is calculated, and as the forceps are moved the algorithm relates any corresponding movements of the load center to slippage. Built on an SOI wafer, the device’s force-sensor pixels consist of a 20µm–thick piezoelectric silicon diaphragm (400µm diameter) with a center contact, and with a force detection circuit integrated on the diaphragm. The diaphragm acts as a strain gauge as it flexes due to varying grasping force.
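
    The load-center calculation the team describes is essentially a center-of-pressure computation over the sensor’s pixel grid. The sketch below illustrates that idea only; it is not the Kagawa University algorithm, and the grid, tolerance, and slip criterion are made up for the example.

    ```python
    # Minimal center-of-pressure sketch, for illustration only; the actual
    # slip-detection algorithm and sensor geometry are not reproduced here.

    def load_center(force_map):
        """Return the (x, y) centroid of a 2D grid of grasping-force readings."""
        total = sum(sum(row) for row in force_map)
        if total == 0:
            return None  # nothing is being grasped
        cx = sum(f * x for row in force_map for x, f in enumerate(row)) / total
        cy = sum(f * y for y, row in enumerate(force_map) for f in row) / total
        return cx, cy

    def slip_detected(prev_center, new_center, tol=0.5):
        """Treat a shift of the load center across the sensor array as possible slip."""
        if prev_center is None or new_center is None:
            return False
        dx, dy = new_center[0] - prev_center[0], new_center[1] - prev_center[1]
        return (dx * dx + dy * dy) ** 0.5 > tol

    frame1 = [[0, 1, 0], [0, 3, 0], [0, 1, 0]]  # load centered on the middle column
    frame2 = [[0, 0, 1], [0, 0, 3], [0, 0, 1]]  # load has shifted one column over
    print(slip_detected(load_center(frame1), load_center(frame2)))  # -> True
    ```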

  • Impedance Sensor for Fingerprint Imaging

    Researchers led by Cornell discussed the monolithic integration of a piezoelectric aluminum nitride (AlN) resonator into a CMOS-controlled, GHz ultrasonic impedance sensor/imager. The device measures changes in surface properties such as surface oxidation, materials, liquid viscosity and others, and is meant for use in wearable, IoT and smartphone systems to detect fingerprints with high resolution, determine tissue states, and for other applications. This is the first time monolithic fabrication – all in one chip or die –  has been successfully demonstrated, and it led to small, power-efficient GHz sensing arrays with improved performance vs. the standard two-chip heterogeneous integration approach, thanks to less parasitic coupling and a higher signal-to-noise ratio.

  • Thin-Film Battery Goes High-Density

    The miniaturization of power sources hasn’t kept pace with the miniaturization of electronics. Although integrated electrochemical capacitors offer high power density, high frequency response and novel form factors, their low energy densities are of limited value for MEMS and autonomous device applications that require long periods between charging. CEA-Leti researchers discussed a thin-film battery (TFB) with the highest areal energy density yet reported (890 µAh/cm²) and high power density (450 µAh/cm²). Built on silicon wafers using UV photolithography and etching for the successive deposition and patterning of each layer, the thin-film battery integrates a 20µm-thick LiCoO2 cathode in a Li-free anode configuration. It showed good cycling behavior over 100 cycles, and the fact it was built using a wafer-level process opens up the possibility to tightly integrate this battery technology with future electronic devices.

  • Physically Unclonable Function (PUF) for Mobile and Smart Devices

    The spread of networked mobile devices and smart gadgets in the IoT landscape has created an urgent need to protect them with lightweight and low-power cryptographic solutions. A physically unclonable function (PUF) is a hardware-intrinsic security primitive, or basic programming element. UC Santa Barbara researchers discussed an ultra-low-power PUF that operates on the varying electrical resistances and current leakages that arise from intrinsic process variations in ReRAM crossbar arrays. The team built 4K-ReRAM passive crossbar circuit arrays fabricated with a CMOS-compatible process suitable for back-end-of-the-line (BEOL) integration. The arrays allow for an extremely large number of challenge-response pairs (a common cryptographic protocol), as well as 4x better density vs. other ReRAM architectures plus a ~100x improvement in power efficiency and more robust security metrics.
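
    Conceptually, a PUF maps a digital challenge onto a response that depends on uncontrollable device-specific variation. The sketch below is only a software analogy of that idea: the per-device variation is faked with a seeded random resistance map, whereas the UCSB device derives it from analog ReRAM resistance and leakage in hardware. The array size, challenge format, and pairwise-comparison scheme are assumptions for illustration.

    ```python
    # Conceptual PUF sketch, for illustration only; not the UCSB ReRAM circuit.
    import random

    class ToyCrossbarPUF:
        def __init__(self, rows=64, cols=64, device_seed=12345):
            # Stand-in for per-device "process variation": fixed at manufacture, device-specific.
            rng = random.Random(device_seed)
            self.resistance = [[rng.uniform(1e3, 1e6) for _ in range(cols)]
                               for _ in range(rows)]

        def response(self, challenge):
            """Map a challenge (a list of (row, col) cell addresses) to a response bitstring."""
            bits = []
            for (r1, c1), (r2, c2) in zip(challenge[::2], challenge[1::2]):
                # Compare two selected cells; the sign of the difference is one response bit.
                bits.append("1" if self.resistance[r1][c1] > self.resistance[r2][c2] else "0")
            return "".join(bits)

    puf = ToyCrossbarPUF()
    challenge = [(3, 17), (40, 8), (12, 55), (60, 2)]
    print(puf.response(challenge))  # same device + challenge -> same response; another device differs
    ```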

  • Silicon photonics

    Data races around inside data centers at very high speed via optical fiber, using silicon photonic (light-based) interfaces that operate at 100 Gb/s. But cloud data center traffic is growing at nearly 30% per year, and there soon will be a need to increase the data rates. A STMicroelectronics-led team described a new silicon photonics technology platform built on 300mm Silicon-on-Insulator (SOI) wafers, yielding devices that operate at 400 Gbits/s (each device has four channels, each of which operates at 100 Gbits/s).

    Optical coupling and polarization management are key requirements, and their devices incorporate a 60 GHz high-speed photodiode and a high-speed phase modulator. They also built devices with a supplementary SiN waveguide layer for higher coupling efficiency, to meet evolving data-transmission requirements. The researchers say the photonics platform has the potential to meet the requirements of applications other than data centers, too, such as automotive.

    The image shows the chip-on-board assembly of an analog front-end (AFE) function implemented in a 400G-DR4 optical transceiver using the technology, alongside PAM4 signal eye diagrams at 106 Gbits/s per channel, used to measure high-speed signal quality.

  • 5G and beyond

    One of the challenges for chip makers is how to integrate III-V materials with silicon to make ultra-fast devices for 5G and other uses that are compatible with conventional CMOS technology. III-V compound semiconductors are obtained by combining group III elements (essentially Al, Ga, and In) with group V elements (essentially N, P, As, and Sb). This gives 12 possible binary combinations; the most important ones are probably GaAs, InP, GaP, and GaN.

    IoT and 5G applications typically use sensors that transmit wireless data to an edge or cloud network. This requires a combination of RF capabilities with a small form factor and low operating power. A promising approach to achieving this combination is to create single chips that combine the capabilities of silicon CMOS with those of III-V devices, such as gallium nitride (GaN) and indium gallium arsenide (InGaAs). The unique properties of III-V compounds make them well suited for optoelectronics (LEDs) and communications (5G).

    At IEDM, Intel described how low-leakage, high-k dielectric enhancement-mode GaN NMOS and Si PMOS transistors were built monolithically on a 300mm Si substrate. The goal was to combine GaN’s high-frequency, high-temperature, and high-power attributes with silicon CMOS circuitry’s digital signal processing, logic, memory and analog capabilities, to create compact devices for next-generation solutions for power delivery, RF and system-on-chip (SoC) applications. The researchers say both device types demonstrated excellent performance across a range of electrical specifications.

    III-V materials offer higher electron mobilities than silicon, and HBTs made from them are very fast transistors often used for RF and other high-frequency applications. A key goal is to build them on 300mm silicon wafers instead of other substrates, to take advantage of silicon’s lower manufacturing costs. A team led by imec described how they used a unique nano-ridge engineering technique to build GaAs/InGaP HBTs on a 300mm silicon substrate.


John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to systems engineering and electronics for IEEE, Wiley, and Elsevier.

The 10 Greatest Issues AI Needs to Face
  • There are a lot of reasons to be excited about artificial intelligence. AI is transforming industries in innovative ways and even enabling entirely new business models to emerge.

    But there are also a lot of reasons to be cautious about AI. The 2019 AI Now Report, created by the AI Now Institute, takes a look back on the social impact of AI in 2019, and some of the most important issues surrounding the technology as it moves forward. The AI Now Institute is a non-profit, interdisciplinary research institute “dedicated to understanding the social implications of AI technologies.”

    “This year we saw a wave of pushback, as community groups, researchers, policymakers, and workers demanded a halt to risky and dangerous AI,” the report says.

    As AI moves into the next decade, we’ve outlined some of the most important issues the technology will have to grapple with in the coming years.

  • 1.) Algorithmic bias is already affecting us

    As more and more AI algorithms are implemented in decision-making processes in everything from real estate to healthcare, it is important for developers to be aware of the inherent biases within the datasets they use to train AI.

    Apple’s Apple Card recently came under fire from customers – including Apple co-founder Steve Wozniak – over allegations that the card’s approval system was assigning lower credit limits to female customers.

    Experts agree it will likely be impossible to completely safeguard systems against bias, but steps can be taken to mitigate its impact; a minimal example of one such check is sketched below.

    (Image source: Apple)
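
    As a minimal illustration of the kind of audit developers can run on training data or decision logs, the sketch below compares an approval rate across groups and reports the disparity. It is a hypothetical example with made-up records, unrelated to Apple’s or any other real system; real fairness audits use far richer metrics and data.

    ```python
    # Hypothetical bias check: compare approval rates across groups in a decision log.
    # The records below are invented for illustration.
    from collections import defaultdict

    def approval_rates(records):
        """records: iterable of (group, approved) pairs -> {group: approval rate}."""
        approved = defaultdict(int)
        total = defaultdict(int)
        for group, ok in records:
            total[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / total[g] for g in total}

    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
    rates = approval_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    print(rates, f"disparity={gap:.2f}")  # flag for review if the gap exceeds a chosen threshold
    ```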

  • 2.) Facial recognition is watching us

    Facial recognition is already here and being widely deployed throughout the world. In China, facial recognition technology has become part of surveillance and security systems and even allows customers to use their faces to access ATMs.

    While there is an argument for convenience and security, there are also widespread privacy and ethics concerns around using AI facial recognition. The city of Detroit is facing pushback over plans to add facial recognition to its Project Green Light – a camera system that allows police departments to monitor businesses and intersections in real time.

    In 2019 the cities of Oakland, Calif., Somerville, Mass., and San Francisco passed ordinances banning municipal use of facial recognition technology.

    By contrast, however, the Department of Homeland Security (DHS) announced that it plans to issue a proposed regulation that could require all travelers, including US citizens, to submit to face and other biometric scans at airports and other ports of entry.

    Regarding the DHS announcement, ACLU Senior Policy Analyst Jay Stanley had this to say:

    “Time and again, the government told the public and members of Congress that US citizens would not be required to submit to this intrusive surveillance technology as a condition of traveling. This new notice suggests that the government is reneging on what was already an insufficient promise.”

    (Image source:  teguhjati pras from Pixabay )

  • 3.) Deepfakes are a reality

    If you want to see the power of deepfakes, you only need to browse YouTube channels like Ctrl Shift Face.

    This isn’t a special effect. With enough data (including images and audio), AI algorithms can reconstruct and superimpose individuals’ faces onto existing video footage. It makes for some entertaining viral videos, but there are wider, more frightening implications for deepfakes, as they can be used to create fraudulent videos of political figures, celebrities, and even private citizens. Left unchecked, deepfakes could become a powerful tool for the spread of misinformation.

    (Image source: Ctrl Shift Face)

  • 4.) Algorithms are ruining our social media experience

    Have you ever watched one video on YouTube or liked a post on Facebook or other social media only to be sent down a rabbit hole of increasingly questionable recommendations? That’s not an accident – that’s AI trying to predict what you’ll “like.” And by “like” we mean it’s trying to figure out what content you’re most likely to engage with – and that often means offending or shocking you. Algorithmic issues are being blamed for both the rise in the quantity of extremist content on social media and its proliferation. Google, Facebook, and others have pledged to search for ways to tamp down the spread of dangerous and extremist content as well as misinformation. But many would argue the damage has already been done.

    (Image source: Pixelkult from Pixabay  )

  • 5.) AI is a powerful tool for hacking

    Automation is meant to handle the dirty, dangerous, and repetitive tasks humans can’t or don’t want to perform, right? Well, the benefits cut both ways. More and more malicious hackers are leveraging AI technology to assist with sophisticated cybersecurity attacks. A well-trained algorithm can attack a target with a level of speed and efficiency that would be difficult for a single hacker, or even a larger group of hackers, to match. Fortunately, cybersecurity companies like XM Cyber are fighting fire with fire, using machine learning algorithms to safeguard networks and sensitive systems as well.

    (Image source: XM Cyber)

  • 6.) AI developers lack diversity

    Issues with AI can be correlated to a lack of racial and gender diversity among the engineers and developers being hired at the top technology companies working on AI. The AI Now Institute has found that Black and Latinx workers are substantially underrepresented in the tech workforce, and women are particularly underrepresented as AI researchers.

    “Rather than recognizing the scale and systemic nature of the problem, tech companies have responded to mounting evidence of bias and misuse by primarily focusing on narrow diversity solutions,” the AI Now Institute said. “They have also attempted technical debiasing, working to ‘fix’ algorithms and diversify data sets, even though these approaches have proven insufficient and raise serious privacy and consent concerns. Notably, neither approach addresses underlying structural inequalities.”

    (Image source: PixLoger from Pixabay)

  • 7.) AI isn’t green

    As engineers come to terms with the realities of climate change and the need to develop greener technologies, AI is having its own energy crisis. The massive amount of compute power required for AI also comes with a massive energy bill.

    “As a whole, the industry’s energy dependence is on an exponential trajectory, with best estimates showing that its 2020 global footprint amounts to 3–3.6 percent of global greenhouse emissions, more than double what the sector produced in 2007,” the AI Now Institute said. “This is comparable to that of the aviation industry, and larger than that of Japan, which is the fifth biggest polluter in the world.”

    Tech companies are already implementing renewable energy sources and other means to make data centers more efficient. But the emergence of 5G and other advanced networking technologies only threatens to make the problem worse before it gets better. “In the worst-case scenario, this footprint could increase to 14 percent of global emissions by 2040,” the Institute warned.

    (Image source: Free-Photos from Pixabay )

  • 8.) AI helps privatize public infrastructure

    “Troubling partnerships between government and private tech companies also emerged as a trend this year, especially those that extended surveillance from public environments into private spaces like private properties and the home,” the AI Now Institute said.

    In 2019 the city of Detroit established the “Neighborhood Real-Time Intelligence Program,” a $9 million, state- and federally-funded initiative that would expand the city’s Project Green Light surveillance system to 500 intersections, in addition to the 500 businesses where it is already deployed, as well as add facial recognition technology to the system. The city has reported reduced crime in areas covered by Project Green Light, but that hasn’t stopped privacy advocates from protesting the technology.

    In 2018, Amazon came under fire for offering to let police departments utilize its facial recognition software. The company has also negotiated with over 700 police departments in the US to give police access to videos from Ring smart home cameras if the footage can help with a criminal investigation, according to the AI Now Institute.

    (Image source: Pixabay)

  • 9.) Automation impacts people of color and the poor the most

    The debate about automation and labor likely won’t ever stop. But the narrative is taking new shape as more data emerges about specific groups affected by rapid automation due to AI.

    Depending on who you ask, automation will be a boon to the economy as well as personal productivity, or it will usher in a dystopian nightmare where humans struggle for basic needs while robots handle all of the jobs.

    “Both narratives are predicated on the assumption that automation in the workplace is inevitable and that automated systems are capable of performing tasks that had previously been the work of humans. What is missing from both conflicting narratives is the more nuanced prediction of who will be harmed and who will benefit from labor automation in the years to come,” the AI Now Institute said.

    The 2019 AI Now Report predicts that Black, Latinx, and low-wage workers in the US will be disproportionately impacted by increased levels of automation.

    (Image source: mohamed_hassan from Pixabay)

  • 10.) AI is removing the ‘human’ from human resources

    More and more companies are using AI to manage and oversee workers. AI is even being implemented into the hiring process. Amazon, for example, uses an AI system to set shifting performance goals for its warehouse workers. Workers are assigned a daily “rate” of productivity to hit each day, based on their prior performance and the overall goals of the warehouse.

    “If a worker falls behind, they are subject to disciplinary action. In many warehouses, termination is an automated process (not unlike being “kicked off” a gig-economy platform),” the AI Now Institute said. “According to Abdi Muse, an organizer with Amazon warehouse workers in Minneapolis, if workers fall behind the algorithmically set productivity rate three times in one day, they are fired, however long they may have worked for the company, and irrespective of the personal circumstances that led to their ‘mistakes.’ ”

    “The introduction of AI-enabled labor-management systems raises significant questions about worker rights and safety. The use of these systems—from Amazon warehouses to Uber and InstaCart—pools power and control in the hands of employers and harms mainly low-wage workers (who are disproportionately people of color) by setting productivity targets linked to chronic injuries, psychological stress, and even death and by imposing unpredictable algorithmic wage cuts that undermine economic stability.”

    (Image source: iosphere / Freedigitalphotos.net)

Lamborghini and MIT Double the Energy Density in Super Capacitors
Image source: Automobili Lamborghini

An advance in super capacitor technology by partners Automobili Lamborghini and the Massachusetts Institute of Technology promises to make this method of energy storage and release even more suitable for fast Italian hybrid-electric super sports cars. The patented technique doubles the energy density of the capacitor compared to the current state of the art, according to Lamborghini.

The new patented material was synthesized by professor Mircea Dincă’s team in the laboratories of MIT’s Chemistry Department with the support of Lamborghini’s Concept Development Department, and it is based on the “Metal-Organic Frameworks” (MOF) concept. The molecular structure of this family of materials makes it the ideal candidate for producing electrodes for high performance supercapacitors of the future, because it maximizes the amount of surface area exposed to electric charge in relation to the mass and volume of the sample.

While the patented material is still in the lab and not in the factory, the company does not see any obvious issues preventing the manufacture of production-ready capacitors that use it. “The investigation about production of the technology is at the very beginning, but at the moment we don’t see many obstacles,” Maurizio Reggiani, chief technical officer, told Design News.

Reggiani (left) and Dincă (right). Image source: Automobili Lamborghini

Of course, after manufacturability, the next question with new technology is the cost, though a maker of high-end sports cars is less sensitive to cost than others. “Costs have not been evaluated yet,” Reggiani reported.

Lamborghini has been outspoken in its intention to preserve the auditory characteristics of its signature naturally aspirated V10 and V12 engines despite the industrywide move to forced induction. Instead, the Italian supercar maker will rely on electric boosting to keep its machines competitive with turbocharged rivals’ performance while preserving the shriek its customers love.

But rather than follow the conventional hybrid-electric route, using lithium-ion batteries, Lamborghini is pursuing capacitors for energy storage. “There’re several very interesting characteristics,” Reggiani explained. “The power density, first of all, which makes the capacitors much more powerful compared with batteries — up to three times more power for a given mass — with a symmetrical behavior which makes them able to recuperate as much power as they can deliver.  Under this aspect, the difference with batteries is huge.”

Hidden by this cover is the Sián’s super capacitor. Image source: Automobili Lamborghini

And that’s not the only benefit. “Then, the very low electrical resistance, which means high efficiency and low heat dissipation, and the very long life, measurable in millions of cycles in comparison with the thousands of cycles of the batteries,” Reggiani added.

Lamborghini began its push toward super capacitor hybrids in 2017, with the Terzo Millennio, and most recently with the Sián, which debuted at the Geneva Motor Show earlier this year. As for when additional such models will arrive and when they will employ this advance in technology, Lamborghini cannot say just yet. “It’s too early for both questions. But the evolution of the project up to now has been quicker than expected, so we’re optimistic for the next steps as well.”

Which should be naturally aspirated music to the ears of fans of the traditional operatic V10 and V12 arias by Lamborghini’s supercars.

Dan Carney is a Design News senior editor, covering automotive technology, engineering and design, especially emerging electric vehicle and autonomous technologies.

Vote for the 2020 Engineer of the Year

Now is the time to cast your vote for the DesignCon 2020 Engineer of the Year. This award is given out each year during the DesignCon event and seeks to recognize the best of the best in engineering and new product advancements at the chip, board, or system level, with a special emphasis on signal integrity and power integrity.

Editors of Design News and the staff of DesignCon would like to offer hearty congratulations to the finalists. For this year’s award, the winner (or his/her representative) will be able to direct a $1,000 donation to any secondary educational institution in the United States. The details on each nominee are below as provided in their published biographies and by the person/s who made the nomination. Please cast your vote by following this link.

Voting closes at noon Pacific Time on Friday, December 27. The winner will be announced at DesignCon 2020, January 28-30, at the Santa Clara Convention Center, Santa Clara, CA.

The six finalists for the 2020 DesignCon Engineer of the Year Award are listed below, along with each finalist’s bio and community activity:

Cast your vote for the 2020 Engineer of the Year by noon PT, December 27.

See the Official Rules of the Engineer of the Year Award

Please click here to learn more about DesignCon and register to attend

Jay Diepenbrock

Consultant, SIRF Consultants LLC

Joseph C. (Jay) Diepenbrock holds an Sc.B. (EE) from Brown University and an MSEE from Syracuse University. He worked in a number of development areas in IBM, including IC, analog and RF circuit, and backplane design. He then moved to IBM’s Integrated Supply Chain, working on the electrical specification, testing, and modeling of connectors and cables, and was IBM’s subject matter expert on high-speed cables. After a long career at IBM he joined Lorom America as Senior Vice President, High Speed Engineering, and led the Lorom Signal Integrity team, supporting its high-speed product development. He left Lorom in 2015 and is now a signal integrity consultant with SIRF Consultants, LLC.

With 12 patents and 30 publications to his name, and recognized as an expert in SI, Jay is currently the technical editor of the IEEE P370 standard and has worked on numerous other industry standards. He is a Senior Member of the IEEE and was an EMC Society Distinguished Lecturer. Jay has a steadfast commitment to solid engineering and to communicating and teaching about it. He regularly contributes to industry discourse and education at events and in trade publications. He has made a distinguished career in high-speed product development, including backplane design, high-speed connectors and cables, and signal integrity consulting. Beyond that, Jay actively volunteers his time for disaster and humanitarian relief around the world, including being part of the IEEE MOVE truck, which provides emergency communications during and after a disaster. He truly uses his engineering skills to make the world a better place.

Jay is a long-time, active member of the DesignCon Technical Program Committee.

This year at DesignCon, Jay will be presenting the tutorial “Introduction to the IEEE P370 Standard & Its Applications for High Speed Interconnect Characterization” and speaking in the panel “Untangling Standards: The Challenges Inside the Box.”

Cast your vote for the 2020 Engineer of the Year by noon PT, December 27.

Vladimir Dmitriev-Zdorov

Senior Key Expert, EBS Product Development, Mentor, A Siemens Business

Dr. Vladimir Dmitriev-Zdorov has developed a number of advanced models and novel simulation methods used in Mentor products. His current work includes the development of efficient methods of circuit/system simulation in the time and frequency domains, transformation and analysis of multi-port systems, and statistical and time-domain analysis of SERDES links. He received Ph.D. and D.Sc. degrees (1986, 1998) based on his work on circuit and system simulation methods. The results have been published in numerous papers and conference proceedings, including DesignCon. Several of his DesignCon papers, such as “BER- and COM-Way of Channel-Compliance Evaluation: What are the Sources of Differences?” and “A Causal Conductor Roughness Model and its Effect on Transmission Line Characteristics,” have received the Best Paper Award. Dr. Dmitriev-Zdorov holds 9 patents.

Vladimir is an active member of the DesignCon Technical Program Committee.

This year at DesignCon, Vladimir will be presenting the technical session, “How to Enforce Causality of Standard & “Custom” Metal Roughness Models” and on the panel “Stump the SI/PI Experts.”

Cast your vote for the 2020 Engineer of the Year by noon PT, December 27.

Tim Hollis

Fellow, Micron Technology

Tim Hollis is a distinguished member of the Micron Technology technical staff and an advanced signaling R&D lead. His main focus is identifying and directing forward-looking projects for the SI R&D team to pursue and driving a cross-functional working group intended to provide forward-looking technical guidance to upper management.

Tim has shown outstanding technical leadership in solving numerous challenges with regard to high-speed DDR memory interfaces, for both computing and graphics applications. He has contributed papers to DesignCon and received a Best Paper Award in 2018 as lead author for “16Gb/s and Beyond with Single-Ended I/O in High-Performance Graphics Memory.” His 85 patents reflect his innovative mind and his prodigious contributions to technology.

Tim received a BS in Electrical Engineering from the University of Utah and a Ph.D. in Electrical Engineering from Brigham Young University.

Cast your vote for the 2020 Engineer of the Year by noon PT, December 27.

Istvan Novak

Principal SI and PI Engineer, Samtec

Istvan Novak is a Principal Signal and Power Integrity Engineer at Samtec, working on advanced signal and power integrity designs. Prior to 2018 he was a Distinguished Engineer at SUN Microsystems, later Oracle. He worked on new technology development, advanced power distribution, and signal integrity design and validation methodologies for SUN’s successful workgroup server families. He introduced the industry’s first 25um power-ground laminates for large rigid computer boards, and worked with component vendors to create a series of low-inductance and controlled-ESR bypass capacitors. He also served as SUN’s representative on the Copper Cable and Connector Workgroup of InfiniBand, and was engaged in the methodologies, designs and characterization of power-distribution networks from silicon to DC-DC converters. He is a Life Fellow of the IEEE with twenty-five patents to his name, author of two books on power integrity, teaches signal and power integrity courses, and maintains a popular SI/PI website.

Istvan has in many cases single-handedly helped the test and measurement industry develop completely new instruments and methods of measurement. New VNA types, scope probes, and methodologies are on the market today thanks to Istvan’s efforts and openness to help others. He was responsible for the power distribution and high-speed signal integrity designs of SUN’s V880, V480, V890, V490, V440, T1000, T2000, T5120 and T5220 midrange server families. Last, but not least, Istvan has been a tremendous contributor to the SI List, educating and helping engineers across the world with their SI/PI problems. Istvan is an active member of the DesignCon Technical Program Committee, sharing his expertise by participating in the review of content for multiple tracks. He is an IEEE Fellow and has been a tutor at the University of Oxford, UK for the past 10 years. He has also been a faculty member at CEI Europe AB since 1991 and served as Vice Dean of Faculty, Associate Professor at the Technical University of Budapest.

At DesignCon 2020, Istvan will be participating in the technical session, “Current Distribution, Resistance & Inductance in Power Connectors,” and the panel, “Stump the SI/PI Experts.”

Cast your vote for the 2020 Engineer of the Year by noon PT, December 27.

Michael Schnecker

Business Development Manager, Rohde & Schwarz

Michael Schnecker’s experience in the test and measurement industry includes applications, sales, and product development, with specialization in signal integrity applications using oscilloscopes and other instruments. Prior to joining Rohde & Schwarz, Mike held positions at LeCroy and Tektronix. While at LeCroy, he was responsible for the deployment of the SDA series of serial data analyzers.

Mike has more than two decades of experience working with oscilloscope measurements. His background in time and frequency domains provides him with unique insight into the challenges engineers face when testing high-speed systems for both power and signal integrity. Interacting with engineers in the industry daily has allowed Mike to master the ability to explain complex measurement science to engineers at any level. He also holds several patents, including methods and apparatus for analyzing serial data streams as well as coherent interleaved sampling. Thus, Mike is recognized as a thought leader and exceptional mentor in the signal and power integrity community.

Mike has a BS from Lehigh University and an MS from Georgia Tech, both in electrical engineering. 

This year at DesignCon, Mike will be presenting the tutorial “Signal Integrity: Measurements & Instrumentation” and speaking in the technical session “Real-Time Jitter Analysis Using Hardware Based Clock Recovery & Serial Pattern Trigger.”

Cast your vote for the 2020 Engineer of the Year by noon PT, December 27.

Yuriy Shlepnev

President and Founder, Simberian

Yuriy Shlepnev is President and Founder of Simberian Inc., where he develops the Simbeor electromagnetic signal integrity software. He received an M.S. degree in radio engineering from Novosibirsk State Technical University in 1983 and a Ph.D. in computational electromagnetics from Siberian State University of Telecommunications and Informatics. He was the principal developer of an electromagnetic simulator for Eagleware Corporation and a leading developer of electromagnetic software for the simulation of signal and power distribution networks at Mentor Graphics. The results of his research have been published in multiple papers and conference proceedings.

Yuriy conceived and brought to market a state-of-the-art electromagnetic field solver tool suite, is considered an expert in his field, and regularly posts teaching videos. He is a senior member of the IEEE AP, MTT, EMC, and CPMT societies. He is also a Fellow of Kong’s Electromagnetics Academy and a member of the Applied Computational Electromagnetics Society (ACES).

Yuriy is active in the Technical Program Committee for DesignCon and has served as a track co-chair in the past. At DesignCon this year he will be presenting the tutorial “Design Insights from Electromagnetic Analysis & Measurements of PCB & Packaging Interconnects Operating at 6- to 112-Gbps & Beyond” and speaking in the technical session “Machine Learning Applications for COM Based Simulation of 112Gb Systems.”

Cast your vote for the 2020 Engineer of the Year by noon PT, December 27.

Learn more about DesignCon and register to attend

Magic Leap Is Hoping Enterprise Will Be Its Salvation
Magic Leap is rolling out a suite of solutions targeted at enterprise users. But is it too little too late? (Image source: Magic Leap)

When the Magic Leap One first rolled out, we wondered if the headset would be better suited to enterprise applications, rather than consumers. Now it looks like the company is finally ready to see if engineers and designers will embrace its product.

Considering all the hype that once surrounded Magic Leap, and where the company is now, it’s easy to recall the scene of Obi-Wan confronting Anakin in Revenge of the Sith. Magic Leap was supposed to be the chosen one: the company that would make a quantum leap in extended reality (XR) technology.

What has emerged in the year since the “spatial computing” company released its flagship headset – the Magic Leap One – is less of a vision of a bold, new future and more of an emerging cautionary tale.

Magic Leap has announced it will now be offering a suite of services and applications targeted specifically at enterprise, as well as a new headset to go with it, the Magic Leap 1 – an updated version of its Magic Leap One Creator Edition.

“Today’s announcement heralds the arrival of a new chapter for spatial computing with an advanced technology platform for enterprises across all industry sectors,” Omar Khan, chief product officer at Magic Leap, said in a press statement. “Our innovative partners are leading the charge by developing groundbreaking solutions that will transform their businesses and customer experiences. Together, we are rewriting the rules of business with spatial solutions that will yield greater efficiencies, deeper engagement, and significant new business opportunities for all stakeholders.”

This new chapter that Khan speaks of looks less like a new innovation and more like a re-branding. For the price of $2,995, several hundred dollars more than the headset alone, Magic Leap’s Enterprise Suite offers customers the Magic Leap 1 along with access to Device Manager – a cloud-based support and metric analytics system. However, there are no significant new hardware upgrades to the Magic Leap 1, nor any features that might pull potential customers away from other options like the HoloLens or convince skeptics to add AR to their workflow.

Though it refers to its technology as spatial computing (a term you’d think NASA would have adopted decades ago if it really meant anything), Magic Leap is offering the expected benefits associated with augmented reality for enterprise: digital twin and 3D visualizations; education and training; and remote collaboration. None of that is at all bad in and of itself, but you have to ask if Magic Leap is late to the party at this point.

The company has however managed to secure a good number of partners for its enterprise venture. Big names like McLaren, JetBlue Travel, Deloitte, and NTT DOCOMO, among others, have already “committed to bringing spatial computing to their companies and customers,” according to Magic Leap. There’s even a hyperloop company, Hyperloop TT, committed to using Magic Leap for remote demonstrations of its transportation technology.

Magic Leap is also touting a healthy ecosystem of enterprise app developers including Across Realities, Arvizio, Eon Reality, Immersion Analytics, and PTC, that will be rolling out apps for everything from design and virtual prototyping to remote collaboration in the coming months.

Look before you leap

This latest announcement from Magic Leap could excite engineers and developers who have been itching to get their hands on the headset for applications outside of entertainment. However, it also comes at an embattled time for the company.

A recent report by The Information paints Magic Leap less as the next Microsoft or Apple and more like the next WeWork or Theranos (though such a comparison is a tad unfair given Magic Leap has released an actual, working product).

Upon its initial release, Magic Leap’s CEO, Rony Abovitz, aimed for the company to sell one million headset units in its first year. That number was later significantly downgraded to 100,000 by the company’s more conservative executives. The latest sales figures, according to The Information, reveal the company has only sold about 6,000 units to date.

Magic Leap has responded to The Information piece in a statement, calling the article “clickbait.”

“The Information’s reporting is littered with inaccuracies and misleading statements, and erroneously portrays Magic Leap’s operations, internal plans, and overall strategy,” a company statement released to GamesIndustry.biz said.

Magic Leap did not return Design News’ request for further comment.

Where’s the magic?

What’s most surprising is that Magic Leap didn’t target its hardware at enterprise sooner – as in, upon its initial release (or even before). The company heavily marketed itself as a next-wave entertainment product. But the hype fizzled when reports emerged that the company’s hardware was nothing revolutionary and was more in line with current market trends. Early demo videos of Magic Leap’s technology were also discovered to have been created by special FX houses, and not on the company’s actual hardware.

A lightweight headset with limited head tracking ability seems like a much better fit for engineers and designers working in 3D CAD than for a gamer looking to play an action-packed first person shooter in their living room.

Magic Leap begged to differ. Outside of a partnership with CAD software company OnShape, the company really had no enterprise-focused content for the Magic Leap One on its initial release.

Did we mention the headset also costs $2,295? Surely there are early adopters willing to pay that price, but any savvy consumer knows they can put together a respectable VR PC rig with a headset from Oculus, HTC, or HP for almost half that price – and enjoy a healthy library of gaming content to boot.

Now, if the disappointing sales figures are to be believed, Magic Leap has found itself at a crossroads. It is not the consumer darling it pledged itself to be, but it also has a lot of ground to gain in enterprise, with companies like Microsoft, Google, Vuzix, HTC, HP, and ThirdEye Gen having already offered enterprise AR, VR, and mixed reality (MR) hardware for a while now.

The company is also falling behind on the hardware front. Magic Leap said its second-generation headset – the Magic Leap Two – will offer new features like 5G connectivity. But insiders have speculated that it is years away from releasing a new headset. Meanwhile, Microsoft’s HoloLens 2 is available in limited release and is expected to go wide next year. And Qualcomm has already announced a 2020 release for its new XR2 platform for developing AR and MR hardware, which will offer 5G capability. Niantic, the company behind Pokemon Go, is reportedly working on its own AR glasses based on the XR2.

Magic Leap’s consumer ambitions were not misguided. There is still no singular AR product that has taken over the consumer market. What Magic Leap has been aiming to do is become the general-purpose AR headset – to do for augmented and extended reality what consoles like the Sony PlayStation or Nintendo Switch do for gaming. More broadly, the Magic Leap One is meant to spark an entirely new device category along the lines of the PC or smartphone.

But maybe consumers don’t want that product. So far the most successful consumer deployments of AR have come on the software end, where games like Pokemon Go have leveraged existing smartphone hardware. On the enterprise end, even offerings like ABB’s Ability Remote Insights services remain hardware-agnostic.

More and more, the market seems to be saying that customers want AR for specific, niche applications (like enterprise).

In another example of niche demand, earlier this year Tilt Five, a startup founded by Jeri Ellsworth, an AR entrepreneur and former R&D engineer at Valve, launched a successful Kickstarter campaign for an augmented reality system targeted solely at holographic tabletop gaming.

While a system like Tilt Five’s doesn’t offer the horsepower of something like the Magic Leap One, it does offer another attractive incentive for consumers – an expected price point of around $300 to $350. For the price of one Magic Leap system you could outfit your entire immediate family with Tilt Five glasses.

The AR market is still in flux – in both enterprise and consumer – with no one big name leading the pack yet. AR technology itself also has technical issues to iron out, most notably around optics. With the right partnerships in place, Magic Leap could establish a firm enough foothold to keep itself afloat long enough to course correct and release a next-generation hardware platform. But time isn’t on Magic Leap’s side – particularly in an increasingly crowded enterprise space and with the company having already fallen short of so many lofty promises.

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.


January 28-30: North America’s largest chip, board, and systems event, DesignCon, returns to Silicon Valley for its 25th year! The premier educational conference and technology exhibition, this three-day event brings together the brightest minds across the high-speed communications and semiconductor industries, who are looking to engineer the technology of tomorrow. DesignCon is your rocket to the future. Ready to come aboard? 

Register to attend!

who’s-left-to-make-chip-development-tools?

Here’s a look at the remaining major EDA tool companies after years of consolidation.

  • The EDA market continues to consolidate. At the 2019 Design Automation Conference (DAC), Rich Valera of Needham and Company noted that ever since the collapse of the tech bubble in the early 2000s, the EDA market has been all about consolidation.

    “Many larger scale private companies, including multiple ‘near IPOs’ – e.g., Denali, Tensilica, Apache, and Spyglass (Atrenta) – have been bought before going public in the last 15 years,” explained Valera. “It goes without saying that the EDA industry has become very concentrated – one could argue an oligopoly – with most of the revenue driven by three major companies.”

    That list does not include many of the more recent consolidations:

    Cadence’s acquisition of AWR from National Instruments
    Inphi Corp’s acquisition of the majority of eSilicon
    Synopsys’ acquisition of the DINI Group, QTronic GmbH, and certain assets of eSilicon
    Dassault Systèmes’ acquisition of CST

    The number of private EDA startup exits through acquisition or IPO has been declining, which is probably attributable to fewer companies being formed. Additionally, the time to exit for EDA startups has generally been well over 10 years. That is a long horizon for most startups and their investors, which may explain the modest amount of venture capital flowing into EDA.

  • It would seem that the main EDA tool vendors – Synopsys, Cadence, and Mentor Graphics (recently acquired by Siemens PLM) – have formed an oligopoly. According to Valera, one might expect to see less competition, reduced investment, and a push to maximize profits rather than growth into new markets. This has not been the case. Rather, the combined Cadence/Synopsys research and development (R&D) budget has been on a generally upward trend over the last 10 years – a positive sign for job growth.

    The three major EDA companies have realized healthy growth thanks to their movement into new application areas such as autonomous vehicle electronics, along with ongoing advancements and roll-outs in industrial and commercial IoT, AI, and edge-cloud computing.

    What about the other EDA tool vendors? According to Crunchbase, there are 132 organizations listed as semiconductor EDA companies, not including fabs like TSMC and OEMs like Intel. But we don’t need to consider all of them to understand what makes up this industry. Instead, let’s consider the top eight EDA tool providers.

  • Synopsys

    In 1986, a small synthesis startup called Optimal Solutions was created by a team of engineers from the GE Microelectronics Center in Research Triangle Park, N.C. The team included Dr. Aart de Geus, who would later become CEO. Shortly thereafter, the company moved to Mountain View, Calif., and became Synopsys (for SYNthesis and OPtimization SYStems). Its first task was to commercialize an automated logic synthesis tool, Design Compiler. Today, Synopsys offers a suite of chip design and verification tools plus verification intellectual property (IP).

    One of the significant announcements from Synopsys in 2019 was the completion of its acquisition of the DINI Group, an FPGA-based boards and solutions company. SoC designers are deploying FPGA-based prototyping platforms to enable rapid software development in automotive, artificial intelligence (AI), 5G, and high-performance computing (HPC) applications.

    DINI’s FPGA boards are frequently used to create complete logic prototyping systems that can emulate up to 130 million ASIC gates with over 20 FPGAs.

  • Cadence Design Systems

    Two small startups that emerged in the early 1980s – Solomon Design Automation and ECAD – grew and merged to form Cadence Design Systems in 1988. Shortly thereafter, Cadence bought Gateway Design Automation, developer of the Verilog hardware description language. A year later, Cadence put Verilog into the public domain, and it became the most widely used hardware description language. In the ensuing years, Cadence pushed into the custom/analog design automation tool market and later into IC layout automation.

    Today, Cadence offers a broad portfolio of tools to address an array of challenges related to custom IC/analog/RF design, digital design, IC packaging, PCB design, and system-level verification.

    One of the more interesting announcements in 2019 was the introduction of a complete electrical-thermal co-simulation solution spanning ICs to physical enclosures. The thermal solver is integrated with the company’s IC, package, and PCB implementation platforms. Design insights from the solver help design teams detect and mitigate thermal issues early in the design process, reducing electronic system development iterations.

  • Mentor Graphics (A Siemens PLM Company)

    Mentor Graphics was founded in 1981 by a small group of engineers in Oregon, all of whom had left Tektronix to form one of the first commercial EDA companies, alongside Daisy Systems and Valid Logic Systems. Mentor Graphics was also the first EDA company to run its software on non-proprietary hardware – the Apollo Computer workstations.

    Today, the company offers chip design, PCB design, systems, automotive, CAE simulation and test, and embedded tools. Mentor is involved in EDA, printed circuit board, and system-of-systems-level design.

    One of this year’s announcements was in the area of high-level synthesis (HLS) for edge computing networks. The challenge is that moving machine learning to the edge places critical requirements on power and performance. Off-the-shelf solutions like CPUs and GPUs are too slow or too expensive, respectively. Even generic machine learning accelerators can be overbuilt and are not optimal for power. That’s why HLS tools can help create new power- and memory-efficient hardware architectures to meet machine learning hardware demands at the edge.

  • ANSYS

    Ansys was founded in 1970 by John Swanson. In 1996, the company went public. During the next five years, Ansys made numerous acquisitions to gain additional technology for fluid dynamics, electronics design, and other physics analysis.

    The company develops and markets engineering simulation software used to design products and semiconductors, as well as to create simulations that test a product’s durability, temperature distribution, fluid movements, and electromagnetic properties.

    As an example of the company’s simulation capabilities, TURBOTECH is using Ansys fluids tools to potentially redesign aeronautical propulsion. TURBOTECH is developing an energy storage system capable of powering the hybrid-electric aircraft of the future. The idea is to develop regenerative-cycle turbogenerators based on small turbines that recover energy from exhaust gases to reduce fuel consumption. By recharging batteries in flight, the turbogenerators are claimed to improve the endurance of electric aircraft by 10x – enabling significant weight and cost savings. The turbogenerators can produce electricity from virtually any type of renewable flammable material, including biofuel, biogas, hydrogen, and conventional fuels.

  • Keysight Technologies

    Keysight Technologies can trace its origins back to the original Hewlett-Packard business founded in 1939 by Bill Hewlett and Dave Packard. In 1999, HP spun off Agilent Technologies; Agilent later spun off Keysight Technologies as a wireless, semiconductor, and aerospace test and measurement company.

    Significant news in 2019 included a partnership with Marvin Test Solutions to develop advanced beamformer integrated circuit (IC) test technology to accelerate the production of high-performance 5G chips and test the associated mmWave antenna systems. To ensure reliable and efficient 5G mmWave communications, the performance of the critical elements that make up the beamformer chips needs to be rigorously tested under linear and nonlinear conditions.

    Also noteworthy is the company’s simulation software, which is being used for rapid development, integration, and test of sophisticated electronic warfare (EW) systems with real-time RF modeling. Software and hardware simulation systems are needed so engineers can test their EW designs by easily generating specific RF environments.

  • Zuken

    Zuken is a Japan-based company that started out in CAD systems in 1976. The company’s software is primarily used for designing printed circuit boards (PCBs) and multi-chip modules (MCMs), and for the engineering of electrotechnical, wiring, wiring harness, pneumatic, and hydraulic applications.

    Recently, Zuken moved firmly into the systems-of-systems engineering and model-based systems engineering (MBSE) spaces with the acquisition of Vitech, a US company with more than 25 years of industry experience in systems engineering. The acquisition required the approval of the US Department of Defense (DoD) and the Committee on Foreign Investment in the United States (CFIUS).

    In the fall of 2019, Zuken reinforced its presence in the world of digital twins by agreeing to develop system design and manufacturing process interfaces to the Dassault Systèmes (DS) 3DEXPERIENCE platform. Zuken will provide electronic libraries and design data management capabilities within DS’s platform to enable cross-discipline systems engineering and traceability.

    In particular, Zuken’s component management process will permit the transfer, synchronization, and authorization of component metadata and related files between the two companies’ databases. Zuken’s integration will enable creation and lifecycle management of electronic systems from the Dassault Systèmes platform.

  • Altium

    Altium was founded in 1985 by Nick Martin as a PCB computer-aided design (CAD) vendor. The company has continued to improve its flagship product, Altium Designer, over the last several decades. Improvements in 2019 include a faster schematic editor, high-speed design features, and an enhanced interactive router for PCB design.

    This year, the company also unveiled a cloud-based application for CAD component management. It may seem unglamorous, but selecting and managing components in the development of a PCB is critical to design and cost.

    The effective creation and reuse of component data in the PCB design process – including footprints, schematic symbols, and 3D models – is critical to meeting tight time-to-market windows. Until now, most PCB designers have created and stored component data in private file systems rather than in a shared, managed, and maintained library. Others have tried to use shared spreadsheets or proprietary databases. These outdated approaches lead to multiple re-design cycles due to redundant, inaccurate, or outdated component data that is often discovered only late in the product development process, when board designs are sent to manufacturers.

  • Applied Wave Research (AWR)

    Several former companies providing EDA tools (like CST and AWR), FPGA board systems (like DINI), and design services (like eSilicon) have been “removed” from the official list of EDA companies through acquisition. Yet the brands and products live on, either under the original name or under the flag of the acquiring company. Let’s look at the most recent of these acquired EDA vendors.

    AWR was founded in 1994 to improve the design efficiency for radio frequency and microwave circuit and system design. After several prior acquisitions, AWR was acquired by National Instruments (NI) in 2011. A further acquisition by Cadence was announced in late 2019.

    AWR software is used for radio frequency (RF), microwave, and high-frequency analog circuit and system design. Recently, the Italian National Institute for Astrophysics’ Institute of Radio Astronomy (INAF-IRA) used NI AWR software to design the circuitry of the receiver chains for a multi-channel heterodyne receiver antenna for radio astronomy applications operating across the 2.3–8.2 GHz RF band.

    Large-scale surveys using highly sensitive electronics are an essential tool for new discoveries in radio astronomy. INAF designers were challenged to develop, fabricate, and test the room-temperature, multi-channel heterodyne receivers needed for radio astronomy applications. AWR software helped in the critical modeling and design of the phased array for reflector observing systems (PHAROS), which uses a super-cooled feed with an analog beamformer.


    John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

Electronic design automation (EDA) refers to the software tools used for designing electronic systems, such as system-on-chip (SoC) integrated circuits and printed circuit boards. The tools work in a design-verification flow that chip designers use to analyze and develop semiconductor chips.

But the EDA tool market has gone through massive consolidation over the past couple of decades. Which companies are left? We’ll find out.

growth-returns-to-semiconductor-and-eda-tools-m&a-markets-–-for-now

2019 was a great time for merger and acquisition business in the semiconductor and electronic design automation tools industries. But what will the future hold?

There was an uptick in M&A activity in the semiconductor space in the first eight months of 2019 – a welcome change after the market slowdown of 2017 and 2018. The combined value of 20-plus M&A agreement announcements reached over $28 billion for the purchase of chip companies, business units, product lines, intellectual property (IP), and wafer fabs between January and the end of August, according to IC Insights’ Fall 2019 McClean Report. This amount does not include transactions between semiconductor capital equipment suppliers, material producers, chip packaging and testing companies, and electronic design automation (EDA) software firms.

The activity in the first eight months of 2019 has already surpassed the $25.9 billion total for all of 2018.

Image Source: IC Insights

2-game-changing-trends-that-will-define-embedded-systems-in-the-2020s

The last decade has seen amazing advancement in embedded system development techniques, tools, and technologies. A decade ago, I recall being amazed when a microcontroller had a clock speed above 48 MHz and an unbelievable 128 kilobytes of flash. Today, we have microcontrollers with clock speeds above 1 GHz and more than 4 megabytes of flash storage – enough to make even my first personal computer jealous.

embedded systems, Python language, embedded systems at the edge, embedded systems market

In addition to the trends in language and edge computing, the overall embedded market will grow in the 2020s. (Image source: Global Market Insights)

This dramatic increase in microcontroller capability at affordable cost is going to usher in a completely new design paradigm in the decade to come. Let’s examine two trends in embedded systems development that I believe will prove to be game changers in the 2020s.

Trend #1 – The rise of Python as a dominant language

Python is already the most popular programming language among software developers outside the embedded systems industry. In fact, a survey conducted this year by IEEE found that, among engineers, Python is the number one programming language, followed by Java and then C [1]. The Aspencore 2019 Embedded Markets Study also found that in the last two years, the number of embedded projects programmed in Python has doubled [2]. (Keep in mind the study also found no change in the number of projects using C.) So, what is it about Python that makes me think it will become a dominant language for embedded systems?

First, as I discussed in the introduction, the compute power available in microcontrollers has grown to the point where a stripped-down Python kernel can be run on a microcontroller that costs only a few dollars. Second, there are already popular open source ports of Python, such as MicroPython, available on more than a dozen architectures, including popular ones like the STM32 and the ESP32.
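
To give a flavor of how approachable this has become, here is a minimal MicroPython sketch that blinks an LED using the standard machine and time modules. The pin number is an assumption for illustration (GPIO 2 is the on-board LED on many ESP32 development boards); adjust it for your hardware.

    # Minimal MicroPython blink sketch. Assumes the on-board LED sits on GPIO 2,
    # as on many ESP32 development boards; change the pin for your board.
    from machine import Pin
    import time

    led = Pin(2, Pin.OUT)    # configure GPIO 2 as an output

    while True:
        led.value(1)         # LED on
        time.sleep(0.5)      # wait half a second
        led.value(0)         # LED off
        time.sleep(0.5)

No register maps, no linker scripts, no toolchain setup – just a few lines typed straight into the REPL.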

Third, C and C++ aren’t taught in most computer science or engineering programs anymore. It’s Python and some Java, and it has been for quite some time. This means there is, and will be, a whole generation of engineers taking the lead in the next decade who have a natural inclination toward Python.

Finally, as I attend conferences and talk with prospects and colleagues, I’m already seeing a natural pull toward Python. No one wants to fight with low-level hardware and software anymore. They want their microcontroller to arrive running something onto which they can quickly add their application-specific code and get their product to market. Forget worrying about registers, pointers, and all the traditional embedded stuff. Plus, if Python is used, anyone can help develop the product, not just the embedded folks.

Ready or not, here Python comes!

Trend #2 – Machine learning at the edge

I really wanted to avoid including machine learning as a game-changing trend for the upcoming decade. The hype around machine learning is enormous; I can’t open a newsletter or read a blog (or apparently write one) without machine learning showing up. The fact, though, is that machine learning holds a lot of potential for embedded systems developers as we begin a new decade.

Machine learning for embedded developers, as it currently stands, has the greatest potential at the IoT edge. Until recently, machine learning was done somewhere “out there” and had little if anything to do with embedded developers. But remember the rapid advancements in microcontroller hardware I discussed in the introduction? Those advances are making it far easier to run machine learning inferences on a microcontroller.

Running the inference on the embedded controller at the edge opens a whole range of local applications and can save on bandwidth and communication costs with the cloud. One area that seems particularly primed for machine learning at the edge is embedded vision. The ability to perform object detection and recognition at the edge has so many potential opportunities for business applications and for developers to lighten their workload.

The vast amount of data and the libraries currently available will make it very easy to train new machine learning models. Even as I write this, teams of specialists are working on optimizing tools and libraries so that inferences can run on embedded controllers. In fact, we are already at the point where you can run an inference on an Arm Cortex-M4 processor. I know we are getting tired of talking about machine learning, but the industry is just getting started for us embedded systems engineers.
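
As a rough sketch of what that workflow can look like today, the snippet below converts a trained Keras model into a small, quantized TensorFlow Lite flatbuffer of the kind that a microcontroller runtime (such as TensorFlow Lite for Microcontrollers) can execute. The model and file names are placeholders for illustration, not a specific project.

    # Sketch: shrink a trained Keras model into a quantized TensorFlow Lite
    # flatbuffer suitable for a microcontroller-class target. Names are illustrative.
    import tensorflow as tf

    model = tf.keras.models.load_model("gesture_model.h5")   # hypothetical trained model

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]     # enable post-training quantization

    tflite_model = converter.convert()

    with open("gesture_model.tflite", "wb") as f:
        f.write(tflite_model)   # this binary is what gets embedded in the firmware

The resulting file is typically small enough to compile directly into the firmware image as a C array, which is how most microcontroller inference engines consume it.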

Conclusions

The next decade has the potential to dramatically change the way products and embedded systems are developed. In this post, we’ve explored the two game-changing trends that I believe will have the biggest impact on embedded systems developers. There are certainly many other trends that we will see in the 2020s, but those will have to wait for another time. I think the next decade will find that the majority of applications use not just Python, but also machine learning.

What do you think – will Python and machine learning be the two major game changers in the coming decade?

References

  1. https://spectrum.ieee.org/computing/software/the-top-programming-languages-2019
  2. https://www.embedded.com/wp-content/uploads/2019/11/EETimes_Embedded_2019_Embedded_Markets_Study.pdf

Jacob Beningo is an embedded software consultant who currently works with clients in more than a dozen countries to dramatically transform their businesses by improving product quality, cost, and time to market. He has published more than 200 articles on embedded software development techniques, is a sought-after speaker and technical trainer, and holds three degrees, including a Masters of Engineering from the University of Michigan. Feel free to contact him at [email protected] or via his website, and sign up for his monthly Embedded Bytes Newsletter.
