4 Reasons Why Someone Would Hack Your Car

If hackers are going to break into our cars, they’ll most likely be motivated by profit above all else.

Autonomous vehicles haven’t even hit public roads in any kind of widespread deployment yet, but Hollywood is already painting frightening scenarios about the potential of car hacking. The 2019 remake of Child’s Play features a scene in which a woman is held prisoner in a self-driving car run amok. Most notorious is a scene from The Fate of the Furious in which hackers remotely control autonomous vehicles to wreak havoc on city streets – even making cars rain down from multilevel parking structures.

While these situations may seem outlandish, they’re not terribly far from reality. Hackers have demonstrated for years now that it is possible to remotely access vehicles’ braking and other crucial systems. A recent study by Georgia Tech’s School of Physics found that if hackers were able to hack only 20 percent of the vehicles in Manhattan, they could grind New York City to a halt. Such a hack would not only freeze commuter traffic, but also prevent important services and emergency vehicles from getting around the city.

But is creating chaos the only value proposition for car hackers? Asaf Ashkenazi, Chief Strategy Officer at Verimatrix, a provider of security solutions to the automotive industry, believes we should be worried less about terrorist-level attacks and more about the ways hackers will attempt to turn a profit by hacking cars.

“It’s very sexy to talk about these terror attacks, I think. But if you put aside the terror – big states that are launching acts of war – you have to look at what is the practicality of that,” Ashkenazi told Design News following a talk at the 2019 Drive World Conference & Expo.

Here are four major reasons hackers might want to break into your car in the very near future:

1.) Credit Card and Bank Fraud

As vehicles become increasingly connected to infrastructure for things such as automated toll booth payments, hackers will gain more points of entry for stealing personal data such as credit card or banking information. “Hackers are looking to make the maximum profit for the minimum effort,” Ashkenazi said. “Cars are going to have to implement a lot of payment systems because you will be able to pay via your car when you go to toll roads and parking lots.”

2.) Stalking

Fraud can be a big way for hackers to make money. But Ashkenazi also noted there’s a healthy black market for other forms of stolen information as well. Hacked vehicles could provide avenues for stalkers or other malicious parties to track potential victims.

“All the cars will have GPS. So if I have the ability to track any car I can start a service where anyone that wants to track somebody can. I’d just need a VIN number,” Ashkenazi said. “Let’s say that I have access to half of the cars in the U.S. You just log into my service, pay me monthly, and I give you access to track whatever car you want. It’s a great business.”

3.) Helping Car Thieves

Thanks to key fobs and other security technologies, cars are becoming increasingly difficult to steal the old-fashioned way. But hackers could bridge the gap and give thieves an easy way to steal modern vehicles. “Let’s say that I’m a hacker and I have access to your digital key,” Ashkenazi said. “If a car thief wants to steal your car they could connect to a service I offer, give me the VIN number of the car, and I could open it remotely.”

4.) Ransom

Even if a hacker were able to create a widespread disruption or attack using vehicles, it wouldn’t be the best way to turn a profit, Ashkenazi said. The moneymaking proposition here is to use the exploits to extort companies.

“If I have the capability to stop the car and do all sorts of damage then as a hacker I probably won’t use it because the entire world would be after me – FBI and Interpol and the like,” he said. “But I can demonstrate it to a car manufacturer and say, ‘If you don’t pay me this amount of money I’ll leak it out.’ The real danger in this situation is the hacker makes the money and the public will never hear about it.”

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

What's the State of Emotional AI?

Is emotional AI ready to be a key component of our cars and other devices?

Analysts are predicting huge growth for emotional AI in the coming years, albeit with widely differing estimates.

A 2018 study by Market Research Future (MRFR) predicted that the “emotional analytics” market, which includes video, speech, and facial analytics technologies among others, will be worth a whopping $25 billion globally by 2025. Tractica has made a more conservative estimate in its own analysis, but still predicted the “emotion recognition and sentiment analysis” market to reach $3.8 billion by 2025. Researchers at Gartner have predicted that by 2022 10 percent of all personal electronic devices will have emotion AI capabilities, either on the device itself or via cloud-based services. The market will be driven by use cases running the gamut from consumer experience and entertainment to healthcare and automotive.

Yet the technology itself still has strides to make. In a 2019 meta-analysis of 1,000 studies on inferring emotion from human facial expressions, a group of scientists concluded that the relationship between our faces and emotions is more complex than meets the eye. The study was published in the journal Psychological Science in the Public Interest and reported by The Washington Post.

In an interview with The Washington Post, Lisa Feldman Barrett, a professor of psychology at Northeastern University who worked on the study, said: “About 20 to 30 percent of the time, people make the expected facial expression,” such as smiling when happy… But the rest of the time, they don’t. “They’re not moving their faces in random ways. They’re expressing emotion in ways that are specific to the situation.”

In short, if emotional AI is going to deliver on the lofty expectations placed upon it, it’s going to need a very complex understanding of how our faces and voices correspond to our emotions.

In the same article, Rana el Kaliouby, co-founder and CEO of emotional AI company Affectiva, described the emotional AI space as an ever-evolving one. She agreed that emotional AI technology hasn’t reached the level of sophistication needed for widespread deployment, but she expressed hope that more research will someday achieve this, and also better educate industries and consumers about the limitations of emotional AI.

Affectiva has emerged as one of the leaders in the emotional AI space. The company has focused primarily on applying its emotional AI to vehicles – imagining a world where cars can respond to the emotions of their drivers and passengers in a variety of ways from adjusting music and temperature to even pulling themselves over and offering emergency roadside assistance.

Just how far does emotional AI have to go? And how are we getting there?

Following his talk at the 2019 Drive World Conference & Expo, Abdelrahman Mahmoud, senior product manager at Affectiva, sat down with Design News to discuss the current state of emotional AI research and what’s needed to push the technology forward.

Abdelrahman Mahmoud

(Image source: Affectiva)

Design News: What’s your response to the meta-analysis that concluded that there needs to be more research done in the area for emotional AI to really have any efficacy?

Abdelrahman Mahmoud: A lot of that study was focused on the prototypical expression of emotion like joy, anger, and surprise. But fundamentally we believe that emotion expression is much more than just those five or six prototypical emotions. That is why as a company we don’t just focus on these emotions. We actually focus first on the different facial muscles, like how people express a smile.

DN: Can you talk a bit about what’s been happening at your company as far as research into emotional AI lately?

Mahmoud: From a research perspective there’s a lot of continuous focus on multi-modal [methods] for recognizing things like frustration. We’ve done a lot of internal studies and we know that you need a multi-modal approach to try to solve that problem.

Early on we did a lot of studies using just the face or just the voice, and we’ve seen that the accuracy jumps dramatically if we use data from both – which is kind of intuitive, but we just had to validate that. That’s the main focus for our multi-modal effort these days – detecting signals like frustration and drowsiness that are important in a car.

DN: Has there been work into studying things such as body language or position as well? For example, imagine someone who shows frustration less in their face and more as tension in their shoulders or hands.

Mahmoud: There are strong signals both on the face and in the voice. But for sure adding gestures would be beneficial in some cases. Keep in mind that the automotive ecosystem focuses a lot on optimizing cost, which means you don’t have a lot of room for adding a lot of models that can do different things.

DN: Meaning there has to be a balance between what information you want to capture and how many cameras you can place inside the vehicle?

Mahmoud: For us it’s always a matter of choosing the signals that will most strongly give an indication of what’s happening in the cabin. It might not be the complete picture but you want to get as close as possible. But we think it’s very short term. In the longer term, computational power and better compute platforms inside the car are going to change how much we can capture.

DN: Market analysis has been talking a lot about use cases for emotional AI beyond automotive. Affectiva itself even made a deal with Softbank to supply AI to its robots. Do you think automotive is still where the greatest opportunity for emotional AI lies?

Mahmoud: There are a lot of markets that we can deploy general emotion recognition or emotional AI in. We actually don’t see automotive as a very distinct market from things like robotics. And the reason why that’s the case is because there is a lot of focus going into the HMI [human-machine interface] in the car these days.

Traditionally, OEMs didn’t really focus on the HMI, and you saw very ugly HMIs in the car that were not really intuitively designed. Recently, there’s been a lot of focus on how the HMI in the car is going to have to change. With the push toward more autonomy, if the HMI in the car is not intuitive the driver is just going to switch to the next intuitive HMI they can interact with, which is their cellphone.

And you see a parallel to that in the cellphone market where there is a lot of focus on the UI because that’s the main differentiator in terms of different hardware manufacturers. This is very similar to robotics because a robot’s human-machine interface is the thing that would benefit most from having emotion recognition. The cool thing about automotive is that you get to test HMIs at larger scale because the robotics market is still a limited market in terms of deployment.

DN: In your talk at Drive you spoke about how context is a very important aspect of emotion recognition. Can you elaborate on this?

Mahmoud: There is definitely a lot of research that we’ve been doing with partners with regard to how to translate different facial muscles to emotions in specific contexts. The thing is, context really matters as far as detecting emotion. With frustration, for instance, one of the fundamental facial expressions is a smile, which is counterintuitive, but people do smile when they are frustrated.

Context is why a hybrid and multi-modal approach is important. You can have some of the machine learning detecting things like how people express a smile or how people move their facial muscles, but then there needs to be a layer on top of that that takes some of the context into account in order to understand the difference between frustration and just a smile.
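The two-stage design described here, a machine-learned layer that scores facial muscle movements plus a context layer that interprets them, can be sketched in a few lines. The action-unit names, thresholds, and context rules below are illustrative assumptions for the sake of the sketch, not Affectiva's actual models or API:

```python
# Hypothetical two-stage sketch: detect low-level facial "action units,"
# then apply context to decide what the expression likely means.
# All names, thresholds, and rules here are illustrative, not Affectiva's.

def detect_action_units(frame):
    """Stand-in for an ML model that scores facial muscle movements 0..1."""
    # A real system would run a trained network on a camera frame.
    return {"smile": frame.get("smile", 0.0),
            "brow_furrow": frame.get("brow_furrow", 0.0)}

def interpret(aus, context):
    """Context layer: the same smile can signal joy or frustration."""
    if aus["smile"] > 0.5:
        # People often smile when frustrated, so a smile during a
        # frustrating event is not treated as joy.
        if context.get("event") == "navigation_error":
            return "frustration"
        return "joy"
    if aus["brow_furrow"] > 0.5:
        return "frustration"
    return "neutral"

aus = detect_action_units({"smile": 0.8})
print(interpret(aus, {"event": "navigation_error"}))  # frustration
print(interpret(aus, {"event": None}))                # joy
```

The point of the layering is that the bottom stage stays generic while the top stage carries the deployment-specific context described in the interview.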

DN: Would you say context recognition is the big missing component of emotional AI right now?

Mahmoud: I think we’re still far off from a kind of human intuition as far as having an AI that can just analyze all of these different signals and understand emotion. But this is an active area of research. For the emotion recognition models, you just have to understand the context that you are deploying them in and what they are trained on, which is very similar to any machine learning model you could think of.

*This interview has been edited for content and clarity.*

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

Sebastian Thrun Is Driving a Future Where Cars Fly Themselves
Through his company, Kitty Hawk, Sebastian Thrun is working to make personal flying vehicles a reality. Shown above: Flyer, one of the vehicles being developed by the company. (Image source: Kitty Hawk)

What do you get the industry that is disrupting everything? Ask serial entrepreneur and inventor Sebastian Thrun and the answer you’d get is, “more disruption.”

As the CEO of Kitty Hawk, a company with the bold vision of bringing flying cars to consumers, Thrun is imagining a future where autonomous cars have been replaced by personal autonomous planes. “I believe the flying car revolution will disrupt self-driving cars,” he told an audience during his keynote at the 2019 Drive World Conference & Expo. “I believe we will see flying cars at scale before self-driving cars.”

“Don’t Trust the Experts”

Thrun speaks to an audience at Drive World 2019. (Image source: Drive World Conference & Expo)

While he admitted his own predictions may sound fantastical, Thrun has been involved with autonomous vehicles for over a decade. He headed up the team that won the 2005 DARPA Grand Challenge – a 132-mile, off-road autonomous vehicle race. While the race itself is more noted today for its mishaps than any particular innovation, winning the competition put Thrun on the path that has today led him to imagine a future most people would associate with The Jetsons or Back to the Future Part II.

Following his success at the DARPA competition, Thrun was invited to Google to lead the company’s then-fledgling autonomous car program (he founded Google X). “I was the go-to person [at Google] for self-driving cars,” Thrun said.

Google’s ambition was to create a self-driving car that could navigate even the most difficult roads. The company’s founders, Larry Page and Sergey Brin, even personally selected the most difficult routes in Northern California and tasked Thrun and his team with creating a self-driving car that could handle them.

But according to Thrun, being Google’s in-house expert taught him two major lessons: “Don’t trust incumbents” and “don’t trust the experts.”

“Larry [Page] came to me and said, ‘You’re the world expert. Can you start a team?’ And I said it can’t be done,” Thrun recalled of Google’s ask for its autonomous cars. When Page asked for a purely technical reason why it wasn’t possible, however, “I couldn’t say all the technical reasons. I had to tell him I know it can’t be done, but there is no technical reason.”

The experience brought Thrun to a realization, “experts know the past, not the future.” He found himself presenting the same reasoning to Google that the traditional automakers had told him about autonomous cars. “We talked to automotive companies, but they didn’t believe it,” he said. “The incumbents are the least interested in disruption.”

If you need any evidence of how reluctant traditional automakers were toward true disruption at the time, Thrun pointed to a 2011 ad campaign for the Dodge Challenger in which the vehicle was touted as, “the leader of the human resistance” against AI-driven cars.

This 2011 Dodge Challenger ad took a jab at the idea of autonomous vehicles. 

“We Are Not the Gatekeepers”

Today, thanks to advances such as deep learning and advanced sensor technologies, autonomous cars are doing things that engineers only a decade ago weren’t sure would be possible. Most major automakers are developing self-driving cars, an entire startup ecosystem has risen up around autonomous vehicles, and autonomous trucks are even being tested on public roads.

And to disrupters that means it’s time to move on to the next thing. For Thrun that’s developing electric vertical take-off and landing (eVTOL) vehicles at Kitty Hawk (Google’s Larry Page is a financial backer of the company). Though the idea of flying cars soaring over our heads every day might feel like science fiction, Thrun firmly believes the advances in autonomous cars lend themselves directly to the development of autonomous flying machines.

He argued that whereas roads are two-dimensional spaces with a limited capacity, the sky is three-dimensional and offers many more benefits in terms of travel efficiency and capacity. By Thrun’s own estimation, the same stretch of road that can hold a few dozen cars would be able to hold upwards of a million flying vehicles of the same size. Adding full autonomy to these vehicles, Thrun said, would also alleviate issues around navigation. He told the Drive World audience the key would be in automating the sort of systems used today in air traffic control.
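Thrun's two-dimensions-versus-three argument is essentially multiplicative: a road packs vehicles along one axis (times a handful of lanes), while airspace adds lateral corridors and stacked altitude layers. A back-of-the-envelope sketch, using spacing and layer counts that are purely illustrative assumptions rather than figures from the talk, shows how quickly the third dimension multiplies capacity:

```python
# Back-of-the-envelope capacity comparison: a road segment vs. the
# airspace above it. All figures are illustrative assumptions.

segment_length_m = 500      # the stretch of road being compared
headway_m = 25              # along-track spacing per vehicle
lanes = 2

cars_on_road = lanes * (segment_length_m // headway_m)   # 2 * 20 = 40

# The sky over the same stretch adds lateral corridors and altitude layers.
corridor_width_m = 500
lateral_spacing_m = 25
altitude_layers = 250       # e.g., vertically separated flight levels

vehicles_in_air = ((segment_length_m // headway_m)
                   * (corridor_width_m // lateral_spacing_m)
                   * altitude_layers)

print(cars_on_road)                      # 40 vehicles on the road
print(vehicles_in_air)                   # 100000 in the sky above it
print(vehicles_in_air // cars_on_road)   # a 2500x capacity multiplier
```

Even with these conservative spacing assumptions, adding width and altitude turns dozens of slots into tens of thousands; denser packing is the intuition behind the "upwards of a million" figure.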

Kitty Hawk has yet to release a commercial product to the public, but has been actively testing its vehicles with human pilots. It recently reported that one of its vehicles, Flyer, has already been flown over 25,000 times.

Earlier this year Kitty Hawk entered into a strategic partnership with Boeing to further develop Cora, its two-seat flying vehicle. Kitty Hawk would like Cora to someday function as an autonomous flying taxi that consumers can summon with a simple app, similar to Uber or Lyft.

Getting to that vision, Thrun admitted, will be no small feat. But it’s one he certainly believes is reachable as long as technology, regulation, and society come together to make it happen. “Innovation is a matter of society,” he said. “As much as we in Silicon Valley like to believe we are the pinnacle of what’s possible, we are not the gatekeepers. We are just technologists…It’s not just the technologists and engineers that change the world. It’s all of society that makes an innovation successful.”

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

The Battery Show: Battery, EV/HV, & Stationary Power All in One Place.

Learn everything you need to know at our in-depth conference program with 70 technical sessions in eight tracks covering topics on battery, electric & hybrid vehicles, and stationary power technologies. 

The Battery Show. Sept. 10-12, 2019, in Novi, MI. Register for the event, hosted by Design News’ parent company Informa.

The Drive to Level 5 Podcast – Episode 6: Car 2.0 – Transportation as a Service
The ZF ProAI is an AI-based automotive control unit for automated driving applications. (Image source: Xilinx)

As the 2019 Drive World Conference kicks off, Willard Tu, senior director of automotive for Xilinx, joins us to discuss the challenges automakers are facing as they move toward a model of Transportation-as-a-Service.

The Drive to Level 5 podcast investigates the seismic shift toward the autonomous vehicle and the technical advancements necessary to perfect the connected car, from foundational elements down to the component level.

The exploratory series connects with innovators across automotive electronics and embedded systems industries in preparation for the inaugural Drive World Conference & Expo launching in Silicon Valley August 27-29 with North America’s largest embedded systems event, the Embedded Systems Conference (ESC).

This week we’re kicking off the official opening of the Drive World Conference & Expo by speaking with Willard Tu, senior director of automotive for Xilinx.

Host Jack Heslin spoke with Tu ahead of his Drive World session: “Car 2.0: Transportation-as-a-Service,” to discuss the many competing solutions available to automotive innovators aiming to achieve “Transportation-as-a-Service nirvana.”

If you’re curious about the pros and cons of computer vision and machine learning, distributed versus centralized computing, sensing options such as LiDAR and radar, and how GPUs, FPGAs, and DSPs compare for automotive applications, this is the talk for you!

Listen to the full episode below:


Drive World with ESC Launches in Silicon Valley

This summer (August 27-29), Drive World Conference & Expo launches in Silicon Valley with North America’s largest embedded systems event, Embedded Systems Conference (ESC). The inaugural three-day showcase brings together the brightest minds across the automotive electronics and embedded systems industries who are looking to shape the technology of tomorrow.

Will you be there to help engineer this shift? Register today!

The Drive to Level 5 Podcast – Episode 5: Partnerships Will Be More Important Than Ever for Automated Driving

Sam Abuelsamid, Principal Research Analyst with Navigant Research, talks about the technical, regulatory, and commercial challenges of deploying automation which are increasingly forcing longtime rivals to partner up.

The Automated Driving Vehicles Leaderboard from Navigant Research contends that partnerships will be important going forward, given the long-term commitment needed to put self-driving technology on public roads. (Image source: Navigant Research)

Listen below to our fifth interview where host Jack Heslin sits down with Sam Abuelsamid, Principal Research Analyst with Navigant Research to talk about Sam’s free-to-attend Drive World Conference session, “Partnerships Will Be More Important Than Ever for Automated Driving.” This session will look at the technical, regulatory, and commercial challenges of deploying automation which are increasingly forcing longtime rivals to partner up in hopes of sharing the costs until they can really start to recover their investment.

Click here to learn more about Drive World Conference & Expo and to register for the event.


The Drive to Level 5 Podcast – Episode 4: Realizing Autonomy Through Sensors, Simulation, and Safety

Host Jack Heslin sits down with Tim Wong, Technical Marketing for Autonomous Vehicles at NVIDIA, to discuss how AI and high-performance computing are enabling the deployment of safe autonomous vehicles.

High-performance compute platforms such as those offered by Nvidia are key to the performance of autonomous vehicles. Shown above: an autonomous truck and car being developed by Uber. (Image source: Nvidia)

Listen below to our fourth interview where host Jack Heslin sits down with Tim Wong, Technical Marketing for Autonomous Vehicles at NVIDIA. Tim and Jack will break down Tim’s Drive World session, “Realizing Autonomy through Sensors, Simulation & Safety,” which is set to cover everything from training AI networks for numerous algorithms, to testing and validating autonomous driving technology in the virtual world as well as in the real-world, to processing and compute inside the vehicle with perception, localization and path planning. Discover how AI and high-performance computing are enabling the deployment of safe autonomous vehicles.

Click here to learn more about Drive World Conference & Expo and to register for the event.


7 Can't-Miss Sessions from Drive World 2019

From blockchain and quantum computing to vehicle hacking and chip design, the inaugural Drive World Conference & Expo in Silicon Valley will bring together engineers across multiple disciplines to explore the future of automotive technology.

  • How V2X Connectivity Could Pave the Way to Level 5 Autonomy 

    Tuesday, August 27 • 9:00 AM – 11:00 AM

    In the race to Level 5 autonomy, V2X (vehicle-to-everything) communication is making waves in the automotive world by complementing the capabilities of existing sensor technology and providing a higher level of predictability and determinism, conveying location, speed, direction, and even intent. But will V2X be the catalyst that gets us to Level 5? This session will explore how V2X will serve as the foundation of connected vehicles, future transportation, and ultimately achieving Level 5.

     (Image source: NXP Semiconductors N.V. )

  • Decomposition of an Autonomous Vehicle

    Tuesday, August 27 • 2:15 PM – 5:00 PM

    Location: Room 207

    What are autonomous vehicles made of? Experts from VSI Labs will dive deep into the hardware and software that comes together to create vehicles with Level 2 autonomy and beyond. You’ll leave this session with a new appreciation and understanding of just what it takes to engineer a self-driving car.

    (Image source: VSI Labs)

  • Hacking the Autonomous Vehicle

    Wednesday, August 28 • 8:00 AM – 8:55 AM

    Location: Room 207

    Our vehicles have essentially become computers on wheels. And no computer is immune to hackers and cyberattacks. A panel of automotive and cybersecurity experts will discuss the new realm of automotive safety and how automakers can address the growing threat of vehicle hacking.

     (Image source: Wikimedia Commons)

  • Future Directions: Blockchain in the Automotive Ecosystem

    Wednesday, August 28 • 3:15 PM – 4:00 PM

    Location: Room 206

    Blockchain isn’t just about cryptocurrency anymore. The distributed ledger technology is here to stay and it’s poised to transform how vehicles operate, connect with each other, and even how they’re sold. Join a panel of experts: Alan Gordon, Technical Leader at Ford Research & Innovation Center; Ramesh Ramadoss, Chair of the IEEE Blockchain Initiative; and Michael Vo, CTO of the Mobility Open Blockchain Initiative (MOBI), as they discuss how blockchain is transforming the automotive industry and what challenges lie ahead.

    (Image source: Chorus Mobility)

  • Future Directions: Quantum Computing to Accelerate Safety & Efficiency of Autonomous Driving

    Thursday, August 29 • 8:00 AM – 8:55 AM

    Location: Room 207

    The computing power needed to make self-driving vehicles a reality still daunts engineers. But the answer may lie in the realm of quantum mechanics. The auto industry is exploring the potential of quantum computing to address autonomous vehicles’ enormous need for computing power for data processing and wireless communication. Join Jeewika Ranaweera, Principal Hardware Engineer at Oracle, as she lays out how future generations of automobiles will rely on qubits and quantum entanglement as much as oil and gas.

     (Image source: Quantum Nanoelectronics Laboratory, UC Berkeley)

  • Human Perception AI: The Future of the In-Car Experience

    Thursday, August 29 • 9:00 AM – 9:45 AM

    Location: Room 206

    What if your car could understand how you were feeling? Using in-cabin cameras, today’s AI can sense the identities, activities, and cognitive and emotional states of vehicle occupants to personalize the driving experience. Join Abdelrahman Mahmoud, Senior Product Manager at Affectiva, a developer of emotional AI systems for the automotive industry, for a look at the future of the in-cab experience.

    And if you want a primer on emotional AI, be sure to read more about it on Design News.

     (Image source: Affectiva)

  • How to Build an Autonomous Chip

    Thursday, August 29 • 3:15 PM – 4:00 PM

    Location: Room 206

    Level 3 autonomy is here and Level 4 is right around the corner. The push for vehicle technology is having a massive impact on chip design, transforming the ways chips are designed and built and what IPs are used. Join a panel of experts from Semiconductor Engineering, Dell, Qualcomm, Drako Motors, Synopsys, and Mentor as they discuss the new landscape of chip design for automotive and how advanced electronics will eventually bring vehicles to Level 5 autonomy.

    (Image source: NXP Semiconductors/Kalray)

What do artificial intelligence, blockchain, and quantum computing all have in common? They’re all emerging technologies that are having a significant impact on the automotive industry on many levels. As the world continues on the road toward Level 5 fully autonomous vehicles, and today’s vehicles become more and more connected, the auto industry is leveraging new technologies like never before to realize its ambitions.

The sessions here are just a few of the offerings available at the 2019 Drive World Conference & Expo, but are great options for any attendees looking to understand these technologies and the seismic shifts they’re bringing to the automotive industry.
