low-cost-wifi-microcontroller-allows-entry-into-iot-market

According to Statista, the Internet of Things (IoT) market is projected to reach $212 billion by the end of 2019, with an industry market value of $1.6 trillion by 2025. This technology will continue to impact the consumer, industrial, medical, agricultural, and pharmaceutical markets as big data demands drive the IoT industry. With such demand, a variety of innovative wireless technologies are being developed daily. One silicon device that has made inroads into this growing technology field is the ESP8266 WiFi microcontroller. The small microcontroller's system architecture provides a wealth of circuit peripherals and technologies that appeal to designers, developers, engineers, and makers of WiFi-enabled products.

The ESP8266 WiFi microcontroller. (Image source: Espressif)

The ESP8266 Specifications

The ESP8266 microcontroller, developed by Espressif Systems, is a 32-pin quad flat no-lead (QFN) WiFi system-on-a-chip (SoC) package. The ESP8266 integrates a radio circuit consisting of antenna switches, a power amplifier, a low-noise receive amplifier, filters, a power management module, and an RF balun. The RF balun is an electrical component capable of taking a balanced signal and converting it to an unbalanced signal.

A balanced signal has equal line impedances, while an unbalanced signal has unequal impedances. The RF balun provides this impedance conversion and, together with the internal network switch and proper impedance matching, ensures the antenna works properly with the RF transmit and receive circuits. At the heart of the ESP8266 is a 32-bit Tensilica microprocessor core. The Tensilica processor manages the binary data and coding language information required to work with the ESP8266's physical-layer electrical interfaces. These interfaces include general-purpose input/output (GPIO), a universal asynchronous receiver transmitter (UART), pulse width modulation (PWM), a serial peripheral interface (SPI), and an analog-to-digital converter (ADC). With these interfaces, the ESP8266 can be wired to external sensors, transistor-driven actuators, discrete LEDs, and displays (LCD, OLED, and LED). In addition, the ESP8266 supports the 802.11 b/g/n WiFi specification at the 2.4 GHz operating frequency. Besides supporting a variety of electrical interfaces, the small QFN WiFi SoC operates from a supply voltage of 2.5 V to 3.6 V, with an average operating current of 80 milliamperes (mA).
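To make these interfaces concrete, the short MicroPython sketch below (MicroPython is discussed later in this article) reads the ESP8266's single ADC input and drives a discrete LED from a GPIO pin. The pin numbers and threshold are illustrative assumptions only, not values from a specific reference design.

from machine import Pin, ADC
import time

led = Pin(2, Pin.OUT)        # GPIO2 drives a discrete LED
sensor = ADC(0)              # the ESP8266 exposes a single ADC channel (A0)

while True:
    reading = sensor.read()                  # raw value, 0 to 1023
    led.value(1 if reading > 512 else 0)     # simple threshold indicator
    time.sleep_ms(250)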

The System Block Diagram of the ESP8266 WiFi enabled microcontroller. (Image source: Espressif)

Creating an IoT Infrastructure

The design concept behind the ESP8266 WiFi microcontroller is to enable low-cost wireless networks. These networks can assist in obtaining vital data from industrial machines on the plant floor and allow wireless control of mechatronics-based systems found in manufacturing and process facilities. Home automation systems can also be deployed easily using the wireless SoC architecture of the ESP8266. Such a wireless infrastructure allows the creation of low-cost networks that send data to a variety of mobile and laptop devices assisted by the cloud. Accessing these wireless nodes through the cloud requires a hosted webpage. A unique feature of the ESP8266 is that this webpage can be hosted on the chip itself and configured using a programming language like MicroPython. The ESP8266 SoC architecture also provides a unique internet protocol (IP) address that works in unison with the Media Access Control (MAC) and network interface controller (NIC) subcircuits integrated within the microcontroller.
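As a rough illustration of that on-chip hosting, the MicroPython sketch below joins a WiFi network and serves a single static page from the ESP8266. The network credentials and HTML are placeholders and error handling is omitted; it is a sketch of the idea rather than production code.

import network
import socket

sta = network.WLAN(network.STA_IF)
sta.active(True)
sta.connect("YOUR_SSID", "YOUR_PASSWORD")     # placeholder credentials
while not sta.isconnected():
    pass
print("Browse to:", sta.ifconfig()[0])        # the node's IP address

page = b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n" \
       b"<html><body><h1>ESP8266 IoT Node</h1></body></html>"

addr = socket.getaddrinfo("0.0.0.0", 80)[0][-1]
srv = socket.socket()
srv.bind(addr)
srv.listen(1)
while True:
    client, remote = srv.accept()
    client.recv(1024)          # read and discard the HTTP request
    client.send(page)
    client.close()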

A low cost IoT network using the ESP8266. (Image source: Don Wilcher)

Ease of Prototyping A Wireless PoC

Espressif's low-cost wireless microcontroller has enabled new development platforms that let designers, engineers, makers, and developers rapidly explore IoT product concepts. Popular prototyping platforms among designers and makers are the printed circuit board (PCB) module and the development board. Both platforms allow rapid proof-of-concept (PoC) product, system monitoring, and controller device ideas to come to fruition. The PCB module provides copper circular pads that allow external circuits and sensors to be soldered to the WiFi microcontroller pins. The development board is an extension of the PCB module, whereby the WiFi module is soldered to the prototyping platform. Male header pins are also soldered to the development board's PCB; these header pins allow insertion into a solderless breadboard for ease of circuit prototyping.

ESP8266 development platforms: the PCB module and the development board. (Image source: Don Wilcher)

An example PoC using the development board is a WiFi-enabled audio signal generator. With software created in MicroPython, a hosted webpage provides a simple user interface (UI) that allows wireless operation of the audio signal generator. Such a device aids in injecting signals into malfunctioning amplifier units remotely, using a laptop computer, smartphone, or tablet to access the ESP8266's hosted webpage through its IP address.

Simple UI hosted at the ESP8266's webpage IP address. (Image source: Don Wilcher)

The ESP8266 WiFi microcontroller's digital pin enables the audio signal generator upon receiving a wireless command from the simple UI. An LED provides a visual indication that the device has been enabled by the UI. The range of audible tones, measured with an oscilloscope, is 657 Hz to 1.70 kHz.
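A hedged MicroPython sketch of that behavior is shown below: one GPIO drives the status LED while a PWM output produces the audible tone, and a single function responds to the enable/disable command from the hosted UI. The pin numbers and the 1 kHz tone are assumptions for illustration, not values taken from the author's schematic.

from machine import Pin, PWM

status_led = Pin(2, Pin.OUT)                 # "generator enabled" indicator
tone_out = PWM(Pin(4), freq=1000, duty=0)    # ~1 kHz square wave, initially muted

def set_generator(enabled):
    # Called when the hosted web UI reports an enable/disable command.
    status_led.value(1 if enabled else 0)
    tone_out.duty(512 if enabled else 0)     # 50% duty cycle = audible tone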

A WiFi enabled Audio Signal Generator circuit schematic diagram. (Image source: Don Wilcher)

This WiFi test gear was prototyped using a retro Radio Shack Science Fair 200-in-1 electronics project lab and an ESP8266 development board. This product concept provides benefits for electronics troubleshooting in the field by a technician or service engineer. As illustrated in this example, a variety of IoT products can be rapidly developed using the ESP8266 WiFi microcontroller. Additional technical specifications of the ESP8266 WiFi microcontroller can be found at Espressif's website.

A prototyped WiFi enabled Audio Signal Generator. (Image source: Don Wilcher)

Don Wilcher is a passionate teacher of electronics technology and an electrical engineer with 26 years of industrial experience. He’s worked on industrial robotics systems, automotive electronic modules/systems, and embedded wireless controls for small consumer appliances. He’s also a book author, writing DIY project books on electronics and robotics technologies.

digital-microscopes-and-imaging-reveal-all

If you've ever attended any optical or photonic conferences, you've likely seen demos of digital microscopes by Dino-Lite, a brand name for Omano Microscopes, and other companies like Andonstar Microscopes, National Optical, OptixCam, and more.

Unlike traditional classroom or laboratory optical microscopes, digital versions replace the eyepiece with a digital camera that feeds into a monitor. Thanks to advances in computer display resolution, digital microscopes can produce amazing visuals for everything from cells and micro-organisms to the details of integrated circuits and PCBs (see videos).

By digitizing optical images, users can calculate measurements in software, share results around the globe and often conduct real-time tests and diagnostics. Such advantages might be the reason that digital microscopes are used in multiple market sectors including academics, manufacturing, quality control, semiconductors, healthcare, and electronics.

Digital microscopes are great for displaying tiny objects like ants, gears, and chips, and even bacteria on a microscope slide. However, the specimen on the slide must still be fixed and stained to be seen by a digital microscope. If a user needs to witness cellular structures and interactions in real time, then a new technology known as digital holographic microscopy is needed.

Digital holographic microscopes, such as those from the Imec R&D center in Belgium, replace the traditional optical lens with a lens-free image sensor. Holographic image reconstruction algorithms then convert the image sensor data into a visual image of amazing interactive detail.

Check out the videos below to see live demonstrations of digital microscopes from the SPIE Photonics West events.

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

the-top-19-battery-articles-of-2019

The 14 Most Efficient Hybrids You Can Buy in 2019 

(Image source: Toyota)

Here are a group of new cars that will deliver more than 40 mpg without asking you to give up a thing.

15 Examples of the Past, Present, and Future of Battery Technology 

(Image source: Exide Technologies)

Batteries are powering our world. It’s time to learn where they came from and where they are going.

EVs Are Made Of This- REDUX 

(Image source: Tesla)

Here (once again) are 8 crucial parts that make up an electric vehicle (EV).

10 Green Principles For EV Sustainability 

(Image source: University of Michigan)

Recently published guidelines could help ensure that new battery technologies are sustainable and environmentally sound.


9 Ways Electrons Can Carry You Across the Water 


(Image source: Corvus Energy)

New electric jet skis, lake boats, super yachts, and marine ferries are using batteries and electric motors to revolutionize the maritime industry.

Tesla’s Megapack Aids In Renewable Energy Adoption 

(Image source: Tesla)

Tesla has introduced a huge new battery system that will allow the integration of wind and solar power sources into large-scale utility grids.

The Batteries That Powered the Lunar Module 

(Image source: NASA)

SPACE WEEK: Landing on the moon took a huge amount of engineering skill, not the least of which went into developing the batteries that would provide electricity on the lunar surface.

EV Disruption Goes Beyond Mere Numbers 

(Image source: Tesla)

Sales volumes are still small, but electric vehicles are still having a large and growing effect on the overall auto industry.

Lamborghini and MIT double the energy density in super capacitors 

(Image source: Automobili Lamborghini)

Lamborghini’s exotic supercars will rely on capacitors for a jolt of electric boost, rather than moving to turbochargers.

The Battery Show Wrap-up 

(Image source: Design News)

What we learned, what surprised us, and how the EV world is changing.

Heard At The Battery Show 

(Image source: Design News)

Breaking news, new ideas, and the pulse of the industry from The Battery Show in Novi, Michigan.

Are We Ready For Autonomous Vehicles? 

(Image source: Toyota)

A recent study examines 20 countries worldwide and finds both opportunities and potential potholes on the road to driverless cars.

What Are The Keys To EV Success? 

 (Image source: General Motors)

Ahead of his Keynote address at The Battery Show in Novi, Michigan, General Motors’ Mark Verbrugge shares with us his thoughts on how EV progress will be made.

Opinion: How Toyota’s EVs May Save the Company 

(Image source: Toyota)

Toyota has finally committed to building a range of electric vehicles (EVs)—but is it too late?

AI Can Sort Batteries 

(Image source: Dean Deng)

A new machine-learning process can be used to predict lithium-ion battery cycle life, based upon factors measured in just a few early charge and discharge cycles.

Six Electric Motorcycles That You Can Buy In 2019 

(Image source: Energica)

2019 will be a breakthrough year for electric motorcycles—here are some two-wheel offerings from six companies for commuting, work, and play.

Could Solar-Powered Cars Become Practical? 

(Image source: Toyota)

Toyota is working with solar-cell maker Sharp to explore solar-powered EVs using new, experimental, high-efficiency solar panels on a modified Prius.

5 Lessons learned from a smart home experiment 

(Image Source: Honda Smart Home System – HEMS and EV in Garage)

These lessons along with years of data are available to all with an interest in smart home development thanks to the Honda and UC-Davis project.

Heard Around the Industry: Dyson cancels its electric car project 

(Image source: Dyson)

Commentary on Dyson’s exit from the electric car business.

deepfakes:-the-looming-threat-of-2020

I'm watching a clip from the movie The Shining. Shelley Duvall is hiding from her crazed husband as he chops down the door with an axe. Jim Carrey sticks his head through the opening and cackles the iconic line: "Here's Johnny!"

…Jim Carrey is not in The Shining.


What you’re seeing is not a Hollywood special effect. It wasn’t done with After Effects, green screen, or with costuming and makeup. The video is a fake created by deep learning artificial intelligence – a deepfake. And anyone with a powerful computer and enough time can make one.

You might have heard of deepfakes before, or glimpsed headlines discussing the technology. You might even have laughed at various YouTube videos on channels such as Ctrl Shift Face that have swapped faces of celebrities in iconic roles to some humorous and sometimes unsettling results (once you’ve seen any of the bizarre deepfakes involving Nicolas Cage you can never un-see them.)

But deepfakes, once confined to darker corners of the internet, are becoming a serious threat. In the US, particularly as the 2020 election season rapidly approaches, AI experts are warning that deepfakes could become a powerful tool for spreading misinformation and manipulating the public. With enough effort a bad actor could create a video of any political candidate saying nearly anything. And in today’s climate of social media outrage and algorithm-driven content distribution, there’s no telling how far it could spread before someone caught it.

It’s time engineers, developers, and technologists all had a serious discussion about deepfakes.

(Image source: Adobe Stock)

The Origin Of Deepfakes

There’s no one particular person that has taken credit for originally developing deepfakes. Their existence owes to a confluence of technologies ranging from ever-more sophisticated computer vision algorithms and neural networks, to increasingly powerful GPU hardware.

The first deepfakes seem to have emerged on the internet in 2017, when an anonymous Reddit user called "Deepfakes" began distributing illicit, altered videos of celebrities online. Other Reddit users followed suit, and it wasn't long before a community had sprung up around distributing both the deepfakes themselves and the tutorials and software tools to create them.

In an interview with Vice, [NSFW link] one of the first outlets to take an extensive look at deepfakes, the Reddit user outlined how comparatively easy the process is:

“I just found a clever way to do face-swap. With hundreds of face images, I can easily generate millions of distorted images to train the network. After that if I feed the network someone else’s face, the network will think it’s just another distorted image and try to make it look like the training face.”

But it wasn’t all fun and games. Far from it. When they first appeared, deepfakes had one particularly popular and disturbing use case – pornography. Much of the early deepfake content available was pornographic films created using the faces of celebrities like Gal Gadot, Scarlett Johansson, and Taylor Swift without their consent.

As the videos proliferated, there was a crackdown, with Reddit itself shutting down its deepfakes-related communities, pornographic websites removing the content, and sites like GitHub refusing to distribute deepfake software tools.

If private citizens weren't that concerned yet, it was probably because sites got somewhat ahead of the problem. Left unchecked, it wouldn't have been long before deepfake pornography spread from celebrities to everyday people. Anyone with enough publicly available photos or video of themselves on a platform like Facebook or Instagram could potentially become a victim of deepfake revenge porn.

In 2018, Rana Ayyub, an investigative journalist from India, fell victim to a deepfakes plot intended to discredit her as a journalist. Ayyub detailed her ordeal in an article for The Huffington Post:

“From the day the video was published, I have not been the same person. I used to be very opinionated, now I’m much more cautious about what I post online. I’ve self-censored quite a bit out of necessity.

“Now I don’t post anything on Facebook. I’m constantly thinking what if someone does something to me again. I’m someone who is very outspoken so to go from that to this person has been a big change.

“I always thought no one could harm me or intimidate me, but this incident really affected me in a way that I would never have anticipated…

“…[Deepfakes] is a very, very dangerous tool and I don’t know where we’re headed with it.”

How Deepfakes Work

On the surface the process of creating a deepfake is fairly straightforward. First, you need enough images (hundreds or more ideally) of your target – showing their face in as many orientations as possible (the more images you can get, the better the results – hence why celebrities and public figures are an easy target). If you think it might be difficult to get hundreds or thousands of images of someone remember that a single second of video could contain 60 frames of someone’s face.

Then you need a target video. The AI can’t change skin tone or structure so it helps to pick a target and source with similar features. Once a deep learning algorithm is trained on a person’s facial features, additional software can then superimpose that face onto another person’s in your target video. The results can be spotty at times, as many videos online will attest to, but done right, and with enough attention to detail, the results can be seamless.

In an interview with Digital Trends, the anonymous owner of the Ctrl Shift Face YouTube channel (the channel responsible for the Jim Carrey/The Shining videos, among others) discussed how simple, yet time-consuming the process is:

“I’m not a coder, just a user. I don’t know the details about exactly how the software works. The workflow works like this: You add source and destination videos, then one neural network will detect and extract faces. Some data cleanup and manual extraction is needed. Next, the software analyzes and learns these faces. This step can sometimes take a few days. The more the network learns, the more detailed the result will be. In the final step, you combine these two and the result is your deepfake. There’s sometimes a bit of post-process needed as well.”

On one hand, the relative ease at which this can be done with little to no coding experience is certainly disconcerting. On the other however, deepfakes are an impressive demonstration of the sophistication of AI today.

At the core of deepfakes is a neural network called an autoencoder. Put simply, an autoencoder is designed to learn the important features of a dataset so it can create a representation of it on its own. If you feed a face into an autoencoder its job is then to learn the distinguishing characteristics that make up a face and then construct a lower-dimensional representation of that face – in this case called a latent face.

Deepfakes work by having a single encoder train to create a generalized representation of a face and then have two decoders share that representation. If you have two decoders – one trained on Person A’s face, the other on Person B’s – then feed the encoder either face you can transpose Person A’s face onto Person B’s (or vice versa). If the encoder is trained well enough, and the representation is generalized enough, it can handle facial expressions and orientations in a very convincing way.

Since faces in general are very similar in their overall shape and structure, a latent face created by an encoder using Person A’s face, can be passed to a decoder trained on Person B’s face to good effect. The result at the other end is a video of Person B, but with Person A’s face.
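The sketch below, written in PyTorch, illustrates that shared-encoder, two-decoder arrangement in miniature. It is not the code from Faceswap or any other deepfake tool; the fully connected layers and 64 x 64 images are simplifying assumptions, whereas real face-swap models use much larger convolutional networks.

import torch
import torch.nn as nn

IMG = 64 * 64 * 3      # flattened 64x64 RGB face crop

class Encoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(IMG, 512), nn.ReLU(),
                                 nn.Linear(512, latent_dim))
    def forward(self, x):
        return self.net(x)                     # the "latent face"

class Decoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, IMG), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()    # trained only to reconstruct Person A's faces
decoder_b = Decoder()    # trained only to reconstruct Person B's faces

# After training, the swap: encode a frame of Person B...
frame_b = torch.rand(1, 3, 64, 64)
latent = encoder(frame_b)
# ...then decode it with Person A's decoder: B's pose, A's face.
swapped = decoder_a(latent).view(1, 3, 64, 64)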

As long as you have two subjects similar enough and a computer with enough processing power, the rest just takes time. Faceswap – one of the more readily available deepfakes apps – can run on a Windows 10, Linux, or MacOS computer and recommends a newer Nvidia GPU for processing. “Running this on your CPU means it can take weeks to train your model, compared to several hours on a GPU,” according to Faceswap’s documentation.

top-10-tech-failures-from-2019-that-hint-at-2020-trends
  • As the last year of the last decade, 2019 had a lot to live up to. Within the span of 10 short years, service apps like Uber, Lyft, Airbnb, and others on mobile phones became big business. Mobile phone companies introduced amazing personal features like voice assistants (e.g., Siri and Alexa), iCloud connections for fast video streaming, and very high-resolution HD cameras. Not to be outdone, the automobile was transformed with automation tech and electrification. A Tesla electric vehicle even made it into space.

    Space technology flourished in the last decade with the commercialization of space rockets, the launch of hundreds upon hundreds of communication satellites and the increasing popularity of Cubesats. Back on earth, homes and buildings became smarter while alternative forms of energy continued to improve in efficiency. And the list goes on.

    But there were several notable failures in the last decade, many seeming to culminate in 2019. Here is the short list of the 10 tech failures most worthy of mention, in no particular order.

  • #1 Glitchy Spacecraft Launch

    Boeing suffered several major setbacks this year. The first one was an incomplete demonstration flight of its new astronaut capsule. The mission of Boeing’s CST-100 Starliner spacecraft began successfully but suffered technical problems that prevented it from reaching the International Space Station (ISS). Many observers believe that the Starliner capsule on top of an Atlas rocket simply burned too much fuel as it climbed into space, leaving an insufficient amount to reach the ISS. Some have suggested the failure was from a glitchy timer system that turned off the rocket thrusters too soon.

    The demonstration test wasn’t a complete failure as the Starliner did land successfully in the deserts of New Mexico.

  • #2 Andromeda Strain revisited?

    Remember the Andromeda Strain? It was a techno-thriller novel from 1969 written by Michael Crichton that centered around the efforts of a team of scientists investigating the outbreak of a deadly extraterrestrial microorganism in Arizona.

    Fast forward to 2019. A company in Israel launched its first lunar lander, which unfortunately crash-landed on the moon. The small robotic spacecraft, called Beresheet, was created by SpaceIL and Israel Aerospace Industries (IAI). It failed just moments before landing on the moon.

    This was an unmanned operation, but not one devoid of life. A US-based nonprofit had added tardigrades, or water bears, to the capsule. These microscopic, eight-legged creatures could survive in a dormant state through harsh conditions, and maybe even on the moon.

    In other words, earth-based lifeforms have now been introduced to the moon’s ecosystem. Without some water, the tardigrades aren’t likely to revive and spread. But this failure highlights the need for planetary protections – both on the moon and earth.

    It should be noted that the goal of the Arch Mission Foundation was not to contaminate the moon but rather to, “create multiple redundant repositories of human knowledge around the Solar System.” The foundation tests out technologies for long-lasting archives, like securing information in DNA strands or encapsulating insects in artificial amber. In addition to water bears, the Arch’s payload included nickel sheets nanopatterned with thousands of pages of Wikipedia and other texts.

    One of Arch’s first missions was launched by SpaceX on the Falcon Heavy rocket and is now entering an orbit around the Sun for millions of years.  The first books in the Solar Library were Isaac Asimov’s Foundation Trilogy. Can you guess where they are located? The books containing Asimov’s Foundation Trilogy were placed in the glovebox of the Cherry Red Tesla Roadster that will soon be orbiting the Sun.

  • #3 Communication Failures (again)

    Both Boeing and the FAA have been cited for oversight breakdowns that contributed to the 737 Max failure. But the actual cause of the tragedy that resulted in the crash of two Boeing 737 Max aircraft appears to be broad failures in the automated system that controls the new planes. The report by the Joint Authorities Technical Review panel said that assumptions about critical aspects of the plane's design were "not adequately reviewed, updated, or validated."

    This lack of communication and incorporation of warnings from the engineering teams is a common problem with very complex, modern systems, e.g., the Challenger Space Shuttle and others.

  • #4 Disappearing Bitcoin Miners

    While 2019 was overall a profitable year for the semiconductor chip development market, there were a few noticeable declines. One was the market for system-on-chip (SoC) devices made specifically for bitcoin mining. The cost of mining for bitcoins dramatically increased in 2019, leading to a drop in the need for SoC-based mining hardware.

    In essence, it took much more effort for bitcoin miners to solve the equations required to validate transactions on the Bitcoin network. This increase in mining difficulty reflects the increased competition.

    Another slowdown was in the market for automotive chips and electronics, as companies and drivers realized that autonomous car technology won’t really be ready for several more years. This corresponds well to Gartner’s famous “trough of disappointment” portion in its hype cycle for emerging technologies.

  • #5 Cloud Buckets

    A new type of cybersecurity issue has emerged in which millions of people have had their personal information exposed through file storage systems known as cloud buckets. Such storage areas typically consist of public resources that are easily accessed by a variety of web service applications. Cloud buckets are like public file folders which contain user information.

    Placing sensitive user data information in the cloud offers companies the capability to offload their security to big firms like Google, Apple, Amazon or Microsoft. The problem is that the buckets are not configured by these firms but rather by the companies who use their cloud networks.

    Not all of these companies are storing their customer information properly. This lack of security is easy pickings for identity thieves. It is an example of readily available information that doesn’t require any hacking.

  • #6 Hacks of the Year

    Speaking of hacks, this year saw even more cybersecurity breaches. In 2018, there were 500 million personal records stolen, according to the Identity Theft Resource Center. But that number was minuscule compared to the 7.9 billion records exposed in 2019 by over 5,000 breaches, as reported by Risk Based Security. Compared to the 2018 Q3 report, the total number of 2019 breaches was up 33.3 percent and the total number of records exposed more than doubled, up 112 percent. Here's just a small sampling of the more infamous breaches (more details here):

    > ElasticSearch Server Breach

    > Canva Data Breach

    > Facebook App Data Exposure 

    > Orvibo Leaked Database

    > Social Media Profiles Data Leak

    Sadly, the common theme in many of these data exposures is that data aggregators obtained and used personal information in ways the owners never imagined or consented to. This is a legal problem as much as a technical one.

  • #7 Google Glass

    In 2019, Google announced a new $999 Glass augmented reality headset that looked suspiciously like the failed Google Glass from the past.

    Early in 2012, Google co-founder Sergey Brin debuted Google Glass. A year later, the founder and head of the Google Glass Project, Babak Parviz, delivered a keynote about the technology at the IEEE Hot Chips event at Stanford.

    One of the ongoing leading smartphone trends is ever-improving screen resolution and ever-larger screen size. During his keynote, Parviz argued that there was a physical limit to this trend, but that Glass offered the next evolution in display form factor, i.e., immersion with one's surroundings. This will be especially important in augmented reality applications.

    Originally, Google Glass was a standalone unit (not yet cloud-based) that included internet access, voice controls, and a camera for pictures and videos. It accomplished all of this with dual-core processors running at more than 1 GHz. Five MEMS sensors captured all the environmental data, and a two-dimensional touch panel sat on the side of the glass.

    Why was this technology a failure? It wasn’t because of the technology, but rather because it wasn’t clear to the customer what problem it solved or why they needed it. Additionally, many felt it was intrusive as a user of the device could take pictures and short film snippets of people without their knowledge.

    In January 2015, Google announced that they would no longer be developing Google Glass. But that wasn’t the end of the project. Instead, Google pivoted to the business sector by launching Glass Enterprise Edition for workplaces like factories in 2017. This year, Google announced the Glass augmented reality headset.

  • #8 Folding Phone

    Samsung's Galaxy folding phone was billed as a new dawn in display technology. The phone folded open into a 7.3-inch dynamic AMOLED display.

    Unfortunately, the company had to postpone the launch of the folding phone after early review models broke, delaminated, and got filled with gunk. The problem seemed to stem from a weak hinge as well as substances getting inside the device.

    As with many new technologies, the price tag also presented a barrier to anyone but early adopters. A reengineered and improved version is now on sale for nearly $2,000.

  • #9 Machine-Bias or Garbage-in, Garbage-out

    The challenge of machine bias came clearly into focus in 2019. Similar to human bias, machine bias occurs when the learning process for a silicon-based machine makes erroneous assumptions due to the limitations of a data set and pre-programming criteria. One example of machine bias was recently revealed in Apple's new credit card, which contained an algorithm to decide how trustworthy (or risky) a user might be. This evaluation used to be done by trained humans but now is often performed by AI-based algorithms.

    Apple's credit card was shown to have a gender bias. Males are more likely to get a higher credit line limit than females. This bias was highlighted when a male entrepreneur was assigned a spending limit 10 times higher than that of his wife, even though they share a common account.

    How does a machine get a bias? A report from IBM Research outlines two main ways AI systems could inherit biases. First, the AI software might contain errors and dependencies. Second, the data set from which AI learns its task may have flaws and bias. These data points come from the real world which contains many biases, e.g., favoring white men to the exclusion of women and minorities. Algorithms are only as smart as the data you feed them. This is a modern update of the old computer data expression, “garbage-in, garbage-out.”

  • #10 Software App Failures

    No list of tech failures would be complete without mention of the apps that didn’t make it. The range of the applications that failed is wide.

    Consider first the British Airways (BA) glitch, in which the airline's computer system went down completely during a peak travel season. Over a hundred BA flights were cancelled and nearly 300 delayed. Thousands of passengers were affected. Sadly, this wasn't the first time the system had failed, which suggests a systemic problem that has not been properly addressed by management.

    Or how about the Facebook 2019 failure that prevented users from viewing or loading images from the news feed? Several other social media apps had a similar problem, including Instagram, WhatsApp, and Messenger. In each case, users were prevented from sending messages, media files, and the like. Facebook claimed the problem was the result of an accident during routine maintenance.

    Several app failures or hacks from 2019 include Apple's FaceTime bug and the Ring security camera intrusions. The latter may have been more of a customer problem, as Ring notes that the system invasion was likely the result of hackers gaining access to the family's account through weak or stolen login credentials.

four-ways-blockchain-can-help-engineers-with-patent-ips

The adoption of blockchain has indeed been widespread. But the realm of intellectual property protection could find blockchain particularly useful.

The IP industry has plenty of its own problems. On average, infringement can cost a company around $102 million a year. And there's little on the horizon that indicates any relief on that front.


Blockchain technology can address the issue of IP infringement. (Image source: Cisco)

But blockchain technology could address this issue. In fact, its radical approach to data storage lends itself almost perfectly to solving many of the IP industry's ills. And IP protection for engineering patents, in particular, could benefit from having blockchain on its side.

But, first, let’s take a step back and look at why enforcing IP rights in general is so problematic.

Problems Plaguing Intellectual Property Protection

One of the greatest threats to IP protection comes from the Internet or, rather, how it has helped content proliferate. The Internet has enabled us to easily copy, download, and share digital content. Not only that, but doing so is also near-instantaneous. Piracy is a great example of how IP owners lose out on profits.

You can see how that might make copyright protection a bit more difficult than before the Internet existed. With people capable of copying, downloading, or otherwise distributing intellectual property owners' content without breaking a sweat, protecting IP becomes next to impossible.

And that isn't the end of owners' woes. The systems that are supposed to enforce copyright protection are sorely out of date. The DMCA (Digital Millennium Copyright Act) is the most up-to-date safeguard, and that act is from 1998. It has done some good, but it isn't enough, especially nowadays.

This isn’t due to it being fundamentally flawed. Rather, it’s simply down to the fact that, on its own, it cannot keep up with so many people violating its stipulations. It’s more of a technology problem than anything else.

And that’s where blockchain comes into play.

What Is Blockchain? A Brief Explanation

Blockchain, in a nutshell, is a digital ledger that distributes access to and control over data to all of its members. Its records cannot be erased or changed in any capacity, all the data is completely transparent, and hacking the system is nigh impossible.
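A toy Python sketch of that ledger idea is shown below. Each record carries a hash of the previous record, so silently editing an earlier entry breaks every later link. This is an illustration of the principle only; real blockchains add distribution across many nodes, consensus rules, and digital signatures.

import hashlib
import json
import time

def record_hash(record):
    payload = {k: record[k] for k in ("timestamp", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_record(chain, data):
    prev_hash = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"timestamp": time.time(), "data": data, "prev_hash": prev_hash})

ledger = []
append_record(ledger, "genesis")
append_record(ledger, {"filing": "fingerprint of patent draft v1"})
append_record(ledger, {"filing": "fingerprint of patent draft v2"})

def verify(chain):
    # Every record must point at the true hash of the record before it.
    return all(cur["prev_hash"] == record_hash(prev)
               for prev, cur in zip(chain, chain[1:]))

print(verify(ledger))            # True for the untouched ledger
ledger[1]["data"] = "tampered"   # silently rewrite an earlier record...
print(verify(ledger))            # False: the edit broke the hash link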

Using blockchain to guard the rights of intellectual property has quite a few advantages. GoChain is a well-known example of a company that leverages this tech to make IP safer.

But how does this translate to IP protection? These characteristics are applicable to the aforementioned problems in several ways.

Proving Patent Ownership

Patent ownership rights are a bit of a pain to prove sometimes. In creative fields (music, videos, and the like), whatever you create de facto belongs to you, but when someone (wrongfully) claims that creation is theirs, it’s difficult to prove you created said content when there’s little to no documentation pointing to the fact. 

And in these cases, the onus is normally on the creator to prove that they created the work in question. And that’s the tricky part. Patents are a tad easier to prove ownership over, but it can still prove difficult.

But blockchain, possessing indelible records of all data, can tell you exactly what was created and when. This makes it easier to prove ownership whenever someone illegally uses your content.

Monitoring Use of Content

Seeing that all the information on the blockchain is transparent, every piece of data is easily trackable. The network marks any instance of a patent being used, replicated, or otherwise, so users can track exactly what is done to it.

This would greatly improve the situation with regard to people infringing on IP rights willy-nilly. Creators could keep track of any instances of their work being used and how people utilize them. And if any of those instances violate ownership rights, you'll know it.

Streamlining the Registration Process

Decentralization is blockchain's main shtick. The fact that the system doesn't require third-party oversight means that all the red tape associated with such entities goes out the window.

This could come in handy indeed in the IP world. The journey to attaining patent rights can take as much as 22 months. Some exceptions, like utility patents, might be faster, usually wrapped up between six and twelve months, but it’s still a long wait.

The bulk of this time comes down to bureaucracy, and blockchain can get around this problem. Blockchain could greatly reduce this massive wait, leaving both individuals and businesses with more time for further innovation or other affairs.

Smart Contracts

The concept of smart contracts uses blockchain in a more innovative fashion. For those who don't know, smart contracts are self-verifiable, self-enforceable contracts that don't need any entity other than those involved in the agreement.

This could facilitate much easier and smoother enforcement of licensing agreements. Creators that enter into these contracts can easily see whether their stipulations are being respected. This is a more hands-on use of blockchain, and its potential applications are many. Nevertheless, it's just as needed here.

Whenever a contract stipulation regarding patent use is violated, the contract has the ability to respond by, for instance, terminating itself. This would do wonders to reduce IP abuse in the engineering world.
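As a purely conceptual sketch (in Python rather than an actual smart-contract language such as Solidity), the snippet below shows the shape of such a self-enforcing licence: every reported use is checked against the agreement's stipulations, and a violation terminates the licence automatically. The class, names, and limits are hypothetical.

class PatentLicence:
    def __init__(self, licensee, max_units_per_year):
        self.licensee = licensee
        self.max_units = max_units_per_year
        self.units_reported = 0
        self.active = True

    def report_use(self, units):
        # Each use of the patented design is reported to the contract.
        if not self.active:
            raise RuntimeError("licence already terminated")
        self.units_reported += units
        if self.units_reported > self.max_units:
            self.active = False        # stipulation violated: self-terminate
        return self.active

licence = PatentLicence("Acme Robotics", max_units_per_year=10000)
licence.report_use(9000)     # within the agreed limit, licence stays active
licence.report_use(2000)     # exceeds the limit, licence terminates itself
print(licence.active)        # False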

Wrap-Up

Blockchain technology has the potential to greatly improve the current state of patent IP protection. Its triple threat of decentralization, transparency, and immutability would both smooth out and reinforce the systems that are currently in place. It would enable inventors to gain better control over their patents while also having a better idea of what's being done with them. As things stand right now, these would be welcome changes indeed.

Tamara Backovic is a content specialist at Fortunly with a background in computer sciences, tech research and IT.

human-augmentation-makes-list-of-2020-strategic-techs

Ever the adaptive species, human beings have long used technology to augment their natural capabilities. Even pre-historic cave dwellers used obsidian rocks to sharpen sticks into fine points, effectively extending their arms and thus the targets they could hit.

Today, humans use electronic and mechanical technology to physically augment their bodies. This is done by attaching or implanting some type of device to improve their capability to go beyond the current human experience, e.g., 3D-printing an appendage or interfacing directly with the digital world through a brain-computer interface (BCI). The former is an example of how technology can enhance a person's physical capabilities, while the latter is related to cognitive improvements.

Like our cave-dwelling ancestors, many of today's augmentations offer fairly simple improvements. Consider the example of an extra thumb. That's exactly what Dani Clode, a grad student at London's Royal College of Art (RCA), has done with her third thumb project. This augmentation is a 3D-printed prosthetic that simply extends a user's grip. The extra thumb straps onto the hand and connects to a bracelet containing wires and servos. The wearer controls the thumb via pressure sensors located under the soles of their feet. Pressing down with one foot sends a signal via a Bluetooth device that causes the thumb to grasp.

Be it simple or complex, human augmentation has made the list of Gartner Top 10 Strategic Technology Trends for 2020. The report cites several growing market areas where implanted or hosted physical augmentations improve both workers' health and companies' financial bottom lines. For example, the automotive and mining industries use wearables to improve worker safety. In other industries, such as retail and travel, wearables are used primarily to increase worker productivity.

The report lists four main categories of physical augmentation: sensory augmentation (hearing, vision, and perception), appendage and biological function augmentation (exoskeletons, prosthetics), brain augmentation (implants to treat seizures), and genetic augmentation (somatic gene and cell therapy). Each of these categories is worthy of a separate discussion. For now, the timeline below will focus on one portion of the second category, namely, prosthetics.

Modern human augmentation, sometimes called "Human 2.0," would not be possible without the advances offered by semiconductor electronics (e.g., processors, memory, sensors, and wireless tech) and the related advancement in robotics. Thus, our brief timeline starts with the advent of the transistor.

humble-beginnings-to-mobile-dominance-for-6502-processor

The recent passing of American processor developer Charles (Chuck) Ingerham Peddle (Nov. 25, 1937 to Dec. 15, 2019) reminded me of a video interview I conducted with Dominic Pajak, ARM's embedded strategist, concerning the 6502 and the British Acorn PC.

Chuck Peddle was best known for his design of the MOS Technology 6502 microprocessor, as well as the KIM-1 single-board computer and its successor, the Commodore PET. The 6502 was an 8-bit processor that gained wide use in the late '70s and early '80s. It was so successful that Jack Tramiel, president of Commodore, used it in his famous Commodore computers.

In Britain, the MOS 6502 gained fame in 1979 when Acorn Computers used it as the basis for the BBC Model B microcomputer, which was the company’s first big success. Acorn went on to design the ARM processor, which spun out into the Arm company whose processors still dominate the mobile markets.

Check out my interview with Dominic Pajak from Arm on the amazing evolution of the 6502.

IoT – From the 6502 Processor to the ARM MCUs

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

adding-defects-to-batteries-can-speed-charging

Typically, defects in particular chemical formulas are considered a bad thing. But researchers at Rice University have discovered that defects can actually be used to an advantage in future battery chemistries to improve how quickly the device can charge.


An illustration shows a battery's cathode undergoing phase transition from iron phosphate to lithium iron phosphate (LFP) during charging. Simulations by Rice University scientists showed that adding defects (distortions in their crystal lattices) could help batteries charge faster. (Image source: Kaiqi Yang, Rice University)

Scientists from the university’s Brown School of Engineering discovered through simulations that placing specific defects in cathodes of a lithium-based battery design can improve its performance. This finding could lead to charging that’s two orders of magnitude faster than current batteries.

The team, led by Ming Tang, Rice assistant professor of materials science and nanoengineering, worked specifically with lithium iron phosphate-based cathodes, focusing on the phase transition the cathode makes from iron phosphate to lithium iron phosphate when charging. Researchers discovered that adding defects, called antisites, to the crystal lattices of the cathodes could help them charge faster. "We think we can turn defects into friends, not enemies, for better energy storage," Tang said.

Antisites typically aren’t something researchers hope to see when designing this type of battery, which is why the research is unique. These defects form when atoms sit on sites of the lattice that should be occupied by lithium, typically limiting lithium movement inside the crystal lattice, researchers said. This is why scientists consider them detrimental to how the battery performs.

Tang and his team turned this idea on its head, however, discovering that in the case of lithium iron phosphate, the antisites actually create detours within the cathode. These detours allow lithium ions to reach the reaction front over a wider surface, thus improving the charge or discharge rate of the batteries.

Optimizing material defects

Basically, when lithium is inserted into the cathode, it changes from a lithium-poor phase to a lithium-rich one, he said. If the surface reaction kinetics are slow, then lithium can only be inserted into lithium iron phosphate within a narrow surface region around the phase boundary, which researchers call the “road,” Tang said. This limits the battery’s recharging speed.

“However, antisite defects can make lithium insertion take place more uniformly across the surface, and so the boundary would move faster and the battery would charge faster,” he said.

Researchers also noted that they can use certain methods to control the concentration of defects, such as annealing the material (heating it without burning it). This would allow them to add other optimal properties to the battery design, such as using cathode particles larger than nanoscale crystals to help improve energy density and reduce surface degradation.

“An interesting prediction of the model is that this optimal defect configuration depends on the shape of the particles,” said Tang. “We saw that facets of a certain orientation could make the detours more effective in transporting lithium ions. Therefore, you will want to have more of these facets exposed on the cathode surface.”

Researchers published a paper on their work in the Nature journal Computational Materials. They believe that their model is applicable not just to the type of cathode with which they worked, but also can be applied as a general strategy to improve phase-changing battery compounds. “For structural materials like steel and ceramics, people play with defects all the time to make materials stronger,” said Tang. “But we haven’t talked much about using defects to make better battery materials. Usually, people see defects as annoyances to be eliminated.”

Elizabeth Montalbano is a freelance writer who has written about technology and culture for more than 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco and New York City. In her free time she enjoys surfing, traveling, music, yoga and cooking. She currently resides in a village on the southwest coast of Portugal.


12-reasons-why-5g-and-wireless-dominated-the-airways-in-2019

The first wave of 5G-enabled devices dominated 2019, including the Galaxy S10, OnePlus 7, and Huawei P30, among others. Major infrastructure improvements continued to be rolled out.

But 5G wasn’t the only new wireless tech hitting the market. Other wireless activities, like Wi-Fi 6, also began to appear. Below are 12 of the editor’s top picks for 2019.

Image source: Qualcomm

Qualcomm has big plans for 5G in 2020

From photos to gaming and AI applications, the Snapdragon 765 and 865 are both focused on bringing 5G to consumers at all levels.

Qualcomm’s latest Snapdragon platforms are aimed squarely at bringing 5G devices to consumers next year.


Image source: WiFi Alliance

How Wi-Fi 6 and 5G will transform factory automation

A key technology trend for automation and control in 2020 and beyond is the emergence of wireless communications including 5G, Wi-Fi 6, LoRaWAN, and more. An obvious benefit for factory automation is the use of wireless communication for remote monitoring and remote operation of physical assets, but an equally important benefit is the ability to replace cables, unreliable WiFi, and the many industrial standards in use today.

Image source: Veo Robotics

FreeMove turns any industrial robot into a cobot

Veo Robotics’ FreeMove platform gives standard industrial robots the ability to function as collaborative robots that work alongside human workers. Don’t get rid of your old industrial robot yet. You may be able to upgrade it into a collaborative robot.

Image source: Beckstrom.com

What are Beckstrom’s Laws of Cyber Security?

Prioritizing security efforts – critical for the IoT to survive – may best be done by considering the value of transactions over the size of the network.

The Internet of Things (IoT) has many defining characteristics, such as tiny, cheap and low power sensors, embedded computers, and connectivity. But one characteristic will rule them all, namely, security. In the very near future, the IoT will probably not exist if it isn’t secure.

Image source: Design News

5G May Soon Explode with New Factory Functionality

Just as the iPhone ushered in a new world of consumer connectivity, the ultra-fast 5G networking may jump-start new industrial capabilities.

Speedy 5G networking is beginning to show up; in some places it is already a reality. While we know it will speed up consumer applications such as streaming or downloads – like a movie in seconds – what will it mean for industrial settings? At a session at the Design and Manufacturing Minneapolis show last week, Joshua Ness, senior manager of 5G Labs at Verizon, explained the coming impact of 5G during his session, Prepping for the 5G Factory.

Image source: NUS

‘Smart’ Textiles Boost Data Connectivity Between Sensors for Wearable Tech

A new textile network of sensors can provide the basis for intelligent wearable technology with unprecedented communication capability.

Image source: AiFi MWCLA, Design News


AiFi Replaces Cashiers and Checkouts with AI

Walk in, get what you need, and walk out. AiFi uses AI to completely automate and streamline the shopping experience.

Imagine walking into a grocery store, picking up all the items you need, and walking right out the door. No need for a cashier or even a self-checkout, the store will automatically charge everything you take to your credit card.

Image source: WMG at the University of Warwick

A European University is Testing 5G for Self-Driving Vehicles

The University of Warwick, in collaboration with NI, will be conducting tests of 5G-enabled services for autonomous and connected vehicles.


Image source: MOXA

Edge Devices Leverage MQTT for IIoT Connectivity

The MQTT transport protocol has emerged as a top choice for implementing IoT connectivity, offering solutions using local gateways or direct to the cloud.

A primary challenge for IIoT automation and control applications is the ability to efficiently and effectively collect the data that becomes the grist for enterprise-level IoT decision making and analytics. And while a wide variety of different types of edge devices have been introduced, a major concern is how to collect data from these devices.

Image source: Molex

The 4 Major Challenges of Wireless In-Vehicle Charging

Wireless charging is the best way for automakers to meet consumer demand for better charging performance in their vehicles. But there are significant challenges to getting this innovation to market.

Today’s consumer expects to be connected via their mobile device wherever they go – especially in their vehicles, which have become more than transportation; they are a mobile extension of the owner’s living space.

Image source: LoRa Alliance

Everything You Need to Know about LoRa and the IoT

A guide to LoRa / LoRaWAN, the communications technology emerging as the leader among Low-Power Wide-Area Networks (LPWANs) entering the IoT space.

We’re entering a world in which WiFi and Bluetooth may no longer be the best communication technologies for Internet of Things (IoT) applications. The IoT is gaining more ground each year. Experts project there will be 75 billion connected devices by 2025.


Image Source: ODVA

Ethernet at the Device-Level

IoT-driven technology standards are creating new possibilities for reducing the cost and complexity of integrating Industrial Ethernet into constrained networks and devices.

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.