The 10 Greatest Issues AI Needs to Face
  • There are a lot of reasons to be excited about artificial intelligence. AI is transforming industries in innovative ways and even enabling entirely new business models to emerge.

    But there are also a lot of reasons to be cautious about AI. The 2019 AI Now Report, created by the AI Now Institute, takes a look back at the social impact of AI in 2019 and some of the most important issues surrounding the technology as it moves forward. The AI Now Institute is a non-profit, interdisciplinary research institute “dedicated to understanding the social implications of AI technologies.”

    “This year we saw a wave of pushback, as community groups, researchers, policymakers, and workers demanded a halt to risky and dangerous AI,” the report says.

    As AI moves into the next decade, we’ve outlined some of the most important issues it will have to grapple with in the coming years.

  • 1.) Algorithmic bias is already affecting us

    As more and more AI algorithms are implemented in decision-making processes in everything from real estate to healthcare, it is important for developers to be aware of the inherent biases within the datasets they use to train AI.

    Apple’s Apple Pay service recently came under fire from customers – including Apple co-founder Steve Wozniak – over allegations that the service’s approval system was assigning lower credit limits to female customers.

    Experts agree it will likely be impossible to completely safeguard systems against bias, but steps can be taken to mitigate its impact.
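    One basic mitigation step is simply auditing a model’s outcomes by group. The snippet below is a minimal, hypothetical sketch on a synthetic dataset – it has nothing to do with Apple’s actual system – showing how a disparate approval rate can be surfaced even when the model never sees the group label directly.

```python
# Synthetic records for illustration only: (group, approved).
applications = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(records, group):
    """Fraction of applicants in `group` that were approved."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(applications, "A")  # 0.75
rate_b = approval_rate(applications, "B")  # 0.25

# A ratio far below 1.0 is a red flag worth investigating,
# whatever the cause turns out to be.
print(f"disparate impact ratio: {rate_b / rate_a:.2f}")
```

    An audit like this doesn’t prove bias on its own, but it is the kind of check experts recommend running before a system goes into production.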

    (Image source: Apple)

  • 2.) Facial recognition is watching us

    Facial recognition is already here and widely deployed throughout the world. In China, facial recognition technology has become part of surveillance and security systems and even allows customers to use their faces to access ATMs.

    While there is an argument for convenience and security, there are also widespread privacy and ethics concerns around using AI facial recognition. The city of Detroit is facing pushback over plans to add facial recognition to its Project Green Light – a camera system that allows police departments to monitor businesses and intersections in real time.

    In 2019 the cities of Oakland, Calif., Somerville, Mass., and San Francisco passed ordinances banning municipal use of facial recognition technology.

    By contrast, the Department of Homeland Security (DHS) announced plans to issue a proposed regulation that could require all travelers, including US citizens, to submit to face and other biometric scans at airports and other ports of entry.

    Regarding the DHS announcement, ACLU Senior Policy Analyst Jay Stanley had this to say:

    “Time and again, the government told the public and members of Congress that US citizens would not be required to submit to this intrusive surveillance technology as a condition of traveling. This new notice suggests that the government is reneging on what was already an insufficient promise.”

    (Image source: teguhjati pras from Pixabay)

  • 3.) Deepfakes are a reality

    If you want to see the power of deepfakes, you only need to browse YouTube channels like Ctrl Shift Face.

    This isn’t a special effect. With enough data (including images and audio), AI algorithms can reconstruct and superimpose individuals’ faces onto existing video footage. It makes for some entertaining viral videos, but there are wider, more frightening implications, as deepfakes can be used to create fraudulent videos of political figures, celebrities, and even private citizens. Left unchecked, deepfakes could become a powerful tool for the spread of misinformation.

    (Image source: Ctrl Shift Face)

  • 4.) Algorithms are ruining our social media experience

    Have you ever watched one video on YouTube or liked a post on Facebook or another social network, only to be sent down a rabbit hole of increasingly questionable recommendations? That’s not an accident – that’s AI trying to predict what you’ll “like.” And by “like” we mean it’s trying to figure out what content you’re most likely to engage with – and that often means content that offends or shocks you. Algorithmic recommendations are being blamed both for a rise in the quantity of extremist content on social media and for its proliferation. Google, Facebook, and others have pledged to find ways to tamp down the spread of dangerous and extremist content, as well as misinformation. But many would argue the damage has already been done.
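    The core dynamic is easy to see in miniature. The sketch below is a deliberately simplified, hypothetical recommender (the scores and titles are invented, and no real platform’s ranking system is this crude): items are ordered purely by predicted engagement, so whatever provokes the strongest reaction rises to the top, regardless of its accuracy or quality.

```python
# Hypothetical items with made-up engagement predictions.
videos = [
    {"title": "Measured policy analysis", "predicted_engagement": 0.12},
    {"title": "Cute cat compilation", "predicted_engagement": 0.55},
    {"title": "Shocking conspiracy exposed!!", "predicted_engagement": 0.81},
]

def recommend(items, n=2):
    """Return the n titles the model predicts the user is most
    likely to engage with -- accuracy is never considered."""
    ranked = sorted(items, key=lambda v: v["predicted_engagement"],
                    reverse=True)
    return [v["title"] for v in ranked[:n]]

# The most provocative item surfaces first because it scores highest.
print(recommend(videos))
```

    When the only optimization target is engagement, the feedback loop described above follows almost automatically.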

    (Image source: Pixelkult from Pixabay)

  • 5.) AI is a powerful tool for hacking

    Automation is meant to handle the dirty, dangerous, and repetitive tasks humans can’t or don’t want to perform, right? Well, the benefits cut both ways. More and more malicious hackers are leveraging AI technology to assist with sophisticated cybersecurity attacks. A well-trained algorithm can attack a target with a speed and efficiency that would be difficult for a single hacker, or even a large group of hackers, to match. Fortunately, cybersecurity companies like XM Cyber are fighting fire with fire, using machine learning algorithms of their own to safeguard networks and sensitive systems.

    (Image source: XM Cyber)

  • 6.) AI developers lack diversity

    Issues with AI can be correlated with a lack of racial and gender diversity among the engineers and developers hired at the top technology companies working on AI. The AI Now Institute has found that Black and Latinx workers are substantially underrepresented in the tech workforce, and women are particularly underrepresented as AI researchers.

    “Rather than recognizing the scale and systemic nature of the problem, tech companies have responded to mounting evidence of bias and misuse by primarily focusing on narrow diversity solutions,” the AI Now Institute said. “They have also attempted technical debiasing, working to ‘fix’ algorithms and diversify data sets, even though these approaches have proven insufficient and raise serious privacy and consent concerns. Notably, neither approach addresses underlying structural inequalities.”

    (Image source: PixLoger from Pixabay)

  • 7.) AI isn’t green

    As engineers come to terms with the realities of climate change and the need to develop greener technologies, AI is having its own energy crisis. The massive amount of compute power required for AI also comes with a massive energy bill.

    “As a whole, the industry’s energy dependence is on an exponential trajectory, with best estimates showing that its 2020 global footprint amounts to 3–3.6 percent of global greenhouse emissions, more than double what the sector produced in 2007,” the AI Now Institute said. “This is comparable to that of the aviation industry, and larger than that of Japan, which is the fifth biggest polluter in the world.”

    Tech companies are already implementing renewable energy sources and other means to make data centers more efficient. But the emergence of 5G and other advanced networking technologies only threatens to make the problem worse before it gets better. “In the worst-case scenario, this footprint could increase to 14 percent of global emissions by 2040,” the Institute warned.

    (Image source: Free-Photos from Pixabay)

  • 8.) AI helps privatize public infrastructure

    “Troubling partnerships between government and private tech companies also emerged as a trend this year, especially those that extended surveillance from public environments into private spaces like private properties and the home,” the AI Now Institute said.

    In 2019 the city of Detroit established the “Neighborhood Real-Time Intelligence Program,” a $9 million, state- and federally-funded initiative that would expand the city’s Project Green Light surveillance system to 500 intersections, in addition to the 500 businesses where it is already deployed, as well as add facial recognition technology to the system. The city has reported reduced crime in areas thanks to Project Green Light, but that hasn’t stopped privacy advocates from protesting the technology.

    In 2018, Amazon came under fire for offering to let police departments utilize its facial recognition software. The company has also negotiated with over 700 police departments in the US to give police access to videos from Ring smart home cameras if the footage can help with a criminal investigation, according to the AI Now Institute.

    (Image source: Pixabay)

  • 9.) Automation impacts people of color and the poor the most

    The debate about automation and labor likely won’t ever stop. But the narrative is taking new shape as more data emerges about specific groups affected by rapid automation due to AI.

    Depending on who you ask, automation will be a boon to the economy as well as personal productivity, or it will usher in a dystopian nightmare where humans struggle for basic needs while robots handle all of the jobs.

    “Both narratives are predicated on the assumption that automation in the workplace is inevitable and that automated systems are capable of performing tasks that had previously been the work of humans. What is missing from both conflicting narratives is the more nuanced prediction of who will be harmed and who will benefit from labor automation in the years to come,” the AI Now Institute said.

    The 2019 AI Now Report predicts that Black, Latinx, and low-wage workers in the US will be disproportionately impacted by increased levels of automation.

    (Image source: mohamed_hassan from Pixabay)

  • 10.) AI is removing the ‘human’ from human resources

    More and more companies are using AI to manage and oversee workers. AI is even being implemented in the hiring process. Amazon, for example, uses an AI system to set shifting performance goals for its warehouse workers. Workers are assigned a “rate” of productivity to hit each day, based on their prior performance and the overall goals of the warehouse.

    “If a worker falls behind, they are subject to disciplinary action. In many warehouses, termination is an automated process (not unlike being “kicked off” a gig-economy platform),” the AI Now Institute said. “According to Abdi Muse, an organizer with Amazon warehouse workers in Minneapolis, if workers fall behind the algorithmically set productivity rate three times in one day, they are fired, however long they may have worked for the company, and irrespective of the personal circumstances that led to their ‘mistakes.’ ”

    “The introduction of AI-enabled labor-management systems raises significant questions about worker rights and safety. The use of these systems—from Amazon warehouses to Uber and InstaCart—pools power and control in the hands of employers and harms mainly low-wage workers (who are disproportionately people of color) by setting productivity targets linked to chronic injuries, psychological stress, and even death and by imposing unpredictable algorithmic wage cuts that undermine economic stability.”

    (Image source: iosphere / Freedigitalphotos.net)