
10 Ways AI Puts Your Privacy at Risk

Updated: Mar 10, 2023

Why is AI raising privacy concerns?


The public's exploding interest in generative AI, such as ChatGPT and DALL·E 2, has rightfully raised privacy concerns. These technologies use AI algorithms to generate text, images, and videos, often trained on large datasets of existing content. There is a real risk that generative AI systems could be used to produce fake news, deepfakes, or other forms of manipulated content, compromising individuals' privacy and safety. And since these systems typically require access to large amounts of personal data, it's prudent to be concerned about data privacy and security. Below, I discuss many of the ways that AI, particularly machine learning and generative AI, creates risk.

What are specific examples where AI is creating privacy concerns?


In keeping with my consistent focus on the practical issues surrounding AI, here are some examples of how AI applications raise privacy concerns:


1) Health data analysis - AI can analyze vast amounts of health data to reveal sensitive information about individuals' health and their predispositions to certain diseases. This raises concerns about the privacy of such information and how it may be used or shared without individuals' consent. Imagine your health and life insurance companies using your family's multi-generational health history to determine what they charge you!
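To make the insurance scenario concrete, here is a minimal sketch in Python, with entirely invented numbers and a hypothetical pricing rule (no real insurer's model), of how inferred family-history risk factors could be turned into a premium multiplier:

```python
# A minimal sketch, assuming invented numbers and a hypothetical pricing rule:
# an insurer's model turns inferred family-history risk into a premium multiplier.
base_premium = 300.0  # monthly, hypothetical

# Risk factors an AI system might infer from multi-generational health records.
family_history = {"heart_disease": True, "type_2_diabetes": True, "cancer": False}
risk_multiplier = {"heart_disease": 1.4, "type_2_diabetes": 1.25, "cancer": 1.5}

premium = base_premium
for condition, present in family_history.items():
    if present:
        premium *= risk_multiplier[condition]

print(f"${premium:.2f}/month")  # $525.00 -- priced on your relatives' diagnoses
```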


2) Employment decisions - AI algorithms used in employment decisions can discriminate against certain groups based on factors such as gender or race. If an AI algorithm is trained on a biased dataset, it will treat job candidates unfairly. In one famous example, Amazon's experimental AI hiring tool penalized female candidates because the model learned, from years of historical data, that women were rarely in engineering roles - and concluded that women must not make good engineers. The model inadvertently perpetuated an existing inequity.
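Here is a minimal sketch of how this happens, using synthetic data and a hypothetical proxy feature rather than Amazon's actual system: a model trained on biased historical hiring decisions learns to penalize a gender proxy even though gender itself is never an input.

```python
# A minimal sketch, assuming synthetic data and a hypothetical proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)                # genuinely job-relevant signal
womens_club = rng.integers(0, 2, size=n)  # proxy feature, e.g. "women's chess club" on a resume

# Historical labels: hires tracked skill, but biased reviewers also
# systematically passed over candidates with the proxy feature.
hired = (skill - 1.5 * womens_club + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, womens_club]), hired)

for name, weight in zip(["skill", "womens_club"], model.coef_[0]):
    print(f"{name}: {weight:+.2f}")
# The proxy feature gets a strongly negative weight: the model has
# faithfully reproduced the discrimination baked into its training data.
```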


3) Smart home devices - Smart home devices collect and analyze detailed data about users' daily lives and routines, raising concerns about privacy and surveillance. These devices can monitor when people are home, what appliances they use, and even their conversations. Now, imagine that the government BUYS access to your private in-home conversations - note - this is already happening!
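Even "boring" sensor data gives routines away. Below is a minimal sketch, with invented smart-meter readings and a hypothetical activity threshold, showing how hourly power usage alone can reveal when a household wakes up and when it sits empty:

```python
# A minimal sketch, assuming invented readings and a hypothetical threshold:
# usage above a baseline implies someone is home and active.
hourly_kwh = [0.2, 0.2, 0.2, 0.2, 0.3, 0.9, 1.4, 1.1,   # midnight-8am
              0.3, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,   # 8am-4pm
              0.4, 1.2, 1.8, 1.6, 1.0, 0.7, 0.4, 0.2]   # 4pm-midnight

baseline = 0.35  # fridge and standby devices; anything above suggests activity
occupied = [hour for hour, kwh in enumerate(hourly_kwh) if kwh > baseline]
print(occupied)  # [5, 6, 7, 16, 17, 18, 19, 20, 21, 22] -- wake-up and evening routines visible
```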


4) Social media algorithms - Social media algorithms often reinforce existing biases and preferences, creating filter bubbles and limiting exposure to diverse viewpoints. As a society, we are already experiencing extreme philosophical division and distrust of government, fed in part by this poor application of AI algorithms. These algorithms can also manipulate user behavior through targeted ads or misinformation, leading individuals to make decisions based on false or misleading information or exposing them to harmful content they might not otherwise have encountered. Google (owner of YouTube) is under Supreme Court scrutiny over its algorithm recommending ISIS videos. The ruling, which turns on Section 230, could upend the internet!
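A toy simulation makes the filter-bubble mechanism clear. This sketch assumes a synthetic user with made-up topics and click rates, not any real platform's algorithm: a naive engagement-maximizing recommender collapses onto a single topic.

```python
# A toy filter-bubble simulation: a greedy, engagement-maximizing recommender
# (synthetic user, invented topics and click rates) collapses onto one topic.
import random
from collections import Counter

random.seed(1)
true_click_rate = {"politics": 0.6, "sports": 0.5, "science": 0.4, "cooking": 0.3}
clicks = {t: 1 for t in true_click_rate}  # optimistic starting counts
shows = {t: 2 for t in true_click_rate}
served = Counter()

for _ in range(1000):
    # Greedy choice: always serve the topic with the best observed click rate.
    topic = max(true_click_rate, key=lambda t: clicks[t] / shows[t])
    served[topic] += 1
    shows[topic] += 1
    if random.random() < true_click_rate[topic]:
        clicks[topic] += 1

print(served)  # the vast majority of the 1000 slots go to a single topic
```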


5) Autonomous vehicles - And I'm NOT just talking about fully automated vehicles! This also includes the driver-assistance systems built into many of today's late-model cars! Autonomous vehicles collect sensitive information about passengers' locations, travel patterns, and biometric data. Self-driving cars are also vulnerable to a range of cybersecurity threats, including ransomware attacks that could render a vehicle inoperable until a ransom is paid - you couldn't enter, start, or exit your own car! Hackers could disable a car's networks, range sensors, and cameras, causing collisions and other safety risks. Your car could even be re-routed to where criminals await you! Another threat is the hacking of an autonomous vehicle's operating system, which could expose personal information stored on connected devices - meaning hackers could potentially access everything on your phone!


6) Facial recognition technology - Facial recognition technology can compromise individuals' privacy and anonymity in public spaces because it can track individuals' movements and activities without their consent. If you want a reason to lie awake at night, check out what Clearview AI does with the 20+ billion images it scraped from the internet, including from social media applications.
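The core mechanism is simple enough to sketch. Below, random vectors stand in for real face embeddings: a probe image from, say, a CCTV frame is matched against a huge scraped gallery by cosine similarity. This is an illustrative approximation under those assumptions, not Clearview AI's actual pipeline.

```python
# A minimal sketch of scrape-and-match face identification, assuming random
# vectors as stand-ins for real face embeddings.
import numpy as np

rng = np.random.default_rng(42)
dim = 128

# Hypothetical gallery: embeddings of scraped public photos, each tied to a profile.
gallery = rng.normal(size=(100_000, dim))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
identities = [f"person_{i}" for i in range(len(gallery))]

# A "probe" face from, say, a CCTV frame: here, a noisy copy of one gallery entry.
probe = gallery[12_345] + rng.normal(scale=0.05, size=dim)
probe /= np.linalg.norm(probe)

scores = gallery @ probe  # cosine similarity against every stored face at once
best = int(np.argmax(scores))
print(identities[best], round(float(scores[best]), 3))  # person_12345, with a high score
```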


7) Biometric data - Collecting sensitive data that uniquely identifies individuals and tracks their movements and activities carries enormous privacy risks. Examples of biometric data include fingerprints, facial recognition scans, and retinal scans, all of which can identify individuals without their consent. Did you try out the Lensa app? Do you use your face to unlock your phone? Have you traveled through an airport or across a country border? These are just a few of the points where your facial data has been captured and stored.


8) Workplace monitoring - Monitoring employees' activities, including personal data such as health and emotional states, raises significant privacy concerns. It is now commonplace, largely driven by the pandemic's work-from-home trend, for employers to monitor you. They know when you're logged in, what you're typing, and they're even analyzing your facial expressions. I don't know about you, but I view this as a clear privacy invasion.
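As a minimal illustration (invented timestamps and a hypothetical five-minute rule, not any specific vendor's product), here is the kind of presence tracking such tools perform:

```python
# A minimal sketch, assuming invented keystroke timestamps and a hypothetical
# idle rule: flag any gap between keystrokes over a threshold as "idle".
keystroke_times = [0, 12, 40, 65, 900, 930, 2400]  # seconds since login (synthetic)
IDLE_THRESHOLD = 300  # 5 minutes

idle_periods = [(a, b) for a, b in zip(keystroke_times, keystroke_times[1:])
                if b - a > IDLE_THRESHOLD]
print(idle_periods)  # [(65, 900), (930, 2400)] -- gaps a dashboard might report to your manager
```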


9) Financial services - The use of sensitive personal data such as income, spending habits, home address, and financial history raises concerns about data privacy and security. In 2022, U.S. Bank employees were found to have unlawfully accessed customers' credit reports and sensitive personal data to apply for and open unauthorized accounts. Although the bank was ordered to pay $37.5 million, the harm done to its customers lingers. AI worsens situations like this by giving financial institutions even more sophisticated tools for collecting, analyzing, and using sensitive personal data. For example, AI algorithms could be used to predict customers' creditworthiness based on personal data such as who you associate with - imagine being turned down for a home loan because an algorithm decides that your deadbeat relatives and acquaintances have too much influence over you.
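To illustrate that scenario, here is a minimal sketch of "network" credit scoring, assuming synthetic scores and a hypothetical blending rule rather than any lender's real model: your connections' scores drag yours down.

```python
# A minimal sketch, assuming synthetic data and a hypothetical scoring rule:
# an applicant's score is blended with the average score of their connections.
credit_score = {"you": 720, "cousin": 540, "roommate": 580, "coworker": 700}
connections = {"you": ["cousin", "roommate", "coworker"]}

def network_adjusted_score(person, weight=0.3):
    """Blend a person's own score with the mean score of their connections."""
    own = credit_score[person]
    neighbors = connections.get(person, [])
    if not neighbors:
        return own
    neighbor_avg = sum(credit_score[n] for n in neighbors) / len(neighbors)
    return (1 - weight) * own + weight * neighbor_avg

print(network_adjusted_score("you"))  # 686.0 -- below a 700 loan cutoff, despite your 720 score
```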


10) Education technology - The extensive collection of personal data about students, beginning in pre-K and continuing through college, creates a detailed profile that can compromise their privacy, autonomy, mental health, access to credit, and future employability. Education technology in the classroom has brought with it the capture of incredibly personal student data, including students' strengths and weaknesses, behaviors, and home situations. Imagine these students as adults experiencing lowered creditworthiness because they had behavior challenges when they were 5-10 years old!


Student data is highly sought after by bad actors. Identity thieves target students because credit checks are rarely run on children, so the crime can remain undetected for decades. Further, just yesterday a story was posted about a Minnesota school district whose data had been hacked. In addition to basic demographic and individual data, the compromised files included highly sensitive records related to sexual violence allegations, student discipline, special education, civil rights investigations, student maltreatment, and sex offender notifications.


What happens when we layer AI capability on top of this treasure trove of data about our kids? AI algorithms create detailed profiles of students' academic progress, behavior, and personal lives. These profiles are used to make decisions about students' futures, such as college admissions or job opportunities, perpetuating bias and discrimination. When this data falls into the wrong hands, it is often used for cyberbullying and ransom. Imagine hackers using stolen data, which could include voice and video files of students, to create realistic AI-generated pornographic deepfakes designed to extract ransom and haunt these students for the rest of their lives. And in breaking news today, students created a racist deepfake video of their principal - and there are no laws against it!


Should AI use be regulated to protect privacy?


Privacy is a daunting issue that AI exacerbates for everyone living in an advanced society, even though most are unaware of it. AI is everywhere in our daily lives, from Siri on our phones to AI systems that make decisions about our homes, jobs, and health. Unfortunately, no comprehensive U.S. laws or regulations protect us from AI discrimination and privacy intrusions. Although there is a patchwork of federal executive orders, national initiatives, and specific use case regulations, accompanied by some states with disparate laws, privacy still looms as a huge concern. As an AI strategist, I believe that expert-informed, comprehensive legislation is needed to ensure that data and AI controls exist while also encouraging innovation.


Closing


As an AI proponent, I see tremendous potential for AI to solve some of the world's biggest problems - cancer, economic inequality, geopolitical tensions, and more. I'm also a pragmatist about the dangers it poses, and one of the most pressing is privacy. Generative AI has brought the realities of this issue to the public's attention. Despite AI's enormous potential to improve our lives, it's important to approach its development and deployment with caution and with consideration for the protection of privacy rights.


