Deepfakes, sharenting, and digital footprints - what does it all mean?
Have you ever thought about just how much data is collected on you, your family, children, loved ones, and/or friends?
Or have you considered that, depending on the year you were born, there may be little to nothing about you that people either don’t already know or can’t easily find out?
And this is simply because many individuals’ digital footprints now begin before birth. For instance, expectant parents who track their pregnancy with dedicated apps often post about the journey, sharing announcements and photos (like ultrasounds) on social media.
Add on top of that the increasing number of public facial recognition devices, computers in schools, and the fact that children are gaining access to social media at younger and younger ages – there’s very little information about individuals that is truly private anymore.
Additionally, our homes are increasingly filled with new smart devices and apps that listen to, collect, and store our data, and it’s common nowadays to download a new app or sign up for a new subscription without a second thought. Often, we don’t take the time to read the Terms & Conditions associated with these apps, smart toys, and devices – nor do we consider the potential consequences.
And because we so rarely think twice when downloading a new app or bringing a new smart device into our homes, we may be revealing far more about ourselves and our personal lives than we realize – unintentionally, and potentially at the expense of our privacy or even our safety.
CloudPets (Smart Toys & Data Leaks):
One example is from 2017 and involves a toy company called CloudPets, which made internet-connected stuffed animals that could record audio messages and play them back through the toy. The idea, for instance, was that a traveling parent could stay connected with their child by sending audio notes for the child to play through the CloudPet.
However, due to poor data practices, the company leaked the personal information of more than 800,000 customers and 2 million children's voice recordings. Not only did hackers access the data, but they also held it for ransom (demanding money from the company to return it safely), and some even sent their own voice recordings to children through the CloudPets.
So, why is this risky?
There are many reasons (several of which will not be addressed in this post) why having our data collected, stored, shared, and even sold is cause for concern. Here, however, we will focus on three main topics and how each relates to identity theft: data collection and sharing, 'sharenting,' and deepfakes.
It’s no surprise that these topics are causes for concern, especially for millennials, who experienced an immense cultural shift from growing up before social media to becoming the first generation to be heavily targeted and marketed to online – and who, as a result, have major trust and data-privacy concerns.
One example is when Facebook failed to keep the personal information of 87 million people secure, which allowed Cambridge Analytica to harvest said data. Or when Facebook executives denied their knowledge of Russia’s interference in the 2016 U.S. presidential election.
The collection of all this information can, of course, be used for positive purposes – for example, massive amounts of data can be gathered into robust data centers with the capacity to hold it, automate it, and make it easily accessible when necessary.
For instance, the use of upgraded and automated technology in hospitals has enabled nurses and doctors to easily access patient information and update and share it almost instantly.
However, as Ruha Benjamin states, “access goes both ways. If someone is marked ‘risky’ in one arena, that stigma follows [them] around much more efficiently, streamlining marginalization.” (Race After Technology, p 13).
Benjamin continues with the example of a Europe-based advocate for workers’ data rights who was “denied a bank loan despite having a high income and no debt, because the lender had access to her health file, which showed that she had a tumor” (Race After Technology, p 13). In other words, the woman was not considered a “good fit” or the “optimal candidate” for a loan because the lender gained access to her medical records and deemed her health “questionable.”
And while that might sound shocking to some, we must take note of how often this happens and how ‘normalized’ these types of occurrences are. Consider how companies decide to target advertisements or how our local governments decide what schools are built in which neighborhoods (in this case, in the United States). Or the fact that pretty much all city planning in general is decided based on zip codes, zoning, and the economic status of the surrounding neighborhoods.
And this occurs because of how much data is collected and easily obtained about us as individuals and citizens. So, again, while yes, the collection of our data may be used for positive purposes, we also know that it has had some serious negative consequences. Because at the end of the day, we must remember that these are major corporations and most often, their main concern is profit, not the safety of the public.
Digital Footprints & Sharenting:
According to a 2010 survey, 90% of 2-year-olds already have an online presence, as it is typical for adults to make a pregnancy announcement, and often, share the child’s name and birth date on social media.
Barclays (a British multinational universal bank headquartered in London) shared that it is common for parents and adults to reveal basic information such as names, ages, birth dates, addresses (in geo-tags), mother’s maiden name, the names of pets, schools, their cars, favorite sports teams, and other common yet personal information.
This phenomenon is known as 'sharenting'.
'Sharenting' is often described as any instance where an adult overshares personal information about their child via the Internet, and most often on social media platforms including Facebook and Instagram.
“Another decade of over-sharing personal information online will produce 7.4 million incidents per year of identity fraud by 2030,” says Barclays, which also projects that $867 million will be lost by 2030 to fraud enabled by information garnered from 'sharenting' (Hopegood, 2020).
The bank has warned that sharing such personal details about a child, and ultimately yourself (things like the mother’s maiden name and the names of pets, cars, and schools), is a goldmine for hackers, as this type of information is typically used to answer the security questions on websites (particularly for banking).
Hackers can potentially store this data and use this information when the child is of age for fraudulent activity including things like taking out loans, applying for credit cards, creating a fake identity, and other online scams.
Deepfakes:
Deepfakes are created using Artificial Intelligence (AI) to swap faces (and bodies) in video and digital content to make extremely real-looking, yet ‘fake,’ content.
Hany Farid, a UC Berkeley professor who specializes in deepfake technology, explains it this way: someone feeds a system a ton of content – photos, videos, and audio – of the person you want to ‘recreate,’ and then another person acts as a kind of ‘puppet master,’ saying and doing whatever they want the synthesized deepfake to say and do.
“Deepfake” comes from the term “deep learning,” as in the deep learning algorithms that continuously teach themselves how to solve problems.
Within the past few years, we’ve seen a rise in fake accounts, and deepfakes have made headlines. Most often, celebrities have been recreated using deepfake technology, since their extensive media attention means there is a ton of content of them online to draw from.
And while some deepfakes have been used for entertainment purposes (click here to check out Deepfake Tom Cruise on TikTok) – which is partly why they are not outlawed – others have been used for malicious purposes. For instance, according to a 2019 Deeptrace report, pornography made up 96% of deepfake videos found online.
Farid states, “if you can change visual images, you can change history,” which is concerningly true, as we’ve also seen an increase in the use of deepfake technology to recreate politicians – something that can have extremely harmful consequences for societies.
Farid states that he would “urge colleagues to spend more time thinking about the consequences of developing this type of technology because the fact is it’s not hypothetical anymore, we are seeing the technology being misused.”
And when asked, “Do you think there is a plausible scenario in which deep-fakes result in war?” Farid responds, “Honestly, I don’t think that’s a stretch. How are we going to believe anything anymore that we see?” He calls this a real “threat to our democracy.”
And while there are things to look for when trying to detect a deepfake video – blinking (or the lack thereof), problems with skin or hair, faces that seem blurrier than their surroundings, and audio that is out of sync – another type of machine learning has been added to the mix.
This technology is known as a Generative Adversarial Network (GAN). In a GAN, one model generates the deepfake while a second model tries to detect its flaws; over many rounds, the generator learns to fix whatever the detector catches, making the final result much more difficult for deepfake detectors to decode.
Therefore, the more content an individual posts (whether of themselves or of their child), the more they expose themselves to the risk of someone creating a deepfake of them (whether for entertainment purposes or not) that can land in unwanted places, including the ‘dark web.’
It is also important to note that while most content on the internet will not go viral, anything posted online has the potential to. And if that does happen and the video or image gets into the wrong hands, this too is cause for concern – enabling unwanted behavior (sexual or otherwise), fraudulent activity, and other harms.
So, now what? This then leads us to a few (but very important) main takeaways:
Now more than ever, it is extremely important to be mindful of the content you not only post but also consume as we now know that seeing isn’t necessarily always believing (*hint: fact check your sources*).
Think twice before you welcome any internet-connected device into your home, particularly ones that children may interact with on a regular basis.
Think twice before posting anything to social media – whether innocently or not and even in Private Groups or if your account is set to Private (as people can easily take screenshots or screen recordings).
Keep in mind that screenshots and screen recordings happen all the time. And content that you may not want others to have access to, can live on forever through a simple screenshot (or screen capture).
Finally, it’s important to be nonjudgmental of yourself and of others. The digital space and social media in general are still a fairly new space for many. We are constantly learning new things about how and why it's important to approach this space more intentionally; and hopefully once we know better, we can begin to do better.
We strive for accuracy and the most up-to-date information; if you have identified an error or any misinformation, please do not hesitate to contact us here.
Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Polity, 2019.