(The premise of this article comes from Part 1 of 21 Lessons for the 21st Century, The Technological Challenge; specifically, the second through fourth chapters on Work, Liberty, and Equality.)
My thesis is that personal data protection should be a human right. At this point, we should all be actively exploring ways to, at the very least, protect our data, if not claim full ownership of it.
It’s my hope that if more people were aware of the potential second- and third-order consequences of continuing down this path, there would be more urgency to implement proper solutions.
These consequences range from giving up the free will to make your own decisions, to monetary loss, to making any upward mobility in your life almost impossible.
Can you imagine… everything you’ve worked for your whole career suddenly useless to society?
In this world, the government actually pays you to stay home.
Schools teach kids not to trust their parents, and you’re barely even useful to your 15-year-old.
Why should kids trust you as a parent?
What do you know about the world they’re growing up in?
You existed before Facebook was invented. In your time it took Apple more than 15 years to figure out that you didn’t need cords to listen to music.
Who knows what the world will look like in another 15 years!
In the 20th Century, parents could give valuable advice to their kids because the world hadn’t changed much since they went through it.
They would say things like “trust your heart” or “go to school and get a good job”.
Things are much different in the 21st Century.
Most people don’t have a baseline of what “trust your heart” means. (Even Peter Rosenberg and G-Eazy agree they can’t get out of bed with an original thought.)
The majority of people have never considered what they want out of life and how technology might be able to HELP THEM achieve it.
Instead, through no fault of their own, they’re consuming more content than ever before (in exchange, of course, for their data) while their beliefs and desires are molded by whichever content creators they spend the most time following.
Technology isn’t bad. If you know what you want in life, technology can help you get it. But if you don’t know what you want in life, it will be all too easy for technology to shape your aims for you and take control of your life. – Yuval Harari
This is the fundamental problem. Giving away our data for free creates a chain reaction of influence, to the point where it’s not ridiculous to ask whether your current beliefs are even authentic.
The second piece of advice, “go to school and get a good job,” is unfortunately also wrong.
It’s not just wrong because secondary education may only incrementally increase your value in the marketplace; it’s wrong because the sentiment behind it is that your identity should be derived from your chosen occupation (e.g., “I’m a dentist” or “I’m a carpenter”).
In the same way that “trust your heart” was flawed because the environment kids are growing up in has changed, in the 21st century it’s unlikely that many people will hold a single career throughout their lives.
Most people have already accepted that many jobs will be replaced by automation and AI in their lifetime. However, an identity crisis caused by this displacement might be an even bigger problem that has yet to be widely considered.
The scary part is that displaced workers who self-identify with their previous job will be searching for a new identity to align with at the exact time the concentration of power (data) is at its height.
This is the perfect storm for fascism to take over.
Remember, we’re considering reasons why protecting our personal data is so important, and without much of a stretch, we’re now considering protecting our democracy.
Fascism is a situation where people believe their tribe is superior and that they have exclusive obligations to serve its best interests.
As Yuval points out in his TED Talk below, most people expect a fascist regime to be scary and easy to spot. In reality, the way these scenarios play out is that you’re indoctrinated to believe you belong to the most beautiful and important thing in the world.
I agree that losing your job may not be directly related to protecting your personal data. My point is that the advice to build your identity around a single occupation is bad for everyone in an environment where we don’t own our own data (and so can be heavily influenced by companies with that insight) and where job scope is changing so quickly (leaving many people searching for an identity).
Fascism is just one example of an outcome from the vulnerability of that situation.
If there is one argument that should be easy to make to any audience, it’s that people are undervaluing their worth.
If you see money lying on the ground, you don’t have to convince someone to pick it up; you just point.
First, I want to point to the value this data brings to politics.
Have you ever considered why certain countries outperform others?
Yuval argues that the identity crisis we mentioned earlier does play a part. For example, the countries with the most violence typically have the weakest sense of national identity.
However, even Adam Smith, author of The Wealth of Nations, argued that neither this nor a country’s natural resources is the root cause of success. He coined the term “invisible hand” to describe the efficient free markets found in the most successful nations.
In other words, in the past democracies with a free market have done the best.
For context, the average GDP of the top 9 countries in the list below is $637B, compared to $95B for the bottom 9 (a number that would be $25B if not for Saudi Arabia).
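To see how a single outlier can drag an average, here is a toy calculation. The figures are entirely hypothetical, chosen only to reproduce the $95B and $25B averages mentioned above; the actual country data is not in this article.

```python
# Hypothetical GDP figures (in $B); eight small economies plus one large outlier.
bottom_eight = [25] * 8   # assumed: eight countries at ~$25B each
saudi_arabia = 655        # assumed outlier value

mean_with_outlier = sum(bottom_eight + [saudi_arabia]) / 9
mean_without_outlier = sum(bottom_eight) / 8
print(mean_with_outlier, mean_without_outlier)  # 95.0 25.0
```

One very large economy nearly quadruples the group average, which is why the article calls out Saudi Arabia separately.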
This is also because democracies distribute information processing across many people, which historically made them better at making decisions.
In the past, it was unrealistic for a dictatorship to centralize enough data to match the quality of decisions made by its democratic counterparts.
What this means is that the data companies are collecting on us isn’t just being used to influence our decisions and put certain individuals in power, it’s fueling the success of entire nations.
This is important because it highlights the fundamental vulnerability of our current democracy.
In the same way that democracies generally outperformed in the previous century, dictatorships, with their ability to centralize data, could now have the advantage.
As an example of the power of centralized data, take a decision like which university to attend, and suppose you only get one hour to make it. Do you think you’d decide more accurately by calling 5 friends, or by letting an algorithm prime you with all the relevant info? That scenario plays out over and over again in politics, except the consequences affect 300+ million people.
Since we’re on the topic of democracy versus dictatorship I think it’s important to clarify how AI adds value with decentralization.
This is exactly why humans don’t stand a chance at keeping certain jobs in tomorrow’s world. It’s not that we will be replaced one-for-one on the assembly line; it’s that robots, and the artificial intelligence that runs them, can collect data from millions of sources and continuously improve their output.
Think of it like being a professional Words With Friends player whose opponents start using anagram solvers… good luck!
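To make the analogy concrete, here is a minimal sketch of the kind of anagram solver those hypothetical opponents might use: it indexes a dictionary by each word’s sorted letters, turning an exhaustive search into a single lookup. The word list here is a toy example.

```python
from collections import defaultdict

def build_index(words):
    """Group dictionary words by their sorted letters."""
    index = defaultdict(list)
    for w in words:
        index["".join(sorted(w))].append(w)
    return index

def solve(rack, index):
    """Return every indexed word that is an exact anagram of the rack."""
    return index.get("".join(sorted(rack)), [])

# Toy dictionary; a real solver would load a full word list.
index = build_index(["listen", "silent", "enlist", "google", "inlets"])
print(solve("tinsel", index))  # ['listen', 'silent', 'enlist', 'inlets']
```

A human searches their memory; the machine checks every word it has ever seen, instantly. That asymmetry is the whole point.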
This idea of decentralizing (abolishing a single source of truth/sourcing outside data) is important because it’s actually a great method for solving problems.
In the long run, this approach will likely make our lives better: improved healthcare, more convenient and instantaneous financial solutions, better education options, etc.
Just so we’re clear on the previous point: the reason dictatorships could start to have an advantage is that they can leverage decentralized methods of collecting data (e.g., forcing everyone to wear a heart monitor) while making decisions faster, since they don’t have to source public opinion.
In the 21st Century, those who own the data own the future. – Yuval Harari
(This is the actual description for Yuval’s chapter on Equality in his latest book.)
They own the future because they know exactly how humans make decisions and can (and will) push you towards the decisions that make the most economic sense for them (e.g., the advertisers you’re most likely to click on, who are also bidding the highest).
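That parenthetical describes a real mechanism: ad platforms are commonly described as ranking candidate ads by expected revenue, roughly the bid times the predicted probability you will click. The sketch below illustrates the idea; the names, bids, and click rates are entirely hypothetical.

```python
def rank_ads(ads):
    """Order candidate ads by expected revenue: bid * predicted click-through rate."""
    return sorted(ads, key=lambda ad: ad["bid"] * ad["ctr"], reverse=True)

candidates = [
    {"name": "A", "bid": 2.00, "ctr": 0.01},  # expected value 0.020
    {"name": "B", "bid": 0.50, "ctr": 0.05},  # expected value 0.025
    {"name": "C", "bid": 1.00, "ctr": 0.02},  # expected value 0.020
]
print([ad["name"] for ad in rank_ads(candidates)])  # ['B', 'A', 'C']
```

The better a platform can predict your clicks from your data, the more precisely this ranking can be tuned to you, which is exactly the leverage the article is worried about.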
Imagine how powerful our data will be when everyone is volunteering to wear biometric sensors like a Fitbit. At that point, it’s possible Facebook or Amazon will let businesses target their ads based on the current emotions of the audience (which, by the way, they will have already mapped to your buying behavior).
Keep in mind that Silicon Valley is already a long way down this rabbit hole.
So far down that people like Bryan Johnson, founder of Braintree, have expressed concern that the tech giants of today are collecting ad revenue only as a short-term strategy. The value of all that data multiplies when you consider a scenario where their products are both the producer and the consumer.
Amazon is already doing this by buying companies, or building solutions, to serve the consumer demand it has identified as underserved.
Uber is doing the same thing in the restaurant business.
Many of the founding members of Facebook and other large tech companies have even expressed regret for facilitating the process.
“The short-term, dopamine-driven feedback loops that we have created are destroying how society works. No civil discourse, no cooperation, misinformation, mistruth.” – Chamath Palihapitiya
Regretful or not, the data is still extraordinarily valuable.
The fact that we will hand over data like heart rate and sleep statistics for the reward of a notification chime and a chart of our steps is completely outrageous.
Especially when it’s inevitable that this data will be used to identify life-threatening diseases: a solution that’s sure to be worth billions.
Of course, this type of net progress is likely to benefit the majority. The fear is that if we make data protection a pay-to-play system, the consequences unravel quickly.
By unraveling quickly I mean the consequences of not having universal data protection escalate from seemingly minor to threatening the entire fabric of human equality.
One major reason for this is market-based incentives. Even in today’s environment, there is very little incentive to improve yourself as an individual.
You get rich by improving machines, not by becoming really smart or strong.
One eye-opening example: no one is paying you to appreciate food and develop a palate. In fact, they’re doing the opposite. Most people eat at their desks while they work or watch TV, consistently downgrading their self-awareness.
Sure, your employer might give you some mental health days or a discounted gym pass, but is there any tangible incentive for you to increase your level of compassion or self-awareness?
This matters because if we have to pay to protect our data, upward mobility could become almost impossible.
If you have the resources to protect your data (e.g., not see any ads), you remain connected to your sense of self and the innate desire to improve. Your decisions are more likely to be your own, and more accurate than others’.
On the other hand, if you can’t protect your data, you will be inundated by ads. You still have the “free will” to improve yourself, but you will have to employ a huge amount of willpower to ignore all the advertisements pointing out your insecurities or promising instant solutions to all your problems.
Let’s say you’re even partially successful there and you stay on the path of diligent self-improvement to the point where you’re ready to start a business.
Whichever path you take, you will inevitably be competing with those who already possess large amounts of data. Now you’re fighting another uphill battle against companies that can make better, more accurate decisions than you about product lines, pricing, advertising methods, etc.
There is currently a small window of time in which many large companies have not yet become efficient at psychological manipulation at scale, and you can still gain market share with a brand and an appeal to human relationships. You may have experienced moments where Google or Amazon still gives you the wrong suggestions and you’d prefer to talk to someone. Inevitably, they will have collected so much data that asking for a human opinion will be like asking someone to mail you a cheque.
The scary part for human equality is if our data continues to be owned by a few companies AND they gain access to biometric information from your body and brain.
At that point how can you expect anyone to ignore all the hyper-targeted advertisements aimed at them and also compete with these companies to offer a better service at a lower price?
For example, let’s say you want to start a subscription box company. You have to build a product and an audience from scratch and handle all the logistics of managing a physical product, employees, etc. Amazon not only has the infrastructure to automate a practically unlimited amount of capacity, it also knows exactly what people are looking for and in which markets a subscription box would make the most sense. It would also have an existing audience of millions of people who’ve already bought similar products, when they bought, and what else they care about.
Given that outlook, apathy sets in, and people become more inclined to take what they can get instead of trying to innovate or improve themselves at all.
If people are being suppressed (another indicator of fascism) then the individuals on the other end of the spectrum are accelerating exponentially in comparison.
To take the scenario one step closer to science fiction: the ultra-rich billionaires of the world would no longer be spending exorbitant sums only on status symbols like yachts; they’d also be spending on living longer and genetically modifying themselves…
Yuval continues this scenario even further but my goal isn’t to fear monger about AI. In fact, I’m just as optimistic as the creator of the #ownyourdata movement, Bryan Johnson.
This conversation is meant to specifically be about data and what we should be doing now.
If this topic overwhelms you with fear, Bryan wrote an article called How My Mother Stopped Worrying About AI that is worth a read.
To recap, the ability to own or at least protect our data is important for these 3 reasons:
1. Preserving the free will to make authentic decisions
2. Protecting our democracy from the advantage of centralized data
3. Maintaining human equality
The reality is, we’re in possession of a brand-new asset, our data. What do we do about it?
The whole point is: NOT NOTHING!
Is the way we’re giving away our data for free any different than how Europeans “negotiated” land deals with Native Americans?
After reading Yuval’s book mentioned many times here (21 Lessons for the 21st Century), some of the content Bryan Johnson has put out through his Future Literacy writing, and listening to the numerous TED Talks on the topic, it seems we have two basic options.
1. Fight For Data Rights With Regulation
2. Outrun AI Towards an Understanding of Human Consciousness.
A good place to start would be enforcing net neutrality, or at least educating more people about it.
You may have heard that this rule was overturned by the Trump administration.
Net neutrality is the principle that Internet service providers should enable access to all content and applications regardless of the source, and without favoring or blocking particular products or websites.
Meaning that under the current rules, Internet providers can tier their service and limit access to certain websites. It’s them basically saying: we reserve the right to charge you more as we figure out which sites people can’t live without.
My personal opinion is that policing the internet this way doesn't make sense for anyone other than the telecommunications companies.
If you agree, I'd encourage you to check out these websites:
As for regulating personal data, one of the easiest ways to get involved is to sign the change.org petition below.
Aside from bringing awareness to the topic and making noise around the necessity of regulation, one interesting solution Yuval talks about is building incentives for improvements in the study of human consciousness.
Bryan Johnson, whom I mentioned earlier, has committed $250 million to his OS Fund, which is focused primarily on this topic, and many others are jumping on board.
The main idea here is that AI will take a long time to replicate the human ability to conceptualize; it could even be the one thing that is never “hacked”.
Assuming that’s correct, and we’re able to decode this phenomenon, it would solve two of our three reasons data protection is so important. By understanding consciousness, we’d hopefully be able to give everyone the tools to make decisions that are both their own and in their best interest. That alone would drastically increase the chances of democracy prevailing. We’d also be in much better shape to preserve human equality: with the majority of people in control of their thoughts, we’d naturally see more people working towards worthy goals, increasing productivity and human ability as a whole.
If you have any other ideas or come across similar articles, please let me know via email or the comments. I really believe this will quickly turn into a serious issue, so the more people thinking about it the better.