By Lily Warner
Social media is a relatively recent phenomenon. The first social media platform, Six Degrees, was created in 1997, about 25 years ago. It took another six years, however, for social media to really take off, heralded by the emergence of sites like MySpace and Facebook.
Now, some of the richest people on the planet have made their fortunes through social media. They’ve done so through rampant exploitation of the industry’s product: you. Because social media is so new, there has been little to no protection from the dangers it can pose, and no guarantee that it will be safe for you, the user. The AI Bill of Rights, a blueprint recently released by the White House, is a proposed set of protections against social media.
So, what exactly does the Bill promise to do?
There are five problems that the Bill wants to solve. It wants to protect you from unsafe and ineffective systems, protect you from algorithmic discrimination, ensure data privacy, make sure you understand what you’re getting into, and guarantee human fallbacks. Let’s break those down one at a time.
The first protection, against unsafe and ineffective services, lays the foundation for regulations on the construction of algorithmic systems. This protection is all about prevention: it seeks to ensure that systems are built with input from “diverse communities, stakeholders, and domain experts” to identify any and all problems that could arise from a system’s implementation. AI systems should be designed without “an intent or reasonably foreseeable possibility of endangering your safety or the safety of your community.” It lays out the minimum requirements for who should be consulted in designing an AI system, and what concerns they should protect against.
The second proposal from the Bill, protection against discriminatory algorithmic processes, promises just that. Social media algorithms are, to put it simply, very effective information-sorting systems. (There is more nuance to that, of course, but for our purposes it is a fitting enough description.) The problem is that the information itself can be biased. Does misinformation ring a bell? Since social media algorithms are information-processing algorithms, they can be affected by misinformation just as much as we are, if not more. The Bill says that “designers, developers, and deployers of algorithms should take proactive and continuous measures to protect individuals from algorithmic discrimination and to use and design systems in an equitable way.”
The third proposal relates to data privacy. It promises protection against “abusive data practices” and “agency over how your data is used.” This is a promise a long time coming. Data protection has always been an issue, and with Facebook’s recent privacy scandal it has skyrocketed in importance. Teachers actively warn students about the dangers of social media and about how their data is being used without regard for their safety; parents discourage their children from connecting with their generation through sites like TikTok; everyone and their mother sees it as necessary to defend themselves from social media. So the fact that this issue appears in the AI Bill of Rights should not be surprising.
It says that “you should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected.” In other words, AI systems should be designed so that your privacy is protected by default, data collection stays within reasonable expectations, and data is collected only when strictly necessary. The Bill also says that “Designers, developers, and deployers…should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible.” When that is not possible, alternatives should be offered.
The fourth proposal has to do with TOS. Yes, the infamous terms of service, those long documents that nobody reads. The fourth part of the Bill, Notice and Explanation, aims to make the TOS shorter and more readable. It seeks to ensure that the designers, developers, and deployers of automated systems provide accessible and generally understandable information about the role their automated systems play: what they do, why they do it, and so on.
The final part of the Bill, Human Alternatives, Consideration, and Fallback, wants to ensure that automated processes have a back-up option. This mostly applies to automated systems explicitly created to assist with human processes: think automated customer service, 911 calls, that sort of thing. In “appropriate cases,” with appropriateness determined by the context of the situation, there should be a human alternative readily available to the user. In situations where the automated system fails, there should be an available human resource that can do what the automated system could. Additionally, in “sensitive domains” the automated system should be further refined to fit the domain. An automated 911 receiver, for example, should be held to a higher standard than a customer-service bot at Lowe’s.
The AI Bill of Rights is not a finished document. It promises a lot, but the actual protective measures it introduces are loosely defined and clearly unfinished. Then again, the document is only a blueprint. It is not a legal document, and these protections will likely not be put in place for at least a couple more years. They need to be reviewed, refined, and rewritten until they are sufficient for a full legal document. The AI Bill of Rights is a direction, a possible plan for the protections that might be necessary in the future, and it is not a perfect document.
But it is the first big step towards legislation around social media.