What's the matter with social media?
The problem with platforms is not that they are for-profit or that they beguile viewers with "algorithms." Scale, context collapse, and virality are what make social media different from mass media.
What makes social media special? What makes it social?1 What makes it so difficult to regulate? Why do we worry about its bad effects incessantly even when we also consume it voraciously? What makes it different, in other words, from traditional mass media?
There are two incorrect answers to this question. The first is that social media companies are for-profit enterprises. But that can’t be what makes them different. The American media industry before social media—books, newspapers, radio, movies, television—was relentlessly profit-seeking; yes, the New York Times is and has always been a for-profit publication. We have regulated for-profit media companies before, and none of the people wringing their hands over social media consider radio even remotely problematic to regulate.2 A corollary of this answer is that social media is an advertising machine that gives out its “content” for free and is therefore uniquely malignant. Well, again, I have news for you: television was a similar advertising machine that gave out free content (news, dramas, sitcoms) in exchange for ads, and in fact the phrase “you’re not the customer (of social media), you’re the product” was originally coined for television.
The second answer—voiced by people as different as ex-Googler Tristan Harris and management scholar Shoshana Zuboff—is that social media companies have at hand some truly effective algorithms through which they capture our attention, transform our behavior, and make us addicted to their content. This, too, is suspect; as the journalist Will Oremus writes, Facebook’s much-vaunted algorithm often “serve[s] us posts we find trivial, irritating, misleading, or just plain boring.” Facebook engineers seem equally at sea when they try to understand what kind of content their users are most interested in. In other words, the “algorithm,” as the anthropologist Nick Seaver has written, is not some coherent entity sitting inside Facebook or TikTok with the engineers turning the dials; it’s just a word we’ve slapped on to cover up the chaotic dynamics of media production and consumption on the internet (on which more below).3 And last but not least, the brouhaha over “addiction” to media is very, very old. We were addicted to—perhaps we still are!—comic books, television, video games, and now social media; as an indicator of virtue, media consumption has never been highly rated.
But still, social media is different; regulating it is a challenge that we are still working through. There are really three things that make social media different from traditional, legacy media like movies, radio, and television. All three have to do with the collapse of traditional gatekeepers and the rise of new ones.
These three things are, in order: scale, context collapse, and virality. The scale of social media is tremendous; hundreds of millions of photos are posted on Facebook every day. How does one even begin to process these photos to see if they are legal and/or appropriate? This is the start of the problem. But the second problem is even harder: what even counts as “appropriate”? This is the essence of context collapse. On a social media feed, many different types of posts or content are collapsed together: a rant about Donald Trump (or Biden, take your pick) is mixed with cute baby and cat photos, pointers to self-help articles, and advice about COVID vaccinations. Is that post someone shared about the COVID vaccines being unsafe just a discussion between friends, or is it akin to a news announcer on television telling viewers that vaccines are bad? This question is not easy to answer for social media posts, where the “context” of a post is not clear at all and, in fact, keeps fluctuating. This leads to the final problem: virality, the idea that there is an inherent unpredictability about the audience for any post. Most posts are seen by no one; some posts are seen by a lot of people. But it is quite possible that a post that would otherwise have been seen by no one ends up being seen by millions of people through a kind of decentralized mechanism, what we call becoming “viral.” Ideally, you would apply different standards of regulation to a post seen by no one than to one seen by everyone; but it is very difficult to determine in advance which is which.
It is these three things together that make social media so difficult to get a grip on and to regulate. All of them would still be true if we decided tomorrow to nationalize all the social media companies or to turn off their recommendation algorithms. Will we eventually figure out how to manage scale, context collapse, and virality? Absolutely yes, just as we figured it out for mass media like television and radio. But these are thorny issues, far more complicated than slogans about making the internet public again.
Below, I’ll go into each of these factors in a little more detail.
Scale
Anyone can post on social media. Anyone with an internet connection and something to post. And it turns out that a LOT of people like to post. That’s scale.
But how many really? Mike Masnick, one of the reporters who has heroically covered content moderation even when it wasn’t the biggest topic in the world, told us way back in 2019 that 350 million photos are uploaded to Facebook every day. This is an astounding number. And as Masnick adds, this is just photos; we haven’t even talked about status updates, videos, links, and more.
Social media is difficult to regulate because of the sheer scale of this content. Some of it might be easy to catch algorithmically. But other cases are difficult. Here are some of the kinds of content that might be problematic in some way:
Content that is copyrighted by someone else.
Content that is clearly illegal like child pornography and threats.
Content that is illegal in certain contexts and appropriate in others: adult photos, pornography, drugs.
And this is only the start! Consider, for instance, the question of breast-feeding photos, which put content moderation into the spotlight: are these pornographic or not? People disagree. Or consider a photo that has violent content: is it inappropriate or newsworthy? Or someone posting about the COVID vaccine: is it true or false or something else?

To catch illegal and variously problematic content, social media companies often use a mixture of algorithms (a program scans your content for, say, skin color and problematic words) and flags (another user “reports” content as in some way problematic). But many things flagged by humans and algorithms are too complicated and need an actual human being to look at them. That’s why social media platforms employ commercial content moderators: paid employees who are given a set of rules and then asked to figure out whether they apply to the problematic content. Is this Holocaust denial or something else? Is this a real threat or just something someone said in jest?
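To make that triage concrete, here is a minimal sketch of the logic, not any platform’s actual pipeline; the thresholds, the category names, and the idea of a single “classifier score” are my own illustrative assumptions:

```python
# A minimal sketch of moderation triage: an automated score plus user flags
# decide whether a post is removed outright, sent to a human moderator,
# or left up. All thresholds and names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    flags: int = 0                  # number of user reports
    classifier_score: float = 0.0   # 0.0 = benign, 1.0 = almost certainly violating

def triage(post: Post) -> str:
    """Return one of: 'remove', 'human_review', 'leave_up'."""
    if post.classifier_score > 0.98:
        return "remove"             # high-confidence violations are removed automatically
    if post.classifier_score > 0.6 or post.flags >= 3:
        return "human_review"       # ambiguous cases go to a commercial content moderator
    return "leave_up"

if __name__ == "__main__":
    queue = [
        Post("cute cat photo", flags=0, classifier_score=0.01),
        Post("borderline medical claim", flags=5, classifier_score=0.4),
        Post("clearly violating image", flags=1, classifier_score=0.99),
    ]
    for p in queue:
        print(triage(p), "-", p.text)
```

The point of the sketch is that the easy calls can be automated, but every ambiguous case lands in a human review queue, and at the platforms’ scale that queue is enormous.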
The scale of commercial content moderation is really difficult to imagine. See above for a photo from a talk by the legal scholar Evelyn Douek (via Masnick). In half an hour, Facebook takes down more than 600,000 pieces of content—and of course, it decides to leave up a lot of content as well after taking a look at it. Scale is a logistical problem with no clear solution. What would it even mean to moderate the content “accurately”? And even if moderation is accurate, that’s still a problem:
If there’s a 99.9% accuracy rate, it’s still going to make “mistakes” on 350,000 images. Every. Single. Day. So, add another 350,000 mistakes the next day. And the next. And the next. And so on.
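To spell out Masnick’s arithmetic (the 99.9% figure is his hypothetical, not a measured accuracy rate):

```python
# Masnick's hypothetical: 350 million photo uploads a day and a 99.9% accurate
# moderation system still produce 350,000 mistaken decisions every single day.
daily_uploads = 350_000_000
accuracy = 0.999

mistakes_per_day = daily_uploads * (1 - accuracy)
print(f"{mistakes_per_day:,.0f} mistakes per day")          # 350,000
print(f"{mistakes_per_day * 365:,.0f} mistakes per year")   # roughly 128 million
```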
But if scale is a logistical problem with no clear logistical solution, there is one mitigating fact: not all posts are the same. Most social media posts are not seen by anyone. Some posts are seen by a lot of people. And it’s not clear in advance which posts will be seen by no one and which will be seen by lots of people; that’s the problem of “virality” (on which more below). In other words, if social media allows “speech,” it does not necessarily guarantee “reach.” You are quite likely to be someone who posts a lot but whose posts are never read by anyone. Or you might get lucky and find your post going viral.
But the possibility of virality means that (a) perhaps we don’t need to apply exacting standards to a post if it’s just going to fade into the ether, but then (b) on the other hand, what if we didn’t apply those exacting standards and it did become viral? More on that in the section on virality but before we go there, there’s a bigger problem: what’s the context that makes a post “appropriate?” That’s what I’m calling “context collapse.”
Context collapse
The philosopher Helen Nissenbaum coined the concept of “contextual integrity” to think about the question of privacy. Nissenbaum argues that people live their lives in disparate zones and that there are different norms around information transmission in those zones. As she puts it:
People “are at home with families, they go to work, they seek medical care, visit friends, consult with psychiatrists, talk with lawyers, go to the bank, attend religious services, vote, shop, and more. Each of these spheres, realms, or contexts involves, indeed may even be defined by, a distinct set of norms, which governs its various aspects such as roles, expectations, actions, and practices.”
In other words, there is a set of norms about the disclosure of information related to particular activities. You might be much more frank about your health problems with your close friends and your doctor. Your doctor might talk about your health problems with another doctor, especially if they need help. But typically, you would not talk about your health problems with your work colleagues or with friends you are not that close to. And the doctors with whom you share your health problems do not share them with their other patients. You would tell your friends about the party you went to last night and what happened there; but you would probably not share it with your work colleagues or your boss—unless those work colleagues were also your good friends. The context of “health” is kept separate from the context of “work” and the context of “friends.” 4
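To make the intuition slightly more concrete, here is a toy sketch; the contexts, recipients, and rules are my own invented examples, not Nissenbaum’s formal framework:

```python
# A toy illustration of contextual norms around information flow. Whether a
# disclosure is appropriate depends on the pairing of context and recipient,
# not on the information alone. All entries here are invented for illustration.

norms = {
    ("health", "doctor"): True,
    ("health", "close_friend"): True,
    ("health", "work_colleague"): False,
    ("last_night_party", "close_friend"): True,
    ("last_night_party", "boss"): False,
}

def flow_is_appropriate(context: str, recipient: str) -> bool:
    """Default to 'inappropriate' when no norm covers the flow."""
    return norms.get((context, recipient), False)

print(flow_is_appropriate("health", "doctor"))          # True
print(flow_is_appropriate("health", "work_colleague"))  # False
```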
But these sorts of distinctions, which play an important role in everyday life, just collapse on social media, a phenomenon that analysts like danah boyd and Alice Marwick have called “context collapse.”5
Consider a person who posts a lot on social media. Since the platform I follow the most is Twitter, let’s focus on that (although this would apply to just about any platform). People who post prolifically on Twitter will post about many different things. Here is a hypothetical Twitter profile with a sequence of tweets:
A post about how the murder of the UnitedHealth CEO was justified
Another post about how burned out the poster is at the end of the semester
A post about the poster’s kids and the painting they did, with a picture of said painting
A post about how the poster’s mixer-grinder stopped working today
A post about the cookies the poster baked
A long detailed thread about an article the poster published in their professional capacity as a researcher
And so on …
Notice how many contexts these posts straddle. A political opinion is mixed with a personal incident and a professional achievement.
This is standard for social media, and what’s more, it’s part of what makes a social media poster authentic to their readers: the authenticity is partly derived from the mixing of contexts, i.e., context collapse. It’s because the poster shares information with their readers about all these different contexts that their readers think they are genuine.
But context collapse does not just happen at the level of individual social media posters. Context collapse is also deliberately engineered by social media companies who seek to mediate as many activities—i.e., contexts—as possible through their technical machinery. So Facebook wants to not just be the conduit through which you share updates with your friends and family but also a channel for political messaging, media distribution, book clubs, commercial advertising, personal shopping, and game playing. It therefore seeks to deliberately collapse contexts. There is no distinction in your Facebook News Feed (see above) between an update from your friend, the new video showing the highlights of the latest tennis match, the new T-shirt from your favorite clothing retailer, or the latest viral article from the news publication that you and your friends read regularly.
So Facebook does the same thing in its News Feed that its individual posters also do in their own postings: it mixes together various contexts in an effort to engage you with content that you like.
Traditional mass media was absolutely not like this. Take newspapers. Over time, newspapers and reporters instituted a whole host of procedures and norms to keep contexts separate. A newspaper page shows very clearly whether an article is news, opinion, or an advertisement. Television stations have different time slots for entertainment shows, news shows, and adult entertainment shows. Even advertisements for adult products are usually shown late at night. A network drama makes it very clear that it is not a political news show (even if it obviously has a certain political vision); people who want to watch the news are not going to tune in to a drama.
The communications scholar Tarleton Gillespie has called this social media’s problem of “checkpoints.” Traditional media, says Gillespie, had, over time, been successful at creating checkpoints to determine access to content. This includes, for example:
putting the X-rated movies in the back room at the video store, putting the magazines on the shelf behind the counter, wrapped in brown paper, scheduling the softcore stuff on Cinemax after bedtime, or scrambling the adult cable channel, all depend on the same logic. Somehow the provider needs to keep some content from some people and deliver it to others. (All the while, of course, they need to maintain their reputation as defender of free expression, and not appear to be “full of porn,” and keep their advertisers happy. Tricky.)
Social media has a harder time with a checkpoint system because “to run such a checkpoint requires (1) knowing something about the content, (2) knowing something about the people [who want to access this content], and (3) having a defensible line between them [i.e., knowing what contexts we want to keep separate].” And social media companies, just because of the scale of their content, and because the whole point is to mix contexts, have a very hard time doing this.
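Gillespie’s three requirements can be read almost as preconditions in code. Here is a toy sketch, not any platform’s real system; the rating categories and the age rule are my own illustrative assumptions:

```python
# A toy version of Gillespie's "checkpoint" logic: you must know the content,
# know the viewer, and have a defensible line between them. Names, categories,
# and the adults-only rule are assumptions made for illustration.

from typing import Optional

def checkpoint(content_rating: Optional[str], viewer_age: Optional[int]) -> str:
    # Requirement 3: the "defensible line" -- here, a simple adults-only rule.
    restricted = {"adult", "x-rated"}

    # Requirements 1 and 2: if we don't know the content or the viewer,
    # the checkpoint cannot be run at all.
    if content_rating is None:
        return "unknown content: cannot run checkpoint"
    if viewer_age is None:
        return "unknown viewer: cannot run checkpoint"

    if content_rating in restricted and viewer_age < 18:
        return "blocked"
    return "allowed"

print(checkpoint("adult", 15))    # blocked -- the video-store back room
print(checkpoint("adult", None))  # unknown viewer: cannot run checkpoint
print(checkpoint(None, 35))       # unknown content: cannot run checkpoint
```

On social media, the first two preconditions routinely fail: the platform often knows neither what the content really “is” nor who, exactly, will end up seeing it.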
What some consider the big problem with social media, “misinformation,” can actually be seen as a problem of context collapse, especially regarding the category of “journalism.” The ease of creating posts on social media has meant that traditional journalistic outlets are now challenged by newcomers who are not exactly doing journalism, at least as it was traditionally conceived. Consider the very popular Facebook page Occupy Democrats, which currently has 10 million followers. Their slogan? “We support the Occupy movement, oppose Trump, and we vote.” The page shares memes and articles continuously and these are in turn shared by its followers. Is this journalism? Partisan propaganda? Political education? Entertainment? It’s all of the above. Perhaps you like this group. But consider that there are thousands of other groups like this (maybe anti-mask or anti-vaccine groups), and when an article from such a group is shared in the same way as an article from the Times or Slate, we have what is sometimes called “misinformation” but is really a problem of context collapse.
Virality
If sheer scale and the collapse of contexts weren’t problem enough, virality ups the ante even more. But what is virality? Let’s take the plunge.
Consider two drug dealers making a deal, both using cellphones on AT&T’s wireless network. Typically, most of us will not hold AT&T responsible for the drug deal and the harm it causes to society. Instead, we will hold the dealers themselves responsible; the AT&T network was just a tool they used to carry out the deal, no more responsible for it than a park in which they might have met to make the deal face-to-face.
Now consider a different problem: the network station CBS, for reasons unknown to us, telecasts an episode with shockingly high levels of nudity—to the point where it might even be considered pornographic—in its prized 8 pm Thursday night slot. Most of us would argue that broadcasting this episode was primarily CBS’s fault rather than that of the episode’s creators or actors. CBS has a responsibility to its viewers to make sure that borderline pornographic content is not shown in its primetime slots. Perhaps CBS should have asked for more cuts. Perhaps CBS should have moved the program to a timeslot after 10 pm, when most children are asleep.
So, in one case, the medium (cellphone network) was not held responsible for the message (the drug deal). In the other, the medium (television) was held responsible for the message (the inappropriate episode).
These different social understandings come from our analysis of the nature of these communications. The first case, that of the two drug dealers, is a simple one-on-one interaction; its reach is limited to its intended recipient, and so we do not hold the medium responsible. The second case is a one-to-many interaction; CBS has millions of viewers, which means that its reach is very high. In return for this privilege, CBS is held responsible when its content violates broadcast standards.
So what is social media then? Is it more like a telephone conversation between two people or more like a television station? The answer is that this is very, very complicated. Interactions on social media run the gamut from a one-on-one telephone conversation to a mass broadcast. Some posts might start out as one-on-one conversations and then spiral out and become broadcasts. This is the thorny problem of virality. As the computer scientist Arvind Narayanan writes:
Virality is not popularity. It’s about whether the piece of content spread in the manner of a virus, that is, from person to person in the network, rather than as a broadcast.
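One way to make this distinction concrete, as a simplification of how researchers measure cascade shape rather than Narayanan’s own formulation, is to look at how deep the chain of person-to-person reshares runs. A minimal sketch with an invented cascade:

```python
# A toy illustration of broadcast vs. viral spread: the same number of people
# reached, but very different shapes. Each cascade maps a sharer to the people
# who reshared from them. Structure and names are invented for illustration.

def max_depth(cascade, node, depth=0):
    """Longest chain of person-to-person reshares starting from `node`."""
    children = cascade.get(node, [])
    if not children:
        return depth
    return max(max_depth(cascade, child, depth + 1) for child in children)

# Broadcast-like: one account reaches six people directly (one hop).
broadcast = {"CBS": ["v1", "v2", "v3", "v4", "v5", "v6"]}

# Viral-like: the same six people reached through a chain of reshares.
viral = {"original_poster": ["a"], "a": ["b"], "b": ["c"],
         "c": ["d"], "d": ["e"], "e": ["f"]}

print(max_depth(broadcast, "CBS"))          # 1 -- everyone got it from the source
print(max_depth(viral, "original_poster"))  # 6 -- it traveled person to person
```

Both cascades reach six people, but one reaches them in a single hop from the source while the other travels through a long chain of reshares; only the second is viral in Narayanan’s sense.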
To understand more about virality, I highly recommend this video essay, “Visualizing Virality,” by Samia Menon and Sahil Patel.
Virality is a boon for information entrepreneurs—everyone from activists to politicians to academics and journalists—because it changes the rules through which a cause or voice with low visibility can make itself heard to the multitudes. The reason Twitter tended to be so influential in its heyday—about 2012-2021—was that many important people, especially journalists, activists, and politicians, were on Twitter and so if you had a cause that you wanted to focus attention on, your job was really to get someone important to retweet it. And when politicians and journalists assessed “public opinion” and the opinion of influential people, they often looked for it on Twitter. And so if you could tweet enough to get engagement or make something into the topic of the day on Twitter, you could really improve the visibility of your cause.
Virality is also a curse because it can bring unwanted attention to people.6 The journalist Phoebe Maltz Bovy wrote last week about how dramas play out on Twitter and now on Bluesky. Bovy wrote something about Bluesky in her column and got piled on by Bluesky posters. And on Twitter, a graduate student from the UK posted a photo of her dissertation (and her joy at graduating with her PhD!) and was then subjected to a pile-on by right-wing Twitter posters who thought her dissertation was too woke. Clearly, the student who posted about her dissertation was not trying to market her cause to the world. (Or was she? This brings us back to the problem of context collapse. Was her post just an announcement to her friends about her joy at finishing her PhD and being done with her dissertation? Or was it meant to endorse the content of the dissertation, which had something to do with smells and the logic of injustice?) But her post was seen by millions of people on Twitter, some of whom responded to her with outrage and abuse, while others, who may never have heard of her otherwise, ended up defending her. Perhaps there is no better-known case of a viral tweet with disastrous consequences for the poster than that of Justine Sacco, who tweeted a joke about AIDS and Africa, boarded her flight to South Africa, and landed to find that she had been fired from her job.
Virality raises hard questions about how social media posts should be regulated or moderated. If a post is only going to be viewed by a few people, it makes sense to take a hands-off approach and protect the speech rights of the poster. But how do we know? Should Twitter have moderated a simple post in which the poster celebrated the completion of her dissertation (of which there are surely hundreds on any given day)? But then, is Twitter responsible for the fact that she was piled on and downright abused by social media mobs, or is it the people who retweeted her content to their followers and all but invited everyone to pile on her? Was Justine Sacco’s tweet prima facie racist or was it a joke? Perhaps, on another day, no one would even have noticed it and she would still have a job.
Conclusion: Scale, context collapse and virality make a potent mix for regulation
So, to sum up, social media is not a one-on-one communication medium like the telephone, nor is it a mass medium like television. It combines features of both with features that belong to neither. Gatekeeping and reach on social media are mediated by a wholly different set of factors than the ones we see in mass media.
None of these three problems of social media (scale, context collapse, and virality), and even more so their combination, can be settled by some simple policy fix. You can ban all the advertisements on social media—Twitter tried to ban all political advertisements—but that won’t solve the problem because most social media content is “organic,” not paid. Banning political advertisements just means that less well-known politicians lose an important channel for taking their cause to their constituents. You can even nationalize all the social media companies and force them to rewrite the “engagement” metrics they use in their recommendation algorithms. But that’s not going to solve the problems of scale and context collapse. You can break up the social media companies so that we live in a world where there is not just one Facebook but three or four of them. And all of them will still have to figure out how to regulate their content and deal with scale, context collapse, and virality.
Will we eventually figure it out? Yes. Just as we did with radio and television and the movies. But it’s important to remember that the reason social media is hard to solve is NOT the profit motive or the advertisements.
Paul Dourish (link to his website since I can’t find the place where he says this) says that the term “social media” is nonsensical because all media are, by definition, social. He is, obviously, correct, but there is a reason that social media was labeled “social”: people felt that it differed significantly from traditional “mass” media.
I have lived in a country in which, for years, there was only one television station and one radio station, both state-owned. That has its own problems—which should come as no surprise to anyone reading this.
It’s the opposite of the blind men and the elephant parable, if you will. The blind men were grasping the different parts of a whole coherent thing; they were missing the whole for the parts, the forest for the trees. But here it’s the reverse: we think there is an elephant (aka, the algorithm) when all we have is a few disparate things that we insist are parts of this much more coherent thing called the algorithm.
Not just because there’s a law that forbids them from doing so but because it is frowned upon in our social circles.
Elisabetta Costa has a wonderful paper about how social media users in Kurdish areas of Turkey do not experience context collapse because their Facebook usage is mediated through a different set of norms around audience segmentation.
This is tricky because, to some extent, attention is the currency of social media, so no attention is fundamentally unwanted.
I would suggest that a post on a wide-open network like Twitter is never *just* a post to one’s friends; such posts would be made in a private group like those on Facebook.
I agree with much of this, certainly the pushback against "algorithms" -- but I think it's not an accident that it's called social media:
"Social media is social. The primary information it conveys is social; the primary reason people use it is to get social information."
https://kevinmunger.substack.com/p/we-lived-in-a-society