The metaverse is a cubicle

It’s been almost two years since I’ve used this thing. I doubt anyone has been holding their breath—if you were, apologies. Life got crowded. It’s still pretty crowded, but lately I’ve been feeling like maybe I should make room for some more newslettering.

What I most liked about doing this Substack was that it gave me a place to put a certain kind of provisional, half-baked thinking—a drafts folder of sorts. Every so often a Take materializes in my mind that is too long for Twitter and not thought-through enough for a proper piece, and it’s nice to have a home for such thoughts.

Today’s half-baked thought is about the metaverse. Sorry.

But first, a bit of self-promotion. I’ve got a book coming out next June. It’s called Internet for the People, and it tells the story of how the internet was privatized, and how privatization set in motion the crises that consume it today. The book project actually grew out of an old newsletter post, though it evolved a lot in the intervening years. Anyway, you can pre-order it from Verso.

On to the newsletter. As always, if you’re reading this on the web and you want this in your inbox, you can subscribe.

From virtual boys to lawnmower men

What will the internet look like in 10 years?

One answer is the “metaverse.” At least, this is what Facebook, Microsoft, and a handful of other tech companies are saying. It’s not exactly a new idea. The dream of an immersive, embodied internet is an old one. It predates the modern internet itself: the “cyberspace” of William Gibson’s immensely influential Neuromancer (1984) envisioned users plugging their nervous systems into a networked sensory environment, at a time when the internet was in its infancy. And VR has an equally long history. It saw a boom in the late 1980s and early 1990s, and a bunch of headsets came out—I rented a Virtual Boy from a Blockbuster for one disappointing weekend in 1995—before the hype cycle hit a wall of consumer indifference and the bubble popped.

If the metaverse is an old dream that’s never quite taken flight, why would this time be any different? Many observers argue that it’s not—that Zuckerberg et al are spinning up another hype cycle, same as the old, and everything will come crashing down soon enough. “VR is a bit like a rich white kid with famous parents: It never stops failing upward, forever graded on a generous curve, always judged based on its ‘potential’ rather than its results,” writes David Karpf in Wired.

Fair enough. But I’d like to propose the possibility that this time actually is different, for a couple reasons.

The first (and least important) reason is technical. As Karpf admits, the technology has come a long way: “As a technical matter, we could pretty much cobble together a 1.0 version of the Metaverse or the Oasis next week.” Now, it would suck: what stood out to me about Zuckerberg’s supremely weird metaverse presentation at the annual Facebook Connect conference was just how shitty the tech was. (Why don’t avatars have legs?!) Still, VR is way more sophisticated than it used to be. Something halfway usable is emerging from all of those billions of dollars being pumped into VR and AR by tech companies and venture investors. Facebook alone is now spending $18.5 billion a year on VR/AR R&D.

But it doesn’t matter how good the technology is if there’s nothing to do with it. As Karpf puts it, “VR’s limiting flaw might instead be on the demand side.” This is a point frequently made by Benedict Evans: there’s still no “killer app” to drive widespread VR adoption. “The issue I circle around is not just that we don’t have a ‘killer app’ for VR beyond games,” he wrote in a post last year, “but that we don’t know what the path to getting one might be.”

This brings me to the second (and more important) reason that this time is different: the pandemic. The pandemic has generated a confluence of factors that, in my view, is conducive to a particular version of the metaverse taking root.

White-collar blues

What are those factors?

First, to state the obvious: the pandemic is reorganizing white-collar work by making remote and hybrid working arrangements more common. It’s important to note that these arrangements are at present limited to a fairly small portion of the workforce, as Doug Henwood points out. The latest Bureau of Labor Statistics numbers from October say that only 11.6 percent of the workforce—18 million people—“teleworked or worked at home for pay at any time in the last 4 weeks because of the coronavirus pandemic.” And, as you might expect, teleworkers are concentrated in the higher-end professions: tech, law, finance. Still, the fact that 18 million workers are still WFH this long into the pandemic is a big deal.

Why has WFH persisted for so long, and why is it likely to continue, or even to grow? A few reasons:

  • The pandemic keeps going: A bunch of companies, especially in tech and finance, were eager to push for an office return in September 2021. They had to postpone those plans, largely due to the delta variant. Now we’ve got omicron. It’s anyone’s guess how much disruption omicron will bring. But there will certainly be more bad variants (so long as vaccine apartheid and vaccine hesitancy continue) and, beyond that, more novel zoonotic diseases, because the forces that underlie the emergence of those diseases—deforestation, industrial agriculture, climate change—show no signs of abating. There will also be all sorts of other disasters as the world continues to heat up and various natural systems go haywire.

    So remote/hybrid will probably be a permanent adaptation to an increasingly inhospitable biosphere. Back in May 2020, 35 percent of the US workforce was teleworking. I don’t see why we couldn’t get back to that number, or even exceed it, if the worst-case scenarios for the next decade play out.

  • Workers want flexibility: WFH is popular with workers, as study after study shows. Drawing on 33,250 survey responses collected from May 2020 through March 2021, a paper by Jose Maria Barrero, Nicholas Bloom, and Steven J. Davis found that most workers want to work from home two or more days per week even after the pandemic is over. The sentiment appears to be global: the World Economic Forum did a survey of 12,500 workers in 29 countries and found that a “majority (66%) said employers should allow more flexible working in the future.” And support for WFH is consistently strongest among women and people of color.

    WFH’s popularity among workers has led to a number of confrontations between the rank-and-file and management when the latter has tried to cut down on remote work. (For example, at Apple.) The failure of the September 2021 return-to-office push wasn’t just about delta; it was about workers successfully defending WFH. The high-pressure economy that’s producing unprecedented quit rates (the Great Resignation) is giving these workers ample power to push back against managerial attempts to force them back into the office. (Barrero, Bloom, and Davis found that more than forty percent of Americans who are WFH at least one day a week would look for another job if their employers made them go back to the office full-time—that’s a very large number.)

  • Employers want labor savings: Facebook, one of the more remote-friendly companies, announced in June 2021 that nearly all of its employees could continue to work from home indefinitely. But their salaries would be adjusted for the labor market of wherever they decided to live, so somebody moving from San Francisco to Reno would take a pay cut.

    This suggests another reason that remote/hybrid will endure: because it gives companies a way to cut wages. Now, I haven’t seen evidence that this is happening yet at any scale. And the much-discussed Covid exodus from big cities didn’t actually happen; people have mostly been staying put. Still, as remote/hybrid arrangements become more normalized in white-collar workplaces, and as more companies recruit for remote-only positions from a national labor pool, the opportunities for wage arbitrage by employers increase.

So some significant role for remote/hybrid in white-collar workplaces is probably here to stay. But the transition is not without its difficulties. What are some of those difficulties?

  • Collaboration: Opinions vary widely on the question of whether remote/hybrid negatively affects productivity. Workers tend to think it doesn’t; many executives think it does; but the reality is that productivity is notoriously hard to measure in most white-collar settings, so it’s basically unknowable.

    Collaboration is probably a more useful way to evaluate work quality. How well do workers collaborate in a remote/hybrid setup as opposed to a fully in-person one? Here, even the most pro-WFH worker must concede that there’s a lot that doesn’t work well. Zoom fatigue is real, workplace communication software is mostly pretty terrible and uncreative (Slack is IRC with emojis), and hybrid in particular presents all sorts of headaches.

    There’s also the matter of how one generates affective attachments among coworkers that contribute to a sense of social cohesion—what tech companies in particular like to call “culture.” The goal here is to make workers feel more connected to their work by making them feel more connected to one another. Thus the importance of the office “campus,” ping-pong tables, offsites, “team-building,” etc. Some companies that have been remote-first for a while, like GitHub, have put a lot of thought into how to do “culture” in a distributed way. But suffice to say, it’s not easy, especially with the existing state of collaboration software.

  • Managerial control: If people are working at home, how do bosses know they’re getting work done? You can’t shoulder-surf to see what they’re up to; you can’t walk around the office to see who came in too late, left too early, or took too luxurious a lunch break. This fear of losing control is clearly what’s driving some of the skepticism among executives about remote work. And it’s also driving the proliferation of “bossware”: software used to surveil remote workers. Here’s how Ali S. Qadeer and Edward Millar put it in a recent piece:

    “With the recent large-scale shift to remote work since the onset of the COVID-19 pandemic, digital workplace managerial tools have proliferated at a geometric rate. Software such as Activtrak, Hubstaff, and Teramind all boasted a tripling of demand in the early months of the pandemic. As the scale and scope of remote work increases, the essence of managing ‘work from home’ remains rooted in the principles of scientific management: tracking mouse movements, recording workers’ screens, and surmising attention and time on task.”

    The most intrusive and repressive forms of bossware are targeted at low-wage workers like call center operators. But, as a new report by Wilneida Negrón discusses, softer forms of surveillance are also becoming more pervasive. Take Microsoft Workplace Analytics, a product that “assigns every employee an ‘influence score’ that indicates ‘how well-connected a person is within the company,’ based on extensive email, calendar, call, and chat data.” There’s also presumably a lot of surveillance happening that doesn’t rely on specialized software: managers keeping an eye on Slack statuses or git commits, say. As hedge fund manager turned thought leader Ray Dalio says, there are plenty of ways to keep an eye on remote workers short of installing a keystroke logger on their machines: “You don’t have to have them in the office. There are so many tools that make it clear how productive people are.” Dalio would know; the surveillance culture at his fund Bridgewater was famously brutal.
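Metrics like that “influence score” need surprisingly little to work: communication metadata alone is enough. Here’s a purely hypothetical sketch—the function, names, and sample data are my own invention, not Microsoft’s actual methodology—showing how counting distinct contacts in email or chat logs already yields a crude connectedness ranking:

```python
# Hypothetical sketch of a metadata-based "influence score" —
# not any vendor's actual method, just an illustration of how
# little data this kind of scoring requires.
from collections import defaultdict

def influence_scores(events):
    """events: (sender, recipient) pairs harvested from email/chat/call logs.
    Score = number of distinct colleagues a person communicates with."""
    contacts = defaultdict(set)
    for sender, recipient in events:
        contacts[sender].add(recipient)
        contacts[recipient].add(sender)
    return {person: len(peers) for person, peers in contacts.items()}

logs = [("ana", "bo"), ("ana", "chen"), ("bo", "chen"), ("ana", "dee")]
scores = influence_scores(logs)
print(scores)  # ana, who talks to everyone, comes out "best-connected"
```

Real products presumably layer weighting and network analysis on top, but the raw material is the same: logs that workers generate simply by doing their jobs.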

The Matrix meets Office Space

So where does that leave us? To summarize:

  • Employers want to find a way to exercise managerial control over remote workers, to bring the collaborative and socially cohesive aspects of in-person work to remote/hybrid environments, and to push work to lower-wage regions.

  • Employees want to retain flexibility around WFH while also finding a way to mitigate the unpleasant elements of WFH (Zoom fatigue, hybrid headaches, anxieties about lack of visibility leading to being passed over for promotions, etc).

If you put this all together, I think you start to see the contours of a particular version of the metaverse emerging. It looks like Office Space by way of The Matrix. It promises to give both employers and employees enough of what they want that it might come to be seen as the necessary cost of a permanently remote/hybrid white-collar world.

I don’t think this is some huge theoretical insight; it seems to be one of the main business strategies among metaverse architects. It’s why Facebook is pushing Horizon Workrooms (where your avatar can sit leglessly in a cartoon conference room), why Microsoft is pushing Mesh (its metaverse offering), and why Accenture recently bought 60,000 Oculus headsets. And the way the metaverse’s builders and boosters talk about its advantages is very much keyed to the set of desires and concerns I laid out above; in Microsoft’s metaverse demo, Ellyn Shook, Accenture’s chief leadership and human resources officer, said the technology “[enables] presence and connection that transcends location, keeps our culture vibrant wherever we’re working, and levels the playing field to create equal and inclusive experiences.”

Significantly, this is not the metaverse as hedonist escapist fantasy-land—the new Vegas, as Izabella Kaminska argues—but rather as the new cubicle: the new organizing architecture of white-collar work. The cubicle was first introduced as a kind of balancing act between the need to give white-collar workers some personal space to let them concentrate while also enabling collaboration and managerial surveillance. The metaverse might offer another way to strike the same balance under very different circumstances.

Now, precisely where that balance lands is an open question—and I would argue that it has everything to do with the balance of class forces. How surveillant this metaverse would be, for example, would come down to how much power workers have to push back, either by quitting or by engaging in collective action. Also, just like with the cubicle, it’s entirely possible, even likely, that the metaverse makes white-collar work worse—or maybe just bad in a different way. A new virtual workplace in which everyone is equally “present” and “connected” might be one that makes everyone feel more absent and alienated, especially if the tech doesn’t get much better; I found the scene of two legless avatars playing ping-pong in the Microsoft metaverse demo terribly depressing. There are also physiological costs to keeping a VR headset strapped to your head for any length of time; metaverse fatigue could very well make workers nostalgic for Zoom fatigue.

As with all technology, however, the metaverse doesn’t have to deliver on its promises, or even work particularly well, to be deployed. Especially if Facebook keeps dropping the price of VR headsets—its lowest-spec Oculus Quest 2 is already the cheapest headset available ($300)—and keeps pumping tons of money into R&D, which will probably help with some of the above problems (VR ping-pong with your coworkers will get less depressing).

PC load letter

A parting thought: a lot of the commentary on VR assumes that consumer demand is what matters for adoption because it’s a consumer technology. In this view, VR is the new VCR; the tipping point is reached when you’ve persuaded enough people to walk into Target and buy a headset.

But if the metaverse is the new cubicle, then consumer demand isn’t what will make it mainstream. Rather, you have to look at how the technology interacts with, and is shaped by, the relations of production in particular industries and workplaces—which is to say, the practices, habits, desires, fears, interests, and anxieties that mediate the relationship between the people who do the work and the people who oversee the work and own the product. If you do, I think the path to the mainstreaming of the metaverse becomes clearer.

Updated to correct the BLS numbers on number of teleworkers. Thanks to Sean Collins for catching my errors!

The old gods are dead

2020’s first installment of Metal Machine Music will be a little thin, I’m afraid. Life has been getting in the way of newslettering lately—a trend that’s likely to continue—but there were a few things I wanted to share with you.

Making the most of the techlash

First, and mainly, I wrote a long piece for the new issue of Logic that just went online this week. It’s called “From Manchester to Barcelona,” and it’s an attempt to think through the relationship between capitalism and the internet (or “tech,” if you like). The ideas in it have been percolating for a while, and have gone through multiple iterations, so it’s gratifying to have it out in the world.

Here’s a quick summary of the main points:

  1. The techlash is a constructive development, but it’s mostly been performing the labor of the negative: it has done the (invaluable) work of demolishing the old techno-utopian-libertarian pieties, but it’s still far from clear what new ideologies will rise up to replace them. There’s a bit of a scramble for hegemony at the moment when it comes to the next big narrative about tech: different camps are putting forward different alternatives, but no clear winner has emerged.

  2. So far, the left has played a very small (perhaps nonexistent) role in this conversation. There is no shortage of brilliant left thinkers out there thinking about tech—read Logic!—but it’s safe to say that a clear left agenda for tech hasn’t yet materialized. If you read Bernie Sanders’s interview with the New York Times editorial board, you’ll see what I’m talking about: when asked about tech, he has trouble differentiating his approach from liberal antitrust.

  3. How does the left come up with an agenda for tech? What’s the tech equivalent of Medicare for All, the Green New Deal, and so on? A good way to start, I think, is to put capitalism at the center of our story. Even in the midst of the techlash, tech is too often thought about in isolation from capitalism—or when the word is invoked, it’s used imprecisely. This is a problem, because if we can’t think about capitalism clearly—or acknowledge that such a system even exists—we’re going to have trouble thinking about tech, much less coming up with a plan to improve it.

  4. The bulk of my piece explores how tech acts through and within capitalism, as an agent and accelerant of its core dynamics. I examine how tech intensifies capitalism’s tendencies to generate imbalances of wealth and power, and to heighten the hierarchical sorting of human beings according to race, gender, and other categories. Towards the end of the piece, I also offer some provisional thoughts, drawn from the past and present of social movements, on how to combat these tendencies by democratizing (or dismantling) tech.

Anyway, read the piece and tell me what you think.

Other readings

Before I let you go, some of the things I’ve been reading lately:

Discipline at a distance

Welcome to your next installment of Metal Machine Music. I’ve fallen a little behind on my weekly cadence—which, fair warning, will probably keep happening. But I was very pleasantly surprised by how widely my last post, “Platforms don’t exist,” traveled. Jacobin published an edited version, and I’ll be doing a couple of interviews this week on the themes of the piece. It might also serve as the basis for a bigger project.

Anyway, on to today’s newsletter. As always, if you want this in your inbox, you can subscribe.

Elastic factories

Katrina Forrester has a very interesting piece in the London Review of Books called “What counts as work?” It’s a review of a new-ish book by Colin Crouch, Will the Gig Economy Prevail?, and it has some important insights into what we call the gig economy.

One of the questions I always struggle with when thinking about digital things is the precise balance between continuity and discontinuity. What’s old and what’s new? The mainstream tech conversation tends to emphasize discontinuity—everything digital is treated as a sharp departure from the past. Clearly, this interpretation serves certain interests: if particular products and services really are unprecedented, then the firms that produce them acquire a certain prestige as innovators, and can make the case that laws and regulations that might impinge on their profits are too antiquated to apply to this brave new world.

We might be tempted to react to this narrative by drawing the exact opposite conclusion—that nothing about “tech” is novel. But this would be wrong. There are discontinuities and continuities, and they’re often deeply entangled with one another. Trying to identify which is which, and how they’re connected, is essential for thinking through what tech is and how it works.

This is the approach that Forrester takes in her review. On the one hand, she makes the point that the gig economy strongly resembles the “putting-out” system that existed in an earlier era of capitalism in the Global North, and which still exists in the Global South. Under such a system, subcontractors perform piece work. “Non-standard” employment prevails—that is, not formal, full-time work of the kind we have come to see as normal. For most of the history of capitalism, in fact, normal work was non-standard employment:

Historically speaking, standard employment has been the norm only briefly, and only in certain places. Until the ‘industrious revolution’ of the 18th century, work was piecemeal. People worked where they lived, on the farm or at home: in the ‘putting-out system’—which still exists in cloth production in parts of the global South—manufacturers delivered work to workers, mostly women, who had machinery at home and organised their work alongside their family life. Then work moved out of the home. Over the next two centuries, the workforce was consolidated into factories, then into offices. Waged work was standardised, then became salaried.

Reading this, I’m reminded of a passage from Michael Denning’s “Wageless Life”:

Unemployment precedes employment, and the informal economy precedes the formal, both historically and conceptually. We must insist that ‘proletarian’ is not a synonym for ‘wage labourer’ but for dispossession, expropriation and radical dependence on the market. You don’t need a job to be a proletarian: wageless life, not wage labour, is the starting point in understanding the free market.

Clearly there’s a continuity here: what we now call “gig work” is a permanent feature of capitalist economies. That doesn’t mean it always looks the same, however. “Modern precarity takes a distinctive form,” Forrester writes, “which is a result of the major political and economic changes of the 1970s.” These changes are known by a few different names—neoliberalism, post-Fordism, deindustrialization—but their consequence is the erosion of standard employment, particularly its “enriched” social-democratic variant, which secured a range of rights and benefits for a significant portion of the workforce in the Global North.

Where does “tech” fit into all of this? One argument often heard on the left is that tech companies owe their fortunes mostly to legal and political maneuvering rather than to technological innovation. Uber seems like a case in point. Their business model rests on the fiction that drivers are independent contractors, a fiction that they help sustain with lots of lobbying dollars. But the technology also matters. Depending on the firm, it may not be the single most decisive factor in how they make money. But it does have a specific effectivity of its own.

This specific effectivity is at the heart of how the gig economy relates to the putting-out system. One of the problems with the putting-out system is that the capitalist who pays the various subcontractors doesn’t have much control over the labor process. If people are doing piece work at home, they are generally working at their own pace, on their own terms, with their own tools. Capitalists can’t transform the labor process because they don’t control it. The rise of the modern factory system is in large part a response to this problem: manufacturers begin to put workers under the same roof in order to more closely control their work. This greater control in turn enables a (very) full working day, speed-ups, mechanization, a complex division of labor—all of which greatly enhance profitability.

Yet this new model also creates problems of its own. Concentrated in factories, workers are now potentially a lot more powerful. They can disrupt production far more easily and at a far greater scale than they could as relatively isolated subcontractors in a putting-out system. Thus the extraordinary militancy of the industrial worker, which, as Beverly Silver explores in her book Forces of Labor, crops up wherever mass production appears.

But what if you could have the advantages of both systems? What if you could control the labor process and keep workers as relatively isolated subcontractors? This is precisely what networked digital technologies make possible. As Forrester writes:

What is new about the gig economy isn’t that it gives workers flexibility and independence, but that it gives employers something they have otherwise found difficult to attain: workers who are not, technically, their employees but who are nonetheless subject to their discipline and subordinate to their authority.

Creating a hybrid of the factory and the putting-out system is feasible because networked digital technologies enable employers to project their authority farther than before. They enable discipline at a distance. The elastic factory, we could call it: the labor regime of Manchester, stretched out by fiber optic cable until it covers the whole world.

It’s important to note that this isn’t a recent phenomenon. It’s been going on ever since computers, and more specifically computer networking, began entering the corporate world. Joan Greenbaum, in her book Windows on the Workplace, talks about how even before the internet, computer networking let companies relocate “back-office” functions offsite and, eventually, offshore. Mainstream commentators are likely to put the emphasis on communication when describing this phenomenon. The very terms that are used to describe these developments—telecommunications, information and communications technology (ICT)—reflect that emphasis. But as good cyberneticians, we know that communication is also always about control. And when we situate the rise of networked digital technologies within the broader history of capitalism, it becomes clear that control—specifically, control of the labor process—is where our emphasis should be.

That said, there are novel elements to how the current crop of networked digital technologies are implementing discipline at a distance. (See what I mean about how entangled the continuities and the discontinuities are?) The sophisticated forms of algorithmic management deployed by a company like Uber through their driver app wouldn’t be possible without various advances in machine learning and the development and proliferation of the smartphone, for instance.

How does one organize in the elastic factory? Uber and Lyft drivers are figuring it out, partly by building their own apps. It’s fair to say that such workers have less structural power at the point of production than, say, autoworkers in the 1940s. But they certainly still have some power, and they’re currently innovating the organizational forms that will help them exercise it.

Municipal algorithms, model cards, and other things

A few other things I’ve been reading and thinking about:

  • Excel jockeys: In 2017, the New York City Council established the Automated Decision Systems (ADS) Task Force to examine how local government agencies were currently using automated decision systems and to propose guidelines for how they should use such systems in the future. It was the first of its kind in the country, and it generated a lot of excitement. Two years later, the task force’s report has finally been published. It’s pretty thin, and Albert Fox Cahn, who served on the original task force as a representative of CAIR, the Muslim civil rights organization, has a piece in Fast Company that helps explain why. It seems that city officials stonewalled the task force after realizing it wouldn’t just serve as a rubber stamp. One interesting point of contention was the very definition of an automated decision system. Cahn writes:

    City officials brought up the specter of unworkable regulations that would apply to every calculator and Excel document, a Kafkaesque nightmare where simply constructing a pivot table would require interagency approval. In lieu of this straw man, they offered a constricted alternative, a world of AI regulation focused on algorithms and advanced machine learning alone.

    The problem is that at a moment when the world is fascinated with stories about the dire power of machine learning and other confabulations of big data known by the catchphrase “AI,” some of the most powerful forms of automation still run on Excel, or in simple scripts. You don’t need a multi-million-dollar natural-language model to make a dangerous system that makes decisions without human oversight, and that has the power to change people’s lives. And automated decision systems do that quite a bit in New York City.

    This is an important point: the most consequential algorithmic systems are often not particularly advanced. Think about something like shift scheduling software—it’s way less complex than the Facebook News Feed, but it arguably has far greater impact on the lives of millions of low-wage service workers.

    To return to the question of automated decision systems, what are some examples of those systems? AI Now has produced a valuable report outlining the different kinds of automated decision systems deployed by various government agencies around the country. AI Now’s Meredith Whittaker, another member of the New York task force, has also been critical of how the city handled the initiative: you can listen to her talk about it on WNYC. Finally, AI Now is hosting an event devoted to automated decision systems in New York on Saturday; if you’re nearby, you should go.

  • RTFM: A group at Google that includes Margaret Mitchell, Timnit Gebru, Parker Barnes, and others has launched a public “model cards” site for two features of Google’s Cloud Vision API: Face Detection and Object Detection. Initially proposed in a paper earlier this year called “Model Cards for Model Reporting” by Mitchell et al, model cards are intended to give more context on how a machine learning model works, what its limitations and trade-offs are, and how its performance varies across different conditions—the skin tone of a person’s face, for example. In Mitchell’s words, it’s an “example of what transparent documentation in AI could look like.”

    There are limits to algorithmic transparency—it can’t take us nearly as far as we need to go, and in certain cases can play a diversionary role—but explaining how machine learning systems work (and making them explainable in the first place) is an integral element of any political project to democratize AI. So I’m excited about Google’s model cards, and I look forward to seeing the experiment develop.

  • Thermidor: Speaking of Google, the employees who were fired last week as part of the company’s ongoing crackdown on organizers—engineered with the help of IRI Consultants, a union-busting consulting firm—have filed Unfair Labor Practice (ULP) charges with the National Labor Relations Board. Google almost certainly violated federal labor law by terminating the employees for engaging in protected concerted activity, not to mention subjecting them to intimidating interrogations where they were asked to provide names of other organizers. That’s no guarantee of a favorable verdict from the NLRB, of course—labor law is easily broken in this country—but they have a strong case.

  • Stocking stuffers: There’s a new book by Charlton D. McIlwain called Black Software that I really need to read. According to J. Nathan Matias’s liveblog of a talk that McIlwain gave at the Strand, the book “draws an analogy between the development of cocaine and crack cocaine in the 1980s and the history of the tech industry.” I’m interested!

  • Oily data: Last week, we released a piece from Logic’s new “Nature” issue called “Oil is the New Data.” Written by a Microsoft engineer, it’s a firsthand account of how tech companies are helping the fossil fuel industry use machine learning to intensify extraction. A must-read.

Platforms don't exist

This week’s newsletter is a little unusual. It only has one section, which is devoted to sketching out some possible contours of a left tech policy. In what follows, I take the basic principles of decommodification and democratization and try to come up with a model for how to apply them to our actually existing digital sphere.

Read on! Or subscribe.

What to do about the internet

What should we do about Google, Facebook, and Amazon? People from across the political spectrum are urgently trying to answer this question. So far, however, relatively few answers have come from the socialist left. At least in the United States, the cutting edge of the platform regulation conversation is dominated by the liberal antitrust community, perhaps best represented by the Open Markets Institute. They have some good ideas, and they’re serious about confronting corporate power. But they come from the Brandeisian reform tradition. Their horizon is a less consolidated capitalism: more competitive markets, more smaller firms, more widely dispersed property ownership.

For those of us with our eye on a different horizon, one beyond capitalism, this approach isn’t particularly satisfying. There are elements of the antitrust toolkit that can be very constructively applied to the task of reducing the power of Big Tech and restoring a degree of democratic control over our digital infrastructures. But the antitrusters want to make markets work better. By contrast, a left tech policy should aim to make markets mediate less of our lives—to make them less central to our survival and flourishing.

This is typically referred to as decommodification, and it’s closely related to another core principle, democratization. Capitalism is driven by continuous accumulation, and continuous accumulation requires the commodification of as many things and activities as possible. Decommodification tries to roll this process back, by taking certain things and activities off the market. This lets us do two things:

  1. The first is to give everybody the resources (material and otherwise) that they need to survive and to flourish—as a matter of right, not as a commodity. People get what they need, not just what they can afford.

  2. The second is to give everybody the power to participate in the decisions that most affect them. When we remove certain spheres of life from the market, we can come up with different ways to determine how the resources associated with them are allocated. In particular, we can come up with ways to make such choices collectively, by turning spaces formerly ruled by the market into forums of political contestation and democratic debate. If maximizing profit and maintaining class power were no longer the main considerations in the organization of our material world, what new sorts of arrangements could a democratic process generate?

These principles offer a useful starting point for thinking about a left tech policy. Still, they’re pretty abstract. What might they look like in practice?

Step One: Grab the low-hanging fruit

First, the easy part.

A portion of the internet is devoted to shuttling packets of data from one place to another. It consists of a lot of physical stuff: fiber optic cables, switches, routers, internet exchange points (IXPs), and so on. It also consists of firms large and small (mostly large) who manage all this stuff, from the broadband providers that sell you your home internet service to the “backbone” providers who handle the internet’s deeper plumbing.

This entire system is a good candidate for public ownership. Depending on the circumstance, it might make sense to have different kinds of public entities own different pieces of the system: municipally owned broadband in coordination with a nationally owned backbone, for instance.

But the “pipes” of the internet should be fairly straightforward to run as a publicly owned utility, since the basic mechanics aren’t all that different from gas or water. This was one of the points I made in a recent piece for Tribune about the Labour Party’s newly announced plan to roll out a publicly owned network and offer free broadband to everybody in the UK. It’s good politics and, even better, it works. Publicly owned networks can provide better service at lower cost. They can also prioritize social imperatives, like improving service for underconnected poor and rural communities. For a deep-dive into one of the more successful experiments in municipal broadband in the US, I highly recommend Evan Malmgren’s piece “The New Sewer Socialists” from Logic.

Step Two: Taxonomize the fruit higher up the tree

Further up the stack are the so-called “platforms.” This is where most of the power is, and where most of the public discussion is centered. It’s also where we run into the most difficulty when thinking about how to decommodify and democratize.

Part of the problem is the name: “platform.” None of our metaphors are perfect, but I think it might be time to give this one up. It’s not only self-serving—it enables a service like Facebook to project a misleading impression of openness and neutrality, as Tarleton Gillespie argues—it’s also imprecise. There is no meaningful single thing called a platform. We can’t figure out what to do about the platforms because “platforms” don’t exist.

Before we can begin to put together a left tech policy, then, we need to come up with a better taxonomy for the things we’re trying to decommodify and democratize. We might start by analyzing some of the services that are currently called platforms and trying to discern the principal features that distinguish them from one another:

  1. The first is size. How many users does the service have? Sometimes this is an easy question to answer. Sometimes it’s not, because the way we define “user” will vary, and these differences may be substantial:

    • Sometimes what it means to be a user isn’t all that complicated. The number of monthly active users (MAU) of Facebook, the Google product suite, and Amazon Web Services (AWS) is easy to calculate.

    • But what about a service like Uber or Instacart, where you have both workers (“drivers,” “shoppers”) and customers? Both are users, but they’re using different parts of the service. So it probably makes sense to include both in the overall user count.

    • What about a service that has “targets” that aren’t exactly users? In last week’s newsletter, I talked about the Axon policing platform that enables law enforcement agencies to connect various devices and services—bodycams, tasers, in-car cameras, a digital evidence management system, smartphone apps, etc—into a single integrated portal. The users of this platform are police officers. The targets are the individuals whose information is being recorded and processed by the platform. Should they be included in the overall user count, even though they aren’t really users? If our goal is to measure the overall impact of the service, then the answer is yes.

  2. The second dividing line is function. What does the service do? Nick Srnicek, in his invaluable book Platform Capitalism, uses this approach to define five different kinds of “platforms,” though I’m inclined to use the word “services”:

    • Advertising services like Google and Facebook that hoover up personal data and monetize it by selling targeted ads.

    • Cloud services like AWS and Salesforce that sell various cloud-based “as-a-service” products to enterprise clients, from infrastructure-as-a-service (IaaS) to platform-as-a-service (PaaS) to customer relationship management (CRM).

    • Industrial services like Predix designed to support “industrial internet” applications like wiring up a factory with Internet of Things (IoT) devices and using the data that flows from them to optimize efficiency.

    • Product services like Rolls Royce and Spotify that “transform a traditional good into a service.” Rolls Royce is now renting jet engines to airlines, so that they pay by the hour instead of buying the whole thing up front, and using sensors and analytics to optimize maintenance. Spotify is turning albums into streams. The business model is subscription fees.

    • Lean services like Uber and Airbnb that match buyers and sellers while minimizing their own asset ownership. Matching isn’t all they do, however: gig-work services like Uber are also very much in the business of algorithmically managing and disciplining their drivers.

      One could think of more types of platforms. And I might quibble with some of Srnicek’s category choices—do Uber and Airbnb really belong in the same bucket? But if we’re looking to differentiate services by function, this list is a good place to start.

  3. The third way to split up services is by the kind of power they exercise. K. Sabeel Rahman wrote an interesting piece for Logic called “The New Octopus” that identifies three kinds of technological power:

    • Transmission power, which is “the ability of a firm to control the flow of data or goods.” He gives the example of Amazon’s massive shipping and logistics infrastructure controlling the “conduits for commerce,” as well as internet service providers (ISPs) controlling the “channels of data transmission.” We might also add AWS and other major cloud providers. A service like AWS S3 is essential to the flow of data across the modern internet.

    • Gatekeeping power, where the firm “controls the gateway to an otherwise decentralized and diffuse landscape.” He gives the example of Facebook’s News Feed or Google Search, which mediate access to online content. Here the power is held at the “point of entry” rather than across the entire infrastructure of transmission.

    • Scoring power, which is “exercised by ratings systems, indices, and ranking databases.” This includes automated systems for screening job applicants, for instance, or for informing sentencing and bail decisions.

Step Three: Enter n-dimensional space

We could spend a lot more time tweaking our taxonomy. But let’s leave it there, and return to the question of how we might decommodify and democratize our digital infrastructures. Given the wide range of services we’re talking about, it follows that the methods we use to decommodify and democratize them will also vary. The purpose of developing a reasonably accurate taxonomy is to help inform which methods we might use for each kind of service.

This is the logic behind Jason Prado’s argument in the latest edition of his Venture Commune newsletter, “Taxonomizing platforms to scale regulation.” Prado argues that we should be differentiating services by the number of users they have, and then implementing different regulations at different sizes. At 0-5 million users, for instance, a service should “only be subject to basic privacy regulations.” At 20-50 million, they should be required to publish “transparency reports about what data is collected and exactly how it is used.” At 100+ million, a service becomes “indistinguishable from the state” and therefore needs to be democratically governed, perhaps by a “governing board made up of owners, elected officials, platform developers/workers, and users.”
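Prado’s tiers can be read as a cumulative threshold scheme: each obligation kicks in at a certain user count and stays in force as the service grows. Here’s a minimal sketch of that logic in Python; the function name and the obligation strings are my own illustrative shorthand for the tiers described above, not anything from Prado’s newsletter.

```python
def obligations_for(user_count: int) -> list[str]:
    """Return the cumulative regulatory obligations at a given scale."""
    applicable = ["basic privacy regulations"]  # applies to every service, from zero users up
    if user_count >= 20_000_000:  # Prado's 20-50 million tier
        applicable.append("transparency reports on data collection and use")
    if user_count >= 100_000_000:  # "indistinguishable from the state"
        applicable.append("democratic governance by a multi-stakeholder board")
    return applicable
```

A small service keeps only the baseline obligation, while a 100-million-user service carries all three—the scheme scales regulation without requiring a hard legal definition of what a “platform” is.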

I like this basic approach, but I would expand it. Size is an important consideration, but not the only one. The service’s function and the kind of power it exercises are also significant factors.

We could certainly identify more factors. But for now let’s assume size, function, and kind of power are the three most salient features of a service. We could map each feature to an axis—x, y, and z—and then plot each service as a point somewhere along those three axes. Then, depending on where the service sits in our three-dimensional space (or n-dimensional, if we refine our taxonomy by increasing our number of features), we could select a method of decommodification and democratization that is particularly well suited to the service.
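To make the mapping concrete, here is a minimal sketch of the idea in Python: each service becomes a point whose coordinates are its taxonomy features, and a policy method is selected from its position in that space. The feature encodings, the selection rules, and the example service are all illustrative assumptions layered on the taxonomy above, not a worked-out policy.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    users_millions: int  # size axis
    function: str        # Srnicek's function axis: "advertising", "cloud", "lean", ...
    power: str           # Rahman's power axis: "transmission", "gatekeeping", "scoring"

def suggest_method(s: Service) -> str:
    """Pick a decommodification/democratization method from a service's coordinates."""
    if s.power == "scoring":
        return "abolition"  # e.g. predictive policing or algorithmic austerity systems
    if s.power == "transmission" or s.function == "cloud":
        return "public ownership"  # infrastructure runs well as a utility
    if s.function == "lean" and s.users_millions < 20:
        return "cooperative ownership"  # city-scale worker-owned alternatives
    return "non-ownership (open source)"

# A hypothetical city-scale ride-hail service lands in the cooperative bucket:
print(suggest_method(Service("ride-hail", 5, "lean", "gatekeeping")))
```

The point is not the specific rules, which are debatable, but the shape of the exercise: once services are plotted by feature rather than lumped together as “platforms,” different policy methods naturally attach to different regions of the space.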

What are some of those possible methods? Here are four:

  1. Public ownership: In this case, a state entity takes responsibility for operating a service.

    • These entities can be structured in all sorts of ways, and exist at different levels, from the municipal to the national. Services that exercise transmission power (Rahman) or those that involve the cloud (Srnicek) are especially good candidates for such an approach. Along these lines, Jimi Cullen wrote an interesting proposal for a publicly owned cloud provider last year called “We need a state-owned platform for the modern internet.” Public ownership is also probably best suited for services of a certain scale. At the largest size, however, governance can no longer be achieved at the level of the nation-state—at which point we need to think about transnational forms of public ownership.

    • Public entities can also be in the business of managing assets rather than operating a service. For example, they might take the form of “data trusts” or “data commons,” holding a particular pool of data and enforcing certain terms of access when other entities want to process that data: mandating privacy rules, say, or charging a fee. Rosie Collington has written an interesting report about how such an arrangement might work for data already held by the public sector called “Digital Public Assets: Rethinking value, access and control of public sector data in the platform age.”

  2. Cooperative ownership. This involves running services on a cooperative basis, owned and operated by some combination of workers and users.

    • The platform cooperativism community has been conducting experiments in this vein for years, with some interesting results.

    • What Srnicek calls “lean” services would lend themselves to cooperativization. A worker-owned Uber would be very feasible, for example. And there are all sorts of policy instruments that governments could use to encourage the formation of such cooperatives: grants, loans, public contracts, preferential tax treatment, municipal regulatory codes that only permit ride-sharing by worker-owned firms. It’s possible that cooperatives work best at a smaller scale, however—you might want a bunch of city-specific Ubers rather than a national Uber—in which case the antitrust toolkit might come in handy, since we would need to break up a big firm before cooperativizing its constituent parts.

    • We could also think of data trusts or data commons as being cooperatively owned rather than publicly owned. This is what Evan Malmgren recommends in his piece “Socialized Media”: a cooperatively owned data trust that issues voting shares to its members, who in turn elect a leadership that is empowered to negotiate over the terms of data use with other entities.

  3. Non-ownership. In some cases, services don’t have to be owned at all. Rather, their functions can be performed by free and open-source software.

    • There are plenty of reasons to be skeptical of open source as an ideology—Wendy Liu’s “Freedom Isn’t Free” is essential reading on this front—but free software does have decommodifying potential, even if that potential is suppressed at present by its near-complete capture by corporate interests.

    • This is another realm in which the antitrust toolkit could be helpful. In 1949, the Justice Department filed an antitrust suit against AT&T. As part of the settlement seven years later, the firm was forced to open up its patent vault and license its patents to “all interested parties.” We could imagine doing something similar with tech giants, making them open-source their code so people can develop free alternatives to their services. Prado suggests that a service’s code repositories should be forced open within six months of hitting 50-100 million users.

    • In addition to bigger services, I’d also argue that services whose business model is advertising (Srnicek) and those that exercise gatekeeping power (Rahman) would make good candidates for open-sourcing. One could imagine free and open-source alternatives to Google Search, for instance, or existing social media services.

    • Another useful idea drawn from the antitrust toolkit that could help promote open-sourcing is enforced interoperability. Matt Stoller and Barry Lynn from the Open Markets Institute have called for the FTC to make Facebook adopt “open and transparent standards.” This would make it possible for open-source alternatives to work interoperably with Facebook. It doesn’t get our data off of Facebook’s servers, but it starts to erode the company’s power by giving people various (ad-free) clients that can access that data and present it differently. If these interfaces caught on, Facebook would no longer be able to sell ads and its business would eventually collapse. At which point it could be refashioned into a publicly owned or cooperatively owned data trust that furnishes data to a variety of open-source social media services, themselves perhaps federated on the model of Mastodon.

  4. Abolition. Certain services shouldn’t be decommodified and democratized, but abolished altogether.

    • Governments deploy a range of automated systems for the purposes of social control. These include carceral technologies like predictive policing algorithms that intensify policing of working-class communities of color. (This is also an example of what Rahman calls scoring power.) Scholars like Ruha Benjamin and community organizations like the Stop LAPD Spying Coalition are applying the abolitionist framework to these kinds of technologies, calling for their outright elimination: in her new book Race After Technology, Benjamin talks about the need to develop “abolitionist tools for the New Jim Code.”

    • Another set of systems worthy of the abolitionist treatment are the forms of algorithmic austerity documented by Virginia Eubanks in her book Automating Inequality. In the United States and around the world, public officials are using software to shrink the welfare state. This deprives people of dignity and self-determination in a way that’s fundamentally incompatible with democratic values.

    • Another technology I would put in this category is facial recognition, which can be deployed by public or private entities. The growing movement to ban facial recognition, a demand advanced by a range of organizations and now embraced by Bernie Sanders, is a good example of abolition in action.

    • There is also an ecological case to be made for abolishing certain services, given the carbon-intensive nature of machine learning, the cloud, and mass digitization. This is a point I made in a recent Guardian piece.

One final note worth mentioning: while the goal of a left tech policy should be to strike at the root of private power by transforming how our digital infrastructures are owned, we will also need legislative and administrative rule-making to govern how those infrastructures are allowed to operate. This might take the form of GDPR-style restrictions on data collection and processing, measures aimed at reducing right-wing radicalization, or various algorithmic accountability mandates. These rules should apply across the board, no matter how the entity is owned and organized.

The above is a provisional sketch. It has lots of holes and rough edges. Plotting all the major services along three axes according to their features may ultimately be impossible—and even if it can be done, it runs the risk of locking us into an excessively rigid model for making policy. More broadly, there are severe limits to this sort of programmatic thinking, which can too easily tilt in a technocratic direction.

Still, I hope these thoughts offer some preliminary materials towards developing a left tech policy that takes the basic principles of decommodification and democratization and tries to apply them to our actually existing digital sphere. At the moment there is relatively little political space for such an agenda in the United States, but there may come a time when more space is available. It would be good to be ready.

Cloud fortress

This week’s Metal Machine Music is about:

  • what happens when you platformize the police

  • a roundup of readings on subjects ranging from the smart city to VC

  • a bit of history involving a group of French anarchists who went around destroying computers

Subscribe to get this newsletter in your inbox, or read on.

Platform-involved shootings

“Platform” is one of those words that, like “innovation,” has become so overextended as to become almost meaningless. If you wrote down all of the things that companies are calling platforms these days, you would end up with a very long list of very different things.

In fairness, this ambiguity has been there since the beginning. And it’s been a productive ambiguity, as Tarleton Gillespie explores in his 2010 classic, “The Politics of ‘Platforms’.” It has been useful for tech firms to define “platform” rather broadly, since the qualities associated with the metaphor—openness, neutrality—are ones that firms can use to absolve themselves of responsibility for what happens on their services. The business model of these firms rests on the fiction that they are not publishers (thanks Section 230!) and thus not liable, legally or otherwise, for the content that their services circulate. The platform metaphor is a valuable tool for sustaining this fiction—particularly now, as it comes under strain on Capitol Hill.

This is why, following Gillespie, it makes sense to see the platform as a discursive phenomenon as much as a technical one. It also probably makes sense to think of this phenomenon as a process rather than a thing: as an ongoing practice of platforming.

One place where the practice of platforming has been producing some alarming effects is the world of policing. Police platforms aren’t all that widely known: there’s been a lot of mainstream conversation in recent years about how law enforcement agencies are using ML-based technologies like “predictive policing” algorithms, but I’ve seen relatively little discussion of the platform angle.

That’s why I was excited to read Stacy E. Wood’s new article, “Policing through Platform.” Wood looks at a cloud-based platform service sold by Axon, the company that makes police tasers and bodycams. The platform enables law enforcement agencies to connect various Axon devices and services—bodycams, tasers, in-car cameras, a digital evidence management system, smartphone apps, etc—into a single integrated portal. It also makes it feasible for even small police departments to engage in some version of “big data policing” without the cost and headache of managing their own infrastructure—a bit like how AWS made it possible for small companies to get the benefits of a big data center.

Here’s what struck me most strongly as I read the piece:

  • Patriot Act as ImageNet: The era of big data policing wouldn’t have been possible without a series of post-9/11 federal policy changes that both vastly extended the scale of government data collection and integrated formerly siloed databases from various agencies. To do ML well, you need lots of data. And just as one of the preconditions of the current ML boom was the rise of the web, which offered a new source of abundant training data—ImageNet needed Flickr, for instance—the policy response to 9/11 was a precondition for the emergence of big data policing.

  • Insulate, insulate, insulate. We’re living at a moment when popular struggles over policing are proliferating. Social movements are pushing back against police violence, organizers and scholars are putting terms like mass incarceration and prison abolition into mainstream circulation, and progressive DAs are winning races around the country (hello Chesa). Interestingly, Axon seems to be marketing its platform with exactly these developments in mind. According to Wood, Axon claims that the use of their platform will lead to “a reduction in the number of false complaints (against the police); decreased use of force… enhanced public trust… [and] decreased litigation.” In other words, platformization can help insulate police departments from criticism, protest, and legal action at a time of growing public anger.

  • Opacity-as-a-service. Platformization is often presented as a process of opening: you open an API to let developers build apps around your service. Actually existing platforms, of course, are riddled with black boxes: you might be able to talk to Facebook’s APIs, but its internals are totally opaque. In the case of Axon’s platform, however, even the pretense of openness has been dispensed with. The value proposition, it seems, lies precisely in the platform’s opacity. Opacity is a mechanism for hiding what law enforcement does, to preclude the possibility of public oversight. As Wood writes:

    • “In fact, through Axon’s platforms, even more aspects of police labor are hidden. In the world of platform policing, opacity is a feature not an accident. A lack of understanding about what exactly goes into the functioning of the platform allows for the performance of process, precluding intervention, questioning or dispute. The record claims further authority through this process of automation, even as the sources of data are no less problematic or even more accurate.”

  • Content creators. Viral videos of police shootings shared on social media have become a major phenomenon in recent years. They have indisputably played a role in propelling the current cycle of popular struggles around policing. Two of the apps within the Axon platform that Wood examines are designed in part to help police counter this dynamic. They enable law enforcement agencies to produce social media narratives of their own, by sourcing and selecting video that seems to substantiate their version of events. Axon Citizen lets members of the public submit smartphone video directly to police via “public evidence submission portals” that can be advertised on social media, while Axon View gives police officers the ability to do both instant replay and livestream of their bodycam footage. “Mimicking the user interface and informational flow of social media platforms,” Wood writes, “these apps give the impression that police work is another form of content creation.”

Some other stuff

Here’s a handful of other things worth reading:

  • Urban warfare: Jathan Sadowski has a new piece, “The Captured City,” that’s quite relevant to the above discussion. The “smart city” concept has typically been sold as a way to make cities more convenient, more efficient, more entrepreneurial—think Sidewalk Labs. But Jathan argues that the smart city is in fact primarily about the militarization of urban space. He talks in particular about the Domain Awareness System, a collaboration between the NYPD and Microsoft that uses a vast network of cameras and sensors to create a unified system of ubiquitous surveillance. Here’s Jathan:

    • “These technologies treated the city like a battlespace, redeploying information systems originally created for military purposes for urban policing. Sensors, cameras, and other networked surveillance systems gather intelligence through quasi-militaristic methods to feed another set of systems capable of deploying resources in response… Contrary to the suggestions of ‘smartness’ shills, these systems are not used by the general public but on it. This urban war machine (as I call it in my forthcoming book Too Smart) is the true essence of ‘smart’ urbanism. It is the next step in the high-tech militarization of society… The idea of the captured city requires an adversarial view of a city’s inhabitants: When the enemy can be anywhere, the battlespace is everywhere; all places and people must be accounted for at all times.”

  • Minnesota nice: This newsletter has been a bit of a downer so far, so here’s a pick-me-up: “Meet the Immigrants Who Took On Amazon” by Jessica Bruder. It’s a story about a group of Somali immigrants who are organizing for better working conditions at Amazon warehouses in Minnesota, and pulling off the first strike actions the company has seen in North America.

  • It’s getting crowded: Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales from the University of Chicago have produced an interesting report with a fun title: “Kill Zone.” They look at major acquisitions conducted by Facebook and Google from 2006 to 2018 and conclude that, in the three years following an acquisition, VC investment in startups in the same space as the acquired company falls by 46 percent and the number of deals by 42 percent. Big acquisitions by the tech majors generate “kill zones” that other investors don’t want to enter, in other words, because they figure there’s no hope of competing. If Facebook buys a social photo-sharing app and integrates it into its massive network, then why invest in another social photo-sharing app? The report offers an interesting glimpse at how much the Silicon Valley ecosystem has changed over the past decade or so, as the big firms have grown so big that they’re crowding out VC.

  • Information wants to be free: The Labour Party has just announced a proposal to provide free high-quality broadband to everyone in the UK by 2030. I have a new piece in Tribune about it called “Internet for All.” Taking internet access off the market and making it a social right will improve the lives of a lot of people, particularly those in rural and poor communities. It will also open up political space for a deeper democratization of digital life, as we work our way up the stack from the pipes to the platforms.

History corner

In the 1980s, a French anarchist organization called CLODO conducted a series of attacks on computer centers. While “clodo” is slang for homeless, the name was also an acronym—although there seems to be some confusion about what exactly the acronym stood for. A few possibilities: “Committee for the Liquidation and Misappropriation of Computers,” “Computer Liquidation and Hijacking Committee,” and “Committee for Releasing or Setting Fire to Computers.” You get the idea.

In 1980, they broke into the offices of Philips Data Systems in Toulouse and destroyed its computers. In 1985, they firebombed the offices of computer manufacturer Sperry Univac, also in Toulouse. In a letter to Libération, they explained their reasoning:

We are computer workers and therefore well placed to know the present and future dangers of computer systems. Computers are the favorite instrument of the powerful. They are used to classify, control, and repress. We do not want to be shut up in the ghettos of programs and organizational patterns.

In 1984, the great underground magazine Processed World—which is a treasure if you haven’t encountered it before—ran a translation of an interview with a CLODO member that offers a bit more detail on their thinking:

Why do you do computer sabotage?

To challenge everyone, programmers and non-programmers, so that we can reflect a little more on this world we live in and which we create, and on the way computerization transforms this society.


We are essentially attacking what these tools lead to: files, surveillance by means of badges and cards, instrument of profit maximization for the bosses and of accelerated pauperization for those who are rejected…


Aren't you really a bit retro, like the machine breakers of the 19th Century?

Faced with the tools of those in power, dominated people have always used sabotage or subversion. It's neither retrograde nor novel. Looking at the past, we see only slavery and dehumanization, unless we go back to certain so-called primitive societies. And though we may not all share the same "social project," we know that it's stupid to try and turn back the clock.

Computer tools are undoubtedly perverted at their very origin (the abuse of the quantitative and the reduction to the binary are proof of this) but they could be used for other ends than the ones they now serve. When we recognize that the most computerized sector is the army, and that 94% of civilian computer-time is used for management and accounting, we don't feel like the loom-breakers of the 19th century (even though they fought against dehumanization in their jobs). Nor are we defenders of the computer-created unemployed… if microprocessors create unemployment, instead of reducing everyone's working-time, it's because we live in a brutal society, and this is by no means a reason to destroy microprocessors.
