The metaverse is a cubicle
It’s been almost two years since I’ve used this thing. I doubt anyone has been holding their breath—if you were, apologies. Life got crowded. It’s still pretty crowded, but lately I’ve been feeling like maybe I should make room for some more newslettering.
What I most liked about doing this Substack was that it gave me a place to put a certain kind of provisional, half-baked thinking—a drafts folder of sorts. Every so often a Take materializes in my mind that is too long for Twitter and not thought-through enough for a proper piece, and it’s nice to have a home for such thoughts.
Today’s half-baked thought is about the metaverse. Sorry.
But first, a bit of self-promotion. I’ve got a book coming out next June. It’s called Internet for the People, and it tells the story of how the internet was privatized, and how privatization set in motion the crises that consume it today. The book project actually grew out of an old newsletter post, though it evolved a lot in the intervening years. Anyway, you can pre-order it from Verso.
On to the newsletter. As always, if you’re reading this on the web and you want this in your inbox, you can subscribe.
From virtual boys to lawnmower men
What will the internet look like in 10 years?
One answer is the “metaverse.” At least, this is what Facebook, Microsoft, and a handful of other tech companies are saying. It’s not exactly a new idea. The dream of an immersive, embodied internet is an old one. It predates the modern internet itself: the “cyberspace” of William Gibson’s immensely influential Neuromancer (1984) envisioned users plugging their nervous systems into a networked sensory environment, at a time when the internet was in its infancy. And VR has an equally long history. It saw a boom in the late 1980s and early 1990s, and a bunch of headsets came out—I rented a Virtual Boy from a Blockbuster for one disappointing weekend in 1995—before the hype cycle hit a wall of consumer indifference and the bubble popped.
If the metaverse is an old dream that’s never quite taken flight, why would this time be any different? Many observers argue that it’s not—that Zuckerberg et al are spinning up another hype cycle, same as the old, and everything will come crashing down soon enough. “VR is a bit like a rich white kid with famous parents: It never stops failing upward, forever graded on a generous curve, always judged based on its ‘potential’ rather than its results,” writes David Karpf in Wired.
Fair enough. But I’d like to propose the possibility that this time actually is different, for a couple reasons.
The first (and least important) reason is technical. As Karpf admits, the technology has come a long way: “As a technical matter, we could pretty much cobble together a 1.0 version of the Metaverse or the Oasis next week.” Now, it would suck: what stood out to me about Zuckerberg’s supremely weird metaverse presentation at the annual Facebook Connect conference was just how shitty the tech was. (Why don’t avatars have legs?!) Still, VR is way more sophisticated than it used to be. Something halfway usable is emerging from all of those billions of dollars being pumped into VR and AR by tech companies and venture investors. Facebook alone is now spending $18.5 billion a year on VR/AR R&D.
But it doesn’t matter how good the technology is if there’s nothing to do with it. As Karpf puts it, “VR’s limiting flaw might instead be on the demand side.” This is a point frequently made by Benedict Evans: there’s still no “killer app” to drive widespread VR adoption. “The issue I circle around is not just that we don’t have a ‘killer app’ for VR beyond games,” he wrote in a post last year, “but that we don’t know what the path to getting one might be.”
This brings me to the second (and more important) reason that this time is different: the pandemic. The pandemic has generated a confluence of factors that, in my view, is conducive to a particular version of the metaverse taking root.
What are those factors?
First, to state the obvious: the pandemic is reorganizing white-collar work by making remote and hybrid working arrangements more common. It’s important to note that these arrangements are at present limited to a fairly small portion of the workforce, as Doug Henwood points out. The latest Bureau of Labor Statistics numbers from October say that only 11.6 percent of the workforce—18 million people—“teleworked or worked at home for pay at any time in the last 4 weeks because of the coronavirus pandemic.” And, as you might expect, teleworkers are concentrated in the higher-end professions: tech, law, finance. Still, the fact that 18 million workers are still WFH this long into the pandemic is a big deal.
Why has WFH persisted for so long, and why is it likely to continue, or even to grow? A few reasons:
The pandemic keeps going: A bunch of companies, especially in tech and finance, were eager to push for an office return in September 2021. They had to postpone those plans, largely due to the delta variant. Now we’ve got omicron. It’s anyone’s guess how much disruption omicron will bring. But there will certainly be more bad variants (so long as vaccine apartheid and vaccine hesitancy continue) and, beyond that, more novel zoonotic diseases, because the forces that underlie the emergence of those diseases—deforestation, industrial agriculture, climate change—show no signs of abating. There will also be all sorts of other disasters as the world continues to heat up and various natural systems go haywire.
So remote/hybrid will probably be a permanent adaptation to an increasingly inhospitable biosphere. Back in May 2020, 35 percent of the US workforce was teleworking. I don’t see why we couldn’t get back to that number, or even exceed it, if the worst-case scenarios for the next decade play out.
Workers want flexibility: WFH is popular with workers, as study after study shows. Drawing on 33,250 survey responses collected from May 2020 through March 2021, a paper by Jose Maria Barrero, Nicholas Bloom, and Steven J. Davis found that most workers want to work from home two or more days per week even after the pandemic is over. The sentiment appears to be global: the World Economic Forum did a survey of 12,500 workers in 29 countries and found that a “majority (66%) said employers should allow more flexible working in the future.” And support for WFH is consistently strongest among women and people of color.
WFH’s popularity among workers has led to a number of confrontations between the rank-and-file and management when the latter has tried to cut down on remote work. (For example, at Apple.) The failure of the September 2021 return-to-office push wasn’t just about delta; it was about workers successfully defending WFH. The high-pressure economy that’s producing unprecedented quit rates (the Great Resignation) is giving these workers ample power to push back against managerial attempts to force them back into the office. (Barrero, Bloom, and Davis found that more than 40 percent of Americans who are WFH at least one day a week would look for another job if their employers made them go back to the office full-time—that’s a very large number.)
Employers want labor savings: Facebook, one of the more remote-friendly companies, announced in June 2021 that nearly all of its employees could continue to work from home indefinitely. But their salaries would be adjusted for the labor market of wherever they decided to live, so somebody moving from San Francisco to Reno would take a pay cut.
This suggests another reason that remote/hybrid will endure: because it gives companies a way to cut wages. Now, I haven’t seen evidence that this is happening yet at any scale. And the much-discussed Covid exodus from big cities didn’t actually happen; people have mostly been staying put. Still, as remote/hybrid arrangements become more normalized in white-collar workplaces, and as more companies recruit for remote-only positions from a national labor pool, the opportunities for wage arbitrage by employers increase.
So some significant role for remote/hybrid in white-collar workplaces is probably here to stay. But the transition is not without its difficulties. What are some of those difficulties?
Collaboration: Opinions vary widely on the question of whether remote/hybrid negatively affects productivity. Workers tend to think it doesn’t; many executives think it does; but the reality is that productivity is notoriously hard to measure in most white-collar settings, so it’s basically unknowable.
Collaboration is probably a more useful way to evaluate work quality. How well do workers collaborate in a remote/hybrid setup compared with a fully in-person one? Here, even the most pro-WFH worker must concede that there’s a lot that doesn’t work well. Zoom fatigue is real, workplace communication software is mostly pretty terrible and uncreative (Slack is IRC with emojis), and hybrid in particular presents all sorts of headaches.
There’s also the matter of how one generates affective attachments among coworkers that contribute to a sense of social cohesion—what tech companies in particular like to call “culture.” The goal here is to make workers feel more connected to their work by making them feel more connected to one another. Thus the importance of the office “campus,” ping-pong tables, offsites, “team-building,” etc. Some companies that have been remote-first for a while, like GitHub, have put a lot of thought into how to do “culture” in a distributed way. But suffice to say, it’s not easy, especially with the existing state of collaboration software.
Managerial control: If people are working at home, how do bosses know they’re getting work done? You can’t shoulder-surf to see what they’re up to; you can’t walk around the office to see who came in too late, left too early, or took too luxurious a lunch break. This fear of losing control is clearly what’s driving some of the skepticism among executives about remote work. And it’s also driving the proliferation of “bossware”: software used to surveil remote workers. Here’s how Ali S. Qadeer and Edward Millar put it in a recent piece:
“With the recent large-scale shift to remote work since the onset of the COVID-19 pandemic, digital workplace managerial tools have proliferated at a geometric rate. Software such as Activtrak, Hubstaff, and Teramind all boasted a tripling of demand in the early months of the pandemic. As the scale and scope of remote work increases, the essence of managing ‘work from home’ remains rooted in the principles of scientific management: tracking mouse movements, recording workers’ screens, and surmising attention and time on task.”
The most intrusive and repressive forms of bossware are targeted at low-wage workers like call center operators. But, as a new report by Wilneida Negrón discusses, softer forms of surveillance are also becoming more pervasive. Take Microsoft Workplace Analytics, a product that “assigns every employee an ‘influence score’ that indicates ‘how well-connected a person is within the company,’ based on extensive email, calendar, call, and chat data.” There’s also presumably a lot of surveillance happening that doesn’t rely on specialized software: managers keeping an eye on Slack statuses or git commits, say. As hedge fund manager turned thought leader Ray Dalio says, there are plenty of ways to keep an eye on remote workers short of installing a keystroke logger on their machines: “You don’t have to have them in the office. There are so many tools that make it clear how productive people are.” Dalio would know; the surveillance culture at his fund Bridgewater was famously brutal.
The Matrix meets Office Space
So where does that leave us? To summarize:
Employers want to find a way to exercise managerial control over remote workers, to bring the collaborative and socially cohesive aspects of in-person work to remote/hybrid environments, and to push work to lower-wage regions.
Employees want to retain flexibility around WFH while also finding a way to mitigate the unpleasant elements of WFH (Zoom fatigue, hybrid headaches, anxieties about lack of visibility leading to being passed over for promotions, etc).
If you put this all together, I think you start to see the contours of a particular version of the metaverse emerging. It looks like Office Space by way of The Matrix. It promises to give both employers and employees enough of what they want that it might come to be seen as the necessary cost of a permanently remote/hybrid white-collar world.
I don’t think this is some huge theoretical insight; it seems to be one of the main business strategies among metaverse architects. It’s why Facebook is pushing Horizon Workrooms (where your avatar can sit leglessly in a cartoon conference room), why Microsoft is pushing Mesh (its metaverse offering), and why Accenture recently bought 60,000 Oculus headsets. And the way the metaverse’s builders and boosters talk about its advantages is very much keyed to the set of desires and concerns I laid out above; in Microsoft’s metaverse demo, Ellyn Shook, Accenture’s chief leadership and human resources officer, said the technology “[enables] presence and connection that transcends location, keeps our culture vibrant wherever we’re working, and levels the playing field to create equal and inclusive experiences.”
Significantly, this is not the metaverse as hedonist escapist fantasy-land—the new Vegas, as Izabella Kaminska argues—but rather as the new cubicle: the new organizing architecture of white-collar work. The cubicle was first introduced as a kind of balancing act: give white-collar workers enough personal space to let them concentrate, while still enabling collaboration and managerial surveillance. The metaverse might offer another way to strike the same balance under very different circumstances.
Now, precisely where that balance lands is an open question—and I would argue that it has everything to do with the balance of class forces. How surveillant this metaverse would be, for example, would come down to how much power workers have to push back, either by quitting or by engaging in collective action. Also, just like with the cubicle, it’s entirely possible, even likely, that the metaverse makes white-collar work worse—or maybe just bad in a different way. A new virtual workplace in which everyone is equally “present” and “connected” might be one that makes everyone feel more absent and alienated, especially if the tech doesn’t get much better; I found the scene of two legless avatars playing ping-pong in the Microsoft metaverse demo terribly depressing. There are also physiological costs to keeping a VR headset strapped to your head for any length of time; metaverse fatigue could very well make workers nostalgic for Zoom fatigue.
As with all technology, however, the metaverse doesn’t have to deliver on its promises, or even work particularly well, to be deployed. Especially if Facebook keeps dropping the price of VR headsets—its lowest-spec Oculus Quest 2 is already the cheapest headset available ($300)—and keeps pumping tons of money into R&D, which will probably help with some of the above problems (VR ping-pong with your coworkers will get less depressing).
PC load letter
A parting thought: a lot of the commentary on VR assumes that consumer demand is what matters for adoption because it’s a consumer technology. In this view, VR is the new VCR; the tipping point is reached when you’ve persuaded enough people to walk into Target and buy a headset.
But if the metaverse is the new cubicle, then consumer demand isn’t what will make it mainstream. Rather, you have to look at how the technology interacts with, and is shaped by, the relations of production in particular industries and workplaces—which is to say, the practices, habits, desires, fears, interests, and anxieties that mediate the relationship between the people who do the work and the people who oversee the work and own the product. If you do, I think the path to the mainstreaming of the metaverse becomes clearer.
Updated to correct the BLS numbers on number of teleworkers. Thanks to Sean Collins for catching my errors!