The old gods are dead

2020’s first installment of Metal Machine Music will be a little thin, I’m afraid. Life has been getting in the way of newslettering lately—a trend that’s likely to continue—but there were a few things I wanted to share with you.

Making the most of the techlash

First, and mainly, I wrote a long piece for the new issue of Logic that just went online this week. It’s called “From Manchester to Barcelona,” and it’s an attempt to think through the relationship between capitalism and the internet (or “tech,” if you like). The ideas in it have been percolating for a while, and have gone through multiple iterations, so it’s gratifying to have it out in the world.

Here’s a quick summary of the main points:

  1. The techlash is a constructive development, but it’s mostly been performing the labor of the negative: it has done the (invaluable) work of demolishing the old techno-utopian-libertarian pieties, but it’s still far from clear what new ideologies will rise up to replace them. There’s a bit of a scramble for hegemony at the moment when it comes to the next big narrative about tech: different camps are putting forward different alternatives, but no clear winner has emerged.

  2. So far, the left has played a very small (perhaps nonexistent) role in this conversation. There is no shortage of brilliant left thinkers out there thinking about tech—read Logic!—but it’s safe to say that a clear left agenda for tech hasn’t yet materialized. If you read Bernie Sanders’s interview with the New York Times editorial board, you’ll see what I’m talking about: when asked about tech, he has trouble differentiating his approach from liberal antitrust.

  3. How does the left come up with an agenda for tech? What’s the tech equivalent of Medicare for All, the Green New Deal, and so on? A good way to start, I think, is to put capitalism at the center of our story. Even in the midst of the techlash, tech is too often thought about in isolation from capitalism—or when the word is invoked, it’s used imprecisely. This is a problem, because if we can’t think about capitalism clearly—or acknowledge that such a system even exists—we’re going to have trouble thinking about tech, much less coming up with a plan to improve it.

  4. The bulk of my piece explores how tech acts through and within capitalism, as an agent and accelerant of its core dynamics. I examine how tech intensifies capitalism’s tendencies to generate imbalances of wealth and power, and to heighten the hierarchical sorting of human beings according to race, gender, and other categories. Towards the end of the piece, I also offer some provisional thoughts, drawn from the past and present of social movements, on how to combat these tendencies by democratizing (or dismantling) tech.

Anyway, read the piece and tell me what you think.

Other readings

Before I let you go, some of the things I’ve been reading lately:

Discipline at a distance

Welcome to your next installment of Metal Machine Music. I’ve fallen a little behind on my weekly cadence—which, fair warning, will probably keep happening. But I was very pleasantly surprised by how widely my last post, “Platforms don’t exist,” traveled. Jacobin published an edited version, and I’ll be doing a couple of interviews this week on the themes of the piece. It might also serve as the basis for a bigger project.

Anyway, on to today’s newsletter. As always, if you want this in your inbox, you can subscribe.

Elastic factories

Katrina Forrester has a very interesting piece in the London Review of Books called “What counts as work?” It’s a review of a new-ish book by Colin Crouch, Will the Gig Economy Prevail?, and it has some important insights into what we call the gig economy.

One of the questions I always struggle with when thinking about digital things is the precise balance between continuity and discontinuity. What’s old and what’s new? The mainstream tech conversation tends to emphasize discontinuity—everything digital is treated as a sharp departure from the past. Clearly, this interpretation serves certain interests: if particular products and services really are unprecedented, then the firms that produce them acquire a certain prestige as innovators, and can make the case that laws and regulations that might impinge on their profits are too antiquated to apply to this brave new world.

We might be tempted to react to this narrative by drawing the exact opposite conclusion—that nothing about “tech” is novel. But this would be wrong. There are discontinuities and continuities, and they’re often deeply entangled with one another. Trying to identify which is which, and how they’re connected, is essential for thinking through what tech is and how it works.

This is the approach that Forrester takes in her review. On the one hand, she makes the point that the gig economy strongly resembles the “putting-out” system that existed in an earlier era of capitalism in the Global North, and which still exists in the Global South. Under such a system, subcontractors perform piece work. “Non-standard” employment prevails—that is, not formal, full-time work of the kind we have come to see as normal. For most of the history of capitalism, in fact, normal work was non-standard employment:

Historically speaking, standard employment has been the norm only briefly, and only in certain places. Until the ‘industrious revolution’ of the 18th century, work was piecemeal. People worked where they lived, on the farm or at home: in the ‘putting-out system’—which still exists in cloth production in parts of the global South—manufacturers delivered work to workers, mostly women, who had machinery at home and organised their work alongside their family life. Then work moved out of the home. Over the next two centuries, the workforce was consolidated into factories, then into offices. Waged work was standardised, then became salaried.

Reading this, I’m reminded of a passage from Michael Denning’s “Wageless Life”:

Unemployment precedes employment, and the informal economy precedes the formal, both historically and conceptually. We must insist that ‘proletarian’ is not a synonym for ‘wage labourer’ but for dispossession, expropriation and radical dependence on the market. You don’t need a job to be a proletarian: wageless life, not wage labour, is the starting point in understanding the free market.

Clearly there’s a continuity here: what we now call “gig work” is a permanent feature of capitalist economies. That doesn’t mean it always looks the same, however. “Modern precarity takes a distinctive form,” Forrester writes, “which is a result of the major political and economic changes of the 1970s.” These changes are known by a few different names—neoliberalism, post-Fordism, deindustrialization—but their consequence is the erosion of standard employment, particularly its “enriched” social-democratic variant, which secured a range of rights and benefits for a significant portion of the workforce in the Global North.

Where does “tech” fit into all of this? One argument often heard on the left is that tech companies owe their fortunes mostly to legal and political maneuvering rather than to technological innovation. Uber seems like a case in point. Their business model rests on the fiction that drivers are independent contractors, a fiction that they help sustain with lots of lobbying dollars. But the technology also matters. Depending on the firm, it may not be the most important factor in how they make money. But it does have a specific effectivity of its own.

This specific effectivity is at the heart of how the gig economy relates to the putting-out system. One of the problems with the putting-out system is that the capitalist who pays the various subcontractors doesn’t have much control over the labor process. If people are doing piece work at home, they are generally working at their own pace, on their own terms, with their own tools. Capitalists can’t transform the labor process because they don’t control it. The rise of the modern factory system is in large part a response to this problem: manufacturers begin to put workers under the same roof in order to more closely control their work. This greater control in turn enables a (very) full working day, speed-ups, mechanization, a complex division of labor—all of which greatly enhance profitability.

Yet this new model also creates problems of its own. Concentrated in factories, workers are now potentially a lot more powerful. They can disrupt production far more easily and at a far greater scale than they could as relatively isolated subcontractors in a putting-out system. Thus the extraordinary militancy of the industrial worker, which, as Beverly Silver explores in her book Forces of Labor, crops up wherever mass production appears.

But what if you could have the advantages of both systems? What if you could control the labor process and keep workers as relatively isolated subcontractors? This is precisely what networked digital technologies make possible. As Forrester writes:

What is new about the gig economy isn’t that it gives workers flexibility and independence, but that it gives employers something they have otherwise found difficult to attain: workers who are not, technically, their employees but who are nonetheless subject to their discipline and subordinate to their authority.

Creating a hybrid of the factory and the putting-out system is feasible because networked digital technologies enable employers to project their authority farther than before. They enable discipline at a distance. The elastic factory, we could call it: the labor regime of Manchester, stretched out by fiber optic cable until it covers the whole world.

It’s important to note that this isn’t a recent phenomenon. It’s been going on ever since computers, and more specifically computer networking, began entering the corporate world. Joan Greenbaum, in her book Windows on the Workplace, talks about how even before the internet, computer networking let companies relocate “back-office” functions offsite and, eventually, offshore. Mainstream commentators are likely to put the emphasis on communication when describing this phenomenon. The very terms that are used to describe these developments—telecommunications, information and communications technology (ICT)—reflect that emphasis. But as good cyberneticians, we know that communication is also always about control. And when we situate the rise of networked digital technologies within the broader history of capitalism, it becomes clear that control—specifically, control of the labor process—is where our emphasis should be.

That said, there are novel elements to how the current crop of networked digital technologies is implementing discipline at a distance. (See what I mean about how entangled the continuities and the discontinuities are?) The sophisticated forms of algorithmic management deployed by a company like Uber through their driver app wouldn’t be possible without various advances in machine learning and the development and proliferation of the smartphone, for instance.

How does one organize in the elastic factory? Uber and Lyft drivers are figuring it out, partly by building their own apps. It’s fair to say that such workers have less structural power at the point of production than, say, autoworkers in the 1940s. But they certainly still have some power, and they’re currently innovating the organizational forms that will help them exercise it.

Municipal algorithms, model cards, and other things

A few other things I’ve been reading and thinking about:

  • Excel jockeys: In 2017, the New York City Council established the Automated Decision Systems (ADS) Task Force to examine how local government agencies were currently using automated decision systems and to propose guidelines for how they should use such systems in the future. It was the first of its kind in the country, and it generated a lot of excitement. Two years later, the task force’s report has finally been published. It’s pretty thin, and Albert Fox Cahn, who served on the original task force as a representative of CAIR, the Muslim civil rights organization, has a piece in Fast Company that helps explain why. It seems that city officials stonewalled the task force after realizing it wouldn’t just serve as a rubber stamp. One interesting point of contention was the very definition of an automated decision system. Cahn writes:

    City officials brought up the specter of unworkable regulations that would apply to every calculator and Excel document, a Kafkaesque nightmare where simply constructing a pivot table would require interagency approval. In lieu of this straw man, they offered a constricted alternative, a world of AI regulation focused on algorithms and advanced machine learning alone.

    The problem is that at a moment when the world is fascinated with stories about the dire power of machine learning and other confabulations of big data known with the catchphrase “AI,” some of the most powerful forms of automation still run on Excel, or in simple scripts. You don’t need a multi-million-dollar natural-language model to make a dangerous system that makes decisions without human oversight, and that has the power to change people’s lives. And automated decision systems do that quite a bit in New York City.

    This is an important point: the most consequential algorithmic systems are often not particularly advanced. Think about something like shift scheduling software—it’s way less complex than the Facebook News Feed, but it arguably has far greater impact on the lives of millions of low-wage service workers.

    To return to the question of automated decision systems, what are some examples of those systems? AI Now has produced a valuable report outlining the different kinds of automated decision systems deployed by various government agencies around the country. AI Now’s Meredith Whittaker, another member of the New York task force, has also been critical of how the city handled the initiative: you can listen to her talk about it on WNYC. Finally, AI Now is hosting an event devoted to automated decision systems in New York on Saturday; if you’re nearby, you should go.

  • RTFM: A group at Google that includes Margaret Mitchell, Timnit Gebru, Parker Barnes, and others has launched a public “model cards” site for two features of Google’s Cloud Vision API: Face Detection and Object Detection. Initially proposed in a paper earlier this year called “Model Cards for Model Reporting” by Mitchell et al., model cards are intended to give more context on how a machine learning model works, what its limitations and trade-offs are, and how its performance varies across different conditions—the skin tone of a person’s face, for example. In Mitchell’s words, it’s an “example of what transparent documentation in AI could look like.”

    There are limits to algorithmic transparency—it can’t take us nearly as far as we need to go, and in certain cases can play a diversionary role—but explaining how machine learning systems work (and making them explainable in the first place) is an integral element of any political project to democratize AI. So I’m excited about Google’s model cards, and I look forward to seeing the experiment develop.

  • Thermidor: Speaking of Google, the employees who were fired last week as part of the company’s ongoing crackdown on organizers—engineered with the help of IRI Consultants, a union-busting consulting firm—have filed Unfair Labor Practice (ULP) charges with the National Labor Relations Board. Google almost certainly violated federal labor law by terminating the employees for engaging in protected concerted activity, not to mention subjecting them to intimidating interrogations where they were asked to provide names of other organizers. That’s no guarantee of a favorable verdict from the NLRB, of course—labor law is easily broken in this country—but they have a strong case.

  • Stocking stuffers: There’s a new book by Charlton D. McIlwain called Black Software that I really need to read. According to a liveblog of a talk that McIlwain gave at the Strand by J. Nathan Matias, the book “draws an analogy between the development of cocaine and crack cocaine in the 1980s and the history of the tech industry.” I’m interested!

  • Oily data: Last week, we released a piece from Logic’s new “Nature” issue called “Oil is the New Data.” Written by a Microsoft engineer, it’s a firsthand account of how tech companies are helping the fossil fuel industry use machine learning to intensify extraction. A must-read.

Platforms don't exist

This week’s newsletter is a little unusual. It only has one section, which is devoted to sketching out some possible contours of a left tech policy. In what follows, I take the basic principles of decommodification and democratization and try to come up with a model for how to apply them to our actually existing digital sphere.

Read on! Or subscribe.

What to do about the internet

What should we do about Google, Facebook, and Amazon? People from across the political spectrum are urgently trying to answer this question. So far, however, relatively few answers have come from the socialist left. At least in the United States, the cutting edge of the platform regulation conversation is dominated by the liberal antitrust community, perhaps best represented by the Open Markets Institute. They have some good ideas, and they’re serious about confronting corporate power. But they come from the Brandeisian reform tradition. Their horizon is a less consolidated capitalism: more competitive markets, more smaller firms, more widely dispersed property ownership.

For those of us with our eye on a different horizon, one beyond capitalism, this approach isn’t particularly satisfying. There are elements of the antitrust toolkit that can be very constructively applied to the task of reducing the power of Big Tech and restoring a degree of democratic control over our digital infrastructures. But the antitrusters want to make markets work better. By contrast, a left tech policy should aim to make markets mediate less of our lives—to make them less central to our survival and flourishing.

This is typically referred to as decommodification, and it’s closely related to another core principle, democratization. Capitalism is driven by continuous accumulation, and continuous accumulation requires the commodification of as many things and activities as possible. Decommodification tries to roll this process back, by taking certain things and activities off the market. This lets us do two things:

  1. The first is to give everybody the resources (material and otherwise) that they need to survive and to flourish—as a matter of right, not as a commodity. People get what they need, not just what they can afford.

  2. The second is to give everybody the power to participate in the decisions that most affect them. When we remove certain spheres of life from the market, we can come up with different ways to determine how the resources associated with them are allocated. In particular, we can come up with ways to make such choices collectively, by turning spaces formerly ruled by the market into forums of political contestation and democratic debate. If maximizing profit and maintaining class power were no longer the main considerations in the organization of our material world, what new sorts of arrangements could a democratic process generate?

These principles offer a useful starting point for thinking about a left tech policy. Still, they’re pretty abstract. What might they look like in practice?

Step One: Grab the low-hanging fruit

First, the easy part.

A portion of the internet is devoted to shuttling packets of data from one place to another. It consists of a lot of physical stuff: fiber optic cables, switches, routers, internet exchange points (IXPs), and so on. It also consists of firms large and small (mostly large) who manage all this stuff, from the broadband providers that sell you your home internet service to the “backbone” providers who handle the internet’s deeper plumbing.

This entire system is a good candidate for public ownership. Depending on the circumstance, it might make sense to have a different kind of public entity own different pieces of the system: municipally owned broadband in coordination with a nationally owned backbone, for instance.

But the “pipes” of the internet should be fairly straightforward to run as a publicly owned utility, since the basic mechanics aren’t all that different from gas or water. This was one of the points I made in a recent piece for Tribune about the Labour Party’s newly announced plan to roll out a publicly owned network and offer free broadband to everybody in the UK. It’s good politics and, even better, it works. Publicly owned networks can provide better service at lower cost. They can also prioritize social imperatives, like improving service for underconnected poor and rural communities. For a deep-dive into one of the more successful experiments in municipal broadband in the US, I highly recommend Evan Malmgren’s piece “The New Sewer Socialists” from Logic.

Step Two: Taxonomize the fruit higher up the tree

Further up the stack are the so-called “platforms.” This is where most of the power is, and where most of the public discussion is centered. It’s also where we run into the most difficulty when thinking about how to decommodify and democratize.

Part of the problem is the name: “platform.” None of our metaphors are perfect, but I think it might be time to give this one up. It’s not only self-serving—it enables a service like Facebook to project a misleading impression of openness and neutrality, as Tarleton Gillespie argues—it’s also imprecise. There is no meaningful single thing called a platform. We can’t figure out what to do about the platforms because “platforms” don’t exist.

Before we can begin to put together a left tech policy, then, we need to come up with a better taxonomy for the things we’re trying to decommodify and democratize. We might start by analyzing some of the services that are currently called platforms and trying to discern the principal features that distinguish them from one another:

  1. The first is size. How many users does the service have? Sometimes this is an easy question to answer. Sometimes it’s not, because the way we define “user” will vary, and these differences may be substantial:

    • Sometimes what it means to be a user isn’t all that complicated. The number of monthly active users (MAU) of Facebook, the Google product suite, and Amazon Web Services (AWS) is easy to calculate.

    • But what about a service like Uber or Instacart, where you have both workers (“drivers,” “shoppers”) and customers? Both are users, but they’re using different parts of the service. So it probably makes sense to include both in the overall user count.

    • What about a service that has “targets” that aren’t exactly users? In last week’s newsletter, I talked about the Axon policing platform that enables law enforcement agencies to connect various devices and services—bodycams, tasers, in-car cameras, a digital evidence management system, smartphone apps, etc—into a single integrated portal. The users of this platform are police officers. The targets are the individuals whose information is being recorded and processed by the platform. Should they be included in the overall user count, even though they aren’t really users? If our goal is to measure the overall impact of the service, then the answer is yes.

  2. The second dividing line is function. What does the service do? Nick Srnicek, in his invaluable book Platform Capitalism, uses this approach to define five different kinds of “platforms,” though I’m inclined to use the word “services”:

    • Advertising services like Google and Facebook that hoover up personal data and monetize it by selling targeted ads.

    • Cloud services like AWS and Salesforce that sell various cloud-based “as-a-service” products to enterprise clients, from infrastructure-as-a-service (IaaS) to platform-as-a-service (PaaS) to customer relationship management (CRM).

    • Industrial services like Predix designed to support “industrial internet” applications like wiring up a factory with Internet of Things (IoT) devices and using the data that flows from them to optimize efficiency.

    • Product services like Rolls Royce and Spotify that “transform a traditional good into a service.” Rolls Royce is now renting jet engines to airlines, so that they pay by the hour instead of buying the whole thing up front, and using sensors and analytics to optimize maintenance. Spotify is turning albums into streams. The business model is subscription fees.

    • Lean services like Uber and Airbnb that match buyers and sellers while minimizing their own asset ownership. Matching isn’t all they do, however: gig-work services like Uber are also very much in the business of algorithmically managing and disciplining their drivers.

      One could think of more types of platforms. And I might quibble with some of Srnicek’s category choices—do Uber and Airbnb really belong in the same bucket? But if we’re looking to differentiate services by function, this list is a good place to start.

  3. The third way to split up services is by the kind of power they exercise. K. Sabeel Rahman wrote an interesting piece for Logic called “The New Octopus” that identifies three kinds of technological power:

    • Transmission power, which is “the ability of a firm to control the flow of data or goods.” He gives the example of Amazon’s massive shipping and logistics infrastructure controlling the “conduits for commerce,” as well as internet service providers (ISPs) controlling the “channels of data transmission.” We might also add AWS and other major cloud providers. A service like AWS S3 is essential to the flow of data across the modern internet.

    • Gatekeeping power, where the firm “controls the gateway to an otherwise decentralized and diffuse landscape.” He gives the example of Facebook’s News Feed or Google Search, which mediate access to online content. Here the power is held at the “point of entry” rather than across the entire infrastructure of transmission.

    • Scoring power, which is “exercised by ratings systems, indices, and ranking databases.” This includes automated systems for screening job applicants, for instance, or for informing sentencing and bail decisions.
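As a toy illustration, the three dividing lines above (size, function, and kind of power) can be captured in a small data structure. The service names and user counts below are hypothetical placeholders of my own, not real figures:

```python
from dataclasses import dataclass
from enum import Enum

class Function(Enum):
    """Srnicek's five functional types."""
    ADVERTISING = "advertising"
    CLOUD = "cloud"
    INDUSTRIAL = "industrial"
    PRODUCT = "product"
    LEAN = "lean"

class Power(Enum):
    """Rahman's three kinds of technological power."""
    TRANSMISSION = "transmission"
    GATEKEEPING = "gatekeeping"
    SCORING = "scoring"

@dataclass
class Service:
    name: str
    users: int          # total user count, workers and "targets" included
    function: Function
    power: Power

# Purely illustrative entries, not real services or real numbers
services = [
    Service("SearchCo", 1_500_000_000, Function.ADVERTISING, Power.GATEKEEPING),
    Service("RideCo",     100_000_000, Function.LEAN,        Power.SCORING),
    Service("CloudCo",      1_000_000, Function.CLOUD,       Power.TRANSMISSION),
]
```

Nothing in the taxonomy depends on this particular encoding; the point is just that each service gets a position along all three dividing lines at once.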

Step Three: Enter n-dimensional space

We could spend a lot more time tweaking our taxonomy. But let’s leave it there, and return to the question of how we might decommodify and democratize our digital infrastructures. Given the wide range of services we’re talking about, it follows that the methods we use to decommodify and democratize them will also vary. The purpose of developing a reasonably accurate taxonomy is to help inform which methods we might use for each kind of service.

This is the logic behind Jason Prado’s argument in the latest edition of his Venture Commune newsletter, “Taxonomizing platforms to scale regulation.” Prado argues that we should be differentiating services by the number of users they have, and then implementing different regulations at different sizes. At 0-5 million users, for instance, a service should “only be subject to basic privacy regulations.” At 20-50 million, they should be required to publish “transparency reports about what data is collected and exactly how it is used.” At 100+ million, a service becomes “indistinguishable from the state” and therefore needs to be democratically governed, perhaps by a “governing board made up of owners, elected officials, platform developers/workers, and users.”
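Prado’s scheme is essentially a step function from user count to regulatory tier. A minimal sketch, with the caveat that the tier labels are my paraphrase and the ranges quoted above leave gaps (5–20 million, 50–100 million) that this sketch fills arbitrarily:

```python
def regulation_tier(users: int) -> str:
    """Map a service's user count to a regulatory tier,
    loosely following Prado's proposed thresholds."""
    if users >= 100_000_000:
        # "indistinguishable from the state": democratic governance,
        # e.g. a board of owners, officials, workers, and users
        return "democratic governance"
    if users >= 20_000_000:
        # transparency reports on what data is collected and how it's used
        return "transparency reports"
    # the 0-5 million floor: basic privacy regulations only
    return "basic privacy regulations"

# e.g. regulation_tier(4_000_000) -> "basic privacy regulations"
```

The interesting design question isn’t the code, of course, but where the thresholds sit and who counts as a user, which is exactly why the taxonomy work matters.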

I like this basic approach, but I would expand it. Size is an important consideration, but not the only one. The service’s function and the kind of power it exercises are also significant factors.

We could certainly identify more factors. But for now let’s assume size, function, and kind of power are the three most salient features of a service. We could map each feature to an axis—x, y, and z—and then plot each service as a point somewhere along those three axes. Then, depending on where the service sits in our three-dimensional space (or n-dimensional, if we refine our taxonomy by increasing our number of features), we could select a method of decommodification and democratization that is particularly well suited to the service.
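One way to make that selection concrete is a toy rule set mapping a service’s position in feature space to a method. The rules below are illustrative placeholders of my own, loosely anticipating the four methods this section goes on to discuss, not a worked-out policy:

```python
def suggest_method(users: int, function: str, power: str) -> str:
    """Toy selector: map a service's features (size, function, kind of
    power) to a candidate method of decommodification/democratization.
    Rule ordering and contents are illustrative assumptions."""
    if power == "scoring":
        # carceral or austerity-oriented scoring systems
        return "abolition (if carceral or austerity-oriented)"
    if power == "transmission" or function == "cloud":
        # infrastructure-like services suit public ownership
        return "public ownership"
    if function == "lean":
        # gig-work matchmakers suit worker/user cooperatives
        return "cooperative ownership"
    if function == "advertising" or power == "gatekeeping":
        # candidates for free and open-source alternatives
        return "non-ownership (open source)"
    return "case-by-case"
```

A real version would weigh size too (cooperatives at city scale, transnational public ownership at the largest scale), which is where the n-dimensional picture earns its keep.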

What are some of those possible methods? Here are four:

  1. Public ownership. In this case, a state entity takes responsibility for operating a service.

    • These entities can be structured in all sorts of ways, and exist at different levels, from the municipal to the national. Services that exercise transmission power (Rahman) or those that involve the cloud (Srnicek) are especially good candidates for such an approach. Along these lines, Jimi Cullen wrote an interesting proposal for a publicly owned cloud provider last year called “We need a state-owned platform for the modern internet.” Public ownership is also probably best suited for services of a certain scale. At the largest size, however, governance can no longer be achieved at the level of the nation-state—at which point we need to think about transnational forms of public ownership.

    • Public entities can also be in the business of managing assets rather than operating a service. For example, they might take the form of “data trusts” or “data commons,” holding a particular pool of data and enforcing certain terms of access when other entities want to process that data: mandating privacy rules, say, or charging a fee. Rosie Collington has written an interesting report, “Digital Public Assets: Rethinking value, access and control of public sector data in the platform age,” about how such an arrangement might work for data already held by the public sector.

  2. Cooperative ownership. This involves running services on a cooperative basis, owned and operated by some combination of workers and users.

    • The platform cooperativism community has been conducting experiments in this vein for years, with some interesting results.

    • What Srnicek calls “lean” services would lend themselves to cooperativization. A worker-owned Uber would be very feasible, for example. And there are all sorts of policy instruments that governments could use to encourage the formation of such cooperatives: grants, loans, public contracts, preferential tax treatment, municipal regulatory codes that only permit ride-sharing by worker-owned firms. It’s possible that cooperatives work best at a smaller scale, however—you might want a bunch of city-specific Ubers rather than a national Uber—in which case the antitrust toolkit might come in handy, since we would need to break up a big firm before cooperativizing its constituent parts.

    • We could also think of data trusts or data commons as being cooperatively owned rather than publicly owned. This is what Evan Malmgren recommends in his piece “Socialized Media”: a cooperatively owned data trust that issues voting shares to its members, who in turn elect a leadership that is empowered to negotiate over the terms of data use with other entities.

  3. Non-ownership. In some cases, services don’t have to be owned at all. Rather, their functions can be performed by free and open-source software.

    • There are plenty of reasons to be skeptical of open source as an ideology—Wendy Liu’s “Freedom Isn’t Free” is essential reading on this front—but free software does have decommodifying potential, even if that potential is suppressed at present by its near-complete capture by corporate interests.

    • This is another realm in which the antitrust toolkit could be helpful. In 1949, the Justice Department filed an antitrust suit against AT&T. As part of the settlement seven years later, the firm was forced to open up its patent vault and license its patents to “all interested parties.” We could imagine doing something similar with tech giants, making them open-source their code so people can develop free alternatives to their services. Prado suggests that a service’s code repositories should be forced open within six months of hitting 50-100 million users.

    • In addition to bigger services, I’d also argue that services whose business model is advertising (Srnicek) and those that exercise gatekeeping power (Rahman) would make good candidates for open-sourcing. One could imagine free and open-source alternatives to Google Search, for instance, or existing social media services.

    • Another useful idea drawn from the antitrust toolkit that could help promote open-sourcing is enforced interoperability. Matt Stoller and Barry Lynn from the Open Markets Institute have called for the FTC to make Facebook adopt “open and transparent standards.” This would make it possible for open-source alternatives to work interoperably with Facebook. It doesn’t get our data off of Facebook’s servers, but it starts to erode the company’s power by giving people various (ad-free) clients that can access that data and present it differently. If these interfaces caught on, Facebook would no longer be able to sell ads and its business would eventually collapse. At which point it could be refashioned into a publicly owned or cooperatively owned data trust that furnishes data to a variety of open-source social media services, themselves perhaps federated on the model of Mastodon.

  4. Abolition. Certain services shouldn’t be decommodified and democratized, but abolished altogether.

    • Governments deploy a range of automated systems for the purposes of social control. These include carceral technologies like predictive policing algorithms that intensify policing of working-class communities of color. (This is also an example of what Rahman calls scoring power.) Scholars like Ruha Benjamin and community organizations like the Stop LAPD Spying Coalition are applying the abolitionist framework to these kinds of technologies, calling for their outright elimination: in her new book Race After Technology, Benjamin talks about the need to develop “abolitionist tools for the New Jim Code.”

    • Another set of systems worthy of the abolitionist treatment are the forms of algorithmic austerity documented by Virginia Eubanks in her book Automating Inequality. In the United States and around the world, public officials are using software to shrink the welfare state. This deprives people of dignity and self-determination in a way that’s fundamentally incompatible with democratic values.

    • Another technology I would put in this category is facial recognition, which can be deployed by public or private entities. The growing movement to ban facial recognition, a demand advanced by a range of organizations and now embraced by Bernie Sanders, is a good example of abolition in action.

    • There is also an ecological case to be made for abolishing certain services, given the carbon-intensive nature of machine learning, the cloud, and mass digitization. This is a point I made in a recent Guardian piece.

One final note: while the goal of a left tech policy should be to strike at the root of private power by transforming how our digital infrastructures are owned, we will also need legislative and administrative rule-making to govern how those infrastructures are allowed to operate. This might take the form of GDPR-style restrictions on data collection and processing, measures aimed at reducing right-wing radicalization, or various algorithmic accountability mandates. These rules should apply across the board, no matter how an entity is owned and organized.

The above is a provisional sketch. It has lots of holes and rough edges. Plotting all the major services along three axes according to their features may ultimately be impossible—and even if it can be done, it runs the risk of locking us into an excessively rigid model for making policy. More broadly, there are severe limits to this sort of programmatic thinking, which can too easily tilt in a technocratic direction.

Still, I hope these thoughts offer some preliminary materials towards developing a left tech policy that takes the basic principles of decommodification and democratization and tries to apply them to our actually existing digital sphere. At the moment there is relatively little political space for such an agenda in the United States, but there may come a time when more space is available. It would be good to be ready.

Cloud fortress

This week’s Metal Machine Music is about:

  • what happens when you platformize the police

  • a roundup of readings on subjects ranging from the smart city to VC

  • a bit of history involving a group of French anarchists who went around destroying computers

Subscribe to get this newsletter in your inbox, or read on.

Platform-involved shootings

“Platform” is one of those words that, like “innovation,” has become so overextended as to be almost meaningless. If you wrote down all of the things that companies are calling platforms these days, you would end up with a very long list of very different things.

In fairness, this ambiguity has been there since the beginning. And it’s been a productive ambiguity, as Tarleton Gillespie explores in his 2010 classic, “The Politics of ‘Platforms’.” It has been useful for tech firms to define “platform” rather broadly, since the qualities associated with the metaphor—openness, neutrality—are ones that firms can use to absolve themselves of responsibility for what happens on their services. The business model of these firms rests on the fiction that they are not publishers (thanks Section 230!) and thus not liable, legally or otherwise, for the content that their services circulate. The platform metaphor is a valuable tool for sustaining this fiction—particularly now, as it comes under strain on Capitol Hill.

This is why, following Gillespie, it makes sense to see the platform as a discursive phenomenon as much as a technical one. It also probably makes sense to think of this phenomenon as a process rather than a thing: as an ongoing practice of platforming.

One place where the practice of platforming has been producing some alarming effects is the world of policing. Police platforms aren’t all that widely known: there’s been a lot of mainstream conversation in recent years about how law enforcement agencies are using ML-based technologies like “predictive policing” algorithms, but I’ve seen relatively little discussion of the platform angle.

That’s why I was excited to read Stacy E. Wood’s new article, “Policing through Platform.” Wood looks at a cloud-based platform service sold by Axon, the company that makes police tasers and bodycams. The platform enables law enforcement agencies to connect various Axon devices and services—bodycams, tasers, in-car cameras, a digital evidence management system, smartphone apps, and so on—into a single integrated portal. It also makes it feasible for even small police departments to engage in some version of “big data policing” without the cost and headache of managing their own infrastructure—a bit like how AWS made it possible for small companies to get the benefits of a big data center.

Here’s what struck me most strongly as I read the piece:

  • Patriot Act as ImageNet: The era of big data policing wouldn’t have been possible without a series of post-9/11 federal policy changes that both vastly extended the scale of government data collection and integrated formerly siloed databases from various agencies. To do ML well, you need lots of data. And just as one of the preconditions of the current ML boom was the rise of the web, which offered a new source of abundant training data—ImageNet needed Flickr, for instance—the policy response to 9/11 was a precondition for the emergence of big data policing.

  • Insulate, insulate, insulate. We’re living at a moment when popular struggles over policing are proliferating. Social movements are pushing back against police violence, organizers and scholars are putting terms like mass incarceration and prison abolition into mainstream circulation, and progressive DAs are winning races around the country (hello Chesa). Interestingly, Axon seems to be marketing its platform with exactly these developments in mind. According to Wood, Axon claims that the use of its platform will lead to “a reduction in the number of false complaints (against the police); decreased use of force… enhanced public trust… [and] decreased litigation.” In other words, platformization can help insulate police departments from criticism, protest, and legal action at a time of growing public anger.

  • Opacity-as-a-service. Platformization is often presented as a process of opening: you open an API to let developers build apps around your service. Actually existing platforms, of course, are riddled with black boxes: you might be able to talk to Facebook’s APIs, but its internals are totally opaque. In the case of Axon’s platform, however, even the pretense of openness has been dispensed with. The value proposition, it seems, lies precisely in the platform’s opacity. Opacity is a mechanism for hiding what law enforcement does, to preclude the possibility of public oversight. As Wood writes:

    • “In fact, through Axon’s platforms, even more aspects of police labor are hidden. In the world of platform policing, opacity is a feature not an accident. A lack of understanding about what exactly goes into the functioning of the platform allows for the performance of process, precluding intervention, questioning or dispute. The record claims further authority through this process of automation, even as the sources of data are no less problematic or even more accurate.”

  • Content creators. Viral videos of police shootings shared on social media have become a major phenomenon in recent years. They have indisputably played a role in propelling the current cycle of popular struggles around policing. Two of the apps within the Axon platform that Wood examines are designed in part to help police counter this dynamic. They enable law enforcement agencies to produce social media narratives of their own, by sourcing and selecting video that seems to substantiate their version of events. Axon Citizen lets members of the public submit smartphone video directly to police via “public evidence submission portals” that can be advertised on social media, while Axon View gives police officers the ability to do both instant replay and livestream of their bodycam footage. “Mimicking the user interface and informational flow of social media platforms,” Wood writes, “these apps give the impression that police work is another form of content creation.”

Some other stuff

Here’s a handful of other things worth reading:

  • Urban warfare: Jathan Sadowski has a new piece, “The Captured City,” that’s quite relevant to the above discussion. The “smart city” concept has typically been sold as a way to make cities more convenient, more efficient, more entrepreneurial—think Sidewalk Labs. But Jathan argues that the smart city is in fact primarily about the militarization of urban space. He talks in particular about the Domain Awareness System, a collaboration between the NYPD and Microsoft that uses a vast network of cameras and sensors to create a unified system of ubiquitous surveillance. Here’s Jathan:

    • “These technologies treated the city like a battlespace, redeploying information systems originally created for military purposes for urban policing. Sensors, cameras, and other networked surveillance systems gather intelligence through quasi-militaristic methods to feed another set of systems capable of deploying resources in response… Contrary to the suggestions of ‘smartness’ shills, these systems are not used by the general public but on it. This urban war machine (as I call it in my forthcoming book Too Smart) is the true essence of ‘smart’ urbanism. It is the next step in the high-tech militarization of society… The idea of the captured city requires an adversarial view of a city’s inhabitants: When the enemy can be anywhere, the battlespace is everywhere; all places and people must be accounted for at all times.”

  • Minnesota nice: This newsletter has been a bit of a downer so far, so here’s a pick-me-up: “Meet the Immigrants Who Took On Amazon” by Jessica Bruder. It’s a story about a group of Somali immigrants who are organizing for better working conditions at Amazon warehouses in Minnesota, and pulling off the first strike actions the company has seen in North America.

  • It’s getting crowded: Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales from the University of Chicago have produced an interesting report with a fun title: “Kill Zone.” They look at major acquisitions by Facebook and Google from 2006 to 2018 and find that, in the three years following an acquisition, VC investment in startups in the same space as the acquired company falls by 46 percent and the number of deals falls by 42 percent. Big acquisitions by the tech majors generate “kill zones” that other investors don’t want to enter, in other words, because they figure there’s no hope of competing. If Facebook buys a social photo-sharing app and integrates it into its massive network, then why invest in another social photo-sharing app? The report offers an interesting glimpse at how much the Silicon Valley ecosystem has changed over the past decade or so, as the big firms have grown so big that they’re crowding out VC.

  • Information wants to be free: The Labour Party has just announced a proposal to provide free high-quality broadband to everyone in the UK by 2030. I have a new piece in Tribune about it called “Internet for All.” Taking internet access off the market and making it a social right will improve the lives of a lot of people, particularly those in rural and poor communities. It will also open up political space for a deeper democratization of digital life, as we work our way up the stack from the pipes to the platforms.

History corner

In the 1980s, a French anarchist organization called CLODO conducted a series of attacks on computer centers. While “clodo” is slang for a homeless person, the name was also an acronym—although there seems to be some confusion about what exactly the acronym stood for. A few possibilities: “Committee for the Liquidation and Misappropriation of Computers,” “Computer Liquidation and Hijacking Committee,” and “Committee for Releasing or Setting Fire to Computers.” You get the idea.

In 1980, they broke into the offices of Philips Data Systems in Toulouse and destroyed its computers. In 1985, they firebombed the offices of computer manufacturer Sperry Univac, also in Toulouse. In a letter to Libération, they explained their reasoning:

We are computer workers and therefore well placed to know the present and future dangers of computer systems. Computers are the favorite instrument of the powerful. They are used to classify, control, and repress. We do not want to be shut up in the ghettos of programs and organizational patterns.

In 1984, the great underground magazine Processed World—which is a treasure if you haven’t encountered it before—ran a translation of an interview with a CLODO member that offers a bit more detail on their thinking:

Why do you do computer sabotage?

To challenge everyone, programmers and non-programmers, so that we can reflect a little more on this world we live in and which we create, and on the way computerization transforms this society.


We are essentially attacking what these tools lead to: files, surveillance by means of badges and cards, instrument of profit maximization for the bosses and of accelerated pauperization for those who are rejected…


Aren't you really a bit retro, like the machine breakers of the 19th Century?

Faced with the tools of those in power, dominated people have always used sabotage or subversion. It's neither retrograde nor novel. Looking at the past, we see only slavery and dehumanization, unless we go back to certain so-called primitive societies. And though we may not all share the same “social project,” we know that it's stupid to try and turn back the clock.

Computer tools are undoubtedly perverted at their very origin (the abuse of the quantitative and the reduction to the binary are proof of this) but they could be used for other ends than the ones they now serve. When we recognize that the most computerized sector is the army, and that 94% of civilian computer-time is used for management and accounting, we don't feel like the loom-breakers of the 19th century (even though they fought against dehumanization in their jobs). Nor are we defenders of the computer-created unemployed… if microprocessors create unemployment, instead of reducing everyone's working-time, it's because we live in a brutal society, and this is by no means a reason to destroy microprocessors.

The dead are at our backs

Welcome to this week’s edition of Metal Machine Music.

Here’s what I’ve got for you:

  • Why the tech left needs a usable past

  • How a bunch of British socialists tried to make technology more democratic back in the 1980s, and what lessons their experiments might hold for today

  • A roundup of things to read, mostly related to labor (no surprise there)

Subscribe to get this newsletter in your inbox, or read on.

A usable past

I’ve included some history in each of these newsletters so far. This is partly because I love history and find it endlessly fascinating. But it’s also because history has real political value. The tech left needs a usable past.

We’re not the first generation that has attempted to construct a more democratic relationship to technology. On the contrary: we belong to a long tradition of movements and organizations that have done this work in the past. This includes people like Joan Greenbaum, who (among other things) organized tech workers in the 1960s and 1970s with a group called Computer People for Peace. (If you haven’t read it already, I highly recommend the interview we did with Joan in Logic, “Mainframe, Interrupted.”)

People like Joan are not typically featured in the stories we tell about tech. They deserve to be more widely known.

But for the tech left, people like Joan also serve a more specific function. They’re our elders, our ancestors. We can draw ideas and inspiration from them. They can also help us feel a sense of belonging. They can help us feel anchored to a community that exists not merely across space but also across time, a community of the living and the dead.

It might sound strange, but I take great comfort in the idea of being surrounded by ghosts. In his beautiful and heartbreaking Memoirs of a Revolutionary, Victor Serge reflects on having outlived so many of his comrades, most of whom had been massacred in one counterrevolution or another:

I must confess that the feeling of having so many dead men at my back, many of them my betters in energy, talent, and historical character, has often overwhelmed me, and that this feeling has been for me the source of a certain courage, if that is the right word for it.

This summer, I visited the cemetery in Berlin where Rosa Luxemburg and Karl Liebknecht are buried. In the center is a large stone with the following words engraved on its surface: “Die Toten mahnen uns.” The dead admonish us, the dead remind us; the dead are at our backs, urging us forward.

Innovation from below

Now for some actual history.

The year is 1981. The left wing of the Labour Party has just won control of the Greater London Council (GLC), a municipal body that governs London alongside various borough-level councils.

The story of the GLC is well worth revisiting today, particularly for those interested in municipal socialism and/or the intellectual genealogy of Corbynism. (Corbyn’s shadow chancellor John McDonnell got his start as the finance chair and deputy leader of the GLC.) But for our purposes, what’s most relevant is the fact that the GLC undertook some very interesting experiments around democratizing technology.

A core component of the GLC’s economic program was the Greater London Enterprise Board (GLEB). At the time, London had fairly high unemployment, in large part due to deindustrialization. The purpose of the GLEB was to promote economic development while also promoting economic democracy. This involved doing things like putting public money into firms that offered good, unionized jobs, as well as encouraging the creation of worker-owned cooperatives.

It also involved establishing five “Technology Networks” in different locations across London. (In what follows, I’m indebted to Adrian Smith’s invaluable article on the subject, “Technology Networks for Socially Useful Production.”)

These Technology Networks were prototyping workshops—a bit like hackerspaces today. People could walk in, get access to machine tools, receive training and technical assistance, and build things. The things they built went into a shared “product bank” that other people could draw from, and whose designs were licensed to companies to help finance the Networks. The innovations that emerged included wind turbines, disability devices, children’s toys, and electric bikes. Energy efficiency was an area of special emphasis.

For some of the folks involved in the effort, the goal was to stimulate local business development. Others, however, saw the Technology Networks in a more radical light. The purpose of these spaces, they believed, was to democratize the design and development of technology.

This meant creating a participatory process whereby working-class communities could obtain the tools and the expertise they needed to make their own technologies. It meant producing to satisfy human need—what organizers at the time called “socially useful production”—rather than to maximize profit. In his article, Smith quotes Mary Moore, a participant in one of the Technology Networks:

...making sure that what you do is going to be of real use to the intended users […] means somehow getting them to take part in the design process rather than just pop in with a product when you’ve produced it ... So you wouldn’t just market-research a new product, which puts users in a passive role. You’d actually get them in the workshop and enable them to learn more about how such things are made and designed and repaired and modified

Innovation from below, in other words.

The more radical members of the Technology Networks also saw them as sites of political mobilization. The act of prototyping products in a workshop could serve as a useful starting point for a broader conversation about what kinds of transformations would be needed to create a more equitable society. In the process of trying to solve their problems with technology, people came to realize that technology often fell far short of solving their problems. Politics was also needed. Along these lines, one of the Networks kick-started a political campaign called “Right to Warmth” that involved organizing community energy efficiency initiatives, creating local energy cooperatives, and pressuring Thatcher’s government into putting more money towards energy conservation measures.

Sadly, the Technology Networks were relatively short-lived. Thatcher abolished the GLC in 1986, and the Networks eventually disappeared.

So what are some possible takeaways for today?

  • In recent years, a number of socialists have won local elections in the US. (Including some this week.) It’s conceivable that they could push municipalities to create something like Technology Networks. In some ways, running such spaces might be easier today. You don’t need machine tools to prototype software; you can do everything with open-source tools on cheap hardware. We’ve also got 3D printers now, if stuff you can touch is your thing.

  • Still, plenty of problems remain. The original Technology Networks were riven with tensions between people who saw them as engines of business development and people who saw them as engines of social transformation. It’s easy to imagine the same tensions emerging in similar spaces today. It speaks to the contradictions that attend any socialist governing experiment within advanced capitalist societies, even (or perhaps especially) at the municipal scale: how does one steer (or at least not blow up) the economy while simultaneously trying to completely restructure it?

  • In the US today, we have a few different kinds of spaces that resemble Technology Networks in one way or another: the hackerspace, the startup incubator, and the foundation-funded NGO. But I doubt that any of these spaces are capable of serving as catalysts for meaningful social change. Hackerspaces are about helping people do fun creative projects; startup incubators are about helping people launch businesses; foundation-funded NGOs are about helping people find less disruptive channels for their discontent. I think we need alternatives. Technology Networks, at least in their more radical aspects, might offer a way to think about what those alternatives could look like. For me, the question worth asking is: How do we create spaces of democratic technological practice that politicize participants and develop their capacity for self-organization?

Finally, a few more readings in this vein:

  • The GLEB published a promotional booklet about the Technology Networks in 1984 that’s a real gem. The picture above is drawn from the booklet. Thanks to Adrian Smith for digitizing.

  • If you’re interested in learning more about the history of the GLC, take a look at Michael Rustin’s “Lessons of the London Industrial Strategy” from the New Left Review. (I canvassed some British friends on Twitter and this came up—thanks to Will Davies for the recommendation.)

  • The GLEB’s technology director who created the Technology Networks was Mike Cooley, one of the leading figures behind the Lucas Plan. The Lucas Plan was a 1976 document produced by the workers of an aerospace company proposing a democratic reorganization of the firm around socially useful production. If you’d like to learn more about the Lucas Plan, check out Adrian Smith’s article in the Guardian and this 1978 documentary on YouTube.

Other readings

Time to wrap up. In conclusion, here’s a brief rundown of a few other things I’ve been reading:

  • Brian Dolber has produced an interesting report for the MIC Center called "From Independent Contractors to an Independent Union.” It looks at how the LA-based organization Rideshare Drivers United (RDU) has used social media ad buys and its own in-house smartphone app to organize drivers. Groups like RDU are finding creative ways to overcome the challenges of organizing atomized, algorithmically managed app workers—which includes making their own apps.

  • Speaking of app workers, several thousand Instacart workers recently concluded a 72-hour strike. Read their open letter to the company’s founder, which outlines their grievances and demands. As April Glaser explains in “Instacart Workers Are Striking Because of the App’s User Interface” over at Slate, the dispute partly revolves around the Instacart app’s default tip setting: it’s currently set at 5 percent, and workers want it raised to at least 10 percent. Design is a terrain of class struggle!

  • It was recently the one-year anniversary of the Google Walkout, a watershed moment in the story of the tech worker movement. Read the Medium post written by Googlers to commemorate the occasion, and their Year in Review in tweet form. It’s been a long year, and it’s not over yet.
