On Monday, leaders of the Screen Actors Guild – American Federation of Television and Radio Artists hosted a members-only webinar to discuss the contract the union tentatively agreed to last week with the Alliance of Motion Picture and Television Producers. If ratified, the contract will officially end the longest strike in the guild’s history.
For many in the industry, artificial intelligence was one of the most controversial and frightening elements of the strike. Over the weekend, SAG released details of its agreed-upon terms regarding AI, a broad set of protections that require consent and compensation for all actors, regardless of their status. With this agreement, SAG has gone much further than the Directors Guild of America or the Writers Guild of America, which preceded the guild in reaching agreements with the AMPTP. This is not to say that SAG has succeeded where the other unions failed, but rather that actors face a more immediate existential threat from advances in machine learning and other computer-generated technologies.
The SAG agreement is similar to the DGA and WGA agreements in that it requires protections in any instance where machine learning tools are used to manipulate or exploit members’ work. All three unions claimed their AI agreements were “historic” and “protective,” and whether one agrees with that or not, these agreements function as important benchmarks. AI doesn’t just pose a threat to writers and actors: it has consequences for workers in all fields, creative and otherwise.
For those who view the union struggles in Hollywood as a model for handling AI in their own conflicts, it is important that these agreements contain the proper protections. So I understand those who questioned them or pushed for them to be stricter. I am among them. But there is a point at which we are pushing for things that cannot be accomplished in this round of negotiations, and that perhaps should not be pursued at all.
To better understand what the public generally calls AI, and the threat it is perceived to pose, I spent months during the strike meeting with leading machine learning engineers, technology experts, and lawyers specializing in Big Tech and copyright law.
Most of what I learned confirmed three key points. The first is that the most serious threats are not the ones we hear about most in the news: machine learning surveillance tools will harm not the privileged but low-wage and working-class workers and marginalized and minority groups, because of the technology’s inherent biases. The second is that the studios are just as threatened by the rise of Big Tech’s unregulated power as the creative workforce is, a point I discussed in detail earlier in the strike and one that WIRED’s Angela Watercutter has developed further.