Manifesting A.I.

At some point last year, I think, when I was running low on television shows I didn’t actually care about but could have on when I needed to decompress, I started going through Manifest, which by that point had migrated from network to Netflix.

The streamer recently dropped the second half of season four, and it wasn’t until somewhere around six episodes into this batch that I realized something.

Manifest right now is showing us what television as scripted by Large Language Models will be like.

The remarkable thing about Manifest is that it might be the first series I’ve ever watched that literally earns not a single thing it does. Events happen just because the writers want them to at that moment.

Writ large, all television, of course, is contrived. We know none of it is real, that it was all created by people.

There’s a difference, though, between contrived and contrived. In a show written by the steroidal autocomplete of Large Language Models, I rather suspect that no events will happen in their “natural” narrative course. This is exactly what watching Manifest is like.

Watching this show is like watching an algorithm generate, at any given moment, whatever it thinks is the statistically most likely thing to happen next.

Several characters are actively, repetitively terrible in ways for which they’re never held accountable. I don’t just mean by other characters, although that’s true, but editorially by the show itself. And I’m talking about the protagonists. This isn’t the sole example of how the show’s writing gives me an “uncanny valley” feeling, but it’s something of an exemplar.

The show is acutely, abjectly manufactured in a way that feels like it ultimately obviates the need for people to be writing it at all.

In the current strike by television writers, the use of A.I. is a central issue. Some television writers themselves should perhaps do a better job of showing that they bring something to the room that Large Language Models wouldn’t.

