The main thing I’ve been working on lately is a sci-fi thriller screenplay set decades from now in a Chinese lunar colony. Writing has been slow going because I always find creative writing much slower than my opinion journalism and criticism, but also because of factors unique to this moment that form important background to the story.
To wit: I’m trying to imagine two very big differences between this future world and ours. First, I’m trying to imagine a world dominated by China in a way that is somewhat analogous to American dominance in the 20th century or British dominance in the 19th century. I’m not sure that a world like that ever will exist—it may be that the ongoing collapse of the American-led world order leads neither to an apocalyptic global war nor a power transition to a Chinese-dominated world order but to a world of genuinely multi-polar competition. Indeed, I think that’s the most likely outcome. Maybe that will involve stable spheres of influence with China dominating the most economically important sphere, or maybe it will be a more kinetic and chaotic world, but either way I don’t think we’ll see a world where China is succeeding the United States in preserving freedom of the seas (or their outer space equivalent), setting terms of international trade, anchoring cooperation on various global issues, etc. For plot purposes, however, I’m imagining a world in which, to some degree, China does prove a successor of sorts. And, for good reason, that’s proving very hard for me to imagine. I made the choice deliberately—it’s not just an ancillary fact about the world I’m building that I could easily change, but is implicated fairly deeply in what the film is about—but it’s proving a troublesome choice.
More seriously, I’m trying to imagine a world in which AI has become a major factor in human life. As with Chinese quasi-hegemony, I’m not trying to make accurate predictions, which is good because if I were reliably good at that I should monetize that ability and make a ton of money—whereas, contrariwise, I am in fact exceptionally bad at predicting or even understanding technological developments specifically. But I do want to at least be plausible, and even that bar is proving hard for me to clear. I think there’s a reason why so much of the art that takes AI as part of its subject is both dystopian and fundamentally philosophical in nature. A movie like 2001: A Space Odyssey or A.I. Artificial Intelligence or Her or Ex Machina or I’m Your Man or this year’s Companion (which I haven’t seen yet, but intend to) tries to one degree or another to sketch in how the world will change as a result of AI, but it really is just a sketch, with a lot of implausibility around it. All of them lean to a considerable degree on the familiar because otherwise they would just be too disorienting. The audience would lose sight of the human story and of the philosophical questions that are at the heart of each film.
But I keep getting tripped up by thinking along the lines of “but if AI can do that, then surely it would also be doing this and this and this” to the point where it’s hard to imagine what life in an AI-saturated world could actually be like. Which is what leads to pure dystopian (or, more rarely, utopian) projections, things like the recent theater piece, The Antiquities, which I saw at Playwrights Horizons a few weeks ago. There’s nothing wrong with using art to wonder about the possible looming obsolescence of humanity. But I’m trying to tell a more generally typical thriller story set in a world where AI is a major aspect of human life, with major economic and sociological implications, and where I want that future to feel plausible. And that’s proving really hard for me to do.
I’ll keep plugging away though, if only to see if I can do it.
Meanwhile, you know what I’ve been doing here on Substack, but here’s some of what I’ve done elsewhere:
I had a piece some time ago in Modern Age, “Encounters with East and West,” about a new play, Salesman in China, which I saw at Stratford this past summer, and a movie from 2023, Perfect Days, both of which I deeply enjoyed and which said something important and, I think, contrary to contemporary shibboleths about cross-cultural exploration. It’s been out from behind the paywall for a few weeks now, and so I’m overdue to promote it.
I have a new piece in Modern Age, “We Are the Robots,” no doubt inspired in part by my difficulties writing this screenplay, about the recent films The Wild Robot and Robot Dreams and the current Broadway musical, Maybe Happy Ending. I think it’s currently behind a paywall—but I may be wrong, and if I’m right, perhaps this will be a good opportunity for you to subscribe?
I did a podcast with actor, writer, director, stand-up comedian, jazz clarinetist, surfer dude and all-around fun guy Bill Kalmenson. The conversation was long and wide-ranging and made me eager to do such things again. You can listen on Apple or Spotify or just click below:
Finally, tonight begins the celebration of Purim. A lot of folks are noticing chapter 9 of the Book of Esther for the first time this year, and brooding about it in the context of the events of October 7th and the war Israel has conducted in response. I won’t add to the brooding; I did my share of that in this piece on Psalm 137. Rather, I’ll just say that, in my opinion, chapter 9 should be read along with the rest of the book, which is to say, within the same interpretive framework. Meaning: if you read the Book of Esther as a story about the hidden hand of providence assuring the triumph of justice, then chapter 9 will likely sound to you like an aspect of that justice. If, on the other hand, you read the Book of Esther as a kind of parody of that idea—as a story set in a world of absurdity and chance ruled by a mad king, where God’s name never appears and His hand cannot be discerned—then chapter 9 will read more open-endedly. I think that if the Book of Esther were not canonical, we would be much more inclined to read it the latter way, but its canonicity drives interpretation in the other direction.
Several years ago, I wrote a piece for The Jewish Review of Books called “Hidden Faces and Dark Corners” precisely about this process, showing that it operates even on texts that are not explicitly religious by putting the Book of Esther into dialogue with Shakespeare’s play, Measure for Measure. While the Book of Esther appears to be a parody of biblical accounts of God’s sovereignty and justice, it has been canonically reinterpreted in rabbinic understanding as its opposite, as being about how the hidden hand of divine providence assures ultimate justice. The absence of God’s name interpretively becomes the signifier of His presence. Shakespeare—I believe—set out to show the absurdity of this very conceit, of the idea of the hidden hand secretly arranging for just outcomes and for the ascent of the human soul, by putting it onstage, with a mad-seeming ruler behaving just like that imagined providential deity. His play also takes one human soul on an emotionally harrowing journey to precisely the elevated end to which Shakespeare’s Christianity (and, I would argue, Judaism as well) calls humanity to aspire, juxtaposing to ironic effect this moving transformation with the absurdity of the hidden ruler’s behavior that brought it about. Yet this very irony has itself been reinterpreted and practically tortured out of existence to turn the play into a pious Christian allegory.
I think we could all do well to recognize the pervasiveness of this interpretive tendency, what one might call (with apologies to Spinoza, Freud and people who actually speak German) a Kohärenzantrieb, the drive to impose coherence. But at the deepest level, there is a kinship between utter freedom and total randomness. Our president is reminding us of that truth daily, but it also has deep theological implications, which I explored a bit here in another Purim-related post.
In any event, I want to wish everyone a Happy Purim. By the end of the holiday may we all be unable to tell the difference between Donald Trump and Kamala Harris.
I'm reading Asimov's Foundation to my son. Set over 10,000 years in the future, it does not anticipate smartphones, a mapping app (there is a device that dims and intensifies to tell you if you are going in the right direction), or self-driving cars. Among the first things Gaal Dornick does is take a taxi.
You are writing not knowing whether AI will achieve artificial general intelligence, and if it does, how that intelligence will differ from human intelligence.
In short, I sympathize. Amazing how well 2001 holds up.