I'm reading Asimov's Foundation to my son. Set over 10,000 years in the future, it does not anticipate smartphones, a mapping app (there is a device that dims and intensifies to tell you whether you are going in the right direction), or self-driving. Among the first things Gaal Dornick does is take a taxi.
You are writing without knowing whether AI will achieve artificial general intelligence, and, if it does, how that intelligence will differ from human intelligence.
In short, I sympathize. Amazing how well 2001 holds up.
Fair enough, but I don't think that's really analogous. It's a little more like, in 1910, writing a science fiction novel set in 1970, in a world transformed by the automobile, and trying to get it realistically right. Like, you probably wouldn't get it right -- would you imagine the transformation of America into a suburban nation and the decay of the cities? The invention of adolescence as a distinct stage of life and the concomitant changes in sexual relations? The rise of the oil-producing nations of the Middle East? Probably not. But at a minimum you'd want to try to imagine it, not just paste "more cars" onto an otherwise unchanged horse-and-buggy society.
AI has the potential to be far more transformative than the automobile, and we're also far more aware right now of that potential than most people in 1910 were about how the automobile was going to change their society. I don't need my world to be right, but I want it at least to be plausible. But it poses a real challenge. My main character is the chief of security for the lunar colony. Does she have deputies? What does she need them for -- what can they do that AI or AI-controlled robots can't? What does *she* do that AI or AI-controlled robots can't? Does anybody talk to anybody, or do they just communicate through a computer interface? Why is anybody on the Moon in the first place -- why not just robots? I need to answer those kinds of questions in ways that don't scream "that doesn't make sense" while also still leaving the potential for, you know, cinema. (There have to be people, they have to talk to each other, etc.)
As it stands, there's a pervasive background feeling that a lot of what humans are doing is make-work. Watching robots do jobs, in case they make a mistake; doing jobs that an AI could probably do better, because we want a human to be "accountable" for decision-making; etc. That set of choices serves the plot. But I still don't know. In particular, how much situational awareness would someone in her position have, at all times? Might be a whole lot, right? Too much for my plot to actually happen? Maybe. Does it matter? Maybe not?
It is indeed amazing how well 2001 has held up.