the Little Red Reviewer

Guest post from Madeline Ashby, author of iD

Posted on: July 2, 2013


Madeline Ashby is the author of two of my favorite recent novels, vN and iD (links go to my reviews). In the Machine Dynasty series, Ashby envisions a near-future world where von Neumann self-replicating androids have become an everyday part of our lives.  They raise and teach our children, take on dangerous occupations, and were supposed to make our lives easier.  Sounds easy, right? Not so much, when you get the story from the vN’s point of view.

For more information about Madeline Ashby, her fiction, her travel schedule, and more, I encourage you to visit her website and follow her on Twitter. More than that, I encourage you to read her amazing fiction!

My question/prompt to Ms. Ashby for her guest post was:

Once upon a time we started with Asimov’s unemotional humanoid robots, and then we evolved to robots who could be tricked/programmed to believe they were human and robots who desperately wanted to be human. Now, in the Machine Dynasty series, we have robots who know they aren’t human, but tend to feel emotions even more strongly and powerfully than many people.  What’s the next step for robot/AI fiction? Where do we go from here?

And here’s what she had to say:


I think robot subjectivity is still a wide open space for science fiction writers. I think the challenge is to actually dig in to the reality of computer vision, and algorithmic detection of motion, affect, and identity. One of the things I beat myself up for is not digging more deeply into those things. There are other writers who just kill it when it comes to that kind of rigorous depiction of another’s consciousness. Peter Watts is probably the best at it — in his stories “The Things” and “Malak,” he’s able to write exactly the experience that an alien and a predator drone would have, from their perspective, without making any room for the human element. If you want it dumbed down or warmed up, well, that’s just too bad. He’s that disciplined in his approach.
So I think inevitably, we’ll get more of that kind of story. Less anthropomorphizing, and more cognitive re-framing of what “point of view” really means. When you think about it, the robots we work with on a daily basis have a split point of view: there’s what the drone “sees” (white and neon squares on a field of grey), and what the human “pilot” observes (targets). That data and interpretation work together to create what we might call a vision, or a perspective, but by themselves neither component is entirely complete. Sitting at my desk, that’s an interesting challenge. How do I write something so split, so different? How do I write about that kind of sight? How do I establish that type of consciousness as a distinctive, memorable character?
I think taking that challenge on over the course of a novel would be pretty difficult. Doing it for a short story, or maybe as one character in a multi-POV novel, might be easier. Sustaining a specific character over a long period can be really difficult. You hear of actors sort of losing themselves in a role, or having a heart attack after playing Lear or Willy Loman, and it’s basically the same thing (although the stresses of live shows, including the physical demands, are different). Performing any identity is difficult. Splitting a part of yourself off to be something else is difficult. One of the distinctive qualities of the human animal is its ability to take on multiple identities, to live many different lives within the same lifetime, to code-switch. When we see this in other animals, like the camouflage of chameleons and cuttlefish, or the way mockingbirds can imitate the songs of other birds, we immediately feel a certain empathy. But we still view it as a fundamentally defensive or creative act. It’s not how we live all the time. One of the great strides of the late twentieth and early twenty-first century was to get people out of certain roles, to allow them to stop performing identities that they didn’t want.
One of the other things that we’ve realized, going into the twenty-first century, is that there are many different ways to experience humanity. There are people on the autism spectrum who experience loud noises as physical pain. There are people with synaesthesia who can taste the colour of the sky. There’s face blindness. There’s psychopathy. All of these people are humans, but their lived experience is wildly different. So faced with that variety, it’s not difficult to argue for a broader definition of humanity that may one day include people with cybernetic augmentation, or even just Turing-complete machines in themselves. It’s from that perspective that I try to write my books. The vN I write about don’t consider themselves biologically human, but they do consider themselves people. This is why it always strikes me as a little funny when reviewers suggest that the books might be difficult for some people because there aren’t enough humans in them. Who are we to decide, really, who is human and who isn’t?
The emphasis on that last question was mine. It just might be the most important question of the next hundred years.

4 Responses to "Guest post from Madeline Ashby, author of iD"

I really want to read these — I’ve wanted to read the first book for a while. I know this is fickle, but to be honest the cover drew me originally. I love that cover, I admit, but the book sounds great too (plus the second book).
Lynn 😀


I love the cover art too. The vN are trapped in who and what they are, and the cover art shows that. It doesn’t really tell you anything about the story, but it immediately screams “you want to pick me up, you want to know what I’m all about!”, and we did, and we do!


I think this ties in very nicely with Derek and the vN Susie in the prologue – Derek as a machine-like human because of his Asperger’s or similar, and Susie’s opinion of herself:
“Susie treated her artificiality as a different but equally valid subjectivity. That she was the sum total of years of research by multiple teams competing for funding had no bearing on her self-respect. She was a robot, yes, but she was also a person.”

And of course the idea of humans and robots performing ‘humanity’ is the core of that chapter…

Madeline also wrote a great post for my blog on the relationship between humans and AI, and how we sometimes treat people like machines anyway:


[…] a servant and sex slave for humans. What does this mean for the relationship between humans and AI? As Ashby has pointed out, the vN aren’t human but they think of themselves as people. They simply have a different […]



FTC Stuff

Some of the books reviewed here were free ARCs supplied by publishers/authors/other groups. Some of the books here I got from the library. The rest I *gasp!* actually paid for. I'll do my best to let you know what's what.