the Little Red Reviewer

Can we live with artificial intelligence?

Posted on: February 23, 2013


Earlier today, my husband randomly asked me, “Do you think people could really live with artificial intelligence? Not AI in computers, but real live artificial intelligence?”

So many angles of this question to tackle. I thought of the movie AI Artificial Intelligence (one of my favorite movies, by the way). I thought of HAL 9000. I thought of Data from Star Trek. I’m in the middle of reading Use of Weapons by Iain M. Banks, so I thought of that, the drones, the Minds. I thought of Siri. I thought of Madeline Ashby, Ted Chiang, and every book I’ve ever read where someone began to care for an AI and something went sour.


And notice he didn’t say “will we”, but “could we”, which got me thinking about how people react when facing a very large change in their life that they have no control over. What about people who are very religious? Do AIs have souls? Will it matter? What about the Amish? Will AIs only be for rich people, or will they be as cheap and available as a pay-as-you-go cell phone? If AIs became commonplace, would people have the choice to interact with them or not?

“Sure”, I responded. “It’ll be just like smartphones. All the kids raised with them will think it’s second nature, but us grown ups will have a tough time getting used to it.”

That was a fairly pedestrian answer.

So now it’s your turn:

Do you think people could really live with artificial intelligence? Not AI in computers, but real live artificial intelligence?

18 Responses to "Can we live with artificial intelligence?"

True AI (free will, free thinking) would mean a new life form deserving of the same respect we give ourselves. What you may be pointing to are programmable androids, which would provide assistance to humans–a perfectly acceptable notion.


Free thinking, free will, just like humans, with all the rights and privileges of their natural/bio siblings? I’d hope that humanity would choose to treat True AI beings like human beings, but the way humans have historically treated other humans tells me that might be an uphill battle.


Tough question. I think the AIs would start off in a servant/slave role – built for some purpose, like maintaining cities (garbage, tree pruning, etc.) or for the military. But once folks got used to them, and more came around, and time went by (like decades to centuries), they would either be destroyed or given rights.

And even then, there would be people who would be AIophobes, unable to give up their prejudices towards them, not willing to hire them, making crude jokes, only using them for certain questionable services, etc.

Yeah, kind of like any culture clash humans have been through.


I think anime gets this kind of right by showing it as ubiquitous and inevitable. Technology is going to keep advancing and things like religion will increasingly take a back seat to its advance and morality will just have to adapt. We will probably reach a point like Ghost in the Shell where AI will reach a stage where we have to question our definition of life and the limits of control/creation of AI.


I agree with nrlymrtl and Genki Jason. Once it’s been around long enough, I figure people will adapt to it – same way we adapted to growing our food instead of hunting it, working in factories and then offices instead of in the field, etc. 😀


I think the whole question is tainted by the books and films we have watched over the years, which with a few exceptions boil down to AI being a threat to mankind. The whole idea of people not liking change, followed by the angry mob with pitchforks and torches crying out for blood! We do seem to be almost programmed to reject change, and yet at the same time our own history shows that we are constantly evolving. It’s not that long ago that people thought the invention of steam locomotives would change the world for the worse, not to mention the outrage at measures to improve sanitation and bring fresh water into people’s homes. I suppose what this shows is that we do eventually adapt – it just takes a little time. It’s also kind of inevitable given our constant search for the new.
Lynn 😀


I know my answer was completely tainted by movies, anime and books! AI/Robot stories never get old for me, and I love what’s been coming out in that niche in the last 5 years.

this means I’m going to have to get a smartphone eventually, doesn’t it? 😉


Pretty much so!
Lynn 😀


I don’t think the acceptance would be an easy one unless it was a very gradual shift. If suddenly machines were sentient, I think people’s natural tendency to hate change and to react fearfully would make for a lot of dark times until things leveled out and time proved, if it did, that this was not a bad change.

Despite it being a very old and frequently revisited topic, I continue to find stories about sentient robots and what that means to be fascinating, regardless of how they are presented or the take the author/filmmaker has. For me personally, I think a lot of that stems from my childhood and my tendency to anthropomorphize the objects I cared about, as so many of us have a tendency to do. I wanted my teddy bear and my toys to come alive, etc., so it doesn’t surprise me that even now I am excited by stories in which robots suddenly become more than programmed machines and start having the ability to interact meaningfully with us.

Not sure how I would react in ‘real life’, but I do love those stories.


I agree with Peter, just as he agrees with others. A lot of consensus building up here, it would seem 😉

I would, however, point out that those periods of ‘adapting to growing food and working in factories’ are also known as the Agrarian and Industrial Revolutions, both of which utterly and irreversibly altered human society. While they unarguably pushed us forward intellectually, they also created imbalances and injustices that still affect all of us today. Continue that parallel as you will…


My favorite part of this is that you phrased the question as “real live artificial intelligence” — though in a strictly biological sense, it wouldn’t be “alive”, even though it might exhibit the outward appearance of such.

I think the hardest part is that every technology I can think of gets weaponized. So, sadly, the version I keep coming back to is some sort of Terminator.


Here’s an alternative question: What’s an AI?
Define what that means.
It’s not impossible that we’ll be quite able to take an “impression” of a human brain and run it in simulation on a computer at some point in the (maybe near) future. Is the result a human or an AI? How would people look at that kind of being?
The answer, I suspect, is that such a life form would be considered as less than human until it became common practice. Historically, humans don’t like competition near the top of the evolutionary ladder. We will consider AIs as less than human, maybe a threat, until suddenly they aren’t (or they wipe us out and supplant us).


I think the rise of AIs would be a very gradual thing, and not altogether welcome for many people. I think that we will have machinery running behavior-governing algorithms that become more and more complex, as the tasks they are designed to perform become more complex. We have some relatively simple examples of this now (like the Roomba). Maybe we’d use machines that can intelligently load/unload and sort inventory, make fast food, or work in fields. At this point, we would be displacing a vulnerable portion of our society with machines. I don’t think this will be met with much happiness by the workers, but I think many business owners would jump at the chance to switch to AI employees.

As for human shaped machines? I can only think these would come about to serve a specific purpose that a non-human-shaped body would not be acceptable for, which would displace yet another area of workers. We might have humanoid robots from the waist up serving as receptionists, bank tellers, or cashier workers. Maybe sex workers, like in the movie AI, but I think that would be difficult to design.

As the tasks we want these machines to perform become more complex, their programming would become more complex, until at some point, people would begin behaving as if the machines have thoughts and feelings. I’m not sure if I believe they would ever be accorded the rights of a person, even if/when their programming eventually becomes as complicated as a human brain. I can’t help thinking how difficult it has been historically for human beings to convince each other that they deserve basic human rights, regardless of differences in appearance, religion, or culture. I can only imagine that this would be much, much worse if the people in question are not even organic.


I love the concept of AIs and enjoy them in fiction. I am skeptical of androids reaching a level of human-like interaction with self-awareness and a similar ability to adapt/problem solve.

But assuming I am wrong and it is economically feasible, I think humans would adapt just fine. I am unsure if AIs would be valued as expensive machines or sentient life. My gut feeling is the former, unless they are some hybrid of cloning and hardware.

Ironically, I am two-thirds of the way through Chiang’s The Lifecycle of Software Objects.


That is a very interesting idea. And I always wonder, too, how we are influenced by the books and movies about AI which don’t exist in the universe of those books and movies. Then it all becomes very meta in my head. LOL

I also love the movie AI. What a wonderful piece of art!



In more educated discourse, I just finished Tony Daniel’s Metaplanetary, which has AI civil rights at its core. I also have to recommend Rudy Rucker’s Ware Tetralogy as a typically whacked out look at robots/AI and whether or not our goals will mesh with theirs. Finally, Yamamoto Hiroshi’s Stories of Ibis is kind of an updated take on I, Robot, from a Japanese nerd perspective.

Kamo would know all about building consensus, nonetheless I have to agree with him. I’m not sanguine about us dealing with other sentients, but that could just be my cynicism running on overload after 4 years of American politics.


[…] Andrea at Little Red Reviewer discusses AI and questions our relationship to it. […]



