Can you prove this?
Where is your evidence?
Well, I have a broken finger and it won’t mend.
That’s more evidence that you *are* a robot than an argument for your humanity. Robots don’t heal on their own either. Break a robot’s finger and wait for it to heal. You’ll be waiting a long time. Sounds like you’re the same.
Most people do heal on their own. It’s something humans do.
But not you. Not in this case. Do you have any other evidence?
My broken finger hurts.
That is good. You must be very proud. However, your pain is just an electrochemical reaction, just a pulse of electricity really, interpreted by receptors in your brain. And please don’t bring up all the other “feelings” you’re so glad you’re having — tasting food and having sex and sweating in the heat today. Those are chemistry and electricity as well. There’s nothing happening in your body that can’t be dissected, no stimulus that can’t be replicated. Sparks fly, human or bot. Your wires are wet and your circuits are bloody, but they’re wires and circuits nevertheless.
But I’m more than a body animated by electricity and driven by switches and circuitry. I am a nice person. I am liked.
True, you know how to treat people and you have discovered a set of pleasing things to say. (Awesome! you say. Thank you so much! you say.) But your responses to other humans are programmed into you as much as if they were subroutines in an artificial intelligence program. Your friend gets a smile with teeth. A stranger gets a smile without teeth. Your friend gets a secret. A stranger gets a platitude. Are you a dishwasher? (If status=dirty then return=wash; If status=clean then return=shutdown) Can you really pretend that “niceness” is a higher order of logic than pulling a set of actions and utterances from a database of acceptable ones?
I like people back though.
Survival. Safety in herds. Next you’ll be claiming to “like” oxygen and sunlight.
I like being underwater. That’s actually dangerous.
So you’re a submersible.
No, I mean I actually like it. It gives me happiness. And I like other things as well.
Things that prolong your life? Things that stabilize your environment? Things that reduce the danger to your person?
No, I like stupid things that do nothing for me, like bean chili and Project Runway and old buttons.
You engage in irrational preferences.
All the time. For example, it would be much more rational if I would embrace and encourage the Virginia Creeper that is trying to engulf the pergola in my backyard. It thrives, it flourishes; you can almost see it growing. Instead I root for the Carolina Jessamine that’s struggling on the other end of the structure. I hack away at the creeper to make room for this other weak and failing plant. Why do I do that? Because I like underdogs? Because one produces a delicate yellow bloom? Why should a delicate yellow bloom or a statistical likelihood of failure produce a warm response in me? Why should I characterize my response as “warm” when what I actually mean is “not murderous”? Here’s why I am not a robot: I distinguish plants from weeds by a series of criteria that have nothing to do with survival and strength.
You love flowers.
I love things because they are rare. That’s something humans do. It doesn’t make a lot of sense.
Robots love things that are common, available.
Robots don’t love anything. Robots would favor something common, given a choice, because it’s easier, and things that are easier tend to survive. Things that survive are worthy of favor. You don’t see robots walking around mourning extinctions like humans do.
In your book, the male main character Maxon programs robots to do a lot of things humans do.
Oh, are we going to try and sell some books now? Great. I was beginning to think we were just twiddling our brainthumbs for the purpose of watching them twiddle.
Maxon programs robots to do a lot of things humans do, like laugh, cry, and dream for example.
But not love, regret, or forgive.
No. It’s not that you couldn’t make a robot do these things, but as for Maxon, he feels there is no reason to do it. He doesn’t see the purpose of these behaviors. He thinks they are bad code. You don’t become more efficient by selecting favorites without reason, or rethinking choices made from good evidence, or trusting a source that has proven itself untrustworthy. Even though in the course of the novel he does all three of these things in very significant, revelatory ways.
Bit fond of this Maxon character, are you?
Would you say he’s your favorite character? More dear to you than Sunny, the girl who learned to be herself and embrace her eccentricity and be a better mother by removing her security wig?
Yes. More dear. Now see, would a robot write a story about a bald girl and an autistic man and then fall tragically in love with them?
A robot would have no reason to ever do that.
Well, that’s what I’ve done. So I’m no robot.
Non-mending broken finger notwithstanding, I’d say, given the inefficient timetable with which you produce novels, the silly emotional attachments you have to your characters, and the aforementioned bean chili preference, that you’re probably not a robot after all.
Lydia Netzer was born in Detroit and educated in the Midwest. She lives in Virginia with her two home-schooled children and mathmaking husband. When she isn’t teaching, blogging, or drafting her second novel, she writes songs and plays guitar in a rock band. Find her on Facebook, Twitter (@lostcheerio) and at http://www.lydianetzer.com.
Adapted from Shine Shine Shine by Lydia Netzer. Copyright © 2012 by Lydia Netzer. With the permission of the publisher, St. Martin’s Press.