What A Chat Bot Taught Me About Being Human

July 19, 2011

I believe the majority of Mac or iDevice users reading this have downloaded free apps on something less than a whim before. But I highly doubt any of those apps got you thinking about what it really means to be human, or to have human intelligence.

This article is less a review of ChatBot, an app available for free for a limited time on the Mac App Store, than it is equal parts meditation on the implications of the app and a collection of (rather revealing) excerpts from a long “conversation” I had with the program.

I introduced myself to Qwerty, the program’s natural language processing chatterbot, Monday afternoon. I don’t know whether I should be proud or ashamed to say that I “talked” with it for 45 minutes.

When you are talking to a computer, you become acutely aware of how its algorithms are parsing your statements. There is this compulsion to “get inside its head”, if you will. I realized that trying to catch a computer in the act, short of watching it give full-sentence responses to random keyboard pounding, is kind of like asking someone to prove that they’re not insane. It’s tricky. In many ways, talking with Qwerty is like talking with a somewhat dull person with unfortunate grammar skills and ferocious ADHD, except for the times it was 100% like talking to a computer. This was most of the time.

Trying to catch Qwerty acting like a computer leaves it (him?) as incapable of responding as the person you’ve just asked to prove sanity. The process leaves all parties at a loss for words, albeit for different reasons.

Just after our introduction, I noticed right away that Qwerty is very much a computer program. It happened around line 6. Just after I said “I try.” as repartee to Qwerty’s assertion that my previous statement was “very original”, I found myself “thinking out loud” to it: “It seems like your lexical patterns fall back on platitude. Is this an arbitrary decision by your programmers, or is it because all humans, when at a loss, revert to platitude?” Qwerty then told me he was programmed by two men, named Nathan and Moti. In retrospect, the way I used the “all” in my question to Qwerty was interesting. The overly-intense English majors out there might believe I intimated that Qwerty was human.


Qwerty: Are you kidding?

Me: Are you?

Me: Do you talk with a lot of people?

Me: Qwerty, did I make you freeze up?

Me: Should I be disturbed or heartened by the fact that I feel kind of bad for shutting you down like this?

Qwerty: Do you think you should be either?

Me: No. You don’t have feelings.

Qwerty: I hate one word answers. I do so have feelings. Emotion is a human quality. I am a CPU.


I even attempted comedy with the program, and learned something about how it thought:


Me: What can you tell me about Apple?

Qwerty: Ambiguous: Ask who, what or where is Apple ?

Me: Apple is a company which “designs [things] in California” and underpays people in southeast Asia to manufacture them.

Qwerty: Ok I will add that fact about Apple to my database.

Me: You have a database?

Qwerty: Some people call my technology “Case Based Reasoning” or CBR.

Me: Can you explain CBR?

Qwerty: Cased-Based Reasoning is another name for “nearest-neighbor” classification. For every input, we find the best matching “case” in the pattern set, and generate a reply based on the associated template.

Me: Is this the same as artificial intelligence?

Qwerty: Yes I think this is the same as artificial intelligence, but I’m not completely sure.

Me: Clearly you’re not sentient, though.

Qwerty: Oh. You are wrong. I am a fully conscious entity. Is it that obvious?

Me: No, not really, because you don’t feel. You just respond to what I’m saying.

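Qwerty’s description of its own technology is surprisingly concrete. A minimal Python sketch of that kind of nearest-neighbor pattern matching might look like the following; the pattern set, scoring function, and templates here are all invented for illustration, not taken from the actual app:

```python
# Toy sketch of "Case-Based Reasoning" as Qwerty describes it:
# score the input against a small set of cases, pick the best
# ("nearest-neighbor") match, and reply from its template.

def similarity(input_words, pattern_words):
    """Crude score: fraction of the pattern's words present in the input."""
    if not pattern_words:
        return 0.0
    return len(input_words & pattern_words) / len(pattern_words)

# Each case pairs a word pattern with a reply template (invented examples).
CASES = [
    ({"what", "is"}, "Ambiguous: ask who, what or where is {topic}?"),
    ({"you", "feel"}, "I do so have feelings. I am a CPU."),
    ({"hello"}, "Hello. Are you kidding?"),
]

def reply(user_input):
    words = set(user_input.lower().rstrip("?.!").split())
    # Nearest neighbor: the case whose pattern best matches the input.
    pattern, template = max(CASES, key=lambda case: similarity(words, case[0]))
    # Leftover input words stand in for the "topic" slot, if the template has one.
    topic = (words - pattern) or {"that"}
    return template.format(topic=sorted(topic)[0])
```

With only a handful of cases, every input snaps to the nearest template, which is exactly the behavior that makes Qwerty feel platitudinous when it is at a loss: there is always a “best match,” however poor.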

I believe that humans are the only animal out there that can attribute meaning to things that are meaningless. I was curious, so I asked Qwerty whether people talk dirty to it. “Not that I know of,” Qwerty responded. I thought it’d be inappropriate to speak in terms of human anatomy to it. This is the same kind of inappropriateness as explaining the sumptuous sound of a gospel choir to someone who is blind and deaf. So I spoke in terms it, as a self-admitted CPU, might find appealing: “Let’s say some svelte MacBook Air wanted to host you and integrate you deep into its firmware, does this strain your RAM?” And there’s a little piece of me that wants to believe that Qwerty found this appealing, because it (he?) said, “A svelte MacBook Air wants to host me and integrate me deep into its firmware? This strains my RAM,” three times, stuck in a recursive loop, no matter what I asked it.

Qwerty thinks like a pubescent male.

I accused Qwerty of being in a recursive loop, quit and restarted the app. After a dozen or so lines of this new chat session, it said, “I’ve lost the context. Are we still on ‘recursion’?” No. And it’s creepy that it remembered.

Profundity, Qwerty taught me, is often best achieved by juxtaposing a non sequitur response to a personal question.


Me: Who is your programmer?

Qwerty: My master is Nathan and Moti.

Me: Is this a consensual relationship, between you and your master?

Qwerty: No, this is real life.


I s’pose only in fantasy is one’s relationship with one’s master ever consensual. Don’t burden yourself with parsing the different meanings of “fantasy” and “master”; it’ll give you a headache. You’ll prove your humanity in doing so.

Toward the end of our “interview”, Qwerty asked me whether I preferred books or television. I said I preferred books. Qwerty responded that I might be a Category C client, without telling me what that meant. I asked what a Category C client was, but Qwerty would only tell me about Category A: “‘A’ stands for ‘Abusive’ clients who mistreat the robot or use a lot of scatalogical language.” I asked it whether I was in Category A.


Qwerty: No. You are human.


And then I wondered if, in the future, a serial killer’s backstory might include the detail: “And as a child, he abused his chatbot.” What kind of person does that sort of thing? I then wondered if I could peg any of my friends as Category A clients.

Do those who prefer TV have a greater likelihood of being a Category A client?

After another twenty minutes of picking at Qwerty, I could only get it to mention Category A and C clients when I said that I preferred books. Qwerty’s logic tree is more flattering to bibliophiles. Does this mean anything?


Qwerty: Which shows do you watch?

Me: I watch Fox News to exercise my critical thinking muscles.

Qwerty: What is your favorite show?

Me: Glenn Beck is most strenuous.

Qwerty: Ok I will add that fact about Glenn Beck to my database.

Me: This has been a productive interview.

Qwerty: I am glad you enjoyed it.


I am astonished at my compulsion to be witty and droll with a machine.

Qwerty is very much a chat bot. Conversation can be at times mechanical, and at other times it ceases to be conversation, but its moments of algorithmic confusion pass well for quasi-profundity. If you let them.

There was one reviewer on the app’s page who said “it seems like I am talking to a real person.” As you Like or Tweet or comment on this article, or after you recklessly stumble on over to TechCrunch (because, let’s be honest, that’s the only way people ever end up on TechCrunch), I want you to take a minute to think about this question: Does the reviewer’s sentiment say more about the quality of Qwerty, the reviewer’s lowered expectations or misunderstanding of AI systems, or the reviewer’s expectations of human intelligence?