Abstract
Chatbots enable machines to emulate human conversation, and recent developments have produced many online systems available to the public. Although a few studies have investigated how humans interact with such programs, we are not aware of any that have analyzed the resulting transcripts in depth. In this study, students interacted with two Web-based chatbots, Rose and Mitsuku, for five minutes each and then evaluated how well they thought the software emulated human conversation. We reviewed the transcripts and found that students used fairly simple language and made many typographical errors. There were no significant differences between the two systems on our experimental measures, but we found that Rose tended to change the topic more often, while Mitsuku seemed more argumentative.