ACES was conceived, designed, and created by Joshua Hailpern and Marina Danilevsky. The idea for an IM language impairment emulator came up during a dinner in 2009 celebrating Marina's acceptance to graduate school in the Department of Computer Science at the University of Illinois at Urbana-Champaign. Joshua and Marina reasoned that if there was a literature describing a disorder, how it manifests, and what types of errors a particular population makes, then a computer should be able to emulate that impairment. This could, in theory, let family members and clinicians "walk in the shoes" of individuals with a disorder, allowing the general population to experience what it is like to communicate with a language impairment and, hopefully, increasing empathy. Initially, Joshua and Marina considered emulating Apraxia.
Joshua asked Professor Laura DeThorne, from the Department of Speech and Hearing Science, for input on such an IM client; Joshua and Professor DeThorne were already collaborating on another project relating to Autism. Professor DeThorne suggested that Aphasia might be a better impairment to experiment with, due to its large corpus of literature. She also suggested talking with Professor Julie Hengst, also in the Department of Speech and Hearing Science, who specializes in Aphasia.
During the early Summer of 2009, Joshua met with Professor Hengst, who confirmed that Aphasia has a robust literature and might be a prime application for IM emulation software. She further provided Joshua and Marina with a collection of literature about Aphasia and suggested that Joshua speak to Professor Gary Dell in the Department of Psychology. Professor Dell's background specifically dealt with modeling and understanding the probabilities of different errors made by individuals with Aphasia. Joshua then proposed the ACES system to his advisor, Karrie Karahalios, who strongly supported the project.
By the end of the Summer of 2009, Joshua and Marina had run an initial pilot study on a prototype of ACES (at the time called LES, or Language Emulation Software). During Fall 2009 and Spring 2010, Joshua and Marina worked closely with Professor Dell and his models to build a more complete system; it was at this point that LES was renamed ACES. Professor Dell, Joshua, Marina, and Professor Karahalios then designed a user study to assess the impact of ACES on empathy and understanding. This research was conducted in the summer of 2010 with the aid of Andrew Harris. The results showed an impressive impact on empathy and understanding of Aphasia, and were subsequently published at CHI 2011, the premier Human Computer Interaction venue.
The following fall, Marina and Joshua investigated applying network modeling techniques to the logs from ACES, and determined that computers can in fact predict conversation partners based solely on a person's change in conversation structure.
In Spring 2011, Joshua, Marina, Professor Dell and Professor Karahalios conducted a follow-up study with ACES to assess the realism of the distortions. This study validated ACES' distortions through a Turing Test experiment with participants from the Speech and Hearing Science community, illustrating that text samples generated with ACES distortions are generally not distinguishable from text samples originating from individuals with Aphasia. This work also explored ACES distortions through a 'How Human is it' test, in which participants explicitly rated how human- or computer-like the distortions appeared to be. Results from this study were accepted for publication at the ACM SIG ASSETS 2011 conference in Dundee, Scotland.