NSF Grant to Support Augmentative and Alternative Communication
The team will develop augmentative and alternative communication technologies, enhanced by artificial intelligence, to improve outcomes for individuals with developmental disabilities who have limited speech and language.
Christine Holyfield, an associate professor of communication sciences and disorders at the U of A, is the principal investigator. Elizabeth Lorah, an associate professor of curriculum and instruction, is the co-principal investigator. Roughly half of the grant will go to computer science researchers at Temple University as a subaward.
The researchers noted in their proposal that millions of Americans struggle with limitations in their speech and language. These limitations can include a “lack of speech, speech that most communication partners do not understand and limited language understanding.” They can also estrange people from daily life, restricting their ability to participate in social, educational and occupational activities.
Holyfield underscored what is at stake when language is limited by paraphrasing Michael Williams, a communication technology advocate: “Communication is a basic human need, a fundamental human right and an elemental human power. It is through communication that all individuals participate and effect change within their daily lives.”
Devices for augmentative and alternative communication, or AAC devices for short, can help people with speech and language limitations express their thoughts, needs and ideas. They are not without challenges, though, including teaching users to interact with these devices, personalizing the devices to fit the communication needs of each user and making communication as frictionless as possible.
The team believes that artificial intelligence may be the key to rapid advances in the development of AAC devices. The goal of the convergence grant is to create AAC technologies enhanced by artificial intelligence. Promising areas of computer science include natural language processing and computer vision, which trains computers to recognize and interpret the visual world and could supply useful labels and prompts for users.
Through a collaborative, iterative process, the research team will gather input from stakeholders, observe how users explore AI-powered AAC devices and evaluate the concepts through rapid prototyping and user testing. Improved communication support from these devices would lead to better educational outcomes and increased workforce participation for individuals with communication limitations, while also helping their families and the professionals who support them.
“The more burden we can take off the individual and put on technology, the better,” Holyfield said.
The Convergence Accelerator program and the awarded grant are part of NSF’s new Directorate for Technology, Innovation and Partnerships. The grant falls under Track H, “Enhancing Opportunities for Persons with Disabilities,” and represents Phase 1, the Team Convergence and Proof-of-Concept Development phase. After the first year, Holyfield’s team will deliver a formal NSF pitch and submit a Phase 2 proposal, which must address how to make the innovation sustainable. Selected teams then advance to Phase 2, which provides additional funding of up to $5 million.