Overview

The aim of this work is to develop technologies, computational methods, and algorithms that can be used to assist the stroke population. We are currently developing a socially assistive robotic framework that can be used to assist with and speed up motor deficit recovery of individuals post-stroke.

Project Details

Background

This project aims to develop effective, safe, user-friendly, and affordable robotic in-home therapy for accelerated recovery of subacute and chronic stroke patients, based on novel human-robot interaction. The primary aim is to develop the technological underpinnings of a stroke intervention: the technologies themselves (robots and sensors), the computational methods (data analysis and signal processing), and the algorithms that extract useful information from the resulting data.

Our motivation is stroke, a major cause of neurological disability and health care cost in the United States. The number of cases exceeds 800,000 every year; more than half of the survivors are left with motor disability, and two thirds of those are still disabled five years later. A key problem in post-stroke rehabilitation is the amount of active exercise required to maximize functional recovery of the affected region and at least partial return of lost function. Without a concerted effort to use the affected extremities, in spite of their diminished functionality, the patient adapts to the loss of function through neural and behavioral compensation and never regains the lost capabilities. Supervised home therapy is an ideal scenario because it decreases the need for resource-intensive expert supervision. The goal of this project is to make such in-home therapy automated, effective, and affordable. To this end, we aim to:

  1. Develop computational methods and algorithms that can be used with sensor data to create dynamic human-robot interaction for motor task deficit recovery.
  2. Develop an adaptive, inexpensive, reliable, and user-friendly in-home robot-based rehabilitation framework that can monitor the participant and remind him/her to use the weaker arm for activities of daily living, without physically handling the patient.
  3. Compare and evaluate the relative effectiveness of this new system against standard manual approaches (e.g., home diaries, homework assignments, and compliance devices).
  4. Determine the efficacy of our system at monitoring and assessing motor function capability in individuals post-stroke.

Unlike most current work in Rehabilitation Robotics, which focuses on physically aiding movement, our approach is hands-off, relying instead on human-robot interaction to achieve the desired therapy goals. Robots that physically interact with humans must address major, as-yet largely unsolved challenges of safety, cost, and liability. In contrast, this project bypasses physical contact between robot and patient altogether, combining collision-free movement, vision-based sensing and following of the patient, tracking of his/her use of the affected limb, and novel human-robot interaction protocols to guide and encourage rehabilitation.
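As a rough illustration of what contact-free, vision-based following might involve, the sketch below keeps a detected person centered in the camera image with a simple proportional controller and stops approaching once the person appears close. The camera resolution, gains, and standoff heuristic are assumptions for illustration, not the project's actual perception or control system.

```python
# Hypothetical sketch of hands-off, vision-based patient following.
# The camera interface, image width, and controller gains are illustrative
# assumptions, not the project's actual perception or control stack.

IMAGE_WIDTH_PX = 640          # assumed camera resolution (pixels)
TURN_GAIN = 0.003             # rad/s of turning per pixel of horizontal offset
FORWARD_SPEED = 0.2           # m/s, capped so the robot never crowds the patient
STANDOFF_AREA_PX = 40000      # stop approaching once the person appears this large

def follow_step(person_center_x, person_area_px):
    """Compute one (forward_speed, turn_rate) command from a person detection.

    person_center_x: horizontal pixel coordinate of the detected person.
    person_area_px:  apparent size of the detection, used as a rough range proxy.
    """
    # Turn so the person stays centered in the image.
    offset = person_center_x - IMAGE_WIDTH_PX / 2.0
    turn_rate = -TURN_GAIN * offset

    # Move forward only while the person still appears small (i.e., far away),
    # which maintains a comfortable, contact-free standoff distance.
    forward = FORWARD_SPEED if person_area_px < STANDOFF_AREA_PX else 0.0
    return forward, turn_rate

# Example: person detected slightly right of center and still far away.
print(follow_step(person_center_x=400, person_area_px=15000))
```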

This work addresses a new niche: in-home, hands-off robotic rehabilitation. We will develop novel human-robot interaction techniques that allow a robot to interact with a post-stroke patient in the home, monitor the patient's use of the affected arm, remind him/her to use the arm, and provide guidance, encouragement, and improvement assessment. All of these capabilities will be provided in a safe, effective, and user-friendly fashion, without physical contact between the human and the robot. The system will be implemented on a physical robot and evaluated with a protocol used for other post-stroke rehabilitation methods, thus providing objective measures of effectiveness as well as useful insights into human-robot interaction in the in-home rehabilitation environment.
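To make the monitor-and-remind behavior concrete, here is a minimal sketch, assuming a wrist-worn accelerometer on the affected arm and a spoken prompt from the robot. The sensor reading, activity threshold, and reminder wording are illustrative placeholders rather than the system's actual implementation.

```python
# Hypothetical sketch of the hands-off "monitor and remind" loop.
# The wearable-sensor reading, activity threshold, and prompt wording are
# illustrative assumptions, not the project's actual implementation.

import random
import time
from collections import deque

ACTIVITY_THRESHOLD = 0.15   # assumed minimum mean acceleration deviation (g)
WINDOW_SIZE = 60            # seconds of recent history considered
PROMPT_COOLDOWN_S = 120     # minimum spacing between reminders, to avoid nagging

def read_affected_arm_accel():
    """Placeholder for a wrist-worn accelerometer magnitude reading (in g)."""
    return 1.0 + random.uniform(-0.3, 0.3)   # replace with real sensor data

def robot_say(text):
    """Placeholder for the robot's spoken prompt."""
    print(f"[robot] {text}")

def monitor_and_remind(duration_s=600):
    window = deque(maxlen=WINDOW_SIZE)       # one sample per second
    last_prompt = float("-inf")
    for t in range(duration_s):
        # Deviation from gravity (about 1 g at rest) as a crude activity measure.
        window.append(abs(read_affected_arm_accel() - 1.0))
        mean_activity = sum(window) / len(window)
        if (len(window) == window.maxlen
                and mean_activity < ACTIVITY_THRESHOLD
                and t - last_prompt > PROMPT_COOLDOWN_S):
            robot_say("Remember to use your affected arm for this activity.")
            last_prompt = t
        time.sleep(1.0)
```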


Current Studies

Task-oriented training (TOT) has emerged as a promising paradigm in motor neurorehabilitation. TOT, which focuses on challenging, meaningful, quantifiable, and adaptive task practice, has been shown to lead to positive clinical outcomes in patients. These characteristics are ideally suited to our robotic rehabilitation framework, which is inherently quantitative. Further, the framework can use various sensor modalities to determine how well a participant is performing a given task and adapt the task challenge level accordingly, in direct support of the specific aims listed above.
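As one simple illustration of how the challenge level could be adapted from quantitative task performance, the sketch below raises difficulty when the recent success rate is high and lowers it when it is low. The sliding-window size, thresholds, and difficulty scale are assumptions for illustration, not the framework's actual parameters.

```python
# Hypothetical sketch of performance-based task-challenge adaptation.
# Window size, success-rate thresholds, and difficulty levels are
# illustrative assumptions, not the framework's actual parameters.

class ChallengeAdapter:
    def __init__(self, min_level=1, max_level=10, start_level=3):
        self.level = start_level
        self.min_level = min_level
        self.max_level = max_level
        self.recent_outcomes = []            # True = trial succeeded

    def record_trial(self, succeeded):
        """Log the outcome of one task trial (e.g., one button press or reach)."""
        self.recent_outcomes.append(bool(succeeded))
        if len(self.recent_outcomes) > 10:   # keep a sliding window of 10 trials
            self.recent_outcomes.pop(0)

    def update_level(self):
        """Raise difficulty if the task is too easy, lower it if too hard."""
        if len(self.recent_outcomes) < 5:
            return self.level                # not enough evidence yet
        success_rate = sum(self.recent_outcomes) / len(self.recent_outcomes)
        if success_rate > 0.8 and self.level < self.max_level:
            self.level += 1
        elif success_rate < 0.5 and self.level > self.min_level:
            self.level -= 1
        return self.level

# Example: feed in observed trial outcomes, then query the next challenge level.
adapter = ChallengeAdapter()
for outcome in [True, True, True, False, True, True]:
    adapter.record_trial(outcome)
print("next challenge level:", adapter.update_level())
```

The band between the two thresholds acts as hysteresis, so the difficulty does not oscillate on every trial.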

Experiments
Images

Experiment #2
Participant Interacting with Bandit
Bandit, the Button Game, and a Participant

Experiment #1
Robot Setup
Experiment Setup
Wire Puzzle
Wearable Sensors

Pilot Study #2
Robot Setup 1
Robot Setup 2
Experiment Setup
Wearable Sensors

Pilot Study #1
Robot Setup
Experiment Setup
Wearable Sensors

Early Project Summary Poster
Robot with the Therapist and Patient [also in .EPS]

Videos

These videos were taken from the original pilot study, conducted with non-humanoid robots.

Experiment 1
Patient 1 puts books in a bookshelf. The robot uses a human voice and body motion, and is persistent.
Movie (76MB)
The same movie, but reencoded to work for Powerpoint: Movie (123MB)

Experiment 2
Patient 1 puts books in a bookshelf. The robot uses a robot voice and is encouraging.
Movie (18MB)
The same movie, but reencoded to work for Powerpoint: Movie (29MB)

Experiment 3
Patient 1 puts books in a bookshelf. The robot uses sound effects and is not very persistent.
Movie (87MB)
The same movie, but reencoded to work for Powerpoint: Movie (141MB)

Experiment 4
Patient 1 performs arbitrary activities. The robot uses a robot voice and is encouraging.
Movie (19MB)
The same movie, but reencoded to work for Powerpoint: Movie (31MB)

Experiment 5
Patient 1 performs arbitrary activities. The robot uses sound effects and is not very persistent.
Movie (13MB)
The same movie, but reencoded to work for Powerpoint: Movie (21MB)

Experiment 6
Patient 1 performs arbitrary activities. The robot uses a human voice and body motion, and is persistent.
Movie (24MB)
The same movie, but reencoded to work for Powerpoint: Movie (38MB)

Experiment 7
Patient 2 performs arbitrary activities. The robot uses a human voice and body motion, and is persistent.
Movie (75MB)
The same movie, but reencoded to work for Powerpoint: Movie (121MB)

Experiment 8
Patient 2 performs arbitrary activities. The robot uses a robot voice and is encouraging.
Movie (38MB)
The same movie, but reencoded to work for Powerpoint: Movie (61MB)

Experiment 9
Patient 2 performs arbitrary activities. The robot uses sound effects and is not very persistent.
Movie (30MB)
The same movie, but reencoded to work for Powerpoint: Movie (48MB)

Experiment 10
Patient 2 puts books in a bookshelf. The robot uses a robot voice and is encouraging.
Movie (22MB)
The same movie, but reencoded to work for Powerpoint: Movie (36MB)

Experiment 11
Patient 2 puts books in a bookshelf. The robot uses sound effects and is not very persistent.
Movie (21MB)
The same movie, but reencoded to work for Powerpoint: Movie (33MB)

Experiment 12
Patient 2 puts books in a bookshelf. The robot uses a human voice and body motion, and is persistent.
Movie (43MB)
The same movie, but reencoded to work for Powerpoint: Movie (70MB)

Publications
Support

This work is supported by the National Science Foundation Grant Number IIS-0713697, and the NSF CRI Grant Number CNS-0709296. This work has also received support from the USC Provost's Center for Interdisciplinary Research (CIR) Fellowship and the Okawa Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

People

Current
Katelyn Swift-Spong: swiftspo [at] usc [dot] edu
Elaine Short: elainegs [at] usc [dot] edu

Past
Eric Wade: ericwade [at] usc [dot] edu
Jonathan Dye: jdye [at] usc [dot] edu
Khawaja Shams: khawaja [dot] shams [at] gmail [dot] com
Aras Akbari: aakbari [at] usc [dot] edu
Avinash Parnandi: parnandi [at] tamu [dot] edu
Pierre Johnson: pierre [dot] johnson [at] gmail [dot] com
Adriana Tapus: adriana [dot] tapus [at] ensta-paristech [dot] fr