The emerging fields of autonomous systems and biomechatronics require an interdisciplinary effort to tackle the increasing demand for intelligent robots and devices that assist humans in critical applications, including elderly/patient care, search and rescue, civilian and military transportation, and surveillance/exploration. As these systems enter the aforementioned real-world scenarios, their performance in accomplishing complex tasks or solving problems depends heavily on their perception and understanding of potentially unknown or cluttered environments. Hence, major advances are needed in robot/device mechanical design, sensing techniques, sensor information interpretation, and control architectures to allow these systems to explore and interact in varying 3D environments which may contain humans.
- Assistive Robotics and Assistive Devices
- Social and Personal Robots
- Service Robotics and Vehicles
- Robot-Assisted Emergency Response
- Sensor Agents
- Human-Robot Interaction
- Intelligent Perception
- Affective Computing
- Task-Driven Control and Semi-Autonomous Control
- 3D Simultaneous Localization and Mapping
- Environment and Health Monitoring
- 3D Sensory Systems
- Multi-Robot Teams
The objective of our research work is to develop multi-modal sensory systems that are integrated into heterogeneous health monitoring devices. Some of the devices will be wearable on the patient, and others will be placed inside the environment the patient occupies.
This research focuses on the development and use of innovative robotic technologies to provide person-centered cognitive interventions that improve the quality of life of older adults. Namely, the objective of this work is to develop intelligent assistive robots that engage individuals in social human-robot interactions (HRI) in order to maintain or even improve residual social, cognitive and affective functioning.
Specifically, this work involves the design and development of the sensory systems and HRI control architectures for robotic technology to facilitate natural and realistic social interactions during activities of daily living and cognitive exercises for elderly individuals. The control architectures will allow a robot to monitor activities and determine the emotional state and intent of the user. Thus, as an informed intelligent agent, the robot can act as a social motivator by providing appropriate learned behaviours such as cues, words of encouragement, task assistance, and congratulations to facilitate the completion of tasks.
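The behaviour-selection idea above can be illustrated with a minimal rule-based sketch. The state names, progress measure, and behaviours here are illustrative assumptions for exposition, not the actual control architecture:

```python
# Minimal sketch of rule-based assistive behaviour selection of the kind
# described above. The user states, progress measure, and behaviour names
# are illustrative assumptions, not the lab's actual architecture.

def select_behaviour(user_state, task_progress):
    """Map an estimated user state and task progress to a robot behaviour.

    user_state: one of 'engaged', 'confused', 'frustrated'
    task_progress: fraction of the activity completed, in [0, 1]
    """
    if task_progress >= 1.0:
        return "congratulate"   # activity completed
    if user_state == "frustrated":
        return "encourage"      # offer words of encouragement
    if user_state == "confused":
        return "give_cue"       # hint toward the next step, or assist
    return "observe"            # user is engaged: keep monitoring

# Example: a confused user midway through an activity receives a cue.
print(select_behaviour("confused", 0.4))  # -> give_cue
```

In a real architecture the user state would come from the affect-detection modules rather than being passed in directly, and the behaviour set would be learned or much richer; the mapping above only shows the informed-agent structure.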
As the world's population ages rapidly, the growing number of older adults is straining healthcare systems, since older adults use a disproportionately large portion of available healthcare services. While the demand for healthcare is increasing, large numbers of healthcare workers, who are themselves part of this aging population, are also retiring. The drastic increase in the number of older adults who need care, combined with the decline in the number of healthcare workers, could diminish the quality of elder care.
This is particularly true for long-term care facilities, where frail older adults who have more complex cognitive and/or physical needs usually live. For these individuals, it is important to provide quality care while also delivering programs that can enhance their independent abilities, increase their social engagement and provide cognitive stimulation.
Despite their importance in promoting active and healthy living, such activities have been well documented to be lacking in long-term care facilities. Both social and cognitive stimuli have been found to promote the psychological well-being of older adults and minimize the risk of social isolation. However, implementing such interventions requires considerable resources and personnel, and sustaining them on a long-term basis can be very complex and time-consuming for healthcare staff working in long-term care facilities. An alternative approach that could be as effective is the use of autonomous assistive robots to deliver such interventions.
Proof of concept video for Tangy learning from demonstration for high-level tasks from non-expert human demonstrations, January 4, 2017:
Tangy facilitating a Bingo game, a demonstration of the robot's capabilities and the target application, June 7, 2015:
Brian 2.1's interactive abilities being shown and discussed by Prof. Nejat, February 14, 2013:
"Brian 2.0" our socially assistive healthcare robot playing the memory game, March 19, 2010:
"Brian 2.0" our socially assistive healthcare robot displaying body language and facial expressions, August 13, 2010:
Funding Sources: Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Research Chairs (CRC) Program, AGE-WELL (which is supported by the Government of Canada through the Networks of Centres of Excellence (NCE)), the Connaught Award, Canada Foundation for Innovation (CFI), Canadian Institutes of Health Research (CIHR) and Ontario Research Fund (ORF).
Human-robot interaction (HRI) addresses the design, understanding, and evaluation of robotic systems which are used by people or work alongside them. These robots interact through various forms of communication in different real-world environments. Namely, HRI encompasses both physical and social interactions with a robot in a broad range of applications, such as physical and cognitive rehabilitation, tele-operation for surgery, and surveillance. Through understanding these interactions, we can better design robots to suit the functional, ergonomic, aesthetic, and emotional requirements of different users. One user group of particular importance in our research is the elderly population.
Our intelligent perception research investigates complex objects and operational processes using multi-sensory data provided by sensors that differ in spatial, temporal, and spectral resolution and in output sensory format. During perception, hypotheses about percepts are formed and tested based on three criteria: sensory data, knowledge, and high-level cognitive processes.
The objective of our research is to develop a team of collaborative mobile rescue robots that can be deployed in the field, and to study human-robot cooperation between teams of rescue workers/volunteers and the robots, as well as multi-robot coordination between different robotic team members, in order to optimize robot design to meet the needs of rescue workers/volunteers in time-critical rescue missions. In a time-critical search and rescue (SAR) context, it is almost impossible for a single rescue robot to address all the challenges posed by these environments. As a result, teamwork, including human-robot cooperation and multi-robot coordination, becomes essential for rescue robots.
The objective of this research work is to develop sensor agents for surveillance and monitoring applications in unknown or potentially dangerous environments.
Our research in this area focuses on the development of human-like social robots with the social functionalities and behavioral norms required to engage humans in natural assistive interactions such as providing: 1) reminders, 2) health monitoring, and 3) social and cognitive training interventions. In order for these robots to successfully partake in social HRI, they must be able to interpret human social cues. This can be achieved by perceiving and interpreting the natural communication modes of a human, such as speech, paralanguage (intonation, pitch, and volume of voice), body language and facial expressions.
Interactive robots developed for social human-robot interaction (HRI) scenarios need to be socially intelligent in order to engage in natural bi-directional communication with humans. Social intelligence allows a robot to share information with, relate to, and interact with people in human-centered environments. Robot social intelligence can result in more effective and engaging interactions and hence, better acceptance of a robot by the intended users.
Brian 2.1 during a one-on-one interaction.
We have been developing automated real-time systems that detect and classify affect (emotion, mood and attitude) from natural modes of human communication, including:
1) body language,
2) paralanguage, and
3) facial expressions,
so that our social robots can interpret them and effectively engage a person in a desired activity by displaying appropriate emotion-based behaviours.
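One common way to combine several such modalities is late fusion, where per-modality classifier outputs are merged into a single affect estimate. The sketch below is a minimal illustration under that assumption; the modality names, affect labels, and equal default weights are hypothetical, not the lab's actual classifiers:

```python
# Minimal sketch of late fusion for multi-modal affect classification.
# The modality names, labels, and weights are illustrative assumptions.

AFFECT_LABELS = ["happy", "neutral", "frustrated"]

def fuse_affect(modality_scores, weights=None):
    """Combine per-modality probability scores into one affect estimate.

    modality_scores: dict mapping a modality name (e.g. 'face', 'voice',
        'body') to a dict of label -> probability.
    weights: optional dict of modality -> weight (defaults to equal).
    Returns (best_label, fused_scores).
    """
    if weights is None:
        weights = {m: 1.0 for m in modality_scores}
    total = sum(weights[m] for m in modality_scores)
    fused = {}
    for label in AFFECT_LABELS:
        # Weighted average of each modality's probability for this label.
        fused[label] = sum(
            weights[m] * scores.get(label, 0.0)
            for m, scores in modality_scores.items()
        ) / total
    return max(fused, key=fused.get), fused

# Example: facial and vocal cues disagree; fusion resolves the estimate.
label, fused = fuse_affect({
    "face":  {"happy": 0.7, "neutral": 0.2, "frustrated": 0.1},
    "voice": {"happy": 0.4, "neutral": 0.5, "frustrated": 0.1},
})
print(label)  # -> happy
```

Late fusion keeps each modality's classifier independent, which makes it easy to drop or re-weight a modality (e.g. when a face is occluded) without retraining the others.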
Example Brian 2.1 behaviours during accessibility-aware Robot Tutor and Robot Restaurant Finder interactions:
Funding Sources: Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Research Chairs (CRC) Program, AGE-WELL (which is supported by the Government of Canada through the Networks of Centres of Excellence (NCE)), Canada Foundation for Innovation (CFI) and Ontario Research Fund (ORF).
Swarm robotics is an area of research within multi-robot systems in which physical robots exhibit intelligent collective behaviors through direct local interactions between the robots. In this context, we designed and implemented a novel small (16 × 16 mm) modular millirobot, mROBerTO (milli-ROBot-TOronto), in order to experiment with collective-behaviour algorithms. These robots have potential use in a wide range of applications, such as environment monitoring and surveillance, micro-assembly, medicine, and search and rescue.
The primary design objectives for our robot were to maximize (i) modularity, (ii) the use of off-the-shelf components, and (iii) processing and sensing capability, while minimizing the footprint. mROBerTO comprises four modules: mainboard, locomotion, primary sensing, and secondary sensing.
Our modular design allows changes and upgrades to individual modules with little or no disruption to other modules. Using mostly off-the-shelf components allows easy assembly, production, and maintenance. A small footprint allows more robots to operate in smaller workspaces, and improved processing and sensing capabilities are essential for implementing and testing complex behaviors and tasks.
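The kind of local-interaction rule studied on such platforms can be illustrated with a simple aggregation behaviour: each robot senses only nearby neighbours and takes a small step toward their centroid. The sensing radius and step size below are illustrative assumptions, not mROBerTO's actual parameters:

```python
import math

# Minimal sketch of a local-interaction swarm rule: each robot moves a
# fixed step toward the centroid of neighbours within its sensing radius.
# The radius and step size are illustrative assumptions.

def aggregation_step(positions, sensing_radius=30.0, step=1.0):
    """One synchronous update; positions is a list of (x, y) tuples."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Only robots within the local sensing radius are visible.
        neigh = [(px, py) for j, (px, py) in enumerate(positions)
                 if j != i and math.hypot(px - x, py - y) <= sensing_radius]
        if not neigh:
            new_positions.append((x, y))  # no local information: stay put
            continue
        cx = sum(p[0] for p in neigh) / len(neigh)
        cy = sum(p[1] for p in neigh) / len(neigh)
        d = math.hypot(cx - x, cy - y)
        if d <= step:
            new_positions.append((cx, cy))
        else:
            # Step along the direction toward the neighbour centroid.
            new_positions.append((x + step * (cx - x) / d,
                                  y + step * (cy - y) / d))
    return new_positions

# Example: two robots 10 units apart each move one step closer.
print(aggregation_step([(0.0, 0.0), (10.0, 0.0)]))
```

Although each robot uses only local information, repeated application of the rule draws the whole swarm together, which is the essence of the collective behaviours such platforms are built to study.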
Below is a brief video of mROBerTO in operation:
Published in: Justin Yonghui Kim, Tyler Colaco, Zendai Kashino, Goldie Nejat, and Beno Benhabib. “mROBerTO: A Modular Millirobot for Swarm-Behavior Studies,” 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2109-2114.
Funding Source: This research is funded by the Natural Sciences and Engineering Research Council of Canada (NSERC).