TIC4BOT

Context-Aware Robotic Framework

The development of service robots has gained increasing attention in recent years. Advanced robots have to cope with many different situations that emerge at runtime while executing complex tasks. They should be programmed as dynamically adaptive systems, capable of adapting themselves to the execution environment, including the computing, user, and physical environments. Dynamic languages have recently become widely used due to the high runtime adaptability they offer. We have therefore analyzed the suitability of these languages for implementing robotic systems with high runtime adaptability requirements, using Python as a use case because of its maturity. To evaluate their appropriateness, we have implemented a reflective robotics framework (TIC4BOT) that can be programmed both in Java and in any dynamic language supported by the standard Java Scripting API, such as Python.
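As an illustration of this approach, the following minimal sketch shows how Python source could be evaluated from Java through the standard Java Scripting API (JSR 223), assuming a Jython engine is available on the classpath; the actual TIC4BOT integration may differ.

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class ScriptLoaderSketch {
    public static void main(String[] args) throws ScriptException {
        // Look up a Python engine registered through the Java Scripting API;
        // Jython provides one under the name "python".
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("python");

        // Source code received at runtime, e.g. from a remote system.
        String taskSource = "print('reminder task loaded')";

        // Evaluating the source adds the new behavior without
        // recompiling or restarting the Java framework.
        engine.eval(taskSource);
    }
}
```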

The following video shows the simulation of the proof-of-concept scenario developed for this project. In this scenario, the robot provides reminder services to patients in a nursing home. The system can, for example, remind a patient that it is time to take their medicine. These reminders can emerge at runtime, so the TIC4BOT re-programming mechanism is used to dynamically include new programs, received from a remote system, in the framework execution engine. As soon as the framework receives the source code, the robot starts walking to the patient's coordinates. These coordinates are retrieved from a Web Service that locates patients through wearable sensors they carry. Since patients may be moving while the robot is trying to locate them, the robot adapts its path to reach the patient's position.
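A minimal sketch of this adaptive navigation loop is shown below: the patient's coordinates are refreshed from the location service on every iteration, so a moving target is still reached. LocationService, Navigator, and Position are illustrative interfaces invented for this example, not part of any published TIC4BOT API.

```java
/** Illustrative interfaces; the real framework API may differ. */
public class AdaptiveNavigationSketch {
    interface Position { double x(); double y(); }
    interface LocationService { Position locate(String patientId); }
    interface Navigator {
        void moveTowards(Position target);
        boolean hasReached(Position target);
    }

    static void goToPatient(String patientId, LocationService location, Navigator nav)
            throws InterruptedException {
        Position target = location.locate(patientId);
        while (!nav.hasReached(target)) {
            nav.moveTowards(target);
            Thread.sleep(500);                    // let the robot advance
            target = location.locate(patientId);  // refresh: the patient may have moved
        }
    }
}
```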

In parallel, a second service module is subscribed to the face detection event of the artificial vision primary module. This event is triggered when the robot's vision system detects a known face. When the event is received, the robot greets the detected person by name.
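The sketch below illustrates this publish/subscribe wiring under stated assumptions: FaceDetectionEvent, the subscribe/publish methods, and the module classes are placeholders, not the framework's actual event API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class FaceGreetingSketch {
    record FaceDetectionEvent(String personName) {}

    static class VisionModule {
        private final List<Consumer<FaceDetectionEvent>> subscribers = new ArrayList<>();
        void subscribe(Consumer<FaceDetectionEvent> s) { subscribers.add(s); }
        // Called by the vision system when a known face is recognized.
        void publish(FaceDetectionEvent e) { subscribers.forEach(s -> s.accept(e)); }
    }

    public static void main(String[] args) {
        VisionModule vision = new VisionModule();
        // The greeting service module runs independently and only reacts to events.
        vision.subscribe(e -> System.out.println("Hello, " + e.personName() + "!"));
        vision.publish(new FaceDetectionEvent("Alice"));
    }
}
```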

The video shows two simulators. On the left, the patient location mechanism is simulated: we move the patient around the map to represent patient movement. The simulation on the right represents the robot itself and provides the navigation and speech functionality the robot uses to carry out its tasks. The following steps are performed:

  1. The framework is launched
  2. The robot waits for new tasks
  3. A new reminder task is loaded into the framework through the remote re-programming mechanism
  4. The robot starts navigating, following the patient's position
  5. The robot detects a known person's face and greets them
  6. Finally, the robot reaches the patient's position and delivers the reminder through speech; in this case, that it is time to take their medicines (sketched after this list).
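As a rough sketch of steps 4 and 6 combined, the hypothetical reminder task below walks to the patient and then delivers the spoken reminder; Navigation and Speech are placeholder interfaces, not the framework's real API.

```java
public class ReminderTaskSketch {
    interface Navigation { void goTo(String patientId); } // assumed to block until arrival
    interface Speech { void say(String utterance); }

    // Hypothetical reminder task: reach the patient, then speak the reminder.
    static void runReminder(Navigation nav, Speech speech, String patientId) {
        nav.goTo(patientId);
        speech.say("It is time to take your medicines.");
    }
}
```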