Brief Research Overview  

Robot learning is a field of AI centered on algorithms that enable robots to learn to perform tasks more intelligently, and it is currently receiving great attention across the scientific community. Within this field, the SMART Lab focuses on advanced deep learning and deep reinforcement learning methods, with the goal of making robots more readily applicable in the field by learning from, and interacting more flexibly with, humans. Specifically, we study cognitive computing methods that improve a robot's decision making by modeling and learning the fast, accurate decision making of humans, and recognition methods that enable robots to recognize and identify objects and scenes in real time even in dynamic environments and with limited information.

You can learn more about our current and past research on Robot Learning below.

Socially-Aware Robot Navigation (2021 - Present)


Description: Socially aware robot navigation, in which a robot must optimize its trajectory to maintain comfortable and compliant spatial interactions with humans while still reaching its goal without collisions, is a fundamental yet challenging task in human-robot interaction. While existing learning-based methods outperform the earlier model-based ones, they still have drawbacks: reinforcement learning depends on handcrafted rewards that are unlikely to quantify broad social compliance effectively and can lead to reward exploitation, while inverse reinforcement learning requires expensive human demonstrations. The SMART Lab explores various practical and theoretical robot learning topics in the context of robot navigation. For example, we recently proposed a feedback-efficient active preference learning (FAPL) approach for socially aware robot navigation that distills human comfort and expectations into a reward model, which in turn guides the robot agent to explore latent aspects of social compliance. The method improves the efficiency of both human feedback and samples by introducing hybrid experiential learning, and we evaluated the benefits of the robot behaviors learned with FAPL through extensive experiments.
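The core idea of distilling preferences into a reward model can be sketched as follows. This is a minimal, hypothetical illustration of standard pairwise preference learning (a Bradley-Terry style loss over trajectory segments), not the FAPL implementation itself; all names, dimensions, and the toy data are assumptions for the sketch.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a state-action feature vector to a scalar reward estimate."""
    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def preference_loss(model, seg_a, seg_b, pref):
    """Bradley-Terry style loss: pref = 1 when the human prefers segment A.

    seg_a, seg_b: (batch, horizon, obs_dim) trajectory segments.
    """
    r_a = model(seg_a).sum(dim=1).squeeze(-1)  # total predicted reward of A
    r_b = model(seg_b).sum(dim=1).squeeze(-1)  # total predicted reward of B
    logits = r_a - r_b
    return nn.functional.binary_cross_entropy_with_logits(logits, pref)

# One toy training step on random data standing in for human-labeled pairs
model = RewardModel(obs_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seg_a, seg_b = torch.randn(16, 10, 8), torch.randn(16, 10, 8)
pref = torch.randint(0, 2, (16,)).float()
loss = preference_loss(model, seg_a, seg_b, pref)
opt.zero_grad(); loss.backward(); opt.step()
```

The learned reward model can then replace a handcrafted reward inside a standard reinforcement learning loop, which is what lets the agent explore aspects of social compliance that are hard to specify by hand.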

Grant: NSF
People: Ruiqi Wang, Weizheng Wang
Project Website:

Selected Publications:

  • Ruiqi Wang, Weizheng Wang, and Byung-Cheol Min, "Feedback-efficient Active Preference Learning for Socially Aware Robot Navigation", 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022), Kyoto, Japan, October 23-27, 2022. Paper Link, Video Link, GitHub Link
Learning-based Robot Recognition (2017 - Present)


Description: The SMART Lab explores learning-based robot recognition technology to enable robots to recognize and identify objects and scenes in real time as capably as humans, even in dynamic environments and with limited information. We aim to apply our research and developments to diverse applications, including navigation of autonomous robots/cars in dynamic environments, detection of malware/cyberattacks, object classification and reconstruction, prediction of the cognitive and affective states of humans, and workload allocation within human-robot teams. As an example, we introduced a system in which a mobile robot autonomously navigates an unknown environment through simultaneous localization and mapping (SLAM) and uses a tapping mechanism to identify objects and materials in the environment. The key idea is the tapping mechanism: the robot taps an object with a linear solenoid and records the resulting sound with a microphone, making it possible to identify objects and materials. We used convolutional neural networks (CNNs) to develop the associated tapping-based material classification system.
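A CNN classifier of the kind described above can be sketched in a few lines. This is a simplified stand-in for the tapping-based material classification system, not its actual architecture; the layer sizes, class count, and spectrogram dimensions are assumptions, and a random tensor substitutes for a real tap-sound spectrogram.

```python
import torch
import torch.nn as nn

class TapSoundCNN(nn.Module):
    """Small CNN that classifies a tapping-sound spectrogram into material classes."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # makes the head robust to spectrogram size
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, spec):
        # spec: (batch, 1, freq_bins, time_frames) log-magnitude spectrogram
        z = self.features(spec).flatten(1)
        return self.classifier(z)

# In practice the microphone recording would first be converted to a
# spectrogram (e.g. via torch.stft); here a random 64x32 tensor stands in.
model = TapSoundCNN(n_classes=4)
spec = torch.randn(1, 1, 64, 32)
logits = model(spec)
material = logits.argmax(dim=1)  # predicted material index
```

Working on spectrograms rather than raw waveforms is a common design choice for impact-sound classification, since the frequency response of the tap is what distinguishes materials.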

Grants: NSF, Purdue University
People: Wonse Jo, Shyam Sundar Kannan, Su Sun, Go-Eum Cha, Vishnunandan Venkatesh, Ruiqi Wang

Selected Publications:

  • Su Sun and Byung-Cheol Min, "Active Tapping via Gaussian Process for Efficient Unknown Object Surface Reconstruction", 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Workshop on RoboTac 2021: New Advances in Tactile Sensation, Interactive Perception, Control, and Learning. A Soft Robotic Perspective on Grasp, Manipulation, & HRI, Prague, Czech Republic, Sep 27 – Oct 1, 2021. Paper Link
  • Shyam Sundar Kannan, Wonse Jo, Ramviyas Parasuraman, and Byung-Cheol Min, "Material Mapping in Unknown Environments using Tapping Sound", 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020), Las Vegas, NV, USA, 25-29 October, 2020. Paper Link, Video Link
Application Offloading Problem (2018 - 2022)


Description: Robots come with a variety of computing capabilities, and running computationally intensive applications on robots can be challenging because of limited onboard computing, storage, and power. Cloud computing, meanwhile, provides on-demand computing capability, so combining robots with cloud computing can overcome the resource constraints robots face. The key to effective task offloading is a solution that does not underutilize the robot's own computational capabilities and that makes decisions based on crucial cost parameters such as latency and CPU availability. In this research, we address the application offloading problem: how to design an efficient offloading framework and algorithm that makes optimal use of a robot's limited onboard capabilities and quickly decides when to offload, without any prior knowledge of the application. Recently, we developed a predictive algorithm that estimates an application's execution time under both cloud and onboard computation from the size of its input data; the algorithm is designed to be trained after the application has been initiated (online learning). In addition, we formulated this offloading problem as a Markov decision process and developed a deep reinforcement learning approach based on a Deep Q-Network (DQN).
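The predictive idea above can be sketched with a simple online regressor. This is a minimal illustration, assuming a linear relationship between input size and execution time; the class name, the toy timings, and the latency-based decision rule are all assumptions for the sketch rather than the published algorithm.

```python
import numpy as np

class OnlineTimePredictor:
    """Online least-squares fit of execution time vs. input data size,
    maintained separately for onboard and cloud execution; a simplified
    stand-in for a predictive offloading algorithm."""
    def __init__(self):
        self.samples = {"onboard": [], "cloud": []}
        self.coef = {"onboard": None, "cloud": None}

    def record(self, where, size, seconds):
        """Log a measured (input size, execution time) pair and refit."""
        self.samples[where].append((size, seconds))
        if len(self.samples[where]) >= 2:
            x, y = map(np.array, zip(*self.samples[where]))
            self.coef[where] = np.polyfit(x, y, 1)  # time ~ a * size + b

    def predict(self, where, size):
        a, b = self.coef[where]
        return a * size + b

    def should_offload(self, size, network_latency):
        """Offload only when predicted cloud time plus latency wins."""
        return (self.predict("cloud", size) + network_latency
                < self.predict("onboard", size))

# Toy timings gathered online, after the application has started
pred = OnlineTimePredictor()
for size, t in [(1.0, 0.5), (2.0, 1.1), (4.0, 2.0)]:   # onboard runs
    pred.record("onboard", size, t)
for size, t in [(1.0, 0.2), (2.0, 0.35), (4.0, 0.7)]:  # cloud runs
    pred.record("cloud", size, t)
offload = pred.should_offload(size=3.0, network_latency=0.1)
```

The DQN formulation generalizes this rule: instead of a fixed threshold, the agent learns a policy over system states (input size, network latency, CPU availability) whose actions are "run onboard" or "offload".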

Grants: Purdue University
People: Manoj Penmetcha, Shyam Sundar Kannan

Selected Publications:

  • Manoj Penmetcha and Byung-Cheol Min, "A Deep Reinforcement Learning-based Dynamic Computational Offloading Method for Cloud Robotics", IEEE Access, Vol. 9, pp. 60265-60279, 2021. Paper Link, Video Link
  • Manoj Penmetcha, Shyam Sundar Kannan, and Byung-Cheol Min, "A Predictive Application Offloading Algorithm using Small Datasets for Cloud Robotics", 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Virtual, Melbourne, Australia, 17-20 October, 2021. Paper Link, Video Link