Avatar

ABOUT ME

RESUME | LINKEDIN

“AI systems are artificial artifacts designed to exhibit intelligent behavior”. This insight from Herbert Simon encapsulates my research passion. With both a researcher's and designer's mindset, I am deeply committed to the art and science of designing intelligent entities, agents, algorithms, and systems that are empathetic, adaptive, and beneficial to humanity. My research spans machine learning, robotics, and computational design, with a particular focus on reinforcement learning (RL), human-robot interaction (HRI), and human-centered AI. At the core of my work is the creation of systems that can understand, reason, adapt to, and respond to human intentions and needs.

I completed my Ph.D. at Carnegie Mellon University, where I was co-advised by Dr. Daniel Cardoso Llach (Computational Design) and Dr. Jean Oh (Robotics). My doctoral research focused on designing intelligent robots that actively support and collaborate with humans, particularly in labor-intensive work. A key project involved prototyping a reinforcement learning-driven "work companion rover": an autonomous robot that functions as a co-pilot for workers, carrying tools and material supplies and serving as a delivery companion on complex, real-world work sites. The robot navigates cluttered, populated workplaces with social awareness and safety, trained to acknowledge and adapt to human workers' bespoke activities. As a sci-fi fan, I also draw inspiration for empathetic, human-centered robots from portrayals like TARS in Interstellar, Baymax in Big Hero 6, and Benben in The Wandering Earth II, which illustrate how robots and machines can interact humanely and support humans in demanding environments.

Beyond physical robots, my research—constantly positioned at the intersection of AI/ML, computation, and design—also explores designing AI agents and algorithms with spatial awareness, intelligence, and creativity. I am particularly drawn to developing algorithms that reason in a more human-like way about the creation of physical spaces and forms. Outside of research, I enjoy sketching, photography, reading, drumming, and spending time with my two cats, Sophie (Tangtang) and Huihui. You can explore other aspects of me on SIDE-B: ARCHITECT OF SPACES AND SYSTEMS and HERE.

LOVE, DEATH & ROBOTS (THAT I'VE HACKED WITH)

Husky

This robot is my dearest baby. I designed and built it almost from scratch on a Clearpath Husky A200 base, kindly made available by Jean and the National Robotics and Engineering Center (NREC). It is equipped with 2D and 3D LiDARs, RGB-D cameras, an IMU, and a (strong) on-board computer. It is the cornerstone robot of my PhD research on robot learning and human-robot interaction. Notably, Jiaying Wei and Sam Shun played critical roles in the building process.

Black Proton

This cute little one was hacked together entirely by me at home. It's adapted from a Tamiya 4WD remote-control car (yes, a toy car) and is intended to be a mini version of Husky. Black Proton is the prototyping robot that I try all my reinforcement learning algorithms on. It is equipped with a Jetson Nano, a 2D LiDAR, and an RGB-D camera. After its heavy algorithm-testing duty, Black Proton now serves as the family's Cat Detection, Tracking & Chasing (CDTC) robot. I'm sure my cats love it.

Rocky

Rocky is a homemade, omnidirectional robot built by students from the Robotics Institute. It came to us after Fetch decided to sleep forever, and it was nicely renovated by Sam Shun from NREC. However, it is not the safest to work with: our lab floor still bears the drifting tire marks it left behind. Once Rocky decides to go crazy, we have exactly 30 seconds to catch it and press stop; otherwise, it's gone for good. Its heart is definitely in racing, not in HRI. Play safe next time, Rocky!

Fetch

The Fetch robot was the first member of our robotically supported collaborative work research. It came to us with a broken arm (gripper/end-effector), a malfunctioning 2D LiDAR, and eventually a "sleeping beauty" mainboard that would no longer wake up. We tried very hard to fix it, but luck was not on our side. I love you still, Fetch.

Pixhawk

This drone was the star of the RL-driven multi-drone project. My teammate handled its initial build for carrying blocks with magnets. I must say I am not good at flying real drones, but I am really good at training drones with RL in simulation environments, e.g., Unity + ML-Agents and NVIDIA Omniverse Isaac Gym. Pixhawk is now retired and is always waiting for a chance to take a spin outdoors.

UPDATE

2024.12 Our journal article on “Robot in the Loop” and human-centered AI and robotics has been published. See Springer.

2024.10 We presented our research on the "Work Companion Robot" at IROS 2024 in Abu Dhabi, UAE. See IEEE Xplore.

2024.06 Our paper on the "Work Companion Robot" and human-robot work collaboration is accepted to IROS 2024!

2023.07 I am honored to receive another year of the CMU Presidential Fellowship.

2023.06 I was happy to present my PhD research at the TCS Innovation Forum 2023, hosted at Cornell Tech, NYC. Thank you for the invitation!

2023.05 I demonstrated and presented our research (and robot) at Mill-19 to Bosch leadership and AI team members visiting from Germany.

2022.11 I presented our ReAC project at the CMU MFI Research Seminar, along with my colleague Emek Erdolu.

2022.10 Our team presented and demonstrated the ReAC research to the CMU Board of Trustees at Mill-19.

2022.09 I am humbled to be named a Presidential Fellow at Carnegie Mellon University. The fellowship is generously funded by TCS.

2022.05 I received the Manufacturing Futures Initiative (MFI) Fellowship. See Manufacturing Futures Institute.

2021.12 I received the Autodesk Research Merit Scholarship.

2021.08 I helped establish our new lab space at Mill-19. See Mill-19 Advanced Research Center.

2021.08 I received the Manufacturing PA Fellowship from the State of Pennsylvania. The research later received recognition from Governor Tom Wolf.

2021.05 I joined the Autodesk AI Lab as a research intern, working in affiliation with the Autodesk Robotics Lab.

2020.10 I was awarded the GSA/Provost GuSH Grant by Carnegie Mellon University.

2019.11 I was awarded a LEGO Scholarship to attend HI'19. Thank you, LEGO Foundation!

2018.05 I co-founded Metascope Tech in Beijing, China.

SELECTED

How to design intelligent robots that support and understand humans?

A Human-Centered, Reinforcement Learning-Driven Robotic Framework for Assisting Construction Workers

Yuning Wu
Doctoral dissertation, Carnegie Mellon University, 2024.
CMU KiltHub | BibTeX


Committee: Daniel Cardoso Llach, Jean Oh, Jieliang Luo


Towards Human-Centered Construction Robotics: A Reinforcement Learning-Driven Companion Robot For Contextually Assisting Carpentry Workers

Yuning Wu , Jiaying Wei , Jean Oh , Daniel Cardoso Llach
IROS 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems 2024
IEEE Xplore | Video | Slides | BibTeX


TL;DR: This paper outlines the development of a socially-aware "work companion robot" designed to support, rather than replace, skilled human workers by handling frequent and physically demanding tasks like on-site tool/material delivery. It details the robot's architecture, driven by recent reinforcement learning (RL) methods for social navigation, and introduces a novel pipeline for fine-tuning a pretrained RL model to adapt to specific worker behaviors, ensuring smooth Sim2Real transitions. The robot's effectiveness is validated in a real construction environment through direct worker interaction.


Robot in the Loop: A Human-Centered Approach to Contextualizing AI and Robotics in Construction

Yuning Wu* , Emek Erdolu* , Jiaying Wei , Jean Oh , Daniel Cardoso Llach
Journal Construction Robotics, Springer Nature, 2025.
PDF | Springer | BibTeX


TL;DR: This paper explores how robotics can be integrated into manual construction work, leveraging ethnographic insights to inform the design of a robot prototype that assists construction workers, driven by deep reinforcement learning for adaptive social compliance and comfort. The study advocates for AI and robotic technology that supports, rather than replaces, skilled human labor.


Learning Dense Reward with Temporal Variant Self-Supervision

Yuning Wu , Jieliang Luo , Hui Li
ICRA 2022 IEEE International Conference on Robotics and Automation 2022 | RL for Contact-Rich Manipulation Workshop.
arXiv | Slides | Code | BibTeX


TL;DR: This paper introduces an efficient framework for deriving dense rewards in reinforcement learning, designed to improve the training of robots for contact-rich manipulation tasks like joint assembly and door opening. Developed in collaboration between Carnegie Mellon University and Autodesk Research, this framework employs Temporal Variant Forward Sampling (TVFS) and a self-supervised learning architecture. It leverages temporal variance and multimodal observation pairs, enabling faster and more stable policy training in a plug-and-play manner.


Towards a Distributed, Robotically Assisted Construction Framework: Using Reinforcement Learning to Support Scalable Multi-Drone Construction in Dynamic Environments

Zhihao Fang , Yuning Wu , Ammar Hassonjee , Ardavan Bidgoli , Daniel Cardoso Llach
ACADIA 2020 Annual Conference of the Association for Computer Aided Design in Architecture 2020
CuminCAD | Slides | BibTeX


TL;DR: This paper presents a distributed, robotically assisted construction framework that leverages reinforcement learning to support scalable multi-drone task coordination in dynamic work environments. The framework enables multiple drones to collaboratively build complex-shaped geometries, with a focus on a reinforcement learning-based navigation algorithm that allows 10-15 drones to build a structure collectively while avoiding collisions. The paper also discusses the challenges and opportunities of using reinforcement learning in human-centered work environments.


How to design AI agents and algorithms with spatial sense and creativity?

Teaching an AI Agent Spatial Sense by Playing LEGO

Yuning Wu , Katerina Fragkiadaki
In preparation.


TL;DR: Whether one is an architect or not, the way we design and build reflects our spatial understanding of the physical world. As children, this exploration typically begins by playing with building blocks. This research retraces these initial steps by training an AI agent to play with LEGOs in a physical simulator (NVIDIA Omniverse Isaac Gym / PyBullet). A crucial aspect of this study is enabling the AI agent to design valid, and eventually elegant, stacked structures within a specified arena by interpreting spatial relationships among the blocks.


Design Architecture with Graphs: Utilize Space Connectivity and Graph Neural Networks To Inform Coherent Spatial Generation

Yuning Wu
Independent research project
Link


TL;DR: This thesis project explores the potential of utilizing space connectivity and graph neural networks (GNNs) to inform layout design. It proposes a novel computational framework that leverages the spatial relationships between architectural elements to generate more informed and contextually relevant design solutions, in contrast to solutions that are purely data-driven or rule-based.


State Space Paradox of Computational Research in Creativity

Ömer Akin , Yuning Wu
HI 2019 Thirtieth Anniversary 'Heron Island' Conference on Computational and Cognitive Models of Creative Design.
arXiv | Slides | BibTeX


In memory of Professor Ömer Akin (1942-2020). This is one of his last papers, and I was privileged to organize and present it. The paper explores the State Space Paradox (SSP) in computational creativity research, discussing how current digital systems, designed as closed systems with predefined parameters, are inherently limited in their creative capacity compared to the open systems of human creativity.
