“AI systems are artificial artifacts designed to exhibit intelligent behavior.” This insight from Herbert Simon encapsulates my research passion. With both a researcher's and a designer's mindset, I am deeply committed to the art and science of designing intelligent entities, agents, algorithms, and systems that are empathetic, adaptive, and beneficial to humanity. My research spans machine learning, robotics, and computational design, with a particular focus on reinforcement learning (RL), human-robot interaction (HRI), and human-centered AI. At the core of my work is the creation of systems that can understand, reason about, adapt to, and respond to human intentions and needs.
I completed my Ph.D. at Carnegie Mellon University, where I was co-advised by Dr. Daniel Cardoso Llach (Computational Design) and Dr. Jean Oh (Robotics). My doctoral research focused on designing intelligent robots that actively support and collaborate with humans, particularly in labor-intensive work. One key project involved prototyping a reinforcement learning-driven "work companion rover": an autonomous robot designed to serve as a co-pilot for workers, carrying tools and material supplies, and as a delivery companion on complex, real-world work sites. The robot navigates cluttered, populated workplaces with social awareness and safety, trained to acknowledge and adapt to human workers' bespoke activities. As a sci-fi fan, I also draw inspiration for empathetic, human-centered robots from portrayals like TARS from Interstellar, Baymax from Big Hero 6, and Benben from The Wandering Earth II, which illustrate the potential for robots and machines to interact humanely and support humans in demanding environments.
Beyond physical robots, my research—constantly positioned at the intersection of AI/ML, computation, and design—also explores designing AI agents and algorithms with spatial awareness, intelligence, and creativity. I am particularly drawn to developing algorithms that reason in a more human-like way about the creation of physical spaces and forms. Outside of research, I enjoy sketching, photography, reading, drumming, and spending time with my two cats, Sophie (Tangtang) and Huihui. You can explore other aspects of me on SIDE-B: ARCHITECT OF SPACES AND SYSTEMS and HERE.
This robot is my dearest baby. I designed and built it almost from scratch on a Clearpath Husky A200 base, kindly made available by Jean and the National Robotics and Engineering Center (NREC). It is equipped with 2D and 3D LiDARs, RGB-D cameras, an IMU, and a (strong) on-board computer. It is the cornerstone robot of my PhD research on robot learning and human-robot interaction. Notably, Jiaying Wei and Sam Shun played critical roles in the building process.
This cute little one was hacked together entirely by me at home. It's adapted from a Tamiya 4WD remote-control car (yes, a toy car) and is intended to be a mini version of the Husky. Black Proton is the prototyping robot that I try all my reinforcement learning algorithms on. It is equipped with a Jetson Nano, a 2D LiDAR, and an RGB-D camera. Having finished its heavy algorithm-testing duties, Black Proton now serves as the Cat Detection, Tracking & Chasing (CDTC) robot in the family. I'm sure my cats love it.
Rocky is a homemade, omnidirectional robot built by students from the Robotics Institute. It came to us after Fetch decided to sleep forever, and it was nicely renovated by Sam Shun from NREC. However, it is not so safe to work with. Our lab floor still bears some drifting tire marks it left behind. Once Rocky decides to go crazy, we have exactly 30 seconds to catch it and press stop; otherwise it's gone for good. Its heart is definitely in racing, not in HRI. Play safe next time, Rocky!
The Fetch robot was the first member of the robotically supported collaborative work research. It came to us with a broken arm (gripper/end-effector), a malfunctioning 2D LiDAR, and eventually a "sleeping beauty" mainboard that would no longer wake up. We tried very hard to fix it, but luck was not on our side. I love you still, Fetch.
This drone was the star of the RL-driven multi-drone project. My teammate completed its initial build for carrying blocks with magnets. I must say I am not good at flying real drones, but I am really good at training drones with RL in simulation environments, e.g., Unity + ML-Agents and NVIDIA Omniverse Isaac Gym. Pixhawk is now retired and is always waiting for a chance to get a spin outdoors.
2024.12 Our journal article on “Robot in the Loop” and human-centered AI and robotics has been published. See Springer.
2024.10 We presented our research on the "Work Companion Robot" at IROS 2024 in Abu Dhabi, UAE. See IEEE Xplore.
2024.06 Our paper on the "Work Companion Robot" and human-robot work collaboration was accepted to IROS 2024!
2023.07 I am honored to receive another year of the CMU Presidential Fellowship.
2023.06 I am happy to present my PhD research at the TCS Innovation Forum 2023, hosted at Cornell Tech, NYC. Thank you for the invite!
2023.05 I demonstrated and presented our research (and robot) at Mill-19 to the Bosch leadership and AI team visiting from Germany.
2022.11 I presented our ReAC project at the CMU MFI Research Seminar, along with my colleague Emek Erdolu.
2022.10 Our team presented and demonstrated the ReAC research to the CMU Board of Trustees at Mill-19.
2022.09 I am humbled to be named a Presidential Fellow at Carnegie Mellon University. The fellowship is generously funded by TCS.
2022.05 I received the Manufacturing Futures Initiative (MFI) Fellowship. See Manufacturing Futures Institute.
2021.12 I received the Autodesk Research Merit Scholarship.
2021.08 I helped found our new lab space at Mill-19. See Mill-19 Advanced Research Center.
2021.08 I received the Manufacturing PA Fellowship from the State of Pennsylvania. The research later received recognition from Governor Tom Wolf.
2021.05 I joined the Autodesk AI Lab as a research intern, working in affiliation with the Autodesk Robotics Lab.
2020.10 I was awarded the GSA/Provost GuSH Grant by Carnegie Mellon University.
2019.11 I was awarded a LEGO Scholarship to attend HI'19. Thank you, LEGO Foundation!
2018.05 I co-founded Metascope Tech in Beijing, China.
A Human-Centered, Reinforcement Learning-Driven Robotic Framework for Assisting Construction Workers
Yuning Wu

Towards Human-Centered Construction Robotics: A Reinforcement Learning-Driven Companion Robot For Contextually Assisting Carpentry Workers
Yuning Wu, Jiaying Wei, Jean Oh, Daniel Cardoso Llach

Robot in the Loop: A Human-Centered Approach to Contextualizing AI and Robotics in Construction
Yuning Wu*, Emek Erdolu*, Jiaying Wei, Jean Oh, Daniel Cardoso Llach

Learning Dense Reward with Temporal Variant Self-Supervision
Yuning Wu, Jieliang Luo, Hui Li

Towards a Distributed, Robotically Assisted Construction Framework: Using Reinforcement Learning to Support Scalable Multi-Drone Construction in Dynamic Environments
Zhihao Fang, Yuning Wu, Ammar Hassonjee, Ardavan Bidgoli, Daniel Cardoso Llach

Teaching an AI Agent Spatial Sense by Playing LEGO
Yuning Wu, Katerina Fragkiadaki

Design Architecture with Graphs: Utilize Space Connectivity and Graph Neural Networks To Inform Coherent Spatial Generation
Yuning Wu

State Space Paradox of Computational Research in Creativity
Ömer Akin, Yuning Wu