
ABOUT ME

[ Site Upgrade in Progress ]

CV / LINKEDIN / GOOGLE SCHOLAR

I am currently pursuing my Ph.D. at Carnegie Mellon University, where I also completed my M.S. in Machine Learning. My research interests encompass robot learning, human-robot interaction (HRI), reinforcement learning (RL), and human-centered AI. I have the privilege of being co-advised by Dr. Daniel Cardoso Llach (Computational Design) and Dr. Jean Oh (Robotics). What truly drives my research are those amazing moments when robots demonstrate a nuanced understanding and consideration of humans. A memorable evening in the lab vividly illustrates this: I was deeply engrossed in thought before a whiteboard, contemplating a technical issue and pacing aimlessly back and forth. Husky, the robot I developed, was autonomously navigating the lab. Approaching me from behind, Husky perceptively slowed down, quietly turned around, and took a detour, seemingly careful not to disturb me. This sensitive and considerate behavior, the result of a fine-tuned RL model, diverged from typical goal-oriented, efficiency-driven programming. That night, I realized my passion for humane robots and human-centered AI.

In my current research, I draw on my interdisciplinary expertise in machine learning, computational design, and robotics to design, build, and optimize AI-driven robots that can sensitively support people in labor-intensive tasks. For instance, a significant part of my Ph.D. research involves prototyping an RL-driven "work companion rover" that can contextually assist and accompany workers on construction sites, offering them practical support in a safe and comfortable manner. A big fan of sci-fi films, I owe much of this pursuit of intelligent and empathetic companion robots to my fondness for TARS from "Interstellar", Benben from "The Wandering Earth II", and Baymax from "Big Hero 6".

Before joining CMU, I obtained my M.Arch. and B.Arch. from Tsinghua University in Beijing, China. In May 2018, I co-founded Metascope Tech, a startup focused on providing B2B consultation in the Chinese commercial architecture market. We used statistical machine learning and computer vision techniques to develop an aesthetic recommendation system, helping architects better align with clients' spatial preferences.

Outside of academia, my hobbies include photography, cooking, reading, playing drums, and caring for animals and plants. I am the proud owner of two adorable cats, Sophie (Tangtang) and Huihui. Among my favorite reads is Herbert Simon's "The Sciences of the Artificial", which significantly influenced my decision to pursue doctoral studies at Carnegie Mellon. If you're interested in exploring the other facets of my life and career journey, please check out SIDE-B: IMMATURE ARCHITECT OF THINGS. For inquiries or further discussions, feel free to reach out to me at yuningw [at] andrew [dot] cmu [dot] edu.

LOVE, DEATH & ROBOTS (THAT I'VE HACKED WITH)


Husky

This robot is my dearest baby. I designed and built it almost from scratch on a Clearpath Husky A200 base, kindly made available by Jean and the National Robotics Engineering Center (NREC). It is equipped with 2D and 3D LiDARs, RGB-D cameras, an IMU, and a (strong) on-board computer. It is the cornerstone robot for my Ph.D. research on robot learning and human-robot interaction. Notably, Jiaying Wei and Sam Shun played a critical role in the building process.


Black Proton

This cute little one was hacked together entirely by me at home. It's adapted from a Tamiya 4WD remote-control car (yes, a toy car) and is intended to be a mini version of Husky. Black Proton is the prototyping robot that I try all my reinforcement learning algorithms on. It is equipped with a Jetson Nano, a 2D LiDAR, and an RGB-D camera. Having completed its heavy algorithm-testing duty, Black Proton now serves as the Cat Detection, Tracking & Chasing (CDTC) robot in the family. I'm sure my cats love it.


Rocky

Rocky is a home-made, omnidirectional robot built by students from the Robotics Institute. It came to us after Fetch decided to sleep forever and has been nicely renovated by Sam Shun from NREC. However, it is not the safest robot to work with: our lab floor still bears the drifting tire marks it left behind. Once Rocky decides to go crazy, we have exactly 30 seconds to catch it and press stop; otherwise it's gone for good. Its heart is definitely in racing, not in HRI. Play safe next time, Rocky!


Fetch

The Fetch robot was the first member of our robotically supported collaborative work research. It came to us with a broken arm (gripper/end-effector), a malfunctioning 2D LiDAR, and eventually a "sleeping beauty" mainboard that would no longer wake up. We tried very hard to fix it, but luck was not on our side. I love you still, Fetch.


Pixhawk

This drone was the star of the RL-driven multi-drone project. My teammate completed its initial build, outfitting it to carry blocks using magnets. I must say I am not good at flying real drones, but I am really good at training drones with RL in simulation environments, e.g., Unity + ML-Agents and NVIDIA Omniverse Isaac Gym. Pixhawk is now retired and is always waiting for a chance to get a spin outdoors.

UPDATE

2023-07 I am honored to receive another year of the CMU Presidential Fellowship.

2023-06 I was happy to present my Ph.D. research at the TCS Innovation Forum 2023, hosted at Cornell Tech, NYC. Thank you for the invite!

2023-05 I demonstrated and presented our research (and robot) at Mill-19 to the Bosch leadership and AI teams visiting from Germany.

2022-11 I presented our ReAC project at the CMU MFI Research Seminar, along with my colleague Emek Erdolu.

2022-10 Our team presented and demonstrated the ReAC research to the CMU Board of Trustees at Mill-19.

2022-09 I was humbled to be named a Presidential Fellow at Carnegie Mellon University. The fellowship is generously funded by TCS.

2022-05 I received the Manufacturing Futures Initiative (MFI) Fellowship. See the Manufacturing Futures Institute.

2021-12 I received the Autodesk Research Merit Scholarship.

2021-08 I helped found our new lab space at Mill-19. See the Mill-19 Advanced Research Center.

2021-08 I received the Manufacturing PA Fellowship from the State of Pennsylvania. The research later received recognition from Governor Tom Wolf.

2021-05 I joined the Autodesk AI Lab as a research intern, working in affiliation with the Autodesk Robotics Lab.

2020-10 I was awarded the GSA/Provost GuSH Grant by Carnegie Mellon University.

2019-11 I was awarded a LEGO Scholarship to attend HI'19. Thank you, LEGO Foundation!

2018-05 I co-founded Metascope Tech in Beijing, China.

SELECTED


Towards Human-Centered Automation: An RL-Driven Robotic Framework for Humanely Supporting Labor-Intensive Construction Workers

In preparation, doctoral dissertation.

Yuning Wu


Committee: Daniel Cardoso Llach (Advisor), Jean Oh (Co-Advisor), Jieliang Luo
I am currently at the ABD (All But Dissertation) stage of my Ph.D., focusing primarily on writing the dissertation.


Towards Human-Centered Construction Robotics: An RL-Driven Companion Robot For Contextually Assisting Carpentry Workers

Under review, submitted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Yuning Wu, Jiaying Wei, Jean Oh, Daniel Cardoso Llach

/ PDF / arXiv / video


TL;DR: This paper documents the technical development of a socially aware 'work companion robot' centered on assisting, rather than replacing, human workers. By taking on the more physically burdensome tasks within an existing prototypical construction scenario, carpentry formwork, the robot aims to let workers focus on the more highly skilled and sophisticated facets of their daily practice. The paper offers a systematic overview of how the robot prototype is architected, driven by state-of-the-art (SoTA) reinforcement learning algorithms. It also proposes a novel pipeline for incrementally fine-tuning a pretrained RL model to align with context-specific worker behaviors and prevent drastic policy shifts during Sim2Real transfer.
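To give a flavor of what "preventing drastic policy shifts" can mean in practice, here is a minimal NumPy sketch of a KL-regularized fine-tuning step: a softmax policy is nudged toward a context-specific preference signal while a KL penalty keeps it close to the pretrained reference. All numbers, the action set, and the preference signal are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

# Hypothetical pretrained policy over 4 discrete navigation actions.
pretrained_logits = np.array([2.0, 0.5, 0.1, -1.0])
logits = pretrained_logits.copy()

# Context-specific feedback: action 1 is preferred by workers on site.
reward = np.array([0.0, 1.0, 0.0, 0.0])
beta, lr = 1.0, 0.1          # KL penalty weight, learning rate

p_ref = softmax(pretrained_logits)
for _ in range(200):
    p = softmax(logits)
    # Ascend E[reward] - beta * KL(p || p_ref); both terms have
    # closed-form gradients with respect to softmax logits.
    grad_reward = p * (reward - p @ reward)
    grad_kl = p * (np.log(p / p_ref) - kl(p, p_ref))
    logits += lr * (grad_reward - beta * grad_kl)

p_new = softmax(logits)
print(p_new, kl(p_new, p_ref))
```

The KL term is what keeps the fine-tuned policy from drifting far from the pretrained one; with the penalty weight `beta` set to zero, the update would collapse all probability onto the preferred action.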


Construction as Computer-Supported Cooperative Work: Designing a Robot for Assisting Carpentry Workers

In revision, submitted to ACM Conference on Computer Supported Cooperative Work (CSCW).

Yuning Wu*, Emek Erdolu*, Jiaying Wei, Jean Oh, Daniel Cardoso Llach


TL;DR: This paper reimagines the integration of robotics into highly manual construction work from a socio-technical perspective. It emphasizes the application of Computer-Supported Cooperative Work (CSCW) principles, presenting a case study on the design, development, and evaluation of a prototype robot intended to assist in carpentry formwork. This paper discusses the use of ethnographic and practice-based research methods to understand the cooperative work of construction teams and inform the design of supportive robot technologies. It proposes specific human-centered benchmarking metrics for evaluating the envisioned robot support and advocates for an approach that supports rather than completely reconfigures or replaces the human elements in existing labor-intensive work.


Correcting Sparse Erroneous Navigation Behaviors through Few-Shot Learning from Two Teachers

In preparation.

Yuning Wu, Jean Oh


TL;DR: This research introduces a novel approach to addressing long-tailed errors in robot navigation, utilizing a Vision-Language Model (VLM) and human expert preferences for corrections. By leveraging CLIP as a few-shot learner, the method effectively compensates for the sparse-data issue inherent in long-tailed distributions. The paper proposes a new architecture aimed at efficiently rectifying erroneous navigation behaviors and better aligning them with human preferences, offering a significant advancement in social navigation systems. This paper is currently in preparation and will be submitted to a top-tier conference in robotics and AI.
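As a rough illustration of the few-shot idea (not the paper's architecture), the sketch below matches an observation embedding against per-class prototypes built from a handful of expert-labeled examples, then looks up a preferred correction. The `embed` function is a stand-in for a frozen CLIP-style encoder, and the class names and corrections are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-in embeddings: in a real system these would come from a frozen
# CLIP-style encoder; here each behavior class lives near a random unit
# vector in a 64-d space.
classes = ["nominal", "cuts_corner", "blocks_doorway"]
centers = normalize(rng.normal(size=(3, 64)))

def embed(cls_idx, noise=0.05):
    """Hypothetical encoder output for an observation of a given class."""
    return normalize(centers[cls_idx] + noise * rng.normal(size=64))

# Few-shot support set: 5 expert-labeled examples per behavior class,
# averaged into one prototype each.
prototypes = normalize(
    np.stack([np.stack([embed(i) for _ in range(5)]).mean(0)
              for i in range(3)]))

# Hypothetical expert-preferred corrections for each error class.
corrections = {"cuts_corner": "inflate corner costmap",
               "blocks_doorway": "yield and re-plan"}

def diagnose(query_emb):
    sims = prototypes @ query_emb           # cosine similarities
    label = classes[int(np.argmax(sims))]
    return label, corrections.get(label)    # None when behavior is nominal

print(diagnose(embed(1)))
```

Because each prototype is built from only five labeled examples, this nearest-prototype scheme mirrors how a strong pretrained embedding space can sidestep the scarcity of long-tail failure data.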


Learning Dense Reward with Temporal Variant Self-Supervision

[ICRA'22] IEEE International Conference on Robotics and Automation 2022 | RL for Contact-Rich Manipulation Workshop.

Yuning Wu, Jieliang Luo, Hui Li

/ PDF / arXiv / slides / code


TL;DR: This paper introduces an efficient framework for deriving dense rewards in reinforcement learning, focusing on enhancing the efficiency and robustness of training for robots performing contact-rich manipulation tasks. Developed through collaboration between Carnegie Mellon University and Autodesk Research, the framework leverages Temporal Variant Forward Sampling (TVFS) and a self-supervised learning architecture. By utilizing temporal variance and multimodal observation pairs, the generated dense reward can facilitate faster and more stable policy training in a plug-and-play manner, exemplified in tasks such as joint assembly and door opening. This work aims to offer new thinking on reward learning and its applications in robotics.
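One way to picture the temporal-variant sampling idea, without claiming to reproduce TVFS itself: draw observation pairs from a demonstration with varying time gaps, then fit a reward model with a ranking loss so that temporally later observations score higher. The toy trajectory and the linear reward model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy demonstration: a 1-D "insertion" trajectory whose observation is
# (distance to goal, contact-force proxy); progress shrinks the distance.
T = 100
dist = np.linspace(1.0, 0.0, T) + 0.02 * rng.normal(size=T)
force = 0.5 * rng.random(T)
obs = np.stack([dist, force], axis=1)          # shape (T, 2)

def sample_pairs(n=512):
    """Sample observation pairs with *variant* temporal gaps."""
    i = rng.integers(0, T - 1, size=n)
    gap = rng.integers(1, T // 2, size=n)
    j = np.minimum(i + gap, T - 1)
    return obs[i], obs[j]                      # obs[j] is temporally later

# Linear reward model trained with a logistic ranking loss so that
# later observations receive higher reward (a progress prior).
w = np.zeros(2)
lr = 0.5
for _ in range(300):
    a, b = sample_pairs()
    margin = (b - a) @ w                       # r(later) - r(earlier)
    p = 1.0 / (1.0 + np.exp(-margin))
    grad = ((p - 1.0)[:, None] * (b - a)).mean(0)
    w -= lr * grad

print(w)   # learned weights; distance-to-goal should get a negative one
```

Varying the gap exposes the model to both coarse and fine notions of progress, which is the intuition behind sampling pairs at multiple temporal scales rather than only adjacent frames.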


Towards a Distributed, Robotically-Assisted Construction Framework: Using Reinforcement Learning to Support Scalable Multi-Drone Construction in Dynamic Environments

[ACADIA'20] Annual Conference of the Association of Computer Aided Design in Architecture 2020.

Zhihao Fang, Yuning Wu, Ammar Hassonjee, Ardavan Bidgoli, Daniel Cardoso Llach

/ PDF / CumInCAD / slides / video / code


TL;DR: The paper explores an innovative framework that utilizes a multi-agent reinforcement learning method (MADDPG) to enable drones to perform collaborative tasks such as bricklaying and spray-coating. This scalable, adaptive approach allows multiple drones to work in dynamic environments, promising to enhance efficiency and flexibility in construction processes. By demonstrating the framework's potential through simulations and advancing towards a hardware prototype, the study contributes to the evolving field of construction robotics by offering insights into the future of human-machine collaborative ecosystems.


Teaching an AI Agent Spatial Sense by Playing LEGO

Under submission.

Yuning Wu, Katerina Fragkiadaki


TL;DR: Whether an architect or not, how we build reflects our spatial sense of the physical world. As children, this exploration typically begins with playing with toys like building blocks. This research traces this initial baby-step learning process by training an AI agent to play with LEGOs in a physical simulator (NVIDIA Omniverse Isaac Gym / PyBullet). A key aspect is enabling the AI agent to gain a spatial sense by roughly understanding whether a composite structure is stable, through grasping the relationships between blocks. This paper is currently under submission to a top-tier conference in AI.
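As a toy illustration of the kind of stability judgment involved (a hand-written physics proxy, not the learned spatial-sense model): a stack can be deemed stable if, at every interface, the combined center of mass of the blocks above rests within the footprint of the block below.

```python
# Toy 1-D stability heuristic for a tower of equal-mass blocks, each
# described by (x_center, width), listed bottom to top.  This is a
# hand-written physics proxy, not the learned model from the paper.
def stack_is_stable(blocks):
    for k in range(len(blocks) - 1):
        above = blocks[k + 1:]
        # Combined center of mass of everything resting on block k.
        com = sum(x for x, _ in above) / len(above)
        x_sup, w_sup = blocks[k]
        # Unstable if the center of mass overhangs the support footprint.
        if not (x_sup - w_sup / 2 <= com <= x_sup + w_sup / 2):
            return False
    return True

print(stack_is_stable([(0.0, 4), (1.0, 4), (2.0, 4)]))   # True
print(stack_is_stable([(0.0, 4), (2.5, 4), (5.0, 4)]))   # False
```

A learned agent would have to acquire this support-polygon intuition implicitly from simulated rollouts rather than from an explicit rule like the one above.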

State Space Paradox of Computational Research in Creativity

[HI'19] Thirtieth Anniversary 'Heron Island' Conference on Computational and Cognitive Models of Creative Design.

Ömer Akin, Yuning Wu

/ PDF / arXiv / slides


In memory of Professor Ömer Akin (1942-2020). This is one of his last papers, and I was privileged to organize and present it. This paper explores the State Space Paradox (SSP) in computational creativity research, discussing how current digital systems, designed as closed systems with predefined parameters, are inherently limited in their creative capacity compared to the open systems of human creativity. It examines various procedural and representational approaches to artificial creativity, such as genetic algorithms and shape grammars, and critiques their ability to truly emulate human creativity. The paradox lies in the fact that while these systems can produce outputs that may initially seem creative, they are unable to exceed their predefined state spaces and thus fail to achieve genuine novelty or adapt beyond their initial programming.