
ABOUT ME

[ Site Upgrade in Progress ]

CV / LINKEDIN

I am currently pursuing my Ph.D. at Carnegie Mellon University, where I also completed my M.S. in Machine Learning. My research interests encompass machine learning, computational design, human-robot/-computer interaction (HRI/HCI), and human-centered AI. I have the privilege of being co-advised by Dr. Daniel Cardoso Llach (Computational Design) and Dr. Jean Oh (Robotics). What truly drives my research are those amazing moments when robots and AI demonstrate a nuanced understanding of humans. A memorable evening in the lab vividly illustrates this: I was deeply engrossed in thought before a whiteboard, contemplating a technical issue and pacing aimlessly back and forth. Husky, the robot I developed, was autonomously navigating the lab. Approaching me from behind, Husky perceptively slowed down, quietly turned around, and took a detour, seemingly careful not to disturb me. This sensitive and considerate behavior, the result of a fine-tuned RL model, diverged from typical goal-oriented, efficiency-driven programming. That night, I realized my passion for human-centered AI.

In my current research, I am leveraging my interdisciplinary expertise in machine learning, computational design, and robotics to design, build, and optimize AI-driven robots and agents that sensitively help us build better living spaces, especially in labor-intensive tasks. A key element of my Ph.D. research involves developing an RL-driven "work companion rover" that contextually assists and accompanies workers on construction sites, providing practical support in a safe and comfortable manner. This robot is designed to semi-autonomously navigate the real-world complexities of a construction site, utilizing its multi-modal (LiDAR + camera) sensors to accurately perceive its geometrically intricate surroundings and the dynamics of construction worker activities. A devoted sci-fi enthusiast, I draw great inspiration for creating intelligent and empathetic companion robots from characters like TARS from "Interstellar", Benben from "The Wandering Earth II", and Baymax from "Big Hero 6".

Before joining CMU, I obtained my M.Arch. and B.Arch. from Tsinghua University in Beijing, China. In May 2018, I co-founded Metascope Tech, a startup focused on providing B2B consultation in the Chinese commercial architecture market. We utilized statistical machine learning and computer vision techniques to develop an aesthetic recommendation system, helping architects better align with clients' spatial preferences.

Outside of academia, my hobbies include photography, cooking, reading, playing drums, and caring for animals and plants. I am the proud owner of two adorable cats, Sophie (Tangtang) and Huihui. Among my favorite reads is Herbert Simon's "The Sciences of the Artificial", which significantly influenced my decision to pursue doctoral studies at Carnegie Mellon. If you're interested in exploring my other facets as an artistic designer and architect, please check out SIDE-B: ARCHITECT OF THINGS. For inquiries or further discussions, feel free to reach out to me at yuningw [at] andrew [dot] cmu [dot] edu.

LOVE, DEATH & ROBOTS (THAT I'VE HACKED WITH)


Husky

This robot is my dearest baby. I designed and built it almost from scratch on a Clearpath Husky A200 base, kindly made available by Jean and the National Robotics Engineering Center (NREC). It is equipped with 2D and 3D LiDARs, RGB-D cameras, an IMU, and a (strong) on-board computer. It is the cornerstone robot of my Ph.D. research on robot learning and human-robot interaction. Notably, Jiaying Wei and Sam Shun played critical roles in the building process.


Black Proton

This cute little one was hacked together entirely by me at home. It's adapted from a Tamiya 4WD remote-control car (yes, a toy car) and is intended to be a mini version of Husky. Black Proton is the prototyping robot that I try all my reinforcement learning algorithms on. It is equipped with a Jetson Nano, a 2D LiDAR, and an RGB-D camera. After its heavy algorithm-testing duty, Black Proton now serves as the Cat Detection, Tracking & Chasing (CDTC) robot in the family. I'm sure my cats love it.


Rocky

Rocky is a homemade, omnidirectional robot built by students from the Robotics Institute. It came to us after Fetch decided to sleep forever and has been nicely renovated by Sam Shun from NREC. However, it is not so safe to work with: our lab floor still bears some drifting tire marks it left behind. Once Rocky decides to go crazy, we have exactly 30 seconds to catch it and press stop; otherwise it's gone for good. Its heart is definitely in racing, not in HRI. Play safe next time, Rocky!


Fetch

The Fetch robot was the first member of our robotically supported collaborative work research. It came to us with a broken arm (gripper/end-effector), a malfunctioning 2D LiDAR, and eventually a "sleeping beauty" mainboard that would no longer wake up. We tried very hard to fix it, but luck was not on our side. I love you still, Fetch.


Pixhawk

This drone was the star of the RL-driven multi-drone project. My teammate finished its initial build for carrying blocks using magnets. I must say I am not good at flying real drones, but I am really good at training drones with RL in simulation environments, e.g., Unity + ML-Agents and NVIDIA Omniverse Isaac Gym. Pixhawk is now retired and is always waiting for a chance to get a spin outdoors.

UPDATE

2023-07 I am honored to receive another year of the CMU Presidential Fellowship.

2023-06 I am happy to present my Ph.D. research at the TCS Innovation Forum 2023, hosted at Cornell Tech, NYC. Thank you for the invite!

2023-05 I demonstrated and presented our research (and robot) at Mill-19 to the Bosch leadership and AI teams visiting from Germany.

2022-11 I presented our ReAC project at the CMU MFI Research Seminar, along with my colleague Emek Erdolu.

2022-10 Our team presented and demonstrated the ReAC research to the CMU Board of Trustees at Mill-19.

2022-09 I am humbled to be named a Presidential Fellow at Carnegie Mellon University. The fellowship is generously funded by TCS.

2022-05 I received the Manufacturing Futures Initiative (MFI) Fellowship. See Manufacturing Futures Institute.

2021-12 I received the Autodesk Research Merit Scholarship.

2021-08 I helped found our new lab space at Mill-19. See Mill-19 Advanced Research Center.

2021-08 I received the Manufacturing PA Fellowship from the State of Pennsylvania. The research later received recognition from Governor Tom Wolf.

2021-05 I joined the Autodesk AI Lab as a research intern, working in affiliation with the Autodesk Robotics Lab.

2020-10 I was awarded a GSA/Provost GuSH Grant by Carnegie Mellon University.

2019-11 I was awarded a LEGO Scholarship to attend HI'19. Thank you, LEGO Foundation!

2018-05 I co-founded Metascope Tech in Beijing, China.

SELECTED


A Human-Centered, Reinforcement Learning-Driven Robotic Framework for Collaboratively Supporting On-Site Construction Workers

In preparation, doctoral dissertation.

Yuning Wu


Committee: Daniel Cardoso Llach, Jean Oh, Jieliang Luo
I am currently at the Ph.D. ABD (All But Dissertation) stage.


Towards Human-Centered Construction Robotics: An RL-Driven Companion Robot For Contextually Assisting Construction Workers

Under review, submitted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Yuning Wu, Jiaying Wei, Jean Oh, Daniel Cardoso Llach

/ PDF / arXiv / video


TL;DR: This paper documents the technical development of a socially aware "work companion robot" focused on assisting, rather than outright replacing, skilled human workers. By taking on the more physically burdensome tasks within an existing prototypical construction workflow, specifically carpentry formwork, the robot enables workers to concentrate on the higher-skilled and more sophisticated facets of their daily practice. The paper provides a systematic overview of how the robot prototype is architected, driven by recent RL-based social navigation methods. It also proposes a novel pipeline for incrementally fine-tuning a pretrained RL model to align with context-specific worker behaviors while preventing drastic policy shifts during Sim2Real transfer. The robot is evaluated on an actual construction site by a group of workers through direct operation.
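As a rough illustration of what preventing drastic policy shifts during fine-tuning can look like, below is a minimal sketch of one common approach: a PPO-style update with an added KL penalty that anchors the adapted policy to the frozen pretrained one. This is my own simplified example under assumed network sizes, batch format, and coefficients, not the pipeline implemented in the paper.

```python
# Illustrative sketch only (not the paper's code): fine-tune a pretrained
# navigation policy on context-specific data while penalizing divergence
# from the frozen pretrained policy so the adapted policy cannot drift far.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Tiny Gaussian policy over 2D velocity commands from a flat observation."""
    def __init__(self, obs_dim=64, act_dim=2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                      nn.Linear(128, 128), nn.ReLU())
        self.mu = nn.Linear(128, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs):
        h = self.backbone(obs)
        return torch.distributions.Normal(self.mu(h), self.log_std.exp())

def finetune_step(policy, pretrained, optimizer, batch, clip=0.2, kl_coef=1.0):
    """One PPO-style update plus a KL penalty toward the frozen pretrained policy."""
    obs, act, adv, old_logp = batch          # advantages / old log-probs from on-site rollouts
    dist_new = policy.dist(obs)
    logp = dist_new.log_prob(act).sum(-1)

    # Standard clipped surrogate objective.
    ratio = torch.exp(logp - old_logp)
    surr = torch.min(ratio * adv, torch.clamp(ratio, 1 - clip, 1 + clip) * adv)

    # KL term anchoring the updated policy to the pretrained one.
    with torch.no_grad():
        dist_pre = pretrained.dist(obs)
    kl = torch.distributions.kl_divergence(dist_new, dist_pre).sum(-1)

    loss = -(surr - kl_coef * kl).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), kl.mean().item()

# Usage sketch with random data.
policy, pretrained = PolicyNet(), PolicyNet()
pretrained.load_state_dict(policy.state_dict())   # both start from the "pretrained" weights
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
obs, act = torch.randn(32, 64), torch.randn(32, 2)
adv = torch.randn(32)
old_logp = pretrained.dist(obs).log_prob(act).sum(-1).detach()
finetune_step(policy, pretrained, opt, (obs, act, adv, old_logp))
```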


Construction as Computer-Supported Cooperative Work: Designing a Robot for Assisting Carpentry Workers

In revision, submitted to ACM Conference on Computer Supported Cooperative Work (CSCW).

Yuning Wu*, Emek Erdolu*, Jiaying Wei, Jean Oh, Daniel Cardoso Llach


TL;DR: This paper reimagines the integration of robotics into highly manual construction work from a socio-technical perspective. Applying Computer-Supported Cooperative Work (CSCW) principles, it presents a case study on the design, development, and evaluation of a prototype robot built to assist in an existing indoor construction workflow, carpentry formwork. The paper discusses the use of ethnographic and practice-based research methods to understand the cooperative work of construction teams and to inform the design of supportive robot technologies. It proposes specific human-centered benchmarking metrics for evaluating the envisioned robot support and advocates for an approach that supports, rather than completely reconfigures or replaces, the human elements in existing labor-intensive work.


Learning Dense Reward with Temporal Variant Self-Supervision

[ICRA'22] IEEE International Conference on Robotics and Automation 2022 | RL for Contact-Rich Manipulation Workshop.

Yuning Wu, Jieliang Luo, Hui Li

/ PDF / arXiv / slides / code


TL;DR: This paper introduces an efficient framework for deriving dense rewards in reinforcement learning, aimed at enhancing the efficiency and robustness of training robots for contact-rich manipulation tasks. Developed through a collaboration between Carnegie Mellon University and Autodesk Research, this framework utilizes Temporal Variant Forward Sampling (TVFS) and a self-supervised learning architecture. By leveraging temporal variance and multimodal observation pairs, the generated dense reward facilitates faster and more stable policy training in a plug-and-play manner, exemplified in tasks such as joint assembly and door opening.
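For readers curious how temporal self-supervision can produce a dense reward, the sketch below shows one simplified variant: a model trained to regress the normalized temporal distance between observation pairs drawn from the same trajectory, whose output is then reused as a progress-style reward. This is an illustrative approximation under my own assumptions, not the paper's TVFS implementation.

```python
# Illustrative sketch only: learn a temporal-progress predictor from observation
# pairs, then reuse it as a dense reward. All dimensions and names are assumptions.
import torch
import torch.nn as nn

class ProgressModel(nn.Module):
    """Predicts how far apart in time two observations are (0 = same step, 1 = far apart)."""
    def __init__(self, obs_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 64))
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                                  nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, obs_a, obs_b):
        z = torch.cat([self.encoder(obs_a), self.encoder(obs_b)], dim=-1)
        return self.head(z).squeeze(-1)

def sample_pairs(trajectory, num_pairs=64):
    """Self-supervised labels: larger temporal offsets get larger regression targets."""
    T = trajectory.shape[0]
    i = torch.randint(0, T, (num_pairs,))
    j = torch.randint(0, T, (num_pairs,))
    target = (j - i).abs().float() / (T - 1)       # normalized temporal distance
    return trajectory[i], trajectory[j], target

def train_epoch(model, optimizer, trajectories):
    loss_fn = nn.MSELoss()
    for traj in trajectories:                      # each traj: [T, obs_dim] tensor
        obs_a, obs_b, target = sample_pairs(traj)
        loss = loss_fn(model(obs_a, obs_b), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def dense_reward(model, obs, goal_obs):
    """At policy-training time: reward shrinking the predicted distance to the goal observation."""
    with torch.no_grad():
        return 1.0 - model(obs, goal_obs)

# Toy usage with random trajectories.
model = ProgressModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
train_epoch(model, opt, [torch.randn(50, 32) for _ in range(4)])
```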


Towards a Distributed, Robotically-Assisted Construction Framework: Using Reinforcement Learning to Support Scalable Multi-Drone Construction in Dynamic Environments

[ACADIA'20] Annual Conference of the Association of Computer Aided Design in Architecture 2020.

Zhihao Fang, Yuning Wu, Ammar Hassonjee, Ardavan Bidgoli, Daniel Cardoso Llach

/ PDF / CumInCAD / slides / video / code


TL;DR: This paper explores a robotic framework that utilizes a multi-agent reinforcement learning method (MADDPG) to enable drones to execute collaborative construction tasks, such as bricklaying and spray-coating, on complex-shaped geometric structures. It showcases a robotic framework adept at managing intricate real-world designs in dynamic environments, leveraging recent advances in machine learning.
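The structural core of MADDPG, decentralized actors with centralized critics, fits in a few lines. The snippet below is a bare-bones illustration with made-up dimensions, not the project's actual training code (the full algorithm additionally needs replay buffers, target networks, and the actor/critic update rules).

```python
# Structural sketch of MADDPG: each drone acts from local observations, while a
# per-agent critic is trained on everyone's observations and actions. Dimensions
# and names are illustrative assumptions.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                         nn.Linear(128, 128), nn.ReLU(),
                         nn.Linear(128, out_dim))

class DroneAgent(nn.Module):
    def __init__(self, obs_dim, act_dim, n_agents):
        super().__init__()
        self.actor = mlp(obs_dim, act_dim)                    # decentralized: local obs only
        self.critic = mlp(n_agents * (obs_dim + act_dim), 1)  # centralized: joint obs + actions

    def act(self, obs):
        return torch.tanh(self.actor(obs))                    # e.g., a normalized velocity command

    def q_value(self, all_obs, all_acts):
        return self.critic(torch.cat(list(all_obs) + list(all_acts), dim=-1))

# Usage sketch: three drones cooperating on a bricklaying task.
n_agents, obs_dim, act_dim = 3, 24, 4
agents = [DroneAgent(obs_dim, act_dim, n_agents) for _ in range(n_agents)]
obs = [torch.randn(1, obs_dim) for _ in range(n_agents)]
acts = [agent.act(o) for agent, o in zip(agents, obs)]
q = agents[0].q_value(obs, acts)   # during training, each critic conditions on all agents
```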


Teaching an AI Agent Spatial Sense by Playing LEGO

In preparation.

Yuning Wu, Katerina Fragkiadaki


TL;DR: Whether one is an architect or not, the way we design and build reflects our spatial understanding of the physical world. As children, this exploration typically begins by playing with building blocks. This research retraces these initial steps by training an AI agent to play with LEGO bricks in a physics simulator (NVIDIA Omniverse Isaac Gym / PyBullet). A crucial aspect of this study is enabling the AI agent to design valid, and eventually elegant, stacked structures within a specified arena by interpreting spatial relationships among the blocks. Various components of this research are pending completion and will be submitted to top-tier AI and robotics conferences as well as an art venue.
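As a toy illustration of what interpreting spatial relationships among blocks can mean, the snippet below checks whether a stack is valid by requiring every elevated block to rest on sufficient footprint overlap with the layer beneath it. This is my own simplification, not the project's simulator or agent code.

```python
# Toy validity check for stacked blocks: a block above ground level must be
# supported by enough footprint overlap with blocks directly below it.
from dataclasses import dataclass

@dataclass
class Block:
    x: float        # footprint center x
    y: float        # footprint center y
    z: int          # discrete layer index (0 = ground)
    w: float = 2.0  # footprint width
    d: float = 1.0  # footprint depth

def overlap_area(a: Block, b: Block) -> float:
    ox = max(0.0, min(a.x + a.w / 2, b.x + b.w / 2) - max(a.x - a.w / 2, b.x - b.w / 2))
    oy = max(0.0, min(a.y + a.d / 2, b.y + b.d / 2) - max(a.y - a.d / 2, b.y - b.d / 2))
    return ox * oy

def is_valid_stack(blocks, min_support=0.25):
    """Every non-ground block needs >= min_support of its footprint resting on the layer below."""
    for blk in blocks:
        if blk.z == 0:
            continue
        support = sum(overlap_area(blk, other) for other in blocks if other.z == blk.z - 1)
        if support < min_support * blk.w * blk.d:
            return False
    return True

# Example: a two-layer offset stack that still has enough support.
tower = [Block(0.0, 0.0, 0), Block(0.8, 0.0, 1)]
print(is_valid_stack(tower))   # True
```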


Design Architecture with Graphs: Utilize Space Connectivity and Graph Neural Networks To Inform Layout Design

Master Thesis Project

Yuning Wu

/ link


TL;DR: This thesis project explores the potential of utilizing space connectivity and graph neural networks (GNNs) to inform layout design. It proposes a novel computational framework that leverages the spatial relationships between architectural elements to generate more informed and contextually relevant design solutions, in contrast to solutions that are purely data-driven or rule-based.
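To make the idea concrete, here is a minimal sketch, assuming PyTorch Geometric rather than the thesis codebase: rooms are nodes with feature vectors, space connectivity defines the edges, and a small GCN predicts a room-type label per node.

```python
# Minimal sketch (assumed PyTorch Geometric, illustrative dimensions): a GCN over
# a space-connectivity graph that predicts a room-type label for each room node.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

class LayoutGNN(torch.nn.Module):
    def __init__(self, in_dim=8, hidden=32, num_room_types=5):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, num_room_types)

    def forward(self, data):
        x = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(x, data.edge_index)     # per-room logits

# A toy four-room connectivity graph: rooms 0-1, 1-2, and 1-3 are connected.
edge_index = torch.tensor([[0, 1, 1, 2, 1, 3],
                           [1, 0, 2, 1, 3, 1]], dtype=torch.long)
x = torch.randn(4, 8)                    # per-room features (e.g., area, aspect ratio, daylight)
graph = Data(x=x, edge_index=edge_index)

model = LayoutGNN()
logits = model(graph)                    # shape [4, num_room_types]
print(logits.argmax(dim=-1))             # predicted room-type assignment per node
```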

State Space Paradox of Computational Research in Creativity

[HI'19] Thirtieth Anniversary 'Heron Island' Conference on Computational and Cognitive Models of Creative Design.

Ömer Akin, Yuning Wu

/ PDF / arXiv / slides


In memory of Professor Ömer Akin (1942-2020). This is one of his last papers, and I was privileged to organize and present it. This paper explores the State Space Paradox (SSP) in computational creativity research, discussing how current digital systems, designed as closed systems with predefined parameters, are inherently limited in their creative capacity compared to the open systems of human creativity. It examines various procedural and representational approaches to artificial creativity, such as genetic algorithms and shape grammars, and critiques their ability to truly emulate human creativity. The paradox lies in the fact that while these systems can produce outputs that may initially seem creative, they are ultimately unable to exceed their predefined state spaces and thus fail to achieve genuine novelty or adapt beyond their initial programming.