Irving Fang

I am a Computer Science PhD student in the AI4CE Lab at NYU, led by Prof. Chen Feng.

I obtained my bachelor's degree from UC Berkeley, double majoring in Data Science (Robotics Emphasis) and Mathematics, with minors in Japanese Literature and EECS.
At UC Berkeley, I was fortunate to work with Prof. Alice Agogino at her BEST Lab and at Squishy Robotics.

During Summer 2022, I interned at Mitsubishi Electric Research Laboratories (MERL), working with Dr. Radu Corcodel on tactile sensing and deep reinforcement learning.

In my free time I enjoy playing with MCU/FPGA boards. I am also a fan of clothing/jewelry design and video games.

Email  /  CV  /  Google Scholar  /  GitHub

Research

At the broadest level, my research interests lie at the intersection of robotics, computer vision, and machine learning.

Specifically, I am interested in contact-rich manipulation: can we make robots as dexterous, adaptive, and efficient as humans when the robot makes contact with the manipulated object, the environment, or even the humans around it?

I like to think of it as a chain of challenges in trajectory optimization, perception, simulation, hardware design, and so on. Naturally, such a complicated problem calls for a variety of techniques, including deep learning, tactile sensing, model predictive control, large vision-language models, and neuromorphic computing.

In my free time, I also contribute my computational skills to scientific research in other fields, such as anthropology.

For collaboration, click here.

FusionSense: Bridging Common Sense, Vision, and Touch for Robust Sparse-View Reconstruction
Irving Fang*, Kairui Shi*, Xujin He*, Siqi Tan, Yifan Wang, Hanwen Zhao, Hung-Jui Huang, Wenzhen Yuan, Chen Feng, Jing Zhang (* for equal contribution)
ICRA 2025 (under review).
project page / arXiv / code

A robot reconstructs visually and geometrically accurate surroundings from sparse visual and tactile data.

VLM See, Robot Do: Human Demo Video to Robot Action Plan via Vision Language Model
Juexiao Zhang*, Beicheng Wang*, Shuwen Dong†, Irving Fang†, Chen Feng (*, † for equal contribution)
ICRA 2025 (under review).
project page / arXiv / code

Letting a robot follow a human's actions after watching just one video.

EgoPAT3Dv2: Predicting 3D Action Target from 2D Egocentric Vision for Human-Robot Interaction
Irving Fang*, Yuzhong Chen*, Yifan Wang*, Jianghan Zhang†, Qiushi Zhang†, Jiali Xu†, Xibo He, Weibo Gao, Hao Su, Yiming Li, Chen Feng (*, † for equal contribution)
ICRA 2024
project page / arXiv / code

Predicting 3D action targets from 2D egocentric vision, toward human-robot interaction in a potentially AR world.

DeepExplorer: Metric-Free Exploration for Topological Mapping by Task and Motion Imitation in Feature Space
Yuhang He*, Irving Fang*, Yiming Li, Rushi Bhavesh Shah, Chen Feng (* for equal contribution)
RSS 2023
project page / arXiv / code

A simple and effective framework for lightweight active visual exploration using only RGB images as input.

Dynamic Placement of Rapidly Deployable Mobile Sensor Robots Using Machine Learning and Expected Value of Information
Alice Agogino, Hae Young Jang, Vivek Rao, Ritik Batra, Felicity Liao, Rohan Sood, Irving Fang, R Lily Hu, Emerson Shoichet-Bartus, John Matranga (Authors ordered by department affiliation, not contribution)
ASME IMECE 2021
arXiv / code

A framework for optimizing the deployment of emergency sensors using a Long Short-Term Memory (LSTM) neural network and the Expected Value of Information (EVI).

Personal Projects

Please visit this repo; it contains pointers to personal projects ranging from robotics to a RISC-V CPU implemented on a Xilinx FPGA board.

Teaching

Teaching Aide, ROB-UY 3203 Robot Vision, Spring 2023
Teaching Aide, ROB-GY 6203 Robot Perception, Fall 2022
Teaching Aide, ROB-UY 3203 Robot Vision, Spring 2022

Service

Reviewer, ICRA 2024
Reviewer, DARS 2024


This website is based on Dr. Jon Barron's source code.