Hi, I am Take Ohkawa (大川 武彦).
I am a PhD student (2021-) at the Graduate School of Information Science and Technology, The University of Tokyo, advised by
Prof. Yoichi Sato.
During my PhD, I interned at Meta Reality Labs, hosted by Dr. Takaaki Shiratori in Pittsburgh in 2024 and by Dr. Kun He in Redmond in 2022.
In 2023, I joined Prof. Marc Pollefeys' lab at ETH Zurich as a Visiting Researcher and collaborated with Dr. Jinglu Wang at MSRA.
I worked closely with Dr. Yoshitaka Ushiku at OMRON SINIC X.
I joined Prof. Kris Kitani's lab at CMU as a Research Scholar in 2021.
Regarding my research projects and funding, I was the Principal Investigator of a JST ACT-X project (2020-2023) and a JSPS Research Fellow (DC1) (2022-2024).
I received competitive fellowships from Google (2024), Microsoft Research Asia (2023), and Leading House Asia ETH Zurich (2023).
I'm looking for research positions in academia or industry starting in Fall 2025.
Please feel free to contact me if you are interested in my research.
My research focuses on computer vision for human sensing and understanding, approached from dual perspectives: precisely estimating humans' external states, such as physical poses, and inferring their internal states, such as intentions.
This approach facilitates recognizing human interactions in the real world, connecting humans with the virtual world, and augmenting our perceptual capabilities via assistive AI systems.
Specifically, I am keenly interested in the following topics:
Our HANDS workshop will gather vision researchers working on perceiving hands performing actions, including 2D & 3D hand detection, segmentation, pose/shape estimation, tracking, etc. We will also cover related applications including gesture recognition, hand-object manipulation analysis, hand activity understanding, and interactive interfaces.
We present a novel framework that concurrently tackles hand action recognition and 3D future hand motion prediction.
Benchmarks and Challenges in Pose Estimation for Egocentric Hand Interactions with Objects
Zicong Fan*, Takehiko Ohkawa*, Linlin Yang*, Nie Lin, Zhishan Zhou, Shihao Zhou, Jiajun Liang, Zhong Gao, Xuanyang Zhang, Xue Zhang, Fei Li, Liu Zheng, Feng Lu, Karim Abou Zeid, Bastian Leibe, Jeongwan On, Seungryul Baek, Aditya Prakash, Saurabh Gupta, Kun He, Yoichi Sato, Otmar Hilliges, Hyung Jin Chang, and Angela Yao (*equal contribution)
European Conference on Computer Vision (ECCV), 2024 [Paper]
We present a comprehensive summary of the HANDS23 challenge using the AssemblyHands and ARCTIC datasets. Based on the results of the top submitted methods and more recent baselines on the leaderboards, we perform a thorough analysis of 3D hand(-object) reconstruction tasks.
We present EgoYC2, a novel benchmark for cross-view knowledge transfer of dense video captioning, adapting models from web instructional videos with exocentric views to an egocentric view.
We present AssemblyHands, a large-scale benchmark dataset with accurate 3D hand pose annotations, to facilitate the study of challenging hand-object interactions from egocentric videos.
We present a systematic review of 3D hand pose estimation from the perspective of efficient annotation and learning. 3D hand pose estimation has been an important research area owing to its potential to enable various applications, such as video understanding, AR/VR, and robotics.
We tackled domain adaptation of hand keypoint regression and hand segmentation to in-the-wild egocentric videos with new imaging conditions (e.g., Ego4D).
We propose Background Mixup augmentation, which leverages data-mixing regularization for hand-object detection while avoiding the unintended effects produced by naive Mixup.
We developed a domain adaptation method for hand segmentation, consisting of appearance gap reduction via stylization and learning with pseudo-labels generated by network consensus.
We developed extended consistency regularization for stabilizing the training of image translation models using real, fake, and reconstructed samples.
Research & Work Experience
[Nov 2020 - Present] Research assistant, Sato Lab, UTokyo
[Jun 2024 - Nov 2024] Research internship, Meta Reality Labs @Pittsburgh
[Jul 2023 - Mar 2024] Visiting researcher, CVG Group, ETH Zurich
[Apr 2023 - Mar 2024] Research collaboration, Microsoft Research Asia
[Jan 2023 - May 2023] Research internship, OMRON SINIC X Corp.
[May 2022 - Nov 2022] Research internship, Meta Reality Labs @Redmond
[Sep 2021 - Mar 2022] Research scholar, Kitani Lab, CMU
[Aug 2020 - Aug 2021] Research internship, OMRON SINIC X Corp.
[Oct 2019 - May 2020] Research internship, Neural Pocket Inc.
[Aug 2019 - Mar 2020] Research assistant, Inoue Lab, TokyoTech
[Aug 2019 - Sep 2019] Engineering internship, teamLab Inc.
[Dec 2017 - Nov 2018] Research internship, Cross Compass Ltd.
Awards & Grants
Google PhD Fellowship in Machine Perception, 2024
JSPS DC1 Special Stipends for Excellent Research Results, 2024
ACT-X Travel Grant for International Research Meetings, 2024
UTokyo-IIS Travel Grant for International Research Meetings, 2024
JSPS Research Fellowship for Young Scientists (DC1), 2022-2024
Leading House Asia ETH Zurich "Young Researchers' Exchange Programme", 2023
Microsoft Research Asia Collaborative Research Program D-CORE, 2023
JST ACT-X Acceleration Phase of "Frontier of Mathematics and Information Science", 2023
JST ACT-X "Frontier of Mathematics and Information Science", 2020-2022
UTokyo-IIS Travel Grant for International Research Meetings, 2022
JASSO Scholarship for Excellent Master Students at UTokyo, 2021
UTokyo-IIS Research Collaboration Initiative Award, 2021
MIRU Student Encouragement Award, 2021
PRMU Best Presentation of the Month, 2020
JEES/Softbank AI Scholarship, 2020
Tokio Marine Kagami Memorial Foundation Scholarship, 2018-2020