Take Ohkawa

Hi, I am Take Ohkawa (大川 武彦).
I am a PhD student (2021-) at the Graduate School of Information Science and Technology, The University of Tokyo, advised by Prof. Yoichi Sato.

During my PhD, I interned at Meta Reality Labs, hosted by Dr. Takaaki Shiratori in Pittsburgh in 2024 and by Dr. Kun He in Redmond in 2022. In 2023, I joined Prof. Marc Pollefeys's lab at ETH Zurich as a Visiting Researcher and collaborated with Dr. Jinglu Wang at MSRA. I have also worked closely with Dr. Yoshitaka Ushiku at OMRON SINIC X, and I joined Prof. Kris Kitani's lab at CMU as a Research Scholar in 2021.

Regarding my research projects and funding, I was the Principal Investigator of a JST ACT-X project (2020-2023) and a JSPS Research Fellow (DC1) (2022-2024). I received competitive fellowships from Google (2024), Microsoft Research Asia (2023), and Leading House Asia ETH Zurich (2023).

E-mail: ohkawa-t [at] iis.u-tokyo.ac.jp
Google Scholar  /  LinkedIn  /  X (Twitter)  /  CV

I'm looking for research positions in academia or industry starting in Fall 2025. Please feel free to contact me if you are interested in my research.

News

[Nov 2024] Received the Google PhD Fellowship in Machine Perception!
[Oct 2024] Paper "Exo2EgoDVC" got accepted to WACV 2025.
[Jul 2024] Three papers "HANDS'23 Analysis", "GHTT", "HandCLR" got accepted to ECCV 2024.
[Jun 2024] Started an internship at Meta Reality Labs @Pittsburgh.
[Apr 2024] Two papers "S2DHand" and "Exo2EgoDVC" got accepted to CVPR 2024.
[Apr 2024] The 8th HANDS workshop proposal got accepted to ECCV 2024 with Dr. Linlin at CUC. See you in Milan!
[Apr 2024] Gave an invited presentation at the JST ASPIRE HCVM workshop, UTokyo-IIS.
[Jul 2023] Paper "Survey on 3D Hand Pose Estimation" got accepted to IJCV.
[Jul 2023] Started working as a Visiting Researcher at CVG Group, ETH Zurich.

Past updates

[Jun 2023] Gave an invited talk at CVML Group, NUS.
[Apr 2023] Research proposals got accepted to JST ACT-X Acceleration Phase and MSRA.
[Mar 2023] Will host the 7th HANDS workshop at ICCV 2023 with Prof. Angela at NUS. See you in Paris!
[Feb 2023] Paper "AssemblyHands Benchmark" got accepted to CVPR 2023.
[Jul 2022] Paper "Hand State Estimation in the Wild" got accepted to ECCV 2022.
[Feb 2022] Received the UTokyo-IIS Research Collaboration Initiative Award, along with Oculus Quest headsets!
[Sep 2021] Obtained M.A.S. in 1.5 years (Early Graduation), UTokyo.
[Jun 2021] Paper "Domain Adaptation of Hand Segmentation" got accepted to IEEE Access 2021.
[Oct 2020] Research proposal got accepted to JST ACT-X.
[Oct 2020] Paper "Augmented Cyclic Consistency Regularization" got accepted to ICPR 2020.
[Apr 2020] Joined Sato/Sugano Lab at UTokyo.
[Mar 2020] Obtained B.E. in 3 years (Early Graduation) at TokyoTech.
[Oct 2019] Received an NVIDIA RTX 2080 Ti as a gift from Yu Darvish, the Japanese MLB player I respect the most!
[Oct 2019] Joined Inoue Lab at TokyoTech.

Research

My research focuses on computer vision for human sensing and understanding, approached from two complementary perspectives: precisely estimating people's external states, such as physical poses, and inferring their internal states, such as intentions. This approach facilitates recognizing human interactions in the real world, connecting humans with the virtual world, and augmenting our perceptual capabilities via assistive AI systems. Specifically, I am keenly interested in the following topics:

  • pose estimation and 3D reconstruction
  • video and activity understanding
  • egocentric vision and AR/VR technologies
  • self-supervised learning and transfer learning
  • human-computer interaction

HANDS Workshops: Observing and Understanding Hands in Action
Contributed as an organizer and challenge committee member
European Conference on Computer Vision Workshops (ECCVW), 2024
[HANDS@ECCV2024] / [HANDS@ICCV2023]

Our HANDS workshop will gather vision researchers working on perceiving hands performing actions, including 2D & 3D hand detection, segmentation, pose/shape estimation, tracking, etc. We will also cover related applications including gesture recognition, hand-object manipulation analysis, hand activity understanding, and interactive interfaces.

Pre-Training for 3D Hand Pose Estimation with Contrastive Learning on Large-Scale Hand Images in the Wild
Nie Lin*, Takehiko Ohkawa*, Mingfang Zhang, Yifei Huang, Ryosuke Furuta, Yoichi Sato (*equal contribution)
HANDS, European Conference on Computer Vision Workshops (ECCVW), 2024
[Paper]

We present a contrastive learning framework based on in-the-wild hand images tailored for pre-training 3D hand pose estimators, dubbed HandCLR.

Generative Hierarchical Temporal Transformer for Hand Action Recognition and Motion Prediction
Yilin Wen, Hao Pan, Takehiko Ohkawa, Lei Yang, Jia Pan, Yoichi Sato, Taku Komura, and Wenping Wang
HANDS, European Conference on Computer Vision Workshops (ECCVW), 2024
[Paper]

We present a novel framework that concurrently tackles hand action recognition and 3D future hand motion prediction.

Benchmarks and Challenges in Pose Estimation for Egocentric Hand Interactions with Objects
Zicong Fan*, Takehiko Ohkawa*, Linlin Yang*, Nie Lin, Zhishan Zhou, Shihao Zhou, Jiajun Liang, Zhong Gao, Xuanyang Zhang, Xue Zhang, Fei Li, Liu Zheng, Feng Lu, Karim Abou Zeid, Bastian Leibe, Jeongwan On, Seungryul Baek, Aditya Prakash, Saurabh Gupta, Kun He, Yoichi Sato, Otmar Hilliges, Hyung Jin Chang, and Angela Yao (*equal contribution)
European Conference on Computer Vision (ECCV), 2024
[Paper]

We present a comprehensive summary of the HANDS'23 challenge using the AssemblyHands and ARCTIC datasets. Based on the results of the top submitted methods and more recent baselines on the leaderboards, we perform a thorough analysis of 3D hand(-object) reconstruction tasks.

Exo2EgoDVC: Dense Video Captioning of Egocentric Human Activities Using Web Instructional Videos
Takehiko Ohkawa, Takuma Yagi, Taichi Nishimura, Ryosuke Furuta, Atsushi Hashimoto, Yoshitaka Ushiku, and Yoichi Sato
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2025
LPVL, IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2024
[Paper]

We present EgoYC2, a novel benchmark for cross-view knowledge transfer of dense video captioning, adapting models from web instructional videos with exocentric views to an egocentric view.

Single-to-Dual-View Adaptation for Egocentric 3D Hand Pose Estimation
Ruicong Liu, Takehiko Ohkawa, Mingfang Zhang, Yoichi Sato
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
Invited Poster Presentation at EgoVis Workshop, CVPRW, 2024
Invited Oral Presentation at Forum on Information Technology (FIT), 2024
[Paper] [Code]

We propose a novel Single-to-Dual-view adaptation (S2DHand) solution that adapts a pre-trained single-view estimator to dual views.

AssemblyHands: Towards Egocentric Activity Understanding via 3D Hand Pose Estimation
Takehiko Ohkawa, Kun He, Fadime Sener, Tomas Hodan, Luan Tran, and Cem Keskin
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
Invited Oral Presentation at Ego4D & EPIC Workshop, CVPRW, 2023
Poster Presentation at International Computer Vision Summer School (ICVSS), 2023
HANDS Workshop Benchmark Dataset, ICCVW, 2023
[Paper] [Project] [Code & Data]

We present AssemblyHands, a large-scale benchmark dataset with accurate 3D hand pose annotations, to facilitate the study of challenging hand-object interactions from egocentric videos.

Efficient Annotation and Learning for 3D Hand Pose Estimation: A Survey
Takehiko Ohkawa, Ryosuke Furuta, and Yoichi Sato
International Journal of Computer Vision (IJCV), 2023
[Paper] [Springer] [Slides]

We present a systematic review of 3D hand pose estimation from the perspective of efficient annotation and learning. 3D hand pose estimation has been an important research area owing to its potential to enable various applications, such as video understanding, AR/VR, and robotics.

Domain Adaptive Hand Keypoint and Pixel Localization in the Wild
Takehiko Ohkawa, Yu-Jhe Li, Qichen Fu, Ryosuke Furuta, Kris M. Kitani, and Yoichi Sato
European Conference on Computer Vision (ECCV), 2022
Invited Poster Presentation at HANDS and HBHA workshops, ECCVW, 2022
Invited Oral Presentation at Meeting on Image Recognition and Understanding (MIRU), 2023
[Paper] [Project] [Slides]

We tackled domain adaptation of hand keypoint regression and hand segmentation to in-the-wild egocentric videos with new imaging conditions (e.g., Ego4D).

Background Mixup Data Augmentation for Hand and Object-in-Contact Detection
Koya Tango, Takehiko Ohkawa, Ryosuke Furuta, and Yoichi Sato
HANDS, European Conference on Computer Vision Workshops (ECCVW), 2022
[Paper]

We propose Background Mixup augmentation, which leverages data-mixing regularization for hand-object detection while avoiding the unintended effects produced by naive Mixup.

Foreground-Aware Stylization and Consensus Pseudo-Labeling for Domain Adaptation of First-Person Hand Segmentation
Takehiko Ohkawa, Takuma Yagi, Atsushi Hashimoto, Yoshitaka Ushiku, and Yoichi Sato
IEEE Access, 2021
[Paper] [IEEE Xplore] [Project] [Code & Data]

We developed a domain adaptation method for hand segmentation, consisting of appearance gap reduction by stylization and learning with pseudo-labels generated by network consensus.

Augmented Cyclic Consistency Regularization for Unpaired Image-to-Image Translation
Takehiko Ohkawa, Naoto Inoue, Hirokatsu Kataoka, and Nakamasa Inoue
International Conference on Pattern Recognition (ICPR), 2020
[Paper]

We developed extended consistency regularization for stabilizing the training of image translation models using real, fake, and reconstructed samples.

Research & Work Experience

[Nov 2020 - Present] Research assistant, Sato Lab, UTokyo
[Jun 2024 - Nov 2024] Research internship, Meta Reality Labs @Pittsburgh
[Jul 2023 - Mar 2024] Visiting researcher, CVG Group, ETH Zurich
[Apr 2023 - Mar 2024] Research collaboration, Microsoft Research Asia
[Jan 2023 - May 2023] Research internship, OMRON SINIC X Corp.
[May 2022 - Nov 2022] Research internship, Meta Reality Labs @Redmond
[Sep 2021 - Mar 2022] Research scholar, Kitani Lab, CMU
[Aug 2020 - Aug 2021] Research internship, OMRON SINIC X Corp.
[Oct 2019 - May 2020] Research internship, Neural Pocket Inc.
[Aug 2019 - Mar 2020] Research assistant, Inoue Lab, TokyoTech
[Aug 2019 - Sep 2019] Engineering internship, teamLab Inc.
[Dec 2017 - Nov 2018] Research internship, Cross Compass Ltd.

Awards & Grants

Google PhD Fellowship in Machine Perception, 2024
JSPS DC1 Special Stipends for Excellent Research Results, 2024
ACT-X Travel Grant for International Research Meetings, 2024
UTokyo-IIS Travel Grant for International Research Meetings, 2024
JSPS Research Fellowship for Young Scientists (DC1), 2022-2024
Leading House Asia ETH Zurich "Young Researchers' Exchange Programme", 2023
Microsoft Research Asia Collaborative Research Program D-CORE, 2023
JST ACT-X Acceleration Phase of "Frontier of Mathematics and Information Science", 2023
JST ACT-X "Frontier of Mathematics and Information Science", 2020-2022
UTokyo-IIS Travel Grant for International Research Meetings, 2022
JASSO Scholarship for Excellent Master Students at UTokyo, 2021
UTokyo-IIS Research Collaboration Initiative Award, 2021
MIRU Student Encouragement Award, 2021
PRMU Best Presentation of the Month, 2020
JEES/Softbank AI Scholarship, 2020
Tokio Marine Kagami Memorial Foundation Scholarship, 2018-2020

Activities

Professional Service:

    Reviewer: MIRU'22-, CVPR'24-, ECCV'24-
    Workshop organizer: HANDS (ECCV'24, ICCV'23)

Conference Log:

  • 2025: WACV (Tucson, USA)
  • 2024: CVPR (Seattle, USA), ECCV (Milan, Italy)
  • 2023: CVPR (Vancouver, Canada), ICVSS (Sicily, Italy), ICCV (Paris, France)
  • 2022: ECCV (Tel-Aviv, Israel), SIGGRAPH (Vancouver, Canada), WACV (Hawaii, USA)
  • 2019: ICCV (Seoul, Korea)


© Takehiko Ohkawa / Design: jonbarron