Take Ohkawa

Hi, I am Take Ohkawa (大川 武彦).
I am a PhD student (2021-) at the Graduate School of Information Science and Technology, The University of Tokyo, advised by Prof. Yoichi Sato.

During my PhD journey, I will be an incoming intern at Meta Reality Labs in Pittsburgh in 2024, and I interned with Dr. Kun He in Redmond in 2022. In 2023, I joined Prof. Marc Pollefeys' lab at ETH Zurich as a Visiting Researcher and collaborated with Dr. Jinglu Wang at MSRA. I have also worked closely with Dr. Yoshitaka Ushiku at OMRON SINIC X, and I joined Prof. Kris Kitani's lab at CMU as a Research Scholar in 2021.

I am a Principal Investigator of a JST ACT-X project (2020-2024) and a Research Fellow (DC1) of the JSPS Research Fellowships for Young Scientists (2022-2025). I received fellowships from Microsoft Research Asia (2023) and Leading House Asia ETH Zurich (2023).

I received my master's degree with early completion (1.5 years) from The University of Tokyo under the supervision of Prof. Yoichi Sato and Prof. Jun Rekimoto. Prior to that, I received my bachelor's degree with early completion (3 years) from the Tokyo Institute of Technology under the guidance of Prof. Nakamasa Inoue.

E-mail: ohkawa-t [at] iis.u-tokyo.ac.jp
Google Scholar  /  LinkedIn  /  X (Twitter)  /  CV

I'm looking for research positions in academia or industry starting in Fall 2025. Please feel free to contact me if you are interested in my research.

Profile photo: Gornergrat, Zermatt, Switzerland
News

[Apr 2024] The 8th HANDS workshop proposal got accepted to ECCV 2024 with Dr. Linlin at CUC. See you in Milan!
[Apr 2024] Gave an invited presentation at the JST ASPIRE HCVM workshop, UTokyo-IIS.
[Mar 2024] Paper "Single-to-Dual-View Hand Pose Estimation" got accepted to CVPR 2024.
[Jul 2023] Paper "Survey on 3D Hand Pose Estimation" got accepted to IJCV.
[Jul 2023] Started working as a Visiting Researcher at CVG Group, ETH Zurich.
[Jun 2023] Gave an invited talk at CVML Group, NUS.
[Apr 2023] Research proposals got accepted to JST ACT-X Acceleration Phase and MSRA.

Past updates
[Mar 2023] Hosting the 7th HANDS workshop at ICCV 2023 with Prof. Angela at NUS. See you in Paris!
[Feb 2023] Paper "AssemblyHands Benchmark" got accepted to CVPR 2023.
[Jul 2022] Paper "Hand State Estimation in the Wild" got accepted to ECCV 2022.
[Feb 2022] Received the UTokyo-IIS Research Collaboration Initiative Award, along with Oculus Quest headsets!
[Sep 2021] Obtained M.A.S. in 1.5 years (early graduation), UTokyo.
[Jun 2021] Paper "Domain Adaptation of Hand Segmentation" got accepted to IEEE Access 2021.
[Oct 2020] Research proposal got accepted to JST ACT-X.
[Oct 2020] Paper "Augmented Cyclic Consistency Regularization" got accepted to ICPR2020.
[Apr 2020] Joined Sato/Sugano Lab at UTokyo.
[Mar 2020] Obtained B.E. in 3 years (early graduation) at TokyoTech.
[Oct 2019] Gifted an NVIDIA RTX 2080 Ti by Yu Darvish, the Japanese MLB player I respect the most!
[Oct 2019] Joined Inoue Lab at TokyoTech.

Research

My research focuses on computer vision for human sensing and understanding, striving to achieve this from dual perspectives: precisely estimating observable external states, such as physical poses, and inferring deeper internal states, such as intentions. This approach facilitates recognizing human interactions in the real world, connecting humans with the virtual world, and augmenting our perceptual capabilities via assistive AI systems.

Benchmarks and Challenges in Pose Estimation for Egocentric Hand Interactions with Objects
Zicong Fan*, Takehiko Ohkawa*, Linlin Yang*, Nie Lin, Zhishan Zhou, Shihao Zhou, Jiajun Liang, Zhong Gao, Xuanyang Zhang, Xue Zhang, Fei Li, Liu Zheng, Feng Lu, Karim Abou Zeid, Bastian Leibe, Jeongwan On, Seungryul Baek, Aditya Prakash, Saurabh Gupta, Kun He, Yoichi Sato, Otmar Hilliges, Hyung Jin Chang, and Angela Yao (* equal contribution)
[Paper]

We present a comprehensive summary of the HANDS23 challenge, built on the AssemblyHands and ARCTIC datasets. Drawing on the results of the top submitted methods and more recent baselines on the leaderboards, we perform a thorough analysis of 3D hand(-object) reconstruction tasks.

Exo2EgoDVC: Dense Video Captioning of Egocentric Human Activities Using Web Instructional Videos
Takehiko Ohkawa, Takuma Yagi, Taichi Nishimura, Ryosuke Furuta, Atsushi Hashimoto, Yoshitaka Ushiku, and Yoichi Sato
[Paper]

We present EgoYC2, a novel benchmark for cross-view knowledge transfer of dense video captioning, adapting models from web instructional videos with exocentric views to an egocentric view.

Generative Hierarchical Temporal Transformer for Hand Action Recognition and Motion Prediction
Yilin Wen, Hao Pan, Takehiko Ohkawa, Lei Yang, Jia Pan, Yoichi Sato, Taku Komura, and Wenping Wang
[Paper]

We present a novel framework that concurrently tackles hand action recognition and 3D future hand motion prediction.

Single-to-Dual-View Adaptation for Egocentric 3D Hand Pose Estimation
Ruicong Liu, Takehiko Ohkawa, Mingfang Zhang, and Yoichi Sato
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
[Paper]

We propose a novel Single-to-Dual-view adaptation (S2DHand) solution that adapts a pre-trained single-view estimator to dual views.

AssemblyHands: Towards Egocentric Activity Understanding via 3D Hand Pose Estimation
Takehiko Ohkawa, Kun He, Fadime Sener, Tomas Hodan, Luan Tran, and Cem Keskin
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
Invited Oral Presentation at Ego4D & EPIC Workshop, CVPRW, 2023
Poster Presentation at International Computer Vision Summer School (ICVSS), 2023
HANDS Workshop Benchmark Dataset, ICCVW, 2023
[Paper] [Project] [Leaderboard] [Code & Data]

We present AssemblyHands, a large-scale benchmark dataset with accurate 3D hand pose annotations, to facilitate the study of challenging hand-object interactions from egocentric videos.

Efficient Annotation and Learning for 3D Hand Pose Estimation: A Survey
Takehiko Ohkawa, Ryosuke Furuta, and Yoichi Sato
International Journal of Computer Vision (IJCV), 2023
[Paper] [Springer] [Slides]

We present a systematic review of 3D hand pose estimation from the perspective of efficient annotation and learning. 3D hand pose estimation has been an important research area owing to its potential to enable various applications, such as video understanding, AR/VR, and robotics.

Domain Adaptive Hand Keypoint and Pixel Localization in the Wild
Takehiko Ohkawa, Yu-Jhe Li, Qichen Fu, Ryosuke Furuta, Kris M. Kitani, and Yoichi Sato
European Conference on Computer Vision (ECCV), 2022
Invited Poster Presentation at HANDS and HBHA workshops, ECCVW, 2022
Invited Oral Presentation at Meeting on Image Recognition and Understanding (MIRU), 2023
[Paper] [Project] [Slides]

We tackled domain adaptation of hand keypoint regression and hand segmentation to in-the-wild egocentric videos with new imaging conditions (e.g., Ego4D).

Background Mixup Data Augmentation for Hand and Object-in-Contact Detection
Koya Tango, Takehiko Ohkawa, Ryosuke Furuta, and Yoichi Sato
HANDS, European Conference on Computer Vision Workshops (ECCVW), 2022
[Paper]

We propose Background Mixup augmentation, which leverages data-mixing regularization for hand-object detection while avoiding the unintended effects produced by naive Mixup.

Foreground-Aware Stylization and Consensus Pseudo-Labeling for Domain Adaptation of First-Person Hand Segmentation
Takehiko Ohkawa, Takuma Yagi, Atsushi Hashimoto, Yoshitaka Ushiku, and Yoichi Sato
IEEE Access, 2021
[Paper] [IEEE Xplore] [Project] [Code & Data]

We developed a domain adaptation method for hand segmentation, consisting of appearance gap reduction by stylization and learning with pseudo-labels generated by network consensus.

Augmented Cyclic Consistency Regularization for Unpaired Image-to-Image Translation
Takehiko Ohkawa, Naoto Inoue, Hirokatsu Kataoka, and Nakamasa Inoue
International Conference on Pattern Recognition (ICPR), 2020
[Paper]

We developed extended consistency regularization for stabilizing the training of image translation models using real, fake, and reconstructed samples.

Research & Work Experience

[Nov 2020 - Present] Research assistant, Sato Lab, UTokyo
[Jun 2024 - Nov 2024] Research internship, Meta Reality Labs @Pittsburgh
[Jul 2023 - Mar 2024] Visiting researcher, CVG Group, ETH Zurich
[Apr 2023 - Mar 2024] Research collaboration, Microsoft Research Asia
[Jan 2023 - May 2023] Research internship, OMRON SINIC X Corp.
[May 2022 - Nov 2022] Research internship, Meta Reality Labs @Redmond
[Sep 2021 - Mar 2022] Research scholar, Kitani Lab, CMU
[Aug 2020 - Aug 2021] Research internship, OMRON SINIC X Corp.
[Oct 2019 - May 2020] Research internship, Neural Pocket Inc.
[Aug 2019 - Mar 2020] Research assistant, Inoue Lab, TokyoTech
[Aug 2019 - Sep 2019] Engineering internship, teamLab Inc.
[Dec 2017 - Nov 2018] Research internship, Cross Compass Ltd.

Awards & Grants

JSPS Research Fellowship for Young Scientists (DC1), 2022-2025
UTokyo-IIS Travel Grant for International Research Meetings, 2024
Leading House Asia ETH Zurich "Young Researchers' Exchange Programme", 2023
Microsoft Research Asia Collaborative Research Program D-CORE, 2023
JST ACT-X Acceleration Phase of "Frontier of Mathematics and Information Science", 2023
JST ACT-X "Frontier of Mathematics and Information Science", 2020-2023
UTokyo-IIS Travel Grant for International Research Meetings, 2022
JASSO Scholarship for Excellent Master Students at UTokyo, 2021
UTokyo-IIS Research Collaboration Initiative Award, 2021
MIRU Student Encouragement Award, 2021
PRMU Best Presentation of the Month, 2020
JEES/Softbank AI Scholarship, 2020
Tokio Marine Kagami Memorial Foundation Scholarship, 2018-2020

Activities

Professional Service:

    Reviewer: MIRU'22-, CVPR'24-, ECCV'24-
    Workshop organizer: HANDS (ECCV'24, ICCV'23)

Lab Visits:

Conference Participation:

  • 2023: CVPR (Vancouver, Canada), ICVSS (Sicily, Italy), ICCV (Paris, France)
  • 2022: ECCV (Tel-Aviv, Israel), SIGGRAPH (Vancouver, Canada), WACV (Hawaii, USA)
  • 2019: ICCV (Seoul, Korea)


© Takehiko Ohkawa / Design: jonbarron