Youngkyoon Jang, Ph.D.


Neural Rendering for Human Heads
I am currently working on a neural rendering project using 3D Gaussian Splatting. Previously, I led the organisation of the ‘To NeRF or not to NeRF: A View Synthesis Challenge for Human Heads’ at ICCV 2023.

Papers:
- ILSH: IEEE ICCVW, Paris, France, October 02, 2023. [pdf][bibtex]
- VSCHH 2023: IEEE ICCVW, Paris, France, October 02, 2023. [pdf][supp][bibtex]
Dataset:
- ILSH-VSCHH CodaLab competitions for new [Validation] and [Test] submissions
- VSCHH 2023 challenge @ ICCV 2023 [Challenge CodaLab]
- ILSH dataset download from the ICCV 2023 workshop ‘To NeRF or not to NeRF’: [WS]

Advanced Mixed Realities (AdMiRe)
I was involved in the Advanced Mixed Realities (AdMiRe) project. The project develops, validates and demonstrates innovative solutions, based on Mixed Reality (MR) technology, that give TV audiences a step change in interactivity and bring content creators a radical improvement in talent immersion and interaction with computer-generated elements.

Trustworthy Human Robot Interaction
I was involved in the ENhanced Transparent inteRaction for trUstworthy and Safe auTonomy (EN-TRUST) project, which investigates methods for trustworthy robot decision making by understanding human behaviours.

Papers:
- IEEE ICRA, Philadelphia (PA), USA, May 23-27, 2022. [pdf] [Project page] [Code]
Human Behaviour Understanding in Egocentric Video

I was involved in the GLAnceable Nuances for Contextual Events (GLANCE) project, supported by EPSRC. My aim is to understand the visual context of egocentric videos to provide adaptive visual guidance tailored to each user.

Papers:
- IEEE ICCV Workshop on EPIC, Seoul, South Korea, Nov. 02, 2019. [pdf] [Project page]
Dataset:
- EPIC-Tent 2019 Dataset: [Teaser Video] [Dataset] [Annotation]
Human Behaviour/Affect Understanding

I was involved in the SensingFeeling project, supported by Innovate UK. I proposed Face-SSD, the first network to perform face analysis without relying on pre-processing such as face detection and registration in advance; a minimal sketch of the single-shot idea appears after the paper list below.

Research interests: Deep learning, affective computing, behaviour understanding

Papers:
- Elsevier Computer Vision and Image Understanding (CVIU), vol. 182, pp. 17-29, May 2019. [arXiv] [Project page]
- IEEE ICCV Workshop on AMFG, Venice, Italy, Oct. 28, 2017. [pdf] [Project page]
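
To illustrate the single-shot idea, here is a minimal PyTorch sketch (an assumption-laden illustration, not the published Face-SSD architecture): one fully-convolutional network predicts, at every feature-map location, a face confidence, box offsets and an analysis score such as a smile probability, so no separate detection or registration stage is needed. Layer sizes and head names are hypothetical.

```python
# Minimal single-shot sketch: shared convolutional features feed three dense heads.
# Layer sizes and head names are hypothetical; this is not the published Face-SSD.
import torch
import torch.nn as nn

class SingleShotFaceAnalysis(nn.Module):
    def __init__(self):
        super().__init__()
        # Tiny shared backbone standing in for the real feature extractor.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.face_conf = nn.Conv2d(64, 1, 3, padding=1)  # face presence per location
        self.box_reg = nn.Conv2d(64, 4, 3, padding=1)    # box offsets per location
        self.attribute = nn.Conv2d(64, 1, 3, padding=1)  # analysis score (e.g. smile) per location

    def forward(self, images):
        feats = self.backbone(images)
        return (torch.sigmoid(self.face_conf(feats)),
                self.box_reg(feats),
                torch.sigmoid(self.attribute(feats)))

# One forward pass yields detection and analysis maps jointly for the whole image.
conf_map, boxes, smiles = SingleShotFaceAnalysis()(torch.randn(1, 3, 320, 320))
```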
Face Landmark Detection/Tracking
Part of the "Highly Realistic and Human-centric VR Technology Development" project.

Metaphoric Hand Gestures in VR

"Metaphoric Hand Gestures for Orientation-aware VR Object Manipulation with an Egocentric Viewpoint" in collaboration with ICVL Lab. (supervised by Dr.T-K. Kim) of Imperial College London, UK.)

Papers:
- IEEE Trans. on Human-Machine Systems (THMS), vol. 47, no. 1, pp. 113-127, Feb. 2017.
- [Poster] (also presented as a poster at the IEEE CVPR 2016 Workshop on HANDS, Las Vegas, NV, USA, Jul. 01, 2016; won the best poster award, sponsored by Facebook/Oculus and Purdue University)
- (also presented at the Asia-Pacific Workshop on Mixed Reality (APMR), Andong, Korea, Apr. 2016; won the best presentation award)
- Download: [pdf] [demo] [Project page]
3D Finger Gesture Recognition

"3D Finger CAPE: Clicking Action and Position Estimation under Self-Occlusions in Egocentric Viewpoint" - With a collaborator Hyung Jin Chang who was a member of ICVL Lab. (supervised by Dr.T-K. Kim) of Imperial College London, UK.)

Papers:
- IEEE Trans. on Vis. and Computer Graphics (TVCG), vol. 21, no. 4, pp. 501-510, April 2015.
- (also presented as a long paper at IEEE VR 2015, Mar. 23-27, 2015; acceptance rate: 13.8% (13/94))
- Download: [PDF] [Demo on Youtube video] [Project page]
Unified Visual Perception Model for Context-aware Augmented Reality (Multiple Object)
We propose a unified visual perception model, which imitates the human visual perception process, for the stable object recognition required for augmented reality (AR) in the field. The model is designed on theoretical bases from cognitive informatics, brain research, and psychological science.
Papers:
- ISMAR 2013: DC program (officially, a poster), Adelaide, SA, Australia, Oct. 1-4, 2013.
- Download: [PDF] [Demo on Youtube video]
Video-based Object Recognition
"Video-based Object Recognition using Novel Set-of-Sets Representation" - With a collaborator Yang Liu who is a member of ICVL Lab. (supervised by Dr.T-K. Kim) of Imperial College London, UK.)
Papers:
- 3rd Workshop on Egocentric (First-person) Vision (In conjunction with CVPR 2014)
- Download: [PDF] [Demo1, Demo2 on Youtube video]
Local Feature Descriptors for 3D Object Recognition
This paper presents 3D object recognition, an extension of common feature point-based object recognition, based on novel descriptors utilizing local angles (for shape), gradient orientations (for the texture of corners), and color information; an illustrative descriptor sketch follows the paper reference below.
Papers:
- ISUVR 2012: PDF
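
As a rough illustration only (not the published descriptor), the sketch below combines a gradient-orientation histogram (corner texture) with a hue histogram (color) around a keypoint; the patch size, bin counts and normalisation are assumptions.

```python
# Illustrative sketch only: a local descriptor mixing gradient orientations and colour.
# Patch size, bin counts and normalisation are assumptions, not the published descriptor.
import cv2
import numpy as np

def local_descriptor(image_bgr, keypoint_xy, patch=16, ori_bins=8, hue_bins=8):
    x, y = keypoint_xy
    half = patch // 2
    y0, y1 = max(0, y - half), min(image_bgr.shape[0], y + half)
    x0, x1 = max(0, x - half), min(image_bgr.shape[1], x + half)
    roi = image_bgr[y0:y1, x0:x1]

    # Gradient orientations, weighted by magnitude, describe the corner's local texture.
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    ori_hist, _ = np.histogram(ang, bins=ori_bins, range=(0, 360), weights=mag)

    # A hue histogram adds colour information to disambiguate similar shapes.
    hue = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)[:, :, 0]
    hue_hist, _ = np.histogram(hue, bins=hue_bins, range=(0, 180))

    # Concatenate and L2-normalise into a single descriptor vector.
    desc = np.concatenate([ori_hist, hue_hist]).astype(np.float32)
    return desc / (np.linalg.norm(desc) + 1e-6)
```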
Semi-automatic ROI Detection for In-Situ Painting Recognition
Under changes in illumination and viewing direction, the ability to accurately detect Regions of Interest (ROI) is important for robust recognition. In this paper, we propose a stroke-based semi-automatic ROI detection algorithm using adaptive thresholding and a Hough-transform method for in-situ painting recognition; a minimal sketch of such a pipeline follows the paper reference below.
Papers:
- HCI International 2011: PDF, VIDEO
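
For illustration, here is a minimal OpenCV (Python) sketch of such a stroke-guided pipeline; the input image, the user stroke, and the bounding-box heuristic are assumptions rather than the paper's implementation.

```python
# Minimal illustrative sketch of a stroke-guided ROI detector:
# adaptive thresholding followed by a probabilistic Hough transform.
# The stroke input and the bounding-box heuristic are assumptions.
import cv2
import numpy as np

def detect_roi(image_bgr, stroke_points, search_radius=60):
    """Estimate a rectangular ROI near a user-drawn stroke from detected line segments."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Adaptive thresholding copes with uneven illumination better than a global threshold.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 11, 2)

    # Probabilistic Hough transform extracts candidate line segments (e.g. a painting frame).
    segments = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=80,
                               minLineLength=60, maxLineGap=10)
    if segments is None:
        return None

    # Keep segments whose endpoints lie close to the stroke, then take their bounding box.
    stroke = np.asarray(stroke_points, dtype=np.float32)
    kept = []
    for x1, y1, x2, y2 in segments[:, 0]:
        d1 = np.min(np.linalg.norm(stroke - (x1, y1), axis=1))
        d2 = np.min(np.linalg.norm(stroke - (x2, y2), axis=1))
        if min(d1, d2) < search_radius:
            kept.extend([(x1, y1), (x2, y2)])
    if not kept:
        return None
    xs, ys = zip(*kept)
    return min(xs), min(ys), max(xs), max(ys)  # (x_min, y_min, x_max, y_max)
```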
Computer Vision-based Smile Training System
This paper presents an adaptive lip feature point detection algorithm for the proposed real-time smile training system using visual instructions. The algorithm can detect a lip feature point irrespective of lip color, with minimal user participation such as drawing a line over the lip on the screen, and it supports adaptive feature detection through real-time analysis of the color histogram (a minimal sketch of this idea follows the paper reference below). Moreover, we develop a supportive guide model as visual instructions for the target expression. Using the guide model, users can train their smile expression intuitively because they can easily identify the differences between their smile and the target expression.
Papers:
- The 4th International Conference on E-Learning and Games (Edutainment 2009): PDF, VIDEO
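
As an illustration only, here is a minimal OpenCV (Python) sketch of stroke-initialised colour-histogram analysis; the function name, mask input, histogram sizes and thresholds are assumptions, not the paper's implementation.

```python
# Minimal illustrative sketch (assumptions, not the paper's code): learn the lip colour
# from pixels under a user-drawn stroke and back-project it to locate a lip feature point.
import cv2
import numpy as np

def lip_feature_point(frame_bgr, stroke_mask):
    """stroke_mask: 8-bit single-channel mask of the line drawn over the lip (hypothetical input)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Hue/saturation histogram of the stroked pixels: detection adapts to the user's lip colour.
    hist = cv2.calcHist([hsv], [0, 1], stroke_mask, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    # Back-project the histogram onto the frame and keep the strongest region.
    backproj = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    _, lip = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(lip, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)

    # Return the left-most contour point as an example feature point (e.g. a lip corner).
    return tuple(largest[largest[:, :, 0].argmin()][0])
```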
Finger Vein Recognition
With recent increases in security requirements, biometric images such as fingerprints, faces and irises have been widely used in many recognition applications, including door access control, personal authentication for computers, internet banking, automated teller machines and border-crossing controls. Finger vein recognition uses the unique patterns of finger veins to identify individuals with a high level of accuracy. This paper proposes new devices and algorithms for touchless finger vein recognition.

Papers:
- Journal of Information Processing Society (B) (2008, in Korean): PDF
Iris Recognition with Eyelid Localization on a Mobile Device
We propose a new portable iris recognition system. Existing portable iris systems use customized embedded processing units, so they are limited in their ability to extend to other applications and have low processing power. To overcome these problems, our system consists of a conventional ultra-mobile personal computer (UMPC), a small universal serial bus (USB) iris camera, and near-infrared (NIR) light illuminators.
Papers:
- International Journal of Control, Automation and Systems (2010): PDF
- Pattern Recognition Letters (2008): PDF