OscarsLab

 

Research Areas: Computer Graphics & Vision

Visual computing encompasses technologies and applications that integrate computer graphics with computer vision. Computer graphics covers the foundations and applications of acquiring, representing, and interacting with the three-dimensional (3D) real and virtual worlds, while computer vision enables a deeper understanding of the real world from two-dimensional (2D) images or video. The Optical Sensing and Camera System (Oscars) Lab at UTokyo is particularly interested in visual phenomena related to light transport: the journey of light from a source, across 3D surfaces, to visual perception in our brain. The scope of our research covers the fundamental elements of the real world: light, color, geometry, simulation, and the interactions among these elements. In particular, we focus on acquiring material appearance for better color representation in 3D graphics, hyperspectral 3D imaging for a deeper physical understanding of light transport, and color perception in 3D for a deeper understanding of color. These contributions enable a variety of hardware designs and software applications in visual computing.

-- Dr. Yinqiang Zheng

Titles of all our publications, visualized as a word cloud (December 2020).


High-Performance Advanced Imaging:

3D Imaging Spectroscopy

We introduce an end-to-end measurement system for capturing spectral data on 3D objects. We developed a compressive sensing imager suited to acquiring such data across a hyperspectral range at high spectral and spatial resolution. We fully characterize the imaging system and document its accuracy. The imager is integrated into a 3D scanning system to enable measurement of diffuse spectral reflectance and fluorescence.
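As a rough illustration of how a diffuse spectral reflectance measurement is typically computed, the sketch below normalizes raw per-band sensor readings against a white reference after dark-frame subtraction. The function and its arguments are hypothetical, not the lab's pipeline.

```python
import numpy as np

def diffuse_reflectance(sample, white_ref, dark, eps=1e-8):
    """Estimate per-wavelength diffuse reflectance (illustrative sketch).

    sample, white_ref, dark: raw sensor readings per wavelength band.
    The dark frame removes sensor offset; the white reference (a target of
    known near-100% reflectance) normalizes out illumination and response.
    """
    sample = np.asarray(sample, dtype=float)
    white_ref = np.asarray(white_ref, dtype=float)
    dark = np.asarray(dark, dtype=float)
    return (sample - dark) / np.maximum(white_ref - dark, eps)

# A flat 50%-reflectance sample under ideal conditions:
r = diffuse_reflectance([60, 60, 60], [110, 110, 110], [10, 10, 10])
```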

High-Dynamic-Range Color Reproduction

Classical color reproduction systems fail on HDR images because of the wide dynamic range of luminance these images contain. Motivated by the goal of bridging cross-media color reproduction and HDR imaging, this project revisits the fundamentals and infrastructure of cross-media color reproduction, restructures them for HDR imaging, and develops a novel reproduction system for HDR content.
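To make the dynamic-range gap concrete, a minimal stand-in for the compression step of such a pipeline is the classic Reinhard global tone-mapping operator, sketched below. It is shown only as a generic example, not as this project's reproduction system.

```python
import numpy as np

def reinhard_global(luminance, key=0.18, eps=1e-6):
    """Map HDR scene luminance to a displayable [0, 1) range
    (Reinhard global operator, shown as a generic example)."""
    L = np.asarray(luminance, dtype=float)
    # Log-average ("key") luminance of the scene.
    L_avg = np.exp(np.mean(np.log(L + eps)))
    # Scale the scene to the target key, then compress to [0, 1).
    L_scaled = key * L / L_avg
    return L_scaled / (1.0 + L_scaled)
```

The operator preserves the ordering of luminances while compressing highlights, which is why classical fixed-range reproduction fails where this succeeds.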

High-Dynamic-Range Imaging

Digital imaging has become standard practice, but digital images are typically optimized for plausible visual reproduction of a physical scene. Visual reproduction, however, is just one application of digital images. We propose a novel characterization technique for HDR imaging that lets us build a physically meaningful HDR radiance map and measure real-world radiance. The accuracy this technique achieves rivals that of a spectroradiometer.
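The core of building an HDR radiance map is usually a weighted fusion of multiple exposures. The sketch below shows the standard multi-exposure merge under a reciprocity assumption, with the camera response already linearized; it is a textbook baseline, not the characterization technique described above.

```python
import numpy as np

def merge_radiance(images, exposure_times):
    """Fuse linearized exposures into a relative radiance map.

    images: arrays of linear sensor values in [0, 1] (camera response
    already inverted); exposure_times: matching shutter times in seconds.
    A triangle weight de-emphasizes under-/over-exposed pixels; each
    image's radiance estimate is value / exposure_time (reciprocity).
    """
    num = 0.0
    den = 0.0
    for img, t in zip(images, exposure_times):
        img = np.asarray(img, dtype=float)
        w = 1.0 - np.abs(2.0 * img - 1.0)   # weight peaks at mid-gray
        num = num + w * (img / t)
        den = den + w
    return num / np.maximum(den, 1e-8)
```

Two exposures of the same scene point should agree on its radiance; e.g. values 0.2 at 0.1 s and 0.5 at 0.25 s both imply radiance 2.0.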



Machine Learning-based Graphics and Vision:

Deep Learning-based Advanced Spectral Imaging

We developed a novel hyperspectral imaging system that reconstructs spectral information from compressive input with very high accuracy. We built a spatio-spectral compressive imager which, combined with our spectral reconstruction algorithm, provides high spatial and spectral resolution, overcoming the long-standing trade-off of compressive hyperspectral imaging.
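Compressive spectral imaging measures fewer values than spectral bands, y = Φx, and recovers the spectrum x by exploiting sparsity. A minimal generic reconstruction is iterative shrinkage-thresholding (ISTA), sketched below on a toy problem; this is a standard baseline, not the lab's learned reconstruction algorithm.

```python
import numpy as np

def ista(Phi, y, lam=0.01, n_iter=500):
    """Recover a sparse spectrum x from compressive measurements y = Phi @ x.

    ISTA: a gradient step on the data term followed by soft-thresholding,
    which promotes sparsity. Generic sketch, not the system described above.
    """
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = x + step * Phi.T @ (y - Phi @ x)                      # gradient
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # shrink
    return x

# Toy problem: 8 spectral bands, 5 compressive measurements, 1-sparse truth.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((5, 8))
x_true = np.zeros(8)
x_true[3] = 1.0
x_hat = ista(Phi, Phi @ x_true)
```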

Joint Learning-Based High-Dynamic-Range Imaging

We propose interlaced HDR imaging via joint learning. It jointly solves two traditional problems, deinterlacing and denoising, that arise in interlaced video imaging with different exposures. We first solve the deinterlacing problem using joint dictionary learning via sparse coding. Since interlacing often preserves partial detail in differently exposed rows, we use that information to reconstruct details across the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low- and high-exposure rows.
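The coupled-dictionary idea behind such joint sparse coding can be sketched as: code the degraded input over one dictionary, then synthesize the clean estimate with the shared code over a paired dictionary. The sketch below uses greedy orthogonal matching pursuit and assumes the dictionaries are already trained; all names here are illustrative.

```python
import numpy as np

def omp(D, y, n_nonzero=3):
    """Greedy sparse coding (orthogonal matching pursuit) of y over D."""
    residual = y.astype(float).copy()
    support = []
    a = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))  # best atom
        sub = D[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)  # refit on support
        residual = y - sub @ coef
    a[support] = coef
    return a

def joint_reconstruct(D_in, D_out, y):
    """Coupled-dictionary mapping: code the degraded input over D_in, then
    synthesize the clean/extended-range estimate with the same sparse code
    over D_out (dictionaries assumed jointly trained offline)."""
    return D_out @ omp(D_in, y)
```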



Color Visual Perception:

High-Dynamic-Range Color Appearance Model

We developed a novel color appearance model that not only predicts human visual perception but is also directly applicable to HDR imaging. We built a customized display device that produces high luminance levels for color experiments. The measurements of human color perception from these experiments enabled us to derive a color appearance model that covers the full range of the human visual system.
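A typical building block of such models is a compressive cone-response nonlinearity that maps physical luminance to a bounded perceptual response. The Naka-Rushton form below is a generic example; the parameter values are illustrative, not this model's fitted constants.

```python
import numpy as np

def cone_response(L, sigma=100.0, n=0.73):
    """Naka-Rushton style compressive nonlinearity mapping luminance
    (cd/m^2) to a bounded response in [0, 1). sigma is the
    semi-saturation luminance and n the exponent; values illustrative."""
    L = np.asarray(L, dtype=float)
    return L ** n / (L ** n + sigma ** n)
```

At the semi-saturation luminance the response is exactly 0.5, and it approaches 1 asymptotically, which is what lets a single model span the full luminance range of the visual system.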

Spatially-Varying Appearance Model

Color perception is known to vary with the surrounding spatial structure, but the impact of edge smoothness on color has not been studied in color appearance modeling. We study the appearance of color under different degrees of edge smoothness. Based on our experimental data, we have developed a computational model that predicts this appearance change. The model can be integrated into existing color appearance models.


© Optical Sensing and Camera System Laboratory (Oscars Lab), The University of Tokyo. All rights reserved.
