Research

A. Research Overview

My research focuses on statistical signal processing for large-scale data, with long-term efforts in developing theory and algorithms for recovering latent but coherent structures in data. Topics of interest include signal reconstruction, sampling theory, network analysis, and numerical optimization. My ongoing research projects are:

  • Imaging:
    • (1) Monte-Carlo Non-Local Means - Random Sampling for Large-Scale Denoising
    • (2) Single-Photon Imaging Sensors
  • Network Analysis:
    • (1) Nonparametric Graphon Estimation by Stochastic Block Approximation
    • (2) Total Variation based Graphon Estimation Algorithms


Fig 1: Overview of research. Theme: signal processing for large-scale data. Tools: representation, sampling, and optimization. Applications: network analysis, imaging systems, and beyond.

B. Network Analysis

Stochastic Block Approximation for Graphon Estimation

Given a convergent sequence of graphs, there exists a limit object, called the graphon, from which random graphs are generated. This nonparametric perspective on random graphs opens the door to studying graphs beyond the traditional parametric models, but it also poses the challenging question of how to estimate the graphon underlying the observed graphs. In this work, we propose a computationally efficient algorithm to estimate a graphon from a set of observed graphs generated from it. We show that, by approximating the graphon with stochastic block models, the graphon can be consistently estimated; that is, the estimation error vanishes as the size of the graph approaches infinity.
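As a rough illustration of the block-approximation idea (not the exact estimator analyzed in this work), the following Python sketch sorts vertices by empirical degree, partitions them into blocks, and estimates each block value by averaging the corresponding adjacency entries. The function name, the degree-sorting step, and the choice of block count are illustrative assumptions.

    import numpy as np

    def block_approx_graphon(A, num_blocks):
        """Piecewise-constant graphon estimate by stochastic block approximation.
        A: symmetric 0/1 adjacency matrix (n x n). Returns a num_blocks x num_blocks
        matrix of estimated edge probabilities."""
        n = A.shape[0]
        # sort vertices by empirical degree to (approximately) recover the node ordering
        order = np.argsort(A.sum(axis=1))
        A = A[np.ix_(order, order)]
        # partition the sorted vertices into roughly equal-sized blocks
        bins = np.array_split(np.arange(n), num_blocks)
        W = np.zeros((num_blocks, num_blocks))
        for i, bi in enumerate(bins):
            for j, bj in enumerate(bins):
                W[i, j] = A[np.ix_(bi, bj)].mean()  # average connectivity of the block pair
        return W

    # usage: W_hat = block_approx_graphon(A, num_blocks=int(np.sqrt(A.shape[0])))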

C. Imaging Systems

Large-Scale Image Denoising

While low-cost commercial cameras work astonishingly well in well-lit environments, we are quickly reaching the physical limits of our current photosensitive materials and devices. Our ability to reduce pixel pitch and further miniaturize imaging systems, boost frame-rates, and capture videos in harsh, low-light scenarios is fundamentally linked to our ability to restore noisy image and video data.

In this project, we are interested in studying the following aspects of image denoising algorithms:

  • Optimal sampling strategies for reducing the computational complexity of non-local filtering (a sketch of the random-sampling idea follows this list)
  • Adaptive non-local filtering for multiview images by searching through relevant patches
  • Near optimal denoising filter design by utilizing object specific databases
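The sketch below illustrates the random-sampling idea behind the first item: each pixel is denoised from a randomly sampled subset of reference patches rather than from every patch in the image, trading a controlled amount of accuracy for a large reduction in cost. The function name, the uniform sampling scheme, and the parameter values are illustrative assumptions, not the exact algorithm developed in the project.

    import numpy as np

    def monte_carlo_nlm(img, patch=5, num_samples=200, h=0.1, seed=None):
        """Non-local means in which each pixel is denoised using only a random
        subset of reference patches (random sampling to cut the cost of full
        non-local filtering)."""
        rng = np.random.default_rng(seed)
        r = patch // 2
        pad = np.pad(img.astype(float), r, mode='reflect')
        H, W = img.shape
        # extract every patch once, flattened to a vector
        patches = np.lib.stride_tricks.sliding_window_view(pad, (patch, patch))
        patches = patches.reshape(H * W, -1)
        values = img.reshape(-1).astype(float)
        out = np.empty(H * W)
        for i in range(H * W):
            idx = rng.choice(H * W, size=num_samples, replace=False)  # random references
            d2 = np.mean((patches[idx] - patches[i]) ** 2, axis=1)    # patch distances
            w = np.exp(-d2 / (h ** 2))                                # non-local means weights
            out[i] = np.dot(w, values[idx]) / np.sum(w)
        return out.reshape(H, W)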

Total Variation Minimization Solver (deconvtv) 

Image restoration is an inverse problem in which the goal is to recover an image from a blurry and noisy observation. The problem can be formulated as a total variation regularized least-squares minimization, where the data-fidelity term is the squared $\ell_2$-norm of the residual between the observation and the prediction. Because the total variation norm is not differentiable, existing methods are inefficient.

In this project, a fast numerical optimization method is proposed to solve total variation image restoration problems. The method transforms the original unconstrained problem into an equivalent constrained problem and uses an augmented Lagrangian method to handle the constraints. The transformation separates the differentiable and non-differentiable parts of the objective function into different subproblems, each of which can be solved efficiently. An alternating strategy then combines the subproblem solutions.
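A minimal Python sketch of this alternating structure is given below for anisotropic total variation deblurring under periodic boundary conditions. The released deconvtv code is written in MATLAB and is more general (isotropic TV, video volumes, parameter continuation), so the helper names and parameter values here are illustrative assumptions only.

    import numpy as np

    def psf2otf(psf, shape):
        """Zero-pad a point-spread function and circularly shift it so that its
        FFT matches the blur operator under periodic boundary conditions."""
        pad = np.zeros(shape)
        pad[:psf.shape[0], :psf.shape[1]] = psf
        for axis, size in enumerate(psf.shape):
            pad = np.roll(pad, -(size // 2), axis=axis)
        return np.fft.fft2(pad)

    def shrink(v, t):
        """Soft-thresholding, the proximal operator of the l1 norm."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def tv_deblur(f, psf, mu=1e4, rho=2.0, iters=50):
        """Solve min_x (mu/2)||h*x - f||^2 + ||Dx||_1 by splitting u = Dx and
        applying an augmented Lagrangian / alternating minimization scheme."""
        H  = psf2otf(psf, f.shape)
        Dh = psf2otf(np.array([[1.0, -1.0]]), f.shape)    # horizontal difference
        Dv = psf2otf(np.array([[1.0], [-1.0]]), f.shape)  # vertical difference
        denom = mu * np.abs(H) ** 2 + rho * (np.abs(Dh) ** 2 + np.abs(Dv) ** 2)
        Htf = np.conj(H) * np.fft.fft2(f)

        x = f.astype(float).copy()
        ux = np.zeros_like(x); uy = np.zeros_like(x)  # split variables u = Dx
        yx = np.zeros_like(x); yy = np.zeros_like(x)  # scaled dual variables

        for _ in range(iters):
            # x-subproblem: differentiable quadratic, solved exactly in the Fourier domain
            rhs = mu * Htf \
                + rho * np.conj(Dh) * np.fft.fft2(ux - yx) \
                + rho * np.conj(Dv) * np.fft.fft2(uy - yy)
            x = np.real(np.fft.ifft2(rhs / denom))

            # u-subproblem: non-differentiable part, separable soft-thresholding
            dx = np.real(np.fft.ifft2(Dh * np.fft.fft2(x)))
            dy = np.real(np.fft.ifft2(Dv * np.fft.fft2(x)))
            ux = shrink(dx + yx, 1.0 / rho)
            uy = shrink(dy + yy, 1.0 / rho)

            # dual (Lagrange multiplier) update
            yx += dx - ux
            yy += dy - uy
        return x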

The image restoration method is extended to handle video restoration problems. The proposed method treats a video as a space-time volume and introduces a three-dimensional total variation regularization function to enhance spatial and temporal consistency. The resulting video restoration framework supports a wide range of video enhancement applications.

MATLAB code available at: http://www.mathworks.com/matlabcentral/fileexchange/43600

Spatially Varying Blur Analysis

Spatially variant blur is a type of distortion in which different pixels in an image are blurred differently. It is difficult to handle in several respects. First, in terms of computational speed, spatially variant problems are hard to solve efficiently because the matrix-vector multiplication of a spatially variant blur in general cannot be carried out using Fourier transforms. Second, in terms of modeling, the image formation model of a spatially variant (out-of-focus) blur is depth dependent, so the classical model that expresses a spatially variant matrix as a sum of invariant matrices (sketched after the goals below) is not accurate. The goals of the project are:

  • Understand the structure of a spatially variant blur matrix.
  • Develop numerical methods for spatially variant out-of-focus blur removal.
  • Develop numerical methods for spatially variant motion blur removal.
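For reference, the classical model mentioned above can be sketched in a few lines: the spatially variant blur is approximated by a per-pixel weighted sum of spatially invariant blurs, each of which can be applied with FFT-based convolution. This is the baseline whose accuracy the project questions for depth-dependent defocus; the function name and its arguments are illustrative.

    import numpy as np
    from scipy.signal import fftconvolve

    def spatially_variant_blur(x, kernels, weights):
        """Classical approximation of a spatially variant blur as a weighted sum
        of spatially invariant blurs: y = sum_k w_k .* (h_k * x).
        kernels: list of 2-D PSFs; weights: list of per-pixel masks summing to 1."""
        y = np.zeros(x.shape, dtype=float)
        for h, w in zip(kernels, weights):
            y += w * fftconvolve(x, h, mode='same')  # each invariant blur runs via FFT
        return y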

LCD Motion Blur Analysis

The liquid crystal display (LCD) is the most popular display device on the market due to its low cost, low power consumption, high resolution, and high contrast. However, LCDs are also known for their slow response time, which in turn causes motion blur. In this project, we study several fundamental properties of LCDs. Our goal is to provide both theoretical and experimental justification for some methods employed in the industry.

LCD motion blur reduction is an inverse problem: given a sharp target signal, we need to pre-distort it so that the pre-distorted signal compensates for the liquid crystal system, which is a low-pass system.

There are two important building blocks in this study. First, the impulse response of the system must be thoroughly characterized, for otherwise it is impossible to design methodologies that overcome LCD motion blur. Second, given a model of the LCD system, algorithms should be developed to reduce the motion blur.
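As a minimal sketch of this inverse-problem view, the snippet below pre-distorts a one-dimensional target signal with a Tikhonov-regularized inverse filter, assuming the LCD impulse response is known; this illustrates the idea and is not necessarily the algorithm developed in the project.

    import numpy as np

    def predistort(target, h, eps=1e-2):
        """Pre-distortion by regularized inverse filtering: find s such that the
        low-pass LCD response h applied to s approximates the sharp target,
        i.e. solve min_s ||h * s - target||^2 + eps * ||s||^2 in the frequency domain."""
        n = len(target)
        H = np.fft.fft(h, n)
        T = np.fft.fft(target)
        S = np.conj(H) * T / (np.abs(H) ** 2 + eps)  # Tikhonov-regularized deconvolution
        return np.real(np.fft.ifft(S))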

Motion Estimation

In conventional block matching motion estimation algorithms, subpixel motion accuracy is achieved by searching for the best matching block in an enlarged (interpolated) reference search area. This, however, is computationally expensive, as the number of operations required is directly proportional to the interpolation factor. For applications other than video compression, the interpolation is even wasteful because the motion-compensated frames are not needed. This project aims at developing a fast motion estimation algorithm that achieves subpixel accuracy without interpolation. We show that by fusing an existing integer block matching algorithm with a modified optical flow method, subpixel motion vectors can be determined at the cost of integer block matching plus solving a 2-by-2 system of linear equations. Experimental results demonstrate that the proposed method is faster than the conventional method by a factor of 2 or more, while the motion vector quality is comparable to that of the benchmark full search algorithm.

Stanley H. Chan, Dung Vo, and Truong Q. Nguyen, "Subpixel motion estimation without interpolation," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 722-725, Dallas, TX, March 2010.
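A minimal sketch of the subpixel refinement step is shown below: after integer block matching returns the best integer displacement, the residual motion is obtained from a Lucas-Kanade style 2-by-2 least-squares system built from image gradients, with no interpolation of the reference frame. The function name and interface are illustrative, not the exact formulation in the paper.

    import numpy as np

    def subpixel_refine(ref_block, cur_block):
        """Solve the 2x2 normal equations from image gradients to get a subpixel
        correction (dx, dy) on top of an integer motion vector."""
        gy, gx = np.gradient(ref_block.astype(float))            # spatial gradients
        gt = cur_block.astype(float) - ref_block.astype(float)   # temporal difference
        A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                      [np.sum(gx * gy), np.sum(gy * gy)]])
        b = -np.array([np.sum(gx * gt), np.sum(gy * gt)])
        dx, dy = np.linalg.lstsq(A, b, rcond=None)[0]            # robust to near-singular A
        return dx, dy

    # usage: if integer block matching gives (mx, my), the refined motion vector is
    # (mx + dx, my + dy) with dx, dy = subpixel_refine(ref_block, cur_block)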

Optical Lithography

The continual shrinkage of the minimum feature size in integrated circuit fabrication increases the difficulty of the optical lithography process, because the printed circuit patterns are distorted by diffraction. To push the limit further, resolution enhancement techniques (RETs) must be used to modify the masks so that the distortion caused by optical diffraction is reduced. While traditional RETs focus on local pattern matching, we consider inverse lithography, a method that treats mask design as an inverse optimization problem. Through numerical algorithms, optimal masks can be designed.

In inverse lithography, the mask design process is modeled as an inverse problem: given a target pattern, how do we design a mask that compensates for the distortion? Inverse lithography is a global minimization problem whose optimization variables are binary (0 or 1). Our approach to this discrete minimization is to relax the discrete problem to a continuous one and to introduce penalty functions that force the solution toward binary values.
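A minimal sketch of the relax-and-penalize idea is given below, assuming a Gaussian low-pass filter as a stand-in for the optical system and a sigmoid as a resist threshold model; both simplifications, along with the penalty weight and step size, are illustrative assumptions rather than the actual lithography model used in the project.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def inverse_lithography(target, sigma=2.0, a=20.0, thresh=0.5,
                            gamma=0.05, step=0.5, iters=300):
        """Continuous relaxation of binary mask design: minimize
        ||print(mask) - target||^2 + gamma * sum(mask * (1 - mask))
        by projected gradient descent, where the penalty pushes the relaxed
        mask toward 0/1 values."""
        m = target.astype(float).copy()          # initialize the mask with the target
        for _ in range(iters):
            z = gaussian_filter(m, sigma)        # low-pass "aerial image" (illustrative optics)
            p = 1.0 / (1.0 + np.exp(-a * (z - thresh)))        # sigmoid resist model
            # gradient of the data term through the sigmoid and the symmetric blur
            g = gaussian_filter(2.0 * (p - target) * a * p * (1.0 - p), sigma)
            g += gamma * (1.0 - 2.0 * m)         # gradient of the binarity penalty
            m = np.clip(m - step * g, 0.0, 1.0)  # projected gradient step onto [0, 1]
        return (m > 0.5).astype(float)           # round to a binary mask at the end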