CS180 Project 4

Shoot Pictures

I took two images in Sutardja Dai Hall at a corner of the hallway:

Recover Homography

I used the point correspondence tool from Project 3 and clicked 10 corresponding points in each image. I displayed the points with matplotlib:

Here’s the linear system of equations for the 8 variables in the homography:

$$
\begin{pmatrix} x_h' \\ y_h' \\ w' \end{pmatrix}
=
\begin{pmatrix}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
$$

$$
\begin{pmatrix}
x & y & 1 & 0 & 0 & 0 & -xx' & -yx' \\
0 & 0 & 0 & x & y & 1 & -xy' & -yy'
\end{pmatrix}
\begin{pmatrix} h_{11} \\ h_{12} \\ h_{13} \\ h_{21} \\ h_{22} \\ h_{23} \\ h_{31} \\ h_{32} \end{pmatrix}
=
\begin{pmatrix} x' \\ y' \end{pmatrix}
$$

For each point, I stacked the two linear equations into the matrix on the left and the corresponding coordinates into the vector on the right. I used np.linalg.lstsq to find the best-fitting solution to the resulting overdetermined system.
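The stacking described above can be sketched in a few lines of numpy (a minimal illustration; `compute_homography` is my name for it, not the project's actual function):

```python
import numpy as np

def compute_homography(pts_src, pts_dst):
    """Estimate a 3x3 homography H mapping pts_src -> pts_dst.

    pts_src, pts_dst: (N, 2) arrays of (x, y) correspondences, N >= 4.
    The bottom-right entry h33 is fixed to 1, leaving 8 unknowns.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts_src, pts_dst):
        # The two rows of the linear system for one correspondence.
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```

With more than four points, lstsq gives the least-squares solution, which averages out small clicking errors.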

Warp Images

Here are the results of warping the full images:
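The warping code itself isn't shown in the write-up; an inverse-warping sketch (assuming scipy for bilinear sampling, with the function name and a fixed-size output canvas as illustrative choices) might look like:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(img, H, out_shape):
    """Inverse-warp a grayscale img by homography H onto an out_shape canvas.

    For every output pixel (x, y) we apply H^{-1} to find the source
    location, then bilinearly sample. Pixels mapping outside img get 0.
    """
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # 3 x N homogeneous
    src = np.linalg.inv(H) @ coords
    src_x = src[0] / src[2]  # divide out the projective coordinate
    src_y = src[1] / src[2]
    # map_coordinates expects (row, col) order; order=1 is bilinear
    out = map_coordinates(img, [src_y, src_x], order=1, cval=0.0)
    return out.reshape(h_out, w_out)
```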

Image Rectification

I chose to rectify another image. It’s an angled image of 2 paintings in my apartment. On the right is the recification results. Looks pretty nice!

Blend Mosaic

I used the distance transform alpha masking method. Here are the masks (left image warped, right kept the same):

For low frequencies, I compute a weighted average of the two images, weighted by each pixel's distance to the image edge. For high frequencies, I select the pixel from whichever image has the greater distance-transform value. Here are the results of mosaicing the hallway in Sutardja Dai:
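A sketch of this two-band blend, assuming scipy and a simple Gaussian low/high split (the function name and `sigma` are illustrative; the actual frequency split may differ):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def blend_two_band(im1, im2, mask1, mask2, sigma=5):
    """Blend two aligned grayscale images using distance-transform weights.

    mask1/mask2 are boolean coverage masks on the shared canvas. The
    distance transform gives each pixel's distance to its image border:
    low frequencies are averaged with those distances as weights, and
    high frequencies take the pixel whose distance value is larger.
    """
    d1 = distance_transform_edt(mask1)
    d2 = distance_transform_edt(mask2)
    low1 = gaussian_filter(im1, sigma)
    low2 = gaussian_filter(im2, sigma)
    hi1, hi2 = im1 - low1, im2 - low2
    wsum = d1 + d2
    wsum[wsum == 0] = 1  # avoid divide-by-zero outside both images
    low = (low1 * d1 + low2 * d2) / wsum
    hi = np.where(d1 >= d2, hi1, hi2)  # hard seam for high frequencies
    return low + hi
```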

Harris Interest Point Detector

I used the starter code linked in the project spec. Before passing the image to the Harris corner detection function, I bias/gain-normalized the grayscale images. I also set the threshold_rel parameter of peak_local_max to 0.01 to filter out weak points. Results from the left interior image of Sutardja Dai:
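The starter code isn't reproduced here; a from-scratch stand-in for that corner_harris + peak_local_max pipeline, including the bias/gain normalization, could look like (the Harris `k` constant and window sizes are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def harris_corners(im, sigma=1.0, threshold_rel=0.01):
    """Harris response map plus thresholded local maxima."""
    im = (im - im.mean()) / (im.std() + 1e-8)  # bias/gain normalize
    Iy, Ix = np.gradient(im)
    # Smoothed second-moment matrix entries
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    h = det - 0.06 * trace ** 2  # Harris response, k = 0.06
    # keep pixels that are 5x5 local maxima and above 1% of the global max
    peaks = (h == maximum_filter(h, size=5)) & (h > threshold_rel * h.max())
    return h, np.argwhere(peaks)  # (row, col) coordinates
```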

There are a lot of cluttered points. Let’s eliminate some of them in the next step.

Adaptive NMS

Overview of ANMS:

I’m picking the top 50 points after ANMS. They are much more evenly spread out across the image.
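In code, ANMS can be sketched as follows (the robustness constant `c_robust=0.9` follows the MOPS paper; the function name is mine):

```python
import numpy as np

def anms(coords, strengths, n_keep=50, c_robust=0.9):
    """Adaptive non-maximal suppression.

    For each point, compute its suppression radius: the distance to the
    nearest point whose strength is sufficiently higher (robustified by
    c_robust). Keep the n_keep points with the largest radii, which
    spreads the kept points evenly across the image.
    """
    coords = np.asarray(coords, float)
    strengths = np.asarray(strengths, float)
    n = len(coords)
    radii = np.full(n, np.inf)  # the globally strongest point keeps inf
    for i in range(n):
        stronger = strengths > strengths[i] / c_robust
        stronger[i] = False
        if stronger.any():
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()
    keep = np.argsort(-radii)[:n_keep]
    return coords[keep]
```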

Feature Extraction

I take a 40x40 pixel block around each remaining coordinate and downsample it to 8x8. Here are the extracted features for the left image:
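A sketch of this extraction step (here the downsampling is naive strided sampling of every 5th pixel; an anti-aliased resize would be a reasonable alternative, and the helper name is illustrative):

```python
import numpy as np

def extract_features(im, coords, spacing=5, size=8):
    """Extract axis-aligned descriptors from a grayscale image.

    Takes a (spacing*size)^2 = 40x40 window around each (row, col)
    corner, samples every 5th pixel to get an 8x8 patch, then
    bias/gain-normalizes it. Points too close to the border are skipped.
    """
    half = spacing * size // 2  # 20-pixel half-window
    descriptors, kept = [], []
    for r, c in coords:
        if r < half or c < half or r + half > im.shape[0] or c + half > im.shape[1]:
            continue  # window would fall off the image
        patch = im[r - half:r + half:spacing, c - half:c + half:spacing]
        patch = (patch - patch.mean()) / (patch.std() + 1e-8)
        descriptors.append(patch.ravel())
        kept.append((r, c))
    return np.array(descriptors), np.array(kept)
```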

Feature Matching

To do feature matching, I use the ratio between the distances to the first and second nearest neighbors (Lowe's ratio test). Overview of the procedure:
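The ratio test can be sketched as follows (the threshold 0.6 is an assumed value, not necessarily the one used for the results below):

```python
import numpy as np

def match_features(desc1, desc2, ratio_thresh=0.6):
    """Match descriptors with Lowe's ratio test.

    For each descriptor in desc1, find its two nearest neighbors in
    desc2; accept the match only if the 1-NN distance divided by the
    2-NN distance is below ratio_thresh, i.e. the best match is clearly
    better than the runner-up.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] / (dists[j2] + 1e-12) < ratio_thresh:
            matches.append((i, j1))
    return matches
```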

Here are the results:

RANSAC

To select inliers and keep outliers (for example, the point labeled 19 in the images above) from corrupting the homography, I implemented RANSAC as described in lecture. I got these matches:

And I got the mosaic for these two images:

Auto stitching results:

compared to the manual point matching:

Other Mosaics

Anchor House:

Outside Dwinelle:

What I learned