Introduction to 3D Imaging: Perceiving 3D from 2D Images

How can we derive 3D information from one or more 2D images? There have been two approaches:

1. intrinsic images: a 2D representation that stores some 3D properties of the scene
2. 3D shape from X: methods of inferring 3D depth information from various sources

What can you determine about (1) the sizes of objects and (2) the distances of objects from the camera? What knowledge do you use to analyze this image?

What objects are shown in this image? How can you estimate distance from the camera? What feature changes with distance?

Intrinsic Images: 2.5D

The idea of intrinsic images is to label features of a 2D image with information that tells us something about the 3D structure of the scene.

[Figure: a line drawing with an occluding edge and a convex edge labeled]

Contour Labels for Intrinsic Images

• convex crease (+)
• concave crease (-)
• blade (>)
• limb (>>)
• shadow (S)
• illumination boundary (I)
• reflectance boundary (M)

[Figure: a scene drawing with its contours labeled +, M, S, and I]

Labeling Simple Line Drawings

• Huffman and Clowes showed that blocks-world drawings could be labeled (with +, -, >) based on real-world constraints.
• Labeling a simple blocks-world image is a consistent labeling problem!
• Waltz extended the work to cracks and shadows and developed one of the first discrete relaxation algorithms, known as Waltz filtering.

Problems with this Approach

• Research on how to do these labelings was confined to perfect blocks-world images.
• There was no way to extend it to real images with missing segments, broken segments, nonconnected junctions, etc.
• It led some groups down the wrong path for a while.

3D Shape from X

• shading (mainly research)
• silhouette (mainly research)
• texture (mainly research)
• stereo (used in practice)
• light striping (used in practice)
• motion (used in practice)

Perspective Imaging Model: 1D

[Figure: a 3D object point B projects through the lens center O (the center of projection) onto the real image plane behind the lens, at image point D, and onto the front image plane in front of the lens, which we use; f is the focal length and zc the distance of B from O along the optical axis.]

xi/f = xc/zc, so xi = (f/zc)xc

Perspective in 2D (Simplified)

Here camera coordinates equal world coordinates: (xc,yc,zc) = (xw,yw,zw), and the optical axis is zc. A 3D object point P = (xc,yc,zc) projects along a ray through the center of projection to the image point P' = (xi,yi,f):

xi/f = xc/zc        yi/f = yc/zc
xi = (f/zc)xc       yi = (f/zc)yc

3D from Stereo

disparity: the difference in image location of the same 3D point when projected under perspective to two different cameras:

d = xleft - xright

[Figure: a 3D point projected into a left image and a right image]

Depth Perception from Stereo
Simple Model: Parallel Optic Axes

[Figure: cameras L and R with parallel optic axes separated by baseline b; each has focal length f; the 3D point P = (x,z) images at xl in the left image and xr in the right image; the y-axis is perpendicular to the page.]

x/z = xl/f
(x - b)/z = xr/f
y/z = yl/f = yr/f

Resultant Depth Calculation

For stereo cameras with parallel optical axes, focal length f, baseline b, and corresponding image points (xl,yl) and (xr,yr) with disparity d:

z = f*b / (xl - xr) = f*b/d
x = xl*z/f  or  b + xr*z/f
y = yl*z/f  or  yr*z/f

This method of determining depth from disparity is called triangulation.

Finding Correspondences

• If the correspondence is correct, triangulation works VERY well.
• But correspondence finding is not perfectly solved for the general stereo problem.
• For some very specific applications, it can be solved for those specific kinds of images, e.g. the view through the windshield of a car.

2 Main Matching Methods

1. Cross correlation using small windows (dense); see the sketch after this list.
2. Symbolic feature matching, usually using segments/corners (sparse).
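As a concrete illustration, here is a minimal sketch (not from the slides) that combines matching method 1 with the triangulation formulas above: it matches a small window by normalized cross correlation, searching only along the same image row, which is where the match must lie for the parallel-optic-axes model, and then recovers depth as z = f*b/d. The synthetic images, calibration numbers, and function names are illustrative assumptions.

import numpy as np

def depth_from_disparity(d, f, b):
    # Triangulation for parallel optic axes: z = f*b/d (slide formula).
    return f * b / d

def match_along_row(left, right, row, xl, half=5, max_disp=64):
    # Method 1 (cross correlation with a small window): find the best
    # match for left pixel (row, xl) along the same row of the right
    # image, the only place the match can lie for parallel optic axes.
    patch = left[row-half:row+half+1, xl-half:xl+half+1].astype(float)
    patch = (patch - patch.mean()) / (patch.std() + 1e-8)
    best_score, best_xr = -np.inf, xl
    for xr in range(max(half, xl - max_disp), xl + 1):  # match lies to the left
        cand = right[row-half:row+half+1, xr-half:xr+half+1].astype(float)
        cand = (cand - cand.mean()) / (cand.std() + 1e-8)
        score = (patch * cand).sum()  # normalized cross correlation
        if score > best_score:
            best_score, best_xr = score, xr
    return best_xr

# Hypothetical calibration values and synthetic images, for illustration only.
f, b = 700.0, 0.12                   # focal length (pixels), baseline (meters)
left = np.random.rand(480, 640)      # stand-in for a real left image
right = np.roll(left, -20, axis=1)   # right image with a fake 20-pixel disparity
xl = 320
xr = match_along_row(left, right, row=240, xl=xl)
d = xl - xr                          # disparity d = xleft - xright
print("disparity:", d, " depth z = f*b/d =", depth_from_disparity(d, f, b), "m")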
Epipolar Geometry Constraint: 1. Normal Pair of Images

The epipolar plane cuts through the image planes, forming two epipolar lines. The match for P1 (or P2) in the other image must lie on the same epipolar line.

[Figure: a 3D point P, camera centers C1 and C2 separated by baseline b, the epipolar plane through P, C1, and C2, and the image points P1 and P2 on the two epipolar lines]

Epipolar Geometry: General Case

[Figure: a 3D point P images at P1 and P2 in two converging cameras at C1 and C2; the epipolar lines pass through the epipoles e1 and e2]

Constraints

1. Epipolar Constraint: Matching points lie on corresponding epipolar lines.
2. Ordering Constraint: Matching points usually appear in the same order along the corresponding epipolar lines.

[Figure: two points P and Q viewed by cameras at C1 and C2, with epipoles e1 and e2]

Structured Light

3D data can also be derived using

• a single camera
• a light source that can produce stripe(s) on the 3D object

[Figure: a light source casts a light stripe across the 3D object while a camera views it]

Structured Light 3D Computation

With the same single-camera-plus-light-source setup: put the camera at the origin (0,0,0) with focal length f, the light source at baseline b along the x axis, and let the plane of light make angle θ with the x axis. Then the image point (x', y', f) corresponds to the 3D point

[x y z] = ( b / (f cot θ - x') ) [x' y' f]

(A code sketch of this computation appears at the end of this section.)

Depth from Multiple Light Stripes

What are these objects?

[Figure: multiple light stripes projected across 3D objects]

Our (former) System

[Figure: a 4-camera light-striping stereo rig: cameras, projector, rotation table, and the 3D object]
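Here is a minimal sketch of the structured-light computation above, assuming θ is the angle between the plane of light and the x axis; the calibration numbers and function name are illustrative assumptions.

import math

def stripe_point_to_3d(xp, yp, f, b, theta):
    # Structured-light triangulation from the formula above:
    #   [x y z] = (b / (f*cot(theta) - x')) [x' y' f]
    # Camera at the origin, light source at baseline b along the x axis,
    # theta = angle between the plane of light and the x axis (assumed).
    scale = b / (f / math.tan(theta) - xp)
    return (scale * xp, scale * yp, scale * f)

# Hypothetical calibration numbers, for illustration only.
f = 700.0                    # focal length in pixels
b = 200.0                    # camera-to-light-source baseline in mm
theta = math.radians(30.0)   # light-plane angle
x3, y3, z3 = stripe_point_to_3d(120.0, 80.0, f, b, theta)
print("3D point (mm):", (x3, y3, z3))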