//****************************************************************************//
//*************** Polyhedra Operations - March 13th, 2019 ******************//
//**************************************************************************//

- This week has a very special day coming up tomorrow - PI DAY!
    - (also Albert Einstein's birthday, but PIIIIIIII!!!)
- Professor Turk's favorite person related to Pi is William Shanks, who in 1873 published 707 digits of Pi he'd calculated over 15 years
    - but (oh no!) only the first 527 were correct! He'd made an error!
    - He later corrected the error, but the PUBLISHER of his paper made a typo and printed the wrong number; sometimes, you just can't get a break
- The memorization record for digits of Pi is (as of the last check) 67,890 digits - because, y'know, why not?
- The highest number of digits computed for Pi is on the order of 12.1 trillion digits - it took about 86 terabytes of disk space and 96 days of computation
- But why should WE care? Well, remember when we talked about partial shadows, and how we dealt with them by casting multiple rays from the light source? With circular light sources, we can do something called "rejection sampling," where we sample points on a square grid and just reject them if they're not inside the circle
    - This is surprisingly fast, actually; the rejection rate is fairly low (1 - pi/4, or roughly 21% of the samples), since the area of the square is 4*r^2 and the area of the circle is pi*r^2
    - If we sample enough points, we can actually use this to approximate Pi!
- If we sample a BUNCH of points and count the number we accept, then we can rearrange that area ratio to get: pi = 4 * (# samples in) / (total # samples)
    - This is a really simple example of Monte Carlo estimation
- Now, project 3B, implementing the rest of your raytracer, is going to be due when you all get back from spring break - let's talk about some pointers for it
- There're 3 big pieces to the raytracer: a "ray intersection" part, a "color the ray" part, and a "shade a hit" part
    - We'll first get the intersection, then return the hit and try to color it; that color function will then try to shade its particular hit, which'll occasionally check intersections for shadows/reflections and recursively need to get the "ray color" for reflected rays
- Now, you also need to code up the intersection equation for a cone; the simple version looks like this:

        x^2 + z^2 = (k*y)^2

    - Where the radius of the cone at height "y" is "k*y"; by default, this equation'll give us a double cone
- We want to position this cone at some base point (xb, yb, zb) in space, though, so we need to do this:

        (x - xb)^2 + (z - zb)^2 = (k*(y - yb))^2

    - Where "x/y/z" come from the ray, e.g. x = (x0 + t*xDir)
- Substituting the ray in gives you a quadratic in t; you solve for its coefficients a, b, and c algebraically - even if it gets messy, I trust you've all got the algebra chops to handle that!
- To get the surface normal for the rounded part of this cone, it'll be tilted based on our value of k somehow, but, well...how?
    - To get this normal for a given hit point "P", we'll get the tangent vector T1 along the cone's slant, "B - P" (where B is the base point of the cone), and the "radius" vector R pointing straight out from the cone's axis to the point: R = P - (B.x, P.y, B.z)
    - If we rotate this "radius" vector 90 degrees, then we've got 2 orthogonal vectors!
- So, the actual "T2" vector will be: T2 = (-dx, 0, ) (missed the exact components - it's just "R" rotated 90 degrees in the horizontal plane, so it's tangent to the cone's circular cross-section)
- From there, we can finally use those 2 basis vectors to get the normal (try flipping the cross-product order if it points the wrong way): N = (T1 X T2) / |T1 X T2|
- "Again, PLEASE start this homework early on in case you get stuck!"

---------------------------------------------------------------

- Alright, we were talking a little about polygons yesterday; now, let's talk about some operations we can do on these polygons - 3 operations in particular:
    - Laplacian Smoothing
    - Face Subdivision
    - Triangulation
- LAPLACIAN SMOOTHING is where we "smooth" a given polygonal surface by moving each vertex to the center of its neighboring vertices; for instance, if we have a vertex 'V' that's surrounded by 5 other vertices, then we'll move it to the average position of those vertices:

        V' = 1/5 * (v1 + v2 + v3 + v4 + v5)

- We apply that equation to ALL the vertices, and that really will smooth all the points on the surface! It means we can take a rough surface (especially from scan data) and make it more regular
    - The one downside is that this'll shrink the object, but there are ways around that by alternately "inflating"/"deflating" it
- Usually, this is coded so that when we apply this equation to all the vertices, we store the "current" vertex positions and use those to calculate all the new positions.
- This way, the result is independent of the order we smooth the vertices in
- FACE SUBDIVISION is where we have a given polygon and want to chop it up into a bunch of smaller faces (for various reasons)
    - For instance, if we have a regular triangle, we might change it into a triforce-like version made up of 4 smaller triangles
    - This is useful for turning "rough" surfaces into smooth ones (think of the geodesic dome at Epcot) - or, alternatively, as a way of getting more polygons for us to play with so we can add texture to a surface
    - We'll revisit this in a few weeks, when we'll start talking about subdivision surfaces and specific algorithms like Catmull-Clark, etc.
- Finally, TRIANGULATION lets us turn a given polygon into a version made up entirely of triangles
    - Why would we do that? Well, triangles are NICE to play with! Their vertices always lie on the same plane by nature, we only need one constant-size data structure for triangles instead of a general "n-vertex" structure, etc.
    - Generally, we do this by adding diagonals to the polygon
- In order to use ANY of these algorithms, though (triangulation possibly excepted), we need to know how our polygons are connected to one another - and to do that, we need to store connectivity information!
- How do we store this connectivity data, then? There're 4 general approaches we can take:
- POLYGON SOUP is when we DON'T store any connectivity information at all; we just "throw the faces in the pot," and each face stores nothing about what it's adjacent to: class Face { vertices[3]; }
    - In this scheme, a polyhedron is just a list of faces - that's all!
- Because this stores a minimal amount of information, it's sometimes used as a file format, but it's pretty limited for production use
- SHARED VERTICES (or "Indexed Face Sets") are where we store one global list of vertices, and each face stores the IDs of the vertices it uses - so faces that share a vertex just reference the same ID
    - This is also pretty common as a file format, since it's a good mix of flexibility AND (due to how we reuse vertex IDs) actually can save us memory for large objects
- To actually use this, we'll go through and assign a number/ID to each vertex; then, for each face, we'll hold a list of the vertices it uses:

        0 = (-1.0, -1.0, -1.0)
        1 = ( 1.0,  1.0, -1.0)
        2 = ( 1.0, -1.0,  1.0)
        3 = (-1.0,  1.0,  1.0)

        f1 = (0, 1, 2)
        f2 = (0, 2, 3)
        f3 = (0, 1, 3)
        f4 = (1, 2, 3)

- So, this scheme stores each vertex's coordinates exactly once and reuses its ID, while in Polygon Soup each face has to keep its own copy of each vertex's coordinates - meaning this'll actually use less memory when there are lots of shared vertices at play!
- Alright, 2 down, 2 to go - we'll cover the other 2 storage schemes on Friday!