//****************************************************************************//
//************* Volume Rendering (cont.) - April 12th, 2019 *****************//
//**************************************************************************//

- (blank)
    - ...
    - ...
    - BOO! I HAVE SCARED THEE!
----------------------------------------

- So, on Wednesday, we started to talk about volume rendering, where our model is stored as a 3D array of "voxels"
    - We began talking about how we might render these models using the "marching cubes" algorithm, and went through a 2D version of it!

- This 3D version, the MARCHING CUBES algorithm, is one of the most-cited algorithms in all of computer graphics, so let's talk about it!
    - Assume we're looking at a "neighborhood" of 8 different voxels, each forming the vertex of a cube, and each one considered to be either "inside" or "outside" of the surface
        - Because this is now in 3D, there are more possible cases for us to consider (256 corner configurations in all, which symmetry boils down to about 15 unique cases), but the idea is the same: based on which vertices are inside/outside, we draw a different set of polygons to connect them, eventually "enclosing" all of the inside vertices in a polygonal shell
            - Probably the most complicated case is where we have 4 inside vertices in a tetrahedral pattern, which'll result in a hexagonal polygon
        - Eventually, because of the rules for these different cases, all of the polygons we create will link up with one another to form a continuous surface - and then we can render that!
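    - As a quick sketch (in Python; the variable names and threshold below are just illustrative, not from lecture), the "which case are we in?" step boils down to packing the 8 inside/outside flags into a single byte, which the full algorithm would then look up in a 256-entry triangle table:

            def cube_case_index(corner_values, threshold):
                # Pack the inside/outside state of the 8 cube corners into one byte (0-255)
                index = 0
                for i, value in enumerate(corner_values):
                    if value >= threshold:      # corner i counts as "inside" the surface
                        index |= (1 << i)
                return index

            # e.g. corners 0-3 inside, corners 4-7 outside  ->  index 0b00001111 = 15
            print(cube_case_index([9, 8, 7, 9, 1, 2, 0, 1], threshold=5))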

- So, that's the indirect (isosurface-extraction) way of rendering these volumes, but what about the DIRECT rendering approach?
    - There are several different techniques to do this; we'll be mostly considering the raytracing approach, but most of them work using a similar several-stage process:
        1) Classify each voxel
            - Is the voxel representing bone? Air? Skin? Muscle?
            - Based on this data, we can determine what we need to render the surface (e.g. its surface color, opacity, reflectance, etc.)
        2) Calculate the voxel's shading
            - This is where we figure out what its normal is, any lighting/reflections, the color it should be rendered as, etc.
        3) Actually render the voxel
            - This is where we do visibility checks (is the voxel hidden?), etc, and actually draw the voxel on a screen
    - Usually, we'll assume that each voxel is filled with a bunch of tiny, identical, opaque particles of its material
        - Each individual voxel will have an OPACITY value between 0 and 1 (0 is transparent, 1 is opaque) representing the chance that a light ray will be blocked by the voxel's particles rather than passing through, and a COLOR (i.e. the color of the voxel after lighting, etc. is taken into account)
            - Usually, the voxel's color will be calculated using something like this:

                    finalColor = a_i*c_i + (1 - a_i)*(everything else)

                - Where "a_i" is the opacity of the voxel and c_i is the color we've calculated for the voxel's surface
                    - What's this "everything else," though? Well, it's the color of everything BEHIND the voxel, letting us handle transparency
                        - For a given ray, this might mean recursively calculating that color by shooting another ray behind the voxel (a la reflection from earlier in the semester)
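    - As a minimal sketch of that idea (Python, with a single grey value standing in for a full RGB color; the sample list and background value are assumptions for illustration), walking a ray's samples back-to-front and applying the formula at each step gives the same result as the recursive "everything behind" view:

            def composite_along_ray(samples, background=0.0):
                # samples: list of (c_i, a_i) pairs along the ray, ordered front to back
                # Layer them back-to-front: finalColor = a_i*c_i + (1 - a_i)*(everything behind)
                color = background
                for c_i, a_i in reversed(samples):
                    color = a_i * c_i + (1.0 - a_i) * color
                return color

            # e.g. a bright, nearly-opaque sample in front of a faint, dark one
            print(composite_along_ray([(0.9, 0.8), (0.2, 0.3)]))   # -> 0.732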

- How can we actually use raytracing to directly render these voxels, though? How do the pieces all come together?
    - Oftentimes, we'll have a ray that's going diagonally through a bunch of voxels, and that means we can't render every same-material voxel the exact same (e.g. the corners of an amber cube are more translucent than the center, because they're thinner)
        - Because of this, we'll need to interpolate the opacity/color within each voxel using TRI-LINEAR INTERPOLATION to figure out how transparent (etc.) each sample along the ray actually is - there's a rough sketch of this after this list
    - Where do these opacities come from, though? We mentioned it came from the "classification" step, but what does that actually mean?
        - Let's say we have some data from a CAT scan, and each voxel has a floating-point value associated with its density
            - Air might have a very low density value, while muscle has a certain density "range" and bone has another density range
                - What if two of these density ranges seem to overlap? (e.g. it's very difficult to distinguish, from the data, if something is tough muscle or weak bone?)
                    - That's a very real-world issue, and, with the Broom of Fuggedaboutit, Professor Turk is choosing to push it underneath the Rug of Ignorance (ostensibly within the Room of Messy Graphics?)
            - Once we've figured out what the voxel should be, we give it the corresponding color/opacity for its material (there's a toy example of this after this list)
    - How do we figure out the surface normals? Well, trust me, PLENTY of techniques exist - but alas, we don't have enough lecture time to get to them
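    - Here's a rough sketch of the TRI-LINEAR INTERPOLATION step mentioned above (plain Python over a nested-list grid; the indexing convention is just an assumption for illustration):

            def trilinear(grid, x, y, z):
                # Sample the grid at a fractional position by blending the 8 surrounding voxels
                x0, y0, z0 = int(x), int(y), int(z)      # "lower" corner of the cell
                fx, fy, fz = x - x0, y - y0, z - z0      # fractional position inside the cell
                def v(i, j, k):
                    return grid[x0 + i][y0 + j][z0 + k]
                # Blend along x, then y, then z
                c00 = v(0, 0, 0) * (1 - fx) + v(1, 0, 0) * fx
                c10 = v(0, 1, 0) * (1 - fx) + v(1, 1, 0) * fx
                c01 = v(0, 0, 1) * (1 - fx) + v(1, 0, 1) * fx
                c11 = v(0, 1, 1) * (1 - fx) + v(1, 1, 1) * fx
                c0 = c00 * (1 - fy) + c10 * fy
                c1 = c01 * (1 - fy) + c11 * fy
                return c0 * (1 - fz) + c1 * fz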
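    - And here's a toy version of the classification step - the density ranges, colors, and opacities below are completely made up; real ones would come from the scanner's calibration, not from this sketch:

            MATERIALS = [
                # (low, high, color, opacity)
                (0.0, 0.1, 0.0, 0.00),   # air: essentially invisible
                (0.1, 0.4, 0.6, 0.30),   # muscle / soft tissue: dim, mostly transparent
                (0.4, 9e9, 1.0, 0.95),   # bone: bright, nearly opaque
            ]

            def classify(density):
                for low, high, color, opacity in MATERIALS:
                    if low <= density < high:
                        return color, opacity
                return 0.0, 0.0          # anything unrecognized is treated as empty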

- So, that's a wrap for volume rendering, and now we can start getting into some of the fun stuff YOU guys chose to bring upon yourselves - starting off with NON-PHOTOREALISTIC RENDERING!
    - The goal here, oftentimes, is to try to capture the hand-drawn feel of traditional animation and art
        - One common feature of early animations was thick outlines around the characters, which directly inspired a style of rendering called CEL SHADING!
            - There are 2 especially prominent features at play here:
                - Dark outlines/silhouettes for the objects
                    - To actually make these silhouettes, Professor Turk is aware of two general approaches:
                        - To render the object normally, then do edge detection on the pixels (using a Laplacian filter, perhaps, to find places where the image changes sharply)
                            - A "Laplacian" basically means taking a 2nd derivative in 2D/3D, and'll be relevant when we get to fluid simulation
                        - To identify possible silhouettes based on the polygon normals
                            - The idea here is that if a polygon's normal is facing towards the eye, it's "front-facing," while if it's facing away from the eye it's a "back-facing" polygon
                                - Wherever a front-facing and a back-facing polygon are adjacent, we say that the edge where they meet is a silhouette edge, and render it as a dark line! (see the first sketch below)
                - Simple shading with relatively few colors
                    - The idea here is to use the traditional shading equation where "V = N*L," but to NOT render V as a continuous range of colors
                        - Instead, we'll categorize it into discrete bands and render ALL the values within a band as a single color (e.g. 0 <= V < 0.2 is dark green, 0.2 <= V < 0.5 is regular green, etc.) - see the second sketch below
        - One of the first video games to use real-time Cel shading was "The Legend of Zelda: The Wind Waker," which looks distinctly more cartoony than a traditional 3D shader
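        - Here's a minimal sketch of the normal-based silhouette test (Python; it assumes we already have outward-facing polygon normals and a vector from each polygon toward the eye):

                def dot(u, v):
                    return sum(a * b for a, b in zip(u, v))

                def is_front_facing(face_normal, to_eye):
                    # A polygon faces the eye if its normal points (at least partly) toward it
                    return dot(face_normal, to_eye) > 0.0

                def is_silhouette_edge(normal_a, normal_b, to_eye):
                    # An edge shared by a front-facing and a back-facing polygon is a silhouette edge
                    return is_front_facing(normal_a, to_eye) != is_front_facing(normal_b, to_eye)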
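        - And here's a toy version of the banded shading (the band boundaries and color names are made up; a real shader would use actual RGB values):

                BANDS = [
                    (0.2, "dark green"),
                    (0.5, "regular green"),
                    (1.0, "light green"),
                ]

                def toon_shade(normal, light_dir):
                    # Standard diffuse term V = N*L, clamped at 0...
                    v = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
                    # ...then snapped to one of a few discrete bands instead of shaded continuously
                    for upper, color in BANDS:
                        if v <= upper:
                            return color
                    return BANDS[-1][1]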

- Alright, next week we'll start going through more end-of-year lecture material - hurrah!