//****************************************************************************//
//********** Cellular Automata (cont.) - February 21st, 2019 ****************//
//**************************************************************************//

- Professor Vuduc has forgotten his microphone
    - "So, I'll do my usual blah-blah-blah thing"
- We're coming to the end of the first half of the class, where we've been thinking about state variables (continuous or discrete) with respect to time - mostly on the continuous side
    - We just started getting into discrete maps last week, when we looked at how Lorenz began analyzing chaotic systems by viewing a state variable (e.g. height) as continuous, but sampling it at discrete points
    - For ALL of these systems, we always want to know 2 things: are there any fixed points, and are those points stable or unstable?
- Then, just last lecture, we started talking about Cellular Automata, where the state space itself is discrete - but what does that mean for our beloved fixed points? What does a "fixed point" look like in these CA systems, and how do we analyze them?
    - As we'll see more in the 2nd half of the course, we'll often use continuous techniques to study discrete models, and vice-versa
    - Hopefully, as you study/panic for the midterm, you'll start seeing how all these things connect
    - "I realized from office hours that a lot of you weren't seeing the big picture for what we were doing, so...I hope that helped a little"
--------------------------------------
- Last time, we were studying the behavior of a 1D CA system where each cell has 2 states and updates based on itself and its immediate neighbors; that gives 8 possible neighborhood configurations for the update rule to map, and 2^n possible system states (where "n" is the number of cells)
    - Because the system is discrete, we can often get oscillatory or chaotic-looking behavior, which we'd NEVER expect in a continuous 1D system!
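As a concrete sketch of such a system, here is a minimal elementary-CA simulator. The specific rule number (30, one of the 256 possible 2-state, nearest-neighbor rules) is my illustrative choice; the lecture doesn't fix a particular rule.

```python
# Minimal sketch of a 1D, 2-state, nearest-neighbor CA (an "elementary" CA).
# Rule 30 is an illustrative choice; any 0-255 rule number works.

def step(cells, rule=30):
    """Apply one synchronous update with periodic boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        # Each cell's next state depends on (left, self, right):
        # 8 possible neighborhoods, so the rule table has 8 entries,
        # read off as the bits of the rule number.
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighborhood) & 1)
    return out

cells = [0] * 31
cells[15] = 1  # single "on" cell in the middle
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Even from a single "on" cell, Rule 30 produces the chaotic-looking triangle pattern the lecture alludes to - something a continuous 1D system could never do.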
- Unlike for continuous differential equation-type systems, though, no one really agrees on a classification system for "fixed points" in Cellular Automata; roughly, though, we can classify behaviors into 4 groups:
    - "Fixed Points" - a state that maps to itself and never changes
    - Periodic - a state that loops through some set sequence of states over and over before returning to itself and repeating
    - Chaotic - most initial states lead to pseudorandom behavior
    - Complex - random-seeming behavior, yet with clear, ordered patterns emerging
- Similarly, the notion of a "trajectory" in a CA system doesn't really exist in the same way, although there are parallels in FSM-type graphs showing how the system transitions between different states
    - So, if you're analyzing CA systems, you can try to work out what the cycles are, whether they go to a fixed point, etc.
- "This stuff won't be on the midterm; we're just going into some 'fun' stuff now"
- An example of this (that multiple teams are doing for their project): the spread of forest fires!
    - (...that sounds a lot easier than what we're doing, but okay)
    - The idea behind this simple cellular automaton is that it tries to model the rate a forest fire spreads at based on the density of the "trees" in the model
        - A given cell is either a tree, empty space, or on fire
        - At each step, if a cell is on fire, the fire spreads to any adjacent trees and the burning cell then becomes "extinguished" (i.e. open space)
    - So, the "fixed points" for this kind of system would be when the fire has burned some number of trees and then gone out
    - If we plot the results of this system Monte-Carlo style (choose random tree positions over many simulations at each density), we see that there's a certain "critical density" that acts almost like a critical point; below that density, the fire doesn't spread much, but past it the fire spreads SIGNIFICANTLY more!
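The Monte-Carlo experiment can be sketched in a few lines. This is a hedged sketch, not the lecture's exact model: the 20x20 grid size, 4-neighbor spreading, left-column ignition, trial counts, and all names are my illustrative choices.

```python
import random

# Forest-fire CA sketch: each cell is a tree (with probability `density`)
# or empty space; fire starts in the left column, spreads to 4-adjacent
# trees, and a burning cell becomes empty space afterward.

def burned_fraction(n, density, rng):
    """Fraction of trees burned on one random n x n grid."""
    grid = [[rng.random() < density for _ in range(n)] for _ in range(n)]
    trees = sum(row.count(True) for row in grid)
    if trees == 0:
        return 0.0
    fire = [(r, 0) for r in range(n) if grid[r][0]]  # ignite left column
    burned = 0
    while fire:
        nxt = []
        for r, c in fire:
            if grid[r][c]:
                grid[r][c] = False  # tree burns, then is "extinguished"
                burned += 1
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < n and 0 <= cc < n and grid[rr][cc]:
                        nxt.append((rr, cc))
        fire = nxt
    return burned / trees

rng = random.Random(0)
for density in (0.2, 0.4, 0.6, 0.8):
    avg = sum(burned_fraction(20, density, rng) for _ in range(200)) / 200
    print(f"density {density:.1f}: avg fraction burned {avg:.2f}")
```

Sweeping the density and averaging over many random grids is exactly the Monte-Carlo plot described above: the average fraction burned jumps sharply once the density crosses the critical value.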
- You can imagine, of course, how people have added on to this model: random spreading, factors like wind, different types of plants with different burn factors, etc.
- If we wanted to keep analyzing this system, what else might we do?
- One simple thing we could try is MEAN FIELD ANALYSIS, where we calculate the probability of each cell being a tree, fire, empty space, etc., and then answer probability-ish questions
    - For instance, if a given cell is on fire, we can say "the probability of an adjacent cell being on fire is (probability of the cell being a tree)*(probability of a tree catching fire)"
    - "...actually, I just realized my notes for this example don't make sense, so...exercise?"
    - This is in Sayama Chpt. 12, or something
- Another thing we could do is RENORMALIZATION; the basic idea is that we're trying to model how the system's behavior changes as the scale of the model grows, e.g. from a 2x2 grid to a 10x10 grid, so we can predict how larger systems will behave
    - If we have a 1x1 domain, for instance, the probability that a fire starting on the left side of the cell spreads all the way to the right side is just the probability the 1x1 cell is a tree
    - If we have a 2x2 domain, though, which configurations would allow the fire to spread from left to right?
- Well, we would need at least one tree on the left and at least one tree on the right, so there'd be 4 "flammable" configurations with 2 trees, 4 configurations with 3 trees, and 1 configuration with 4 trees, out of 16 possible states (4 squares, 2 possibilities (tree or empty space) for each, so 2^4)
    - i.e., there's a 9/16 chance of the fire being able to spread left-to-right
- Then, we'd enumerate everything again like this for the 3x3 case
- For the 4x4 case, it's starting to get impractical to enumerate EVERY possible state, so we'll instead APPROXIMATE it by treating it as a 2x2 grid of 2x2 blocks, where the probability the fire spreads through each "block" is the spreading probability from the 2x2 case
- In general, if P_s is the spreading probability at scale "s," doubling the scale works out to (for this fire example - the three terms count the 1, 4, and 4 flammable configurations with 4, 3, and 2 trees, respectively):
    P_s+1 = P_s^4 + 4*P_s^3*(1 - P_s) + 4*P_s^2*(1 - P_s)^2
- We can then continue doing similar things for larger cases (e.g. a 16x16 grid as a 2x2 arrangement of 8x8 blocks, etc.); these equations won't get us the exact answer, but their approximations are usually pretty good
    - So, we're trying to get to the point where we can relate different scales to each other using an iterated map
- If the model is stochastic, yet another thing you can analyze is a RANDOM WALK
    - In the 1D case, let's assume there's a very flat, restricted 1D person, who can only walk left or right
    - We'll assume he chooses whether to step left or right randomly, according to some probability we set; assume he's rather drunk, or something
    - At every step, the walker flips a (perhaps biased) coin, checks it, and then takes one step to either the left (negative) or right (positive)
- Starting to analyze this isn't too bad; we'll assume each step choice is independent of the others, so if the chance of stepping right is "a," the expected value of a single step "Sk" is:
    E(Sk) = sum(value*prob.)
    = 1*a + (-1)*(1-a)
    = 2a - 1
    - So, this value is the "mean step" of our walker
    - Since each step is +/-1, Sk^2 = 1 always, so the variance of a single step would be:
        Var(Sk) = E(Sk^2) - E(Sk)^2 = 1 - (2a - 1)^2 = StdDev(Sk)^2
- Now, let's suppose we've plotted a BUNCH of random walkers, and we want to know the location "Xk" of our walker at time k; what would this be?
    - Well, Xk is just the sum of all the individual steps "Si" taken so far:
        Xk = sum over i of (Si)
    - Since the steps are independent and identically distributed, the expected value/variance would be:
        E(Xk) = E(Sk) * k
        Var(Xk) = Var(Sk) * k
- So, with all this, what's the probability the walker is at some location "i" at some time step "k?"
    - Well, if we assume a = 0.5, then at the previous step (k-1), it had an equal chance of being at either "i-1" or "i+1," which gets us:
        P(k, i) = 0.5*P(k-1, i-1) + 0.5*P(k-1, i+1)
- This is still an iterated map, but what if we wanted a CONTINUOUS approximation for this?
- Well, let's start off by treating P(k, i) as a probability distribution
    - This means we can apply our good old probability rules here! (missed part of this)
        P(a*k, i) = a*P(k, i)
        P(k, b*i) = b*P(k, i)
    - From there, to make it approximately continuous, we introduce a timestep "dt" and step size "dx" (missed the details here), which gets us:
        p(t, x) = 0.5*p(t - dt, x - dx) + 0.5*p(t - dt, x + dx)
- From there, if we do some algebra magic (Taylor-expand both sides in dt and dx, and keep the leading terms as dx and dt go to 0 with dx^2/dt held fixed), we get a very famous equation: the DIFFUSION equation!
    dp/dt = D * d^2p/dx^2, where D = dx^2/(2*dt)
- Alright, with that, good luck on your midterm! Study hard!
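Circling back to the renormalization map: it can be iterated directly to see the critical density emerge. This is a minimal sketch; the starting densities 0.35 and 0.40 and the 30 doublings are my illustrative choices.

```python
# Iterating the scale-doubling map for the 2x2 fire-spreading probability.
# f(p) sums the 1 + 4 + 4 = 9 "flammable" configurations enumerated above
# (4 trees, 3 trees, and 2 trees, respectively).

def f(p):
    return p**4 + 4 * p**3 * (1 - p) + 4 * p**2 * (1 - p) ** 2

for p0 in (0.35, 0.40):
    p = p0
    for _ in range(30):  # 30 doublings of scale is plenty to converge
        p = f(p)
    print(f"starting density {p0}: large-scale spread probability -> {p:.3f}")
```

The map has stable fixed points at 0 and 1 and an unstable fixed point at (3 - sqrt(5))/2 ≈ 0.382 (the iterated-map analogue of the critical density): starting below it, the large-scale spread probability collapses to 0; starting above it, it runs away to 1.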
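The random-walk formulas above are also easy to sanity-check with a quick Monte Carlo; a = 0.7, k = 100 steps, and 20,000 walkers are arbitrary illustrative choices.

```python
import random

# Monte Carlo check of the random-walk statistics: each step S has
# E[S] = 2a - 1 and Var[S] = 1 - (2a - 1)^2, and since the k steps are
# independent, the final position Xk = S1 + ... + Sk has E[Xk] = k*E[S]
# and Var[Xk] = k*Var[S].

def walk(k, a, rng):
    """Final position after k independent +/-1 steps, with P(+1) = a."""
    return sum(1 if rng.random() < a else -1 for _ in range(k))

rng = random.Random(0)
k, a, trials = 100, 0.7, 20000
finals = [walk(k, a, rng) for _ in range(trials)]

mean = sum(finals) / trials
var = sum((x - mean) ** 2 for x in finals) / trials
print(f"sample mean {mean:.2f} vs k*(2a-1)       = {k * (2*a - 1):.2f}")
print(f"sample var  {var:.2f} vs k*(1-(2a-1)^2) = {k * (1 - (2*a - 1)**2):.2f}")
```

With a = 0.7 and k = 100 the predictions are E[Xk] = 40 and Var[Xk] = 84, and the sample statistics land close to both.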