//****************************************************************************//
//************** Analogical Reasoning - October 23rd, 2019 ******************//
//****************************************************************************//

- ...a very empty classroom today, for some reason. Humph.
    - Gradually, more people are trickling in, but it's only ~15 people right now in a 100+ seat lecture hall.

- "As usual, we'll begin with a movie clip, and this time with a very famous movie clip from The Matrix: the red pill or blue pill clip"
    - As you watch, think about what analogies are being used
        - What analogies did you hear? "Alice in Wonderland," a prison, wool over your eyes, and so forth
            - Analogy is a powerful notion; even when we don't know exactly what something is (like the Matrix), we can understand it through analogies!
                - Similarly, I think none of us have had literal wool pulled over our eyes, but we've still come to know this means someone is being duped
    - We use analogies all the time - even showing this movie clip is an analogy I'm using to make my point!
        - Yet, today, there are no AI systems that have a robust understanding of analogy
        - How do humans do this, and how can we replicate that in AIs? Is it through case-based reasoning? Is it something totally different?
--------------------------------------------------------------------------------

- *cue an experiment where Professor Goel presented the same problem to each class, but told each a different story beforehand*
    - "There are 2 important things going on here: first, that humans do use analogies to help solve problems even when they don't seem directly related (we can see 'deeper' similarities than just surface features); second, that humans seem to understand and use complex relationships"
    - Think about our humor; many puns involve playing with relationships in our ideas ("Why was 6 afraid of 7? Because 7 ate 9!")
        - There are whole schools of AI that think analogies are the key to human intelligence; Douglas Hofstadter wrote an influential book called "Gödel, Escher, Bach" that put forth this view

- Suppose we hear the sentence "a woman is climbing a ladder"
    - We intuitively know that:
        - A woman climbing a stepladder is very similar
        - A plane flying into the sky is somewhat similar (it's changing its height)
        - Climbing the corporate ladder is somewhat similar
        - A water bottle sitting on a desk isn't really similar at all
    - In case-based reasoning, we used the NEAREST NEIGHBOR METHOD to try to figure out which statements were the closest
        - That method, though, like others we covered (e.g., discrimination networks), is based on features - which we don't have for arbitrary sentences! (see the first sketch after this list)
    - Instead, analogical reasoning seems to be based on the relationships in the story, not the actual objects or features within the story
        - Everything else can be different, but we can somehow infer these relationships and apply them to something different (see the second sketch below)
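
- To make the feature-based approach concrete, here's a minimal Python sketch (my own illustration, not from lecture); the feature names and hand-coded vectors are invented for the example:

```python
# A toy feature-based nearest-neighbor comparison.
# NOTE: the feature names and values here are invented for illustration;
# the point is that this approach presumes someone has already
# hand-encoded features, which arbitrary sentences don't come with.

CASES = {
    "a woman climbing a stepladder": {"agent": "person", "action": "climb", "vertical": True},
    "a plane flying into the sky":   {"agent": "plane",  "action": "fly",   "vertical": True},
    "a water bottle on a desk":      {"agent": "object", "action": "sit",   "vertical": False},
}

PROBE = {"agent": "person", "action": "climb", "vertical": True}  # "a woman is climbing a ladder"

def feature_similarity(a, b):
    """Count how many feature values two encodings share."""
    return sum(1 for key in a if key in b and a[key] == b[key])

# Rank stored cases by similarity to the probe, most similar first.
for description, features in sorted(CASES.items(),
                                    key=lambda kv: -feature_similarity(PROBE, kv[1])):
    print(feature_similarity(PROBE, features), description)
```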
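
- And here's the contrasting relationship-based sketch (again my own toy illustration, not an algorithm from lecture); the (subject, relation, object) triples and relation names are invented:

```python
# A toy relationship-based similarity sketch.
# Each story is a list of (subject, relation, object) triples, and
# similarity counts relations that can be aligned while ignoring what
# the objects actually are - so "climbing the corporate ladder" still
# matches "a woman is climbing a ladder" even though every object differs.

BASE = [("woman", "climbs", "ladder"),
        ("ladder", "leads-to", "roof")]

TARGETS = {
    "the corporate ladder": [("employee", "climbs", "hierarchy"),
                             ("hierarchy", "leads-to", "promotion")],
    "a water bottle on a desk": [("bottle", "sits-on", "desk")],
}

def relational_similarity(story_a, story_b):
    """Count relation names shared between two stories, objects ignored."""
    remaining = [rel for _, rel, _ in story_b]
    matched = 0
    for _, rel, _ in story_a:
        if rel in remaining:
            remaining.remove(rel)  # each relation instance matches at most once
            matched += 1
    return matched

for description, triples in TARGETS.items():
    print(relational_similarity(BASE, triples), description)
```

    - A real analogical matcher would also check that the object mapping stays consistent across relations (e.g., "ladder" maps to "hierarchy" in both triples); this sketch skips that for brevity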

- Okay, we'll come back to this next time - goodbye!