//****************************************************************************//
//***** Analogical Reasoning (cont.) / Version Spaces - October 28th, 2019 ***//
//****************************************************************************//

- Jake's descent into madness, part 1: 26 hours without sleep
    - I feel like my mind's currently stuck in tar and shared with a Rottweiler who is most definitely Not A Very Good Boy. It's simultaneously very slow and very twitchy. Also, my face feels like something wants to jump out of it.
        - May the joys of sleep deprivation - and my first (hopefully last) all-nighter at Tech - begin

- So, we were working on analogical reasoning last time; let's try to get that wrapped up
    - How do we transfer a source case to the target? One principle of intelligence we talked about: we transfer only what is NESTED (per the principle of systematicity)
        - What presumably sets us apart from all other animals is that we can think in terms of relationships; if things aren't nested, then we're just naming properties, but a NESTED link implies there's some deeper relationship going on!
            - Unlike machine learning and almost all other AI fields, we'll ONLY care about these relationships and ignore the feature values themselves (there's a small sketch of this after this list)
        - According to analogical theories of intelligence, this stuff is humans' secret sauce; this kind of thinking is what we have that others don't
            - Notice this is a POWERFUL idea: where do new ideas come from? Where does creativity come from? Analogies seem to be at least one large piece of the puzzle
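- As promised, a toy sketch of that relations-only transfer, in the spirit of structure mapping; the solar-system/atom example, the predicate names, and the entity mapping below are my own illustration, not something given in lecture:

```python
# A toy sketch of relations-only analogical transfer. The example case,
# predicates, and mapping are my own illustration of the idea from lecture.

source = [
    ("attracts", "sun", "planet"),                  # flat relation
    ("revolves", "planet", "sun"),                  # flat relation
    ("causes",                                      # NESTED relation:
     ("attracts", "sun", "planet"),                 # its arguments are
     ("revolves", "planet", "sun")),                # other relations
]

mapping = {"sun": "nucleus", "planet": "electron"}  # assumed correspondence

def translate(item):
    """Rewrite a relation (or atomic entity) under the entity mapping."""
    if isinstance(item, tuple):
        return (item[0],) + tuple(translate(arg) for arg in item[1:])
    return mapping.get(item, item)

# Per systematicity, transfer ONLY the nested relations -- the ones whose
# arguments are themselves relations; flat facts stay behind.
transferred = [translate(fact) for fact in source
               if any(isinstance(arg, tuple) for arg in fact[1:])]

print(transferred)
# -> [('causes', ('attracts', 'nucleus', 'electron'),
#                ('revolves', 'electron', 'nucleus'))]
```

    - The key move: only the nested "causes" relation survives the transfer, while the flat attribute-level facts are ignored - which is exactly the systematicity idea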
- There are several things going on here
    - A fundamental question in AI is "where do ideas come from?"
        - Finding cats in pictures, of course, is not so fundamental
    - We transfer ideas across very different situations by focusing on these relations, unlike all other animals; this is our first time in this class tackling that question
        - Almost all scientists believe in Darwin's theory of evolution, including cognitive scientists - but many cognitive scientists believe Darwin made a critical mistake
            - Darwin and all evolutionary biologists claim that evolution occurs in very small, incremental, continuous steps - but increasingly, cognitive scientists think that while there's indeed biological evolution, these biologists were so focused on biology that they missed cognition
                - Cognition seems to have occurred in a jump at some point: there's biological continuity, they say, but a cognitive leap, and humans seem to be unique in this. Other creatures can recognize objects and features quite well, but abstract relationships are something different; perhaps crows and dolphins can do basic counting and numerical relationships, but nothing close to us
            - These are some very big ideas - and controversial ideas! - but here's the question: are we unique in the universe? If we are, what does that say about us trying to build AIs?
--------------------------------------------------------------------------------

- So, let's talk some more about where ideas come from!
    - Here's a picture of a BMW car whose design was inspired by the boxfish, and another of a bullet train whose nose was inspired by a kingfisher's bill
        - We sometimes think of ideas like this as revolutionary, but they often just come through analogy!
        - Humans can even COMBINE multiple analogies, which no AI can currently do!
    
- So, that's analogical reasoning, come and gone

- We'll now move on to VERSION SPACES, and what better way to introduce it than a Seinfeld video?
    - "Humans make mistakes all the time, and one of our most common mistakes is over-generalization"
        - One thing we've been saying is that we should build human-like AIs, but we humans make mistakes - do we want our AIs to make those same mistakes? Should we reproduce our biases and limitations in our machines?
            - Psychologists have identified 57 major human biases, and there are likely more
            - The most common error of all is over-generalizing: I meet someone, see they have 2 arms, and say "gee, all humans must have 2 arms!" I meet a person from the Netherlands and he's mean, and I decide "huh, those Dutch are rude!"
        - Humans also have a VERY limited short-term memory; our telephone numbers are fewer than 10 digits for a reason! We're limited!

- VERSION SPACES is about how we can create human-like AI while guarding against over-generalizing, hopefully extending that to more biases over time
    - We've talked about incremental concept learning before, and how learning from new things is heavily based on your background knowledge
        - We generalize when we get a positive example and specialize when we get a negative example, and hopefully converge; how can we do better?
    - VERSION SPACES tries to get around this by keeping both the most general possible model and the most specific possible model in our heads, with the two hopefully converging
        - So, imagine I go to a restaurant called Sam's for breakfast on Friday; it was very cheap, and I was sick afterwards
        - The next week, I go to Sam's for lunch on Saturday, and I ALSO got sick!
            - Is it correct to say I'll get sick every time I go to Sam's? No, of course not!
        - So, we want to maintain both the most specific and the most general model here; there's a small sketch of this after this list
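- Here's a minimal sketch of that version-spaces loop (a simplified candidate elimination), run on the Sam's story; the feature layout, the "?" wildcard, the helper names, and the made-up negative example at "Kim's" are my own choices, not from lecture:

```python
# A minimal version-spaces sketch: keep a most-specific model S and a set of
# most-general models G, and move them toward each other as examples arrive.

ANY = "?"  # wildcard: this attribute may take any value

def covers(model, example):
    """A model covers an example if every non-wildcard attribute matches."""
    return all(m == ANY or m == v for m, v in zip(model, example))

def generalize(model, example):
    """Minimal generalization: wildcard each attribute that disagrees."""
    return tuple(m if m == v else ANY for m, v in zip(model, example))

def specialize(general, specific, negative):
    """Minimal specializations of `general` that exclude `negative`:
    pin one wildcard at a time to the corresponding value in `specific`."""
    children = []
    for i, (g, s, n) in enumerate(zip(general, specific, negative)):
        if g == ANY and s != ANY and s != n:
            child = list(general)
            child[i] = s
            children.append(tuple(child))
    return children

def learn(examples):
    first = next(ex for ex, positive in examples if positive)
    S = first                       # most specific: exactly the first positive
    G = [(ANY,) * len(first)]       # most general: covers everything
    for example, positive in examples:
        if positive:
            if not covers(S, example):
                S = generalize(S, example)       # widen the specific model
            G = [g for g in G if covers(g, example)]
        else:
            pruned = []
            for g in G:
                if covers(g, example):
                    pruned.extend(specialize(g, S, example))  # narrow it
                else:
                    pruned.append(g)
            G = pruned
    return S, G

# (restaurant, meal, day, cost) -> did I get sick?
examples = [
    (("Sam's", "breakfast", "Friday",   "cheap"), True),
    (("Sam's", "lunch",     "Saturday", "cheap"), True),
    (("Kim's", "breakfast", "Friday",   "cheap"), False),  # made-up negative
]
S, G = learn(examples)
print("most specific:", S)   # ("Sam's", "?", "?", "cheap")
print("most general: ", G)   # [("Sam's", "?", "?", "?")]
```

    - Notice how the two models squeeze toward each other: S generalizes on positive examples, G specializes on negative ones, and if they ever meet we've converged on a single concept without over-generalizing along the way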

- Hopefully this will become more clear as we go through more examples; see you on Wednesday!