//****************************************************************************//
//********* Concept Learning (cont.) / Logic - September 25th, 2019 *********//
//**************************************************************************//

- Okay, here's a very short TED talk on teambuilding
    - Interestingly, kindergartners produce better marshmallow structures than business students, because they don't try to make 1 perfect plan and then build; they just start building and iterating right away (the business students also tend to spend time trying to be the leader)
    - ...engineers and architects, luckily, do tend to do as well or better than kindergartners
    - So, what do these children know that we don't know?
        - They don't know structural mechanics or concepts for prototyping
        - What we want to say is that they...
    - Note, please, that AI isn't just a technology; AI concepts have also led to insights into teaching, psychology, and the like
--------------------------------------------------------------------------------
- So, yesterday, we were talking about incrementally building concepts from positive and negative examples, through generalization and specialization
    - In general, positive examples lead to generalization (broadening our idea of what a concept could be), while negative examples lead to specialization (narrowing down what the concept could possibly be); a code sketch of this loop appears at the end of these concept-learning notes
    - To get the right balance, we need both positive and negative examples, to avoid overfitting/underfitting our concept
    - We hope that, as more examples come in, our agent will converge on all the correct concepts and exclude all the irrelevant ones
- This entire technique is based on background knowledge - the agent sees something, then starts representing it as a mental model using concepts it already knows about
    - Over time, we convert our episodic experiences into semantic knowledge
- We first talked about learning via cases; then we talked about chunking; now, we're talking about a 3rd type of learning: concept learning
    - "Why don't we talk about these concepts in other AI classes? Because those classes aren't concerned with building human-level, human-like AI"
- Now, suppose we show someone a new example that's very different from previous examples (e.g. "this is an apple; this is a pineapple (not an apple)") - how do we tell which differences are relevant? Is it not an apple because of the color, or the leaf shape, or what?
    - If we can't decide which differences are important, then we'll often build multiple mental models, and it may take some time to combine them
- So, according to this, what an AI can learn or does learn is dictated heavily by what background knowledge the agent has
    - Many researchers actually believe that there's no "master algorithm" that will let our AIs learn everything in the same, best way; learning will always be shaped by what the agent already knows
- Now, it seems like we've actually built an almost complete theory of intelligence, right?
    - This theory says that all intelligences have 3 kinds of knowledge: episodic (history/memory), procedural (rules), and semantic (concepts and understanding)
    - We start with episodic experiences, and pull rules and concepts out of them to get the other two types of knowledge
    - So, that's our account of intelligence so far - it's almost certainly not complete, and you may not agree with it, but it's a very powerful notion! We can design agents around this!
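- To make the generalize/specialize loop above concrete, here is a minimal Python sketch. It is only an illustration, not the course's algorithm: the feature-set representation, the ConceptLearner class, and its method names are hypothetical, and it assumes a "near-miss" negative example differs from the current concept by exactly one extra feature.

    # Minimal sketch of incremental concept learning over feature sets.
    class ConceptLearner:
        def __init__(self, first_positive):
            # Start by taking the first positive example as the whole concept model.
            self.required = set(first_positive)   # features the concept must have
            self.forbidden = set()                # features the concept must not have

        def observe_positive(self, example):
            # Generalize: drop any required feature this positive example lacks,
            # broadening what the concept could be.
            self.required &= set(example)

        def observe_negative(self, example):
            # Specialize: if a near-miss has exactly one extra feature, forbid it,
            # narrowing what the concept could be.
            extra = set(example) - self.required
            if len(extra) == 1:
                self.forbidden |= extra

        def matches(self, example):
            example = set(example)
            return self.required <= example and not (self.forbidden & example)

    # Usage: learning a rough "apple" concept from labeled examples.
    apple = ConceptLearner({"red", "round", "stem", "smooth-skin"})
    apple.observe_positive({"green", "round", "stem", "smooth-skin"})        # so color is irrelevant
    apple.observe_negative({"round", "stem", "smooth-skin", "spiky-leaves"}) # a pineapple-like near-miss
    print(apple.matches({"red", "round", "stem", "smooth-skin"}))            # True
    print(apple.matches({"round", "stem", "smooth-skin", "spiky-leaves"}))   # False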
- "I'm going to say some not-nice things about psychology, so forgive me" - Professor Ashok was bored to tears at his introductory psychology class, because it didn't answer what he thought were the important questions: how do we learn? What is intelligence? It seems psychology goes to Pavlov's dogs, and then stops - So, for concept learning, we've talked about 6 heuristics: - require-link - drop-link - - - - - "You can watch the videos online if you want a detailed run-through of each of these" - Okay, let's now move on to another field: logic! - Why are we interested in logic? For several reasons - There are a number of fields, like algebra, where we want some guarantees of correctness and logical certainty - We want robots to not make mistakes, and logic is one of the few frameworks that comes with guarantees of correctness - So, we'll talk about some logical principals, and get into formal logic - There are 2 principals we really like about logic: its SOUNDNESS (that only valid conclusions can be proven) and COMPLETENESS - This allows us to get as close to axiomatic definitions for things as possible; we'll formally define everything, then - So, let's get into some formal logic stuff - A PREDICATE is a function that map arguments to true or false values - e.g. "Feathers(animal)," or "If Feathers(animal), then Bird(animal)" - We can combine multiple predicates with conjunctions: "If Lays-eggs(animal) and Flies(Animal), then Bird(animal)" - Notice, here, that there're no exceptions to this rule; it can handle any case we throw at it, and that's VERY powerful! - Prototypes and concepts cause problems because we always have to deal with some exceptions - We can also, of course, negate concepts, and form these statements into well-formed logical formulas composed of LITERALS: true/false things that are made up of a predicate acting on some object (e.g. "Superhero(batman)") - This lets us build up a logical "language" we can use to talk about the world and logical implications, where the left side "implies" the thing on the right if-true: - "Lays-eggs(animal) ^ Flies(animal) => Bird(animal)" - "Lays-eggs(animal) ^ Beak(animal) ^ !Flies(animal) => Platypus(animal)" - So, we can translate English sentences into these logical formula and eliminate ambiguity! - ...unfortunately, this breaks down when we deal with complex sentences; no one's yet devised a logic that can fully capture the complexities of English language - NASA has launched autonmous spacecraft that we've lost contact with - how can we expecte it to take intelligence action? What should it do? - What NASA has sometimes done is written many logical sentences like this - "if there is a blue planet nearby, investigate it" - and, after giving it all of this knowledge, letting it determine from its sensors what the input is that its logical rules can act on - In the 1980s, there was a huge excitement about "expert systems" in AI, where we tried to take the "rules" that experts use at their many different tasks and encode their expertise - At the time, people believed this expertise was entirely in the form of procedural knowledge, which we could capture in the form of production rules and production systems (which is DISTINCT from logic) - As it turned out, this failed for 2 main reasons: - Much of human knowledge isn't procedural - Experts have a difficult time explaining how they do their rules; in some cases, they can't articulate at all how or why to do certain things - Alright, we'll see you on Friday - goodbye!