//****************************************************************************//
//********* Incremental Concept Learning - September 23rd, 2019 *************//
//**************************************************************************//

- "This class continues shrinking...I am assuming everyone else is sleeping the project night off"
- So, cue video on chimpanzees communicating with facial expressions and gestures, and a theory that they actually can invent new gestures
- "So, chimpanzees can make these expressions of being sorry - but do they actually understand the concept of being sorry, or is this just a rule they follow? Do they just mechanically make a 'forgiveness' face when they lose a fight? Are they just imitating what they've seen other chimpanzees do? Or do they actually understand being sorry, and act based off of that?"
- Notice that these 3 theories line up with our 3 kinds of knowledge: procedural, episodic, and semantic
- How could we tell the difference - what's going on in the chimpanzee's mind? And are we 100% sure our own feelings of being sorry don't involve imitating other humans?
- The current theory of human development is that we're born with a little bit of "starter" semantic and procedural knowledge
- According to Noam Chomsky, rules for learning language are innate; according to a different linguist named Stanislaw, we're born with semantic knowledge of basic language concepts
- So, with these initial rules and concepts, we then acquire a BUNCH of experiences that we turn over time into more rules and concepts - and that's where we're headed!
- "Here, the agent hasn't been given millions of data points to train on; it's been given 1 piece at a time"
- Speaking of this, how was project 1?
- Many people find the project difficult to start due to its ambiguity: it's an ill-defined, big, open-ended, scary problem. I didn't tell you where to start! But over time - as you wrote more examples down, perhaps, and started to write down code - it probably got easier; you found a direction to move in, and the problem became more concrete and easier
- "Most human problems are ill-defined. Love? Good luck with that. National security, religion, faith, curing cancer - none of these problems have a PDF describing what we need to do"
- What did you learn from this homework?
- Many people ran into the "Pareto optimality" problem: for open-ended problems, we often find that making our solution better in one way will make it worse in others (e.g. a better building will be more expensive) - see the sketch just below
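- A minimal sketch of that Pareto idea in Python (the two objectives and all scores here are made up for illustration, not from the project):

    # A candidate "dominates" another if it's at least as good on every
    # objective and strictly better on at least one; the Pareto-optimal
    # candidates are the ones nobody dominates - improving them on one
    # objective necessarily costs you on another.

    def dominates(a, b):
        """True if a scores >= b on every objective and > b on at least one
        (higher is better for every objective here)."""
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def pareto_front(candidates):
        """Keep only the candidates that no other candidate dominates."""
        return [c for c in candidates
                if not any(dominates(other, c) for other in candidates if other != c)]

    # Hypothetical (quality, affordability) scores for some building designs
    designs = [(9, 2), (7, 5), (4, 8), (8, 4), (3, 3)]
    print(pareto_front(designs))  # (3, 3) drops out; the rest are genuine trade-offs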
- One person who manually built a decision tree for this problem compared it to machine learning: instead of letting the data guide the solution, he had to try to make the rules classify the data himself; there wasn't enough data for a normal ML approach
- "There were many, many different approaches to this problem"
- When dealing with heuristics on non-trivial problems, you'll always have exceptions - so how do you handle mistakes? We say humans learn from their mistakes; how can we get robots to do the same thing?
- "In AI, there's a constant attempt to do things perfectly with massive datasets; but it's interesting to note how far some approaches can get with only a little bit of knowledge; that's also a worthy goal"
- Many people had to make some assumptions about what kind of sentences they would receive, and that led to a common critique people have of AIs: "brittleness"
- Learning can help with this; our handcrafted agents may fail, but learning can help them deal with new cases. It won't make the problem go away, but it can reduce it
- "Some people have critiqued this class because I cover the exact same material as the video lectures you can watch at home - and that's a fair critique. But it's a conscious decision on my part."
- You guys are smart; if I applied again, I'm not sure I would get in. I don't think giving you some exact algorithm and having you build it is helpful for learning; learning happens when you figure things out for yourselves, and I structure the class around that
- The projects are the beating heart of this class, and the TAs have poured dozens of hours into their design
- I view my goal, then, as inspiring you to be excited about these things and to think about them for yourselves

--------------------------------------------------------------------------------

- So, let's now start looking at "incremental concept learning"
- When we talked about frames, we mentioned that we'd skip how to actually learn new things from our frames; here, we're going to go from using frames to know WHAT we need to learn to actually learning
- We'll talk about variabilization, specialization, and generalization, and heuristics for doing each of these
- Let's say I showed you a few different photos of buildings and said "this is a foo"; then I showed you a few more photos and said "this is not a foo"
- Notice that we start by telling you what a foo IS, and there's only ONE difference between the first positive example and the first negative example, so it's clear what feature is relevant to the "foo/not foo" distinction
- Similarly, if we're good teachers, each positive/negative example will only change by 1 thing from the previous example
- "If you ever have children, you'll learn very quickly that this is the only way you'll make progress teaching them: showing them 1 clear thing at a time, and changing it a little bit"
- This is the core of INCREMENTAL CONCEPT LEARNING: building up what a concept "is" example-by-example over time
- One helpful way to do this is to translate our raw input - an image or a sentence or something - into a knowledge representation (a frame or semantic network)
- This is simple translation into a symbolic language: "In an arch, brick 1 is above brick 2"
- The first BIG part of this learning is VARIABILIZATION: we replace the particular constants ("brick 1", "brick 2", ...) with variables ("some brick X is above some brick Y") - see the code sketch below
- How does our agent know what a brick is in the first place, though? We're assuming our agent already knows what a brick is, what "above" is, and so on
- Notice here that by this scheme, the more we know, the more we can learn!
- "Why do people learn different things in the same classroom? Perhaps because they have different sets of background knowledge - and we might expect the same thing to be true of AIs"
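- A minimal sketch of that variabilization step, assuming a simple triple-based symbolic representation (the representation and all names here are illustrative, not the lecture's exact notation):

    # Raw input has already been translated into symbolic relations, e.g.
    # "brick-1 is above brick-2". Variabilization replaces the particular
    # constants (brick-1, brick-2) with variables (?x1, ?x2) so the learned
    # concept applies beyond this one example.

    def variabilize(relations):
        """Replace each distinct constant with a fresh variable ?x1, ?x2, ..."""
        mapping = {}
        def var_for(const):
            if const not in mapping:
                mapping[const] = f"?x{len(mapping) + 1}"
            return mapping[const]
        return [(var_for(s), rel, var_for(o)) for s, rel, o in relations]

    # One positive example of an arch; the agent is assumed to already
    # know what "brick" and "above" mean.
    arch = [("brick-1", "above", "brick-2"),
            ("brick-1", "above", "brick-3"),
            ("brick-2", "left-of", "brick-3")]
    print(variabilize(arch))
    # [('?x1', 'above', '?x2'), ('?x1', 'above', '?x3'), ('?x2', 'left-of', '?x3')]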
- Now, suppose we notice that a recurring thing in our examples actually isn't essential to the concept; for instance, if we only ever see Gatorade, we might assume all water bottles have orange caps - but then we see a Dasani water bottle and realize "hey, this is still a water bottle - we can ignore the cap color"!
- This is called GENERALIZATION. Specifically, this is an example of the "drop-link heuristic," where we notice that a certain feature in our concept actually isn't always relevant
- We may also have to go the opposite direction and use the "add-link heuristic," where we notice that certain features ARE important to a concept and we require them
- Finally, we may have a "forbid-link heuristic," where we say that certain connections aren't just unnecessary - they're NOT ALLOWED
- ENLARGE-LINK HEURISTIC: we broaden the set of values a link may take (e.g. the top of an arch can be a brick OR a cylinder)
- CLIMB-TREE HEURISTIC: we generalize by moving up a class hierarchy (e.g. "brick" and "cylinder" both generalize to "block")
- (A code sketch of a few of these heuristics is at the end of these notes)
- "As it turns out, about 8 heuristics cover ALMOST all examples we come across - but there are exceptions"
- So, we're doing several things here: talking about human learning, and talking about how we might build agents that learn the same way
- "It's amazing how many instructors pay no attention to what the students already know"
- Alright; we'll return to this next time
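- A minimal sketch of the drop-link / add-link / forbid-link moves, reusing the triple representation from the variabilization sketch above (the dict-based concept structure and function names are illustrative assumptions, not the lecture's code):

    # A concept is a dict mapping relation triples to a status flag:
    # "required" (must hold) or "forbidden" (must not hold); dropping a
    # triple from the dict entirely marks it as irrelevant.

    def drop_link(concept, relation):
        """Generalize: this feature turned out not to matter (e.g. cap
        color), so stop checking it at all."""
        concept.pop(relation, None)

    def add_link(concept, relation):
        """Specialize: this feature IS essential, so every positive
        example must have it."""
        concept[relation] = "required"

    def forbid_link(concept, relation):
        """Specialize: this feature isn't just unnecessary, it's not
        allowed (e.g. an arch's two support bricks must not touch)."""
        concept[relation] = "forbidden"

    def matches(concept, example):
        """An example fits if every required link holds and no forbidden
        link does; links absent from the concept are ignored."""
        return (all(r in example for r, s in concept.items() if s == "required")
                and not any(r in example for r, s in concept.items() if s == "forbidden"))

    # Build up an "arch" concept one example at a time:
    arch = {}
    add_link(arch, ("?x1", "above", "?x2"))
    forbid_link(arch, ("?x1", "touches", "?x2"))
    print(matches(arch, {("?x1", "above", "?x2")}))                             # True
    print(matches(arch, {("?x1", "above", "?x2"), ("?x1", "touches", "?x2")}))  # False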