//****************************************************************************//
//***************** Logic (cont.) - September 30th, 2019 ********************//
//**************************************************************************//

- Alright, we're talking about logic - and what better example of logic is there than a clip from "The Princess Bride?"
- We certainly don't use logic all the time, but we do use it for many important things
- We've studied mathematical logic for centuries, so why don't we program all our AIs to be purely logical?
- One problem, as we've discussed, is that humans aren't perfectly logical all the time
- What about ethics? We sometimes use logic to make these choices, but many ethical decisions aren't thoroughly reasoned through; they're "felt" to be right
- Could it actually be that we only "appear" to be logical, but don't fundamentally use logic at all? Frames and semantic networks don't deal with logic at all, but can still produce consistent behaviors - and even if WE are logical, should our AIs be?
- So, we'll talk about why people program logic into AIs, and why we use it for some problems and not for others
- As a side-note, most science fiction about AI is kind of absurd as it relates to current technology
- Logic isn't really procedural or semantic knowledge; it's a different theory of thinking entirely, although it has similarities with semantic knowledge
--------------------------------------------------------------------------------
- So, we're talking about PROPOSITIONAL LOGIC right now: logic that deals with propositions, and with predicates that act on objects
- In completely logical systems, we treat statements as axiomatic, and all our conclusions are necessarily true, with NO exceptions
- We can go back-and-forth between logical propositions and natural language sentences - that's powerful, and gives us another theory of intelligence (i.e., we translate sentences into logic, evaluate them logically, and then convert the result back into normal language)
- This is a VERY different concept from frames and production systems and such
- We can do a lot in logic with truth tables - couldn't we just build an ever-larger truth table for a given sentence?
- We could, but building the table is SLOW, and it gets computationally worse as propositions are added, since the table doubles with each new one (see the first sketch below)
- DeMorgan's law says that negating an "AND" statement turns it into an "OR" with both pieces negated, and vice-versa with "OR"
- We also have classic deductive rules like Modus Ponens, Modus Tollens, etc.
- Why are people in AI so fascinated by this? Well, suppose we want to build a knowledge base with "K" sentences/axioms bootstrapped into the machine to start with - then, when a new sentence comes in, we know exactly what to do with it! (see the second sketch below)
- This is NOT the same thing as production systems; production systems work by saying that whichever rules match ALL apply to the input, and if multiple conflicting rules or no rules get activated, we're at an impasse
- In contrast, logical systems ask "is this true?" and end up activating only one thing - it's a completely different view of how thought works!
- Back in 1999 or so, almost the entire NASA AI team resigned because they were concerned that their logical programs were all doomed to failure and didn't want to receive the blame for it - but, of course, right after they left was the one time their systems all worked beautifully!
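- To make the truth-table idea above concrete, here's a minimal sketch (in Python, which the lecture doesn't use; all names are illustrative): it verifies DeMorgan's law by brute-force enumeration, and shows why the approach bogs down as propositions are added

```python
from itertools import product

def equivalent(f, g, n_vars):
    """True if two n_vars-variable formulas agree on every truth assignment."""
    return all(f(*vals) == g(*vals)
               for vals in product([True, False], repeat=n_vars))

# DeMorgan's law: NOT (A AND B) is the same as (NOT A) OR (NOT B)
print(equivalent(lambda a, b: not (a and b),
                 lambda a, b: (not a) or (not b),
                 n_vars=2))          # True

# The catch: the table has 2^n rows, so 20 propositions already
# means ~1,000,000 rows to check - hence "SLOW" above
```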
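- And here's a toy version of the "K axioms bootstrapped in" idea, sketched under my own assumptions (the rule names are hypothetical): implication rules are stored up front, and each incoming fact deterministically triggers Modus Ponens chaining - unlike a production system, there's no conflict to resolve

```python
# Hypothetical implication rules, "bootstrapped" into the knowledge base:
# each entry reads "antecedent implies consequent" (P -> Q)
RULES = {
    "it_rains": "ground_is_wet",
    "ground_is_wet": "shoes_get_muddy",
}

def assert_fact(fact, facts):
    """Add a new fact, then chain Modus Ponens (from P and P -> Q, conclude Q)."""
    while fact is not None and fact not in facts:
        facts.add(fact)
        fact = RULES.get(fact)    # fire the matching rule, if any
    return facts

print(sorted(assert_fact("it_rains", set())))
# ['ground_is_wet', 'it_rains', 'shoes_get_muddy']
```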
- Unfortunately for AIs, researchers found searching through logical rules to be extremely slow, but they came up with many schemes to speed this up as much as possible
- Now, many people talk about how robots are surely going to be good to humans - which doesn't seem guaranteed - but can you imagine humans being good to robots? No! If something doesn't work, we kick it!
- Suppose we have a conveyor belt with boxes on it, and we have a simple robot that lifts these boxes and puts them into a truck
- What if some humans decided to play a trick and glue a box to the conveyor belt? The robot would keep trying to pick up the box, over and over, until the end of time. That's NOT what a human would do - they would try a few times and then say "eh, something's wrong with this box, I'll ignore it." That takes intelligence!
- So, how would our robot deduce this logically? That's a challenge!
- We could manually add a rule saying "if I can't move this, stop lifting it" (sketched at the end of these notes), but aren't there many such exceptions? And how can we PROVE the box isn't movable?
- We could use a technique called RESOLUTION THEOREM PROVING: assume the opposite of what we want to prove, then repeatedly cancel out complementary pieces of our logical sentences; if we can eliminate everything and derive the empty clause, the assumption is absurd, so the original statement must be true (also sketched at the end of these notes)
- We'll talk more about logical proofs next lecture - goodbye!
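- Following up on the glued-box discussion above, here's what the hand-coded exception rule might look like as a sketch (the lift function and retry cap are my own stand-ins, not anything from the lecture):

```python
MAX_ATTEMPTS = 3    # stand-in for the human's "try a few times" judgment

def handle_box(try_lift):
    """Attempt to lift a box; give up and move on after repeated failures."""
    for _ in range(MAX_ATTEMPTS):
        if try_lift():          # try_lift() reports whether the lift worked
            return "loaded"
    return "skipped"            # the manual exception rule: stop lifting it

# A glued box never lifts, so the robot skips it instead of looping forever:
print(handle_box(lambda: False))    # 'skipped'
```

- The catch, as noted above, is that this rule is hand-coded for one exception; logic alone doesn't tell the robot when giving up is the right move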
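- And here's resolution theorem proving as a minimal propositional sketch (the clause representation and names are assumptions of mine, not the lecture's): clauses are sets of literals in conjunctive normal form, we add the negated conclusion, and deriving the empty clause proves the conclusion

```python
# A minimal propositional resolution prover. Clauses are sets of string
# literals in conjunctive normal form; "~P" is the negation of P.
def negate(lit):
    # flip a literal: P becomes ~P, ~P becomes P
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Every clause we get by cancelling one complementary literal pair."""
    return [(c1 - {lit}) | (c2 - {negate(lit)})
            for lit in c1 if negate(lit) in c2]

def proves(kb_clauses, conclusion):
    """Proof by contradiction: assume the opposite, resolve to saturation."""
    clauses = {frozenset(c) for c in kb_clauses}
    clauses.add(frozenset({negate(conclusion)}))   # assume the opposite
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for resolvent in resolve(c1, c2):
                    if not resolvent:     # empty clause: contradiction!
                        return True       # so the conclusion must be true
                    new.add(frozenset(resolvent))
        if new <= clauses:                # nothing new: no proof exists
            return False
        clauses |= new

# "P" plus "P implies Q" (i.e. the clause ~P OR Q) should prove "Q":
print(proves([{"P"}, {"~P", "Q"}], "Q"))   # True
```

- In principle, this is how the robot could PROVE something about the box: encode the situation as clauses and check whether "the box is immovable" follows - the hard part is writing down all the right axioms in the first place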