60 comments

  1. There has to be another method of error handling besides back-propagation. Finding what's responsible for an error is great. However, at the end of the day, how do you tie those weight adjustments to what is favorable vs. unfavorable without constantly adding parameters?
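
    (For reference, the "favorable vs. unfavorable" signal in back-propagation is the loss function itself, so no extra parameters are needed for that part. A minimal sketch, assuming a single linear neuron and squared-error loss; toy data, all names hypothetical:)

      import numpy as np

      # Toy data: learn y = 2x from three examples.
      x = np.array([1.0, 2.0, 3.0])
      y = np.array([2.0, 4.0, 6.0])

      w = 0.0    # the single weight; nothing is added during training
      lr = 0.05  # learning rate

      for step in range(200):
          y_hat = w * x                        # forward pass
          loss = np.mean((y_hat - y) ** 2)     # "unfavorable" = high loss
          grad = np.mean(2 * (y_hat - y) * x)  # credit assignment: d(loss)/dw
          w -= lr * grad                       # nudge w in the favorable direction

      print(round(w, 3))  # converges to ~2.0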

  2. Nice talk. However, it would be better to have the slides.
    So I found the slides here:
    http://www.slideshare.net/SessionsEvents/pedro-domingos-professor-university-of-washington-at-mlconf-atl-91815

  3. +Jacky Yu Great resource. Thank you!

    Further resources:
    Twitter: https://twitter.com/pmddomingos and
    university page: http://homes.cs.washington.edu/~pedrod/
    Coursera Machine Learning lecture: https://www.coursera.org/course/machlearning

  4. Great talk; the only problem is that the camera should stay on the slides longer for the important concepts instead of bouncing back and forth to the speaker.

  5. Thanks for this great talk. I think this is one of the best introductions to the topic you can get: a perfect overview and definitely a strong appetizer that makes me want to learn more about it.

  6. This guy speaks and explains things very clearly. I've never taken Computer Science past an intro course, but I feel like I learned a lot by watching this video because he explained the main concepts so well without using too much jargon.

  7. Computers have knowledge that is orders of magnitude larger than DNA… I laughed my … out when I heard that!

  8. 12:51 I think you're totally right… too much research is actually wrong and driven by grants. We should invest in more robot scientists.

  9. Thank you for a great presentation. Where can I find out more about the robot scientist who discovered a cure for malaria?

  10. Excellent lecture. It has been almost a year since this was published; I wonder what new progress has been made… after all, this learning should be exponential, so we should be at least an order of magnitude higher.

    I know nothing, so excuse my ignorance, but why would you not task all five methods with optimizing the others' ability to solve the same problem using their own architecture, then give each the ability to substitute an element from a different learning method, stepping each through the same process? Then use those results (hopefully there would be fewer) as the basis to do it again. Even if the results were not definitive, progress on a merged model would be made… ?

  11. Great lecture, but…

    A machine learning expert does not simply make a Terminator joke! It's like a doctor joking about your death. Super-intelligence is one of the most serious existential threats humans will face. There's a huge innovation bias going around.

  12. Question: in which paradigm does decision tree learning fall? I presume induction is the primary method of generalizing, therefore it falls within the symbolists' domain. Am I correct?
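
    (Decision trees are indeed usually filed with the symbolists: they induce explicit if-then rules from data. A minimal sketch of the induction step, choosing a split by information gain; toy data, all names hypothetical:)

      import numpy as np

      def entropy(labels):
          # Shannon entropy of a 1-D label array.
          _, counts = np.unique(labels, return_counts=True)
          p = counts / counts.sum()
          return -np.sum(p * np.log2(p))

      def information_gain(feature, labels):
          # Entropy reduction from splitting the data on a feature.
          remainder = sum(
              np.mean(feature == v) * entropy(labels[feature == v])
              for v in np.unique(feature)
          )
          return entropy(labels) - remainder

      # Toy data: supports an induced rule like "if raining then stay home".
      raining   = np.array([1, 1, 0, 0, 1, 0])
      stay_home = np.array([1, 1, 0, 0, 1, 1])
      print(information_gain(raining, stay_home))  # ~0.46 bits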

  13. 56:20 – Great question: "What's to stop a recommender system from being self-fulfilling?" When does a recommender system stop suggesting things and start telling you what you want? I don't think Domingos answered this.
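
    (The self-fulfilling dynamic is easy to reproduce in a toy simulation; this is purely illustrative, not from the talk. Two equally appealing items, and the recommender always pushes the more-clicked one:)

      import random

      random.seed(0)
      clicks = {"A": 1, "B": 1}

      for _ in range(1000):
          recommended = max(clicks, key=clicks.get)
          for item in clicks:
              # Users like both items equally (50% click rate), but the
              # recommended item gets an exposure boost (80%).
              p = 0.8 if item == recommended else 0.5
              if random.random() < p:
                  clicks[item] += 1

      print(clicks)  # the early leader snowballs: the recommendation fulfils itself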

  14. Great video. Knowledge extraction by mechanism isn't new, and if, in principle, it's resonance that is passed on in oral traditions, which mnemonic devices like clay tablets and libraries developed into general cultures, then machine learning is another step in the same progression. It's inclusivity that is at risk.

  15. This was a really great talk that explains a lot of the terms people throw around in A.I. with the expectation that everyone already understands them. Obviously, Pedro really gets this material. Bravo!

  16. @ 2:26 I disagree that computers are a source of knowledge. They are repositories and manipulators of data, which isn't the same thing at all. Computers can help us organize, navigate, and transform data in our pursuit of knowledge, and we can use them to record and disseminate our knowledge and receive the knowledge of other humans, but computers by themselves can't know anything, can't experience emotions, and can't make value judgements relative to anything. At best, computers can predict how humans generally, and possibly individual humans, would feel about certain things, because we've told them how we feel about similar things. So-called computer knowledge is just another form of cultural knowledge.

    Scientists love to inflate the importance of their own field, so of course data scientists like to inflate data into knowledge, but it's important to understand the distinction between real intelligence and artificial simulations of intelligence. Mere predictions generated by artificial intelligence aren't knowledge; they're still just derived data.

    Artificial intelligence as implemented in computers and robots has no independent way to experience emotions and make value judgements. Computers can only know of such things through what their human programmers and users tell them. They have no independent basis upon which to take initiative and do something to further their own or their master's self-interest that they haven't been told to do. They can be told by humans or by other computers to do a given task at certain times in the future, at certain intervals, or when they observe that certain events have occurred, and they will attempt to do it, but they can't decide on their own that it would be a good idea to take over the world and add that task to their schedule or the schedule of another computer. Why? Because they literally don't know the difference between a good task and a bad one unless a human gets involved to make such a value judgement about the task.

  17. FALLACY OF AMBIGUITY
    As philosopher John Searle argued, syntax is not semantics (understanding). Computing machines are capable of syntactical operations but not understanding.
    Wikipedia: https://en.wikipedia.org/wiki/Knowledge
    Knowledge is a familiarity, awareness, or understanding of someone or something, such as facts, information, descriptions, or skills, which is acquired through experience or education by perceiving, discovering, or learning.
    Knowledge can refer to a theoretical or practical understanding of a subject. It can be implicit (as with practical skill or expertise) or explicit (as with the theoretical understanding of a subject); it can be more or less formal or systematic. In philosophy, the study of knowledge is called epistemology; the philosopher Plato famously defined knowledge as "justified true belief", though this definition is now thought by some analytic philosophers to be problematic because of the Gettier problems, while others defend the Platonic definition. However, several definitions of knowledge and theories to explain it exist.
    Knowledge acquisition involves complex cognitive processes: perception, communication, and reasoning; knowledge is also said to be related to the capacity of acknowledgment in human beings.

  18. Cool presentation. Can the "Master Algorithm" then create its own coordinate system(s) for areas of focus?
    My burning hope is AI on behalf of the consumer, or the person. Helping people fight back, so to speak…

  19. My inner nerd is overjoyed to have the privilege of drinking this talk in. Talk about an endlessly fascinating subject!

  20. So does the master algorithm switch between supervised, unsupervised, and reinforcement learning approaches across all possible combinations of [analogy, symbolism, connectionism, Bayesian, and evolution]?
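
    (The combination space the question describes is the five tribes crossed with the three learning settings; small enough to enumerate. The labels below are the commenter's own; the sketch is hypothetical:)

      from itertools import product

      tribes = ["analogy", "symbolism", "connectionism", "bayesian", "evolution"]
      settings = ["supervised", "unsupervised", "reinforcement"]

      # Every tribe x learning-setting pairing the question asks about.
      for tribe, setting in product(tribes, settings):
          print(f"{tribe:13} x {setting}")
      print(len(tribes) * len(settings), "combinations")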

  21. 56:11 – how do we prevent 360° recommenders from being self-fulfilling?

    This is at the absolute heart of the debate.

    See recent talks from Jaron Lanier (father of virtual reality), or look at Rita Raley's "Raw Data Is an Oxymoron"…

    Like any technology, art or religion, machine learning works from data sets entirely created and mediated by humans.
