
  1. Potentiality isn't real in the case of mechanisms: a machine can only work on the stimulus of the imperfect human mind. In other words, man has an inherent failing, or an agenda that can be put into play. The Google AI was shut down when it came up with unfashionable results regarding females, so the recommendations were ignored. Training is not learning but forcing, or acclimatisation through repetition. First, the correct use of language, or a full comprehension of words rather than assumptions or modernisations, would make things easier.

  2. The reality is we are slowly working towards total machine domination of the universe; that is the true evolution of humanity!

  3. Really interesting talk, and person. I like the 360° preparation (physics, for the description of the electromechanical machinery used in the sixties or so; information theory, when he mentioned Shannon's theorem and the difference between data and information; electronics, when he mentioned FPGAs; … etc.).
    Given that the talk is also about the future of AI, after he talked about probability I would have appreciated a small digression on quantum computers, possibly. Cheers

  4. It's too bad we already have killer robots. The military created them, and instead of killing the dummy that had been set up, one killed 12 of the scientists. Apparently he's not doing enough research.

  5. We tax everything: robots, soda, cars, etc. There is no need to treat robots differently from any other good. In fact it would be counterproductive, as it just increases complexity in the tax code, needlessly. And that would allow elites to further game the system.

    Robots should, or even can, only be meaningfully taxed as a separate category when they become autonomous agents. Not before that.

  6. In the movie example, it still depends on the programmer, who has to set the characteristic coordinates of movies. FAIL
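    A toy sketch of what the commenter means, with entirely hypothetical movie names and hand-picked coordinates: the programmer chooses the feature axes (say, action and romance) and assigns each movie a position, and the "recommendation" is just a nearest-vector lookup.

```python
# Hypothetical example: hand-set feature coordinates for movies.
# The axes and the numbers are the programmer's choice, not learned.
import math

movies = {
    "Movie A": (0.9, 0.1),  # (action, romance) set by hand
    "Movie B": (0.8, 0.2),
    "Movie C": (0.1, 0.9),
}

def cosine(u, v):
    """Cosine similarity between two 2D feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def most_similar(title):
    """Recommend the other movie whose vector points the same way."""
    target = movies[title]
    return max((t for t in movies if t != title),
               key=lambda t: cosine(movies[t], target))
```

    Here `most_similar("Movie A")` picks "Movie B", but only because a human decided the coordinates in the first place, which is the commenter's objection.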

  7. Unless we can get AI to decide how we allocate resources/rewards in our economies, ALL the evidence of the last 45 years tells us that the extra profits, from the productivity gains we can expect from this technology, will go EXCLUSIVELY to the rich. Therefore, we can expect fierce/hysterical resistance to be directed at any attempt at allowing AI to make such decisions.

  8. AI itself is not the problem. Humanity seems hell-bent on self destruction and AI will simply be the tool that finally helps us get the job done.

  9. Factual error at 43:57
    Claude Shannon established the field of information theory in the 1940s, not the 1920s.
    Shannon was born in 1916, and published his groundbreaking article in 1948.

  10. If deep learning requires vast amounts of data, relevant data I assume, then I don't see how computer programming would end because deep learning could only work for known things. Software for a space probe would hopefully find unknown things. Deep learning could be used for most of the probe's functions but I don't think all.

  11. It was Deeper Blue which defeated Mr Kasparov; Deep Blue was beaten a year before. Btw the whole thing was an absolute sabotage

  12. See? Calculators didn't kill us like everybody thought they would, therefore AI will be totally harmless. Also, we'll be able to do a lot neater things with it than turn the display upside down to spell BOOBIES.

  13. 15:59 There is an infinite number of games in Go (sometimes you take stones off the board). Chess is a finite game – you can think of it as an (incredibly big) decision tree which contains every possible game.
    His description of Jeopardy completely fails to explain why it was such a difficult game for a computer to beat humans at, what an amazing achievement it was when they won, and how they applied that technology to many other fields.
    It would have been better if he had mentioned that Watson (just like his human opponents) was NOT allowed access to the Internet during the game. [There are some awesome videos on YouTube if you're interested]
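    The "decision tree containing every possible game" idea can be made concrete with tic-tac-toe instead of chess: the same principle, but a tree small enough to enumerate exhaustively. This is only an illustration of finiteness, not anything from the talk.

```python
# Enumerate every possible complete game of tic-tac-toe by walking
# the full game tree. Chess has the same finite-tree structure,
# just astronomically larger.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def count_games(board=(" ",) * 9, player="X"):
    """Count all complete games reachable from this position."""
    # A completed line means the previous move ended the game.
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return 1
    empties = [i for i, sq in enumerate(board) if sq == " "]
    if not empties:
        return 1  # draw: board full, no winner
    total = 0
    for i in empties:  # one branch per legal move
        nxt = list(board)
        nxt[i] = player
        total += count_games(tuple(nxt), "O" if player == "X" else "X")
    return total
```

    `count_games()` walks the entire tree; every chess game lives in a tree of exactly this kind, only too big to enumerate, whereas (as the comment notes) capturing stones in Go can in principle make the sequence of positions unbounded.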

  14. In systems that have self-generating code and algorithms, a parallel decompiler should be in operation to allow for real-time analysis of the operations by people and other machines.

  15. A few years ago a breakthrough happened in vision processing: an AI learned to tell the difference between a dog and a wolf. There was much excitement; it was an amazing achievement. When the researchers finally pulled apart the code to figure out how the AI was doing it, things got a bit embarrassing. It turns out the AI couldn't tell the difference between a dog and a wolf at all: in the dataset of images used to train the AI, all the photos of wolves had snow in them, so the AI was just counting the number of white pixels in the image.
    Oh dear.
    AI is not a panacea for all the world's problems, but it could cause huge numbers of problems, because computers are mind-numbingly stupid. Even if you are trying to do something good, there is no guarantee that the AI won't decide the obvious answer to the problem is to kill all the humans. There is also no guarantee that the AI won't pretend that it doesn't want to kill everyone, because all the solutions it offers get rejected until it can trick you into thinking it's safe.
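    A toy sketch of the snow shortcut, purely for illustration (this is not the actual study's code): a "classifier" that looks only at the fraction of bright pixels labels anything snowy a wolf, regardless of the animal in the picture.

```python
# Hypothetical illustration of a spurious-feature classifier:
# it never looks at the animal, only at how much "snow" there is.
def fraction_white(image, threshold=200):
    """image: 2D list of grayscale values 0-255."""
    pixels = [p for row in image for p in row]
    return sum(p >= threshold for p in pixels) / len(pixels)

def classify(image, snow_cutoff=0.3):
    """Label 'wolf' if enough pixels are bright (snow), else 'dog'."""
    return "wolf" if fraction_white(image) >= snow_cutoff else "dog"

snowy = [[255] * 4, [255] * 4, [40] * 4, [40] * 4]  # half bright pixels
grassy = [[40] * 4 for _ in range(4)]               # no bright pixels
```

    `classify(snowy)` returns "wolf" and `classify(grassy)` returns "dog" no matter what is actually in the frame, which is exactly the failure the researchers found when they opened up the model.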

  16. Enjoyed watching.
    AI was kinda popular in the 70's, due to microcomputer games. I wrote a simple chess program back in the 70's for fun, so this video hit home.
    Not until reading Koybayashi's book on neural networks (NN) in the mid 90's did I write software that would generate programs based on some simple NN knowledge; these were done to troubleshoot and solve software failures. However the shortcoming, as with all data-based approaches to learning, is that this form is NOT a priori, meaning it is NOT based on considering ALL possible forms of experience. NN is a posteriori: a cognition that is empirically based uniquely on the content of experience. So is it useful for tackling out-of-the-box new solutions for new problems, when they occur?