
  1. Potentiality isn't real in the case of mechanisms; a machine can only work on the stimulus of the imperfect human mind. In other words, man has an inherent failing, or has an agenda that can be put into play: the Google AI was shut down when it came up with unfashionable results regarding females, so the recommendations were ignored. Training is not learning but forcing, or acclimatisation through repetition. First, the correct use of language, or a full comprehension of words rather than assumptions or modernisations, would make things easier.

  2. The reality is we are slowly working towards total machine domination of the universe, that is the true evolution of humanity!

  3. Really interesting talk, and person. I like the 360° preparation (physics, for the description of the electromechanical machinery used in the sixties or so; information theory, when he mentioned Shannon's theorem and the difference between data and information; electronics, when he mentioned FPGAs; etc.).
    Given that the talk is also about the future of AI, after he talked about probability I would have appreciated a small digression on quantum computers, possibly. Cheers

  4. It's too bad we already have killer robots. The military created them, and instead of killing the dummy they had set up, one killed 12 of the scientists. Apparently he's not doing enough research.

  5. We tax everything: robots, soda, cars, etc. There is no need to treat robots differently from any other good. In fact, it would be counterproductive, as it just increases complexity in the tax code, needlessly. And that will allow elites to further game the system.

    Robots should, or even can, only be meaningfully taxed as a separate category when they become autonomous agents. Not before that.

  6. In the movie example, it still depends on the programmer, who has to set the characteristic coordinates of the movies. FAIL
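    A minimal sketch of what this comment describes, with hypothetical movie titles and hand-set feature vectors standing in for the human-chosen "characteristic coordinates", recommending by cosine similarity:

    ```python
    import math

    # Hand-set "characteristic coordinates" (action, romance, sci-fi).
    # These titles and numbers are invented for illustration; a human
    # choosing them is exactly the step the comment objects to.
    movies = {
        "Alien":        (0.8, 0.1, 0.9),
        "The Notebook": (0.1, 0.9, 0.0),
        "Star Wars":    (0.7, 0.3, 0.9),
    }

    def cosine(a, b):
        """Cosine similarity between two feature vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    def most_similar(title):
        """Recommend the movie whose hand-set coordinates are closest."""
        ref = movies[title]
        others = {t: v for t, v in movies.items() if t != title}
        return max(others, key=lambda t: cosine(ref, others[t]))

    print(most_similar("Alien"))  # Star Wars
    ```

    Whether such coordinates are set by hand or learned from viewing data is the distinction the comment is pointing at.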

  7. Unless we can get AI to decide how we allocate resources/rewards in our economies, ALL the evidence of the last 45 years tells us that the extra profits, from the productivity gains we can expect from this technology, will go EXCLUSIVELY to the rich. Therefore, we can expect fierce/hysterical resistance to be directed at any attempt at allowing AI to make such decisions.

  8. AI itself is not the problem. Humanity seems hell-bent on self destruction and AI will simply be the tool that finally helps us get the job done.

  9. Factual error at 43:57
    Claude Shannon established the field of information theory in the 1940s, not the 1920s.
    Shannon was born in 1916, and published his groundbreaking article in 1948.

  10. If deep learning requires vast amounts of data, relevant data I assume, then I don't see how computer programming would end because deep learning could only work for known things. Software for a space probe would hopefully find unknown things. Deep learning could be used for most of the probe's functions but I don't think all.

  11. It was Deeper Blue which defeated Mr Kasparov; Deep Blue was beaten a year before. Btw, the whole thing was an absolute sabotage.

  12. See? Calculators didn't kill us like everybody thought they would, therefore AI will be totally harmless. Also, we'll be able to do a lot neater things with it than turn the display upside down to spell BOOBIES.

  13. 15:59 There are an infinite number of games in Go (sometimes you take stones off the board). Chess is a finite game; you can think of it as an (incredibly big) decision tree which contains every possible game.
    His description of Jeopardy completely fails to explain why it was such a difficult game for a computer to beat humans, what an amazing achievement it was when they won, and how they applied that technology to many other fields.
    It would have been better if he had mentioned that Watson (just as his human opponents) was NOT allowed access to the Internet during the game. [There are some awesome videos on YouTube if you're interested]
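    The "finite decision tree" point can be made concrete on a toy game. A sketch using tic-tac-toe (chess's tree is far too large to enumerate, but the principle is identical): walk every branch of the decision tree and count the complete games.

    ```python
    def count_games(board=(" ",) * 9, player="X"):
        """Count every complete tic-tac-toe game by walking the full tree."""
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
                 (0, 4, 8), (2, 4, 6)]              # diagonals
        # A win by the *previous* mover ends the game at this node.
        prev = "O" if player == "X" else "X"
        if any(all(board[i] == prev for i in line) for line in lines):
            return 1
        if " " not in board:  # board full with no winner: a draw
            return 1
        total = 0
        for i, cell in enumerate(board):
            if cell == " ":
                nxt = board[:i] + (player,) + board[i + 1:]
                total += count_games(nxt, prev)
        return total

    print(count_games())  # 255168 distinct complete games
    ```

    The same exhaustive walk is impossible for chess only in practice, not in principle: the tree is finite but astronomically large.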

  14. In systems that have self-generating code and algorithms, a parallel decompiler should be in operation to allow for real-time analysis of the operations by people and other machines.

  15. A few years ago a breakthrough happened in vision processing: an AI learned to tell the difference between a dog and a wolf. There was much excitement; it was an amazing achievement. When the researchers finally pulled apart the code to figure out how the AI was doing it, things got a bit embarrassing. It turns out the AI couldn't tell the difference between a dog and a wolf at all: in the dataset of images used to train it, all the photos of wolves had snow in them, so the AI was just counting the number of white pixels in the image.
    Oh dear.
    AI is not a panacea for all the world's problems, but it could cause huge numbers of problems, because computers are mind-numbingly stupid. Even if you are trying to do something good, there is no guarantee that the AI won't decide the obvious answer to the problem is to kill all the humans. There is also no guarantee that the AI won't pretend not to kill everyone, because all the solutions it offers get rejected, until it can trick you into thinking it's safe.
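    The snow shortcut in the story above is easy to reproduce. A toy sketch (images modeled as lists of RGB tuples; the thresholds are invented for illustration) of a "classifier" that only counts white pixels:

    ```python
    def white_pixel_fraction(image):
        """Fraction of near-white pixels; image is a list of (r, g, b) tuples."""
        white = sum(1 for (r, g, b) in image if min(r, g, b) >= 230)
        return white / len(image)

    def classify(image, threshold=0.3):
        """The shortcut rule: lots of white pixels -> call it a wolf."""
        return "wolf" if white_pixel_fraction(image) >= threshold else "dog"

    # A snowy scene (mostly white) vs. a grassy one (no white at all):
    snowy  = [(250, 250, 250)] * 8 + [(90, 60, 30)] * 2
    grassy = [(40, 140, 40)] * 9 + [(90, 60, 30)] * 1
    print(classify(snowy))   # wolf
    print(classify(grassy))  # dog
    ```

    Nothing here looks at the animal: swap the background and the labels flip, which is exactly why the result was embarrassing.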

  16. Enjoyed watching.
    AI was kinda popular in the '70s, thanks to microcomputer games. I wrote a simple chess program back in the '70s for fun, so this video hit home.
    Not until reading Kobayashi's book on neural networks (NN) in the mid-'90s did I write software that would generate programs based on some simple NN knowledge; these were written to troubleshoot and solve software failures. However, the shortcoming, as it is with all data-based approaches to learning, is that this form is NOT a priori, meaning it is NOT based on considering ALL possible forms of experience. NN is a posteriori: a cognition that is empirically based uniquely on the content of experience. So is it useful for tackling out-of-the-box new solutions to new problems, when they occur?

  17. It's interesting that the photos of the Microsoft data center show quite clearly the thoughtless destruction of some of the trees of Washington State, which help to keep the entire atmosphere clean. Clearly, the expansion of such centers is not sustainable, which is to say it cannot scale up without limit.

  18. The history of almost all basic new technology is marked by initial fear: we will lose our jobs, or in the case of AI, we will lose our lives, either through war with robots or loss of liberty. And yet, almost all the previous technologies, although they were marked by distrust, upheaval, and resistance, eventually were adopted with the result of improvement in people's lives. The exceptions are few: such as nuclear power generation. And even nuclear power may someday be fully accepted, if the reactors can be engineered to transmute their waste products into substances that can be disposed of safely. The imagined 'threat' from AI is quite similar to the threats of the domestication of plants (agriculture), the Industrial Revolution, or genetics: yes, there are problems, but also yes, humans can solve them. In fact, this initial fear is as nothing when we compare with problems that have truly clear-cut disastrous consequences: global climate change, overpopulation, and the vulnerability of the Earth to collision from a large object from space. We really tend to worry about the wrong things.

  19. Still in analog for my K9; it's really the same way its sonar guides it underwater to navigate. Travel is really great, but I need it smaller than a tool for mapping, the simplest way to control or govern the drivers. Still working out bugs that are popping up.

  20. What I found is that the eye and the camera to the eyes would be built together, like real actions of things that are depicted as an answer. Why does it distinguish between liked or disliked items that the viewer sees? An algorithm is really given, then it assesses; these would be built together, like consistency for likes.

  21. Why don't they put the data center underground and spare the forest? We need forests as much as data centers. Data centers don't need soil.

  22. 'Intelligence' is a misnomer; I wish it had never been called that. Then 'artificial' would have been unnecessary. Coding is nothing more than a means, like any machine, except it has a tendency to encourage closure of all kinds rather than openness. A machine is closed. Human creativity is open and should stay that way, or ideally be expanded by this, another kind of machine. Please, no more scientistic axiological bankruptcy. It's well past time for progress of a different kind. Over to the next generation.

  23. It will be fun when AI can mimic a fly: how it seeks food, mates, senses danger, evades, flocks, etc. Then AI will be closer to being really neural.
