Wednesday, 5 October 2016

Week 1 [03.10-09.10.2016] Sam Harris: Can we build AI without losing control over it?

Watch the presentation Sam Harris: Can we build AI without losing control over it? at https://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it#t-22278 and present your opinion on it/discuss it. 

14 comments:

  1. This comment has been removed by the author.

  2. Artificial intelligence and intelligent machines can help us in life. Life is better with them, but these machines can't be more intelligent than we are, because man must master the machine and not vice versa.

  3. In my opinion there are a lot of risks and opportunities - everything has positive and negative consequences. On the one hand, artificial intelligence helps people with disabilities and the sick (for example, people with autism); on the other hand, it extends life (I do not want anyone to think that I am an enthusiast of eugenic methods), but this puts a burden on the system. Artificial intelligence certainly solves many problems, but it also creates new challenges, for example in the field of morality (autonomous cars or robot rights).

  4. I have to admit that I agree that further development of technology and AI is inevitable. However, I don't like the definition of intelligence which was used in the talk. The ability to process huge amounts of data cannot be equated with intelligence. So far computers are just tools, the same as hammers or axes. They are designed for a specific job.

    Indeed, they can perform better than a human doing the same thing. But does an axe understand why it chops wood? Does a hammer know why it hits nails? Nope. Neither does a computer understand why it processes data.

    The main problem in the development of AI is making a program become self-conscious. There's no answer to how to do it and, as far as I know, we don't even have a clue in which direction we should look to find a solution to this issue. Therefore I can well imagine that the implications of inventing (creating?) AI may trigger a number of unexpected and undesirable effects. However, I think that one of the base assumptions is wrong. The human race is a little bit too proud and overbearing in thinking that it can create something greater than itself. After billions of years we can barely understand what we are, and yet we are afraid that we may create something which is SO MUCH greater than we are.

    Too early, too much.

    Replies
    1. Well, I agree that a machine with real intelligence should be self-conscious. But I can imagine a machine that, although it is not self-conscious (and therefore not intelligent), can still build other machines that are improved copies of itself. It can build them, measure whether a specific factor got better or worse, and then decide which improvement was more beneficial. Consciousness is not necessary to do such things. If such building machines were multiplied and their actions accelerated, such evolution would be much faster than ours. If such a machine detects that humans hinder its improvement, it can "decide" to eliminate them. It wouldn't do this with consciousness, but the machine would still fight with people.

      To sum up, I think that even if machines cannot be super-intelligent, they may eventually destroy humankind if we allow them to build and improve themselves without our control.
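      A minimal sketch of that blind "build a copy, measure, keep the better one" loop, in Python, assuming purely hypothetical mutate() and score() stand-ins for building an improved copy and measuring the factor of interest - no consciousness involved, only selection:

      import random

      def score(design):
          # Hypothetical measure of how well this "machine" performs its task
          # (higher is better); a real system would run some benchmark here.
          return -abs(design - 42.0)

      def mutate(design):
          # Build a slightly different copy of the current design.
          return design + random.uniform(-1.0, 1.0)

      design = 0.0
      for generation in range(10000):
          candidate = mutate(design)
          if score(candidate) > score(design):  # keep whichever copy measures better
              design = candidate

      print(design)  # drifts toward a "better" design with no understanding of why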

  5. Artificial intelligence and robotics help people nowadays. Disabled people have a much easier and more enjoyable life thanks to specialized machines. AI improves various industries, e.g. production or medicine. If humans don't lose control over AI, the world really is going in the right direction and in 100 years we will be indestructible. It would be worse if we lost control of AI; hopefully we won't wake up in "the Matrix".

  6. It is possible, but considering people's greed for money, power and position, it is hardly realistic.

  7. It seems that the creation of AI is not a matter of "if" but "when". As Sam Harris said, the safest way to constrain artificial intelligence is by connecting it to our minds. Maybe that is what we should do: enhance ourselves before allowing AI to become self-aware. I believe that we can accelerate our own evolution, not just use our creations to make things for us.

  8. I think that it's impossible not to lose control over artificial intelligence. Intelligence is something independent, so it's only a matter of time before we lose control over it. The question is whether this will cause us trouble or not.

  9. We can say that the above-mentioned talk presented an argument that nothing can stop humanity from improving artificial intelligence. There is a thin line which people may cross unintentionally: they could build a machine which improves its own intelligence without human participation, to an extent that is unreachable for humans. This could give rise to many global economic problems, like unemployment or hunger.
    At present, there are no conditions under which AI can be improved in a way that is safe for us.

  10. Machines have been part of our lives nearly from the beginning of our existence. Like people, they have their own evolution. I'm both excited and scared about AI. Nowadays we can't build an independent machine, but who knows; maybe in the near future we'll write code which will be able to modify itself.

  11. I think we should push the limits of AI, but we have to be careful and put in some failsafes, just in case an AI wants to go rogue. Recently Google said they've put a shutdown "button" in their newest AI project; good preparation and some procedures should keep AI in check.

  12. I think that AI will be a 'must have' technology over the years. At the moment we can find many books about AI, but this technology is not complete.
    In today's world we come into contact with data at every step, but a human does not have the capacity to analyze all of it.
    Many people think that AI is very dangerous, because it strongly interferes with our privacy.
    I think that we should not be afraid of this technology. Yet.
    AI is no substitute for human work; it only supports people.
    IT people know the direction of technology development. I hope that it will be safe and will not affect our security and privacy.

  13. I don't think that AI nowadays could be a big danger. It's not some super mastermind gaining control over the world... these are just algorithms prepared to perform specific functions - help you browse the internet, drive a car, find an ad you may be interested in... I don't see any SkyNet around ;)
