Monday, 16 October 2017

Week 2 [23-29.10.17] Can we teach robots ethics?

Read the article at http://www.bbc.com/news/magazine-41504285 and discuss it here.

16 comments:

  1. “If driverless cars save life overall, why not allow them on to the road before we resolve what they should do in very rare circumstances?” That’s the question which came to my mind after reading a few paragraphs. It’s nice that the Land Rover employee thinks similarly.
    A statistic from the Association for Safe International Road Travel: “Unless action is taken, road traffic injuries are predicted to become the fifth leading cause of death by 2030.” Those are huge numbers. Autonomous cars should be part of that “action”, even if they kill a few people through some unforeseeable mistake, because they can save the lives of millions. In my opinion real drivers are more dangerous: as the article mentions, computers can’t get drunk or fall asleep during a road trip. That’s my view on autonomous cars.
    In the case of robotic assistants I’m more sceptical. If they are not going to be “humanized”, they can’t make the correct decision every time, and that can cause problems. But a robot with emotions and an ethical sense? That vision frightens me.

  2. This article presents the well-known argument: what will the self-driving car do? Will it kill us or someone else? I think it depends on the manufacturer of such a system, and it seems to me that the manufacturer should care more about our safety. And what if these cars communicate with each other? Will there be no accidents at all, or will they just be less frequent? We will see :)

  3. I totally agree with Rafał. Autonomous cars are already here and it is the manufacturers' duty to program them to be safe. There are multiple articles stating that Tesla's Autopilot saved a driver's life because it reacted to a threat that a human could not see. I hope autonomous cars keep on developing and will communicate with each other to keep us all safe!

  4. There have been numerous attempts to gather data from humans for AI to learn from about what's moral and what's not. The main topic of the article, i.e. driverless cars, has been investigated by academic institutions such as MIT in social experiments like http://moralmachine.mit.edu
    You can even compare yourself to the rest of society at the end of this quiz.
    A similar talk about driverless car moral decisions worth mentioning is this TED talk (also linked on the Moral Machine site): https://www.ted.com/talks/iyad_rahwan_what_moral_decisions_should_driverless_cars_make
    As for the quote from the article, "Since it would be both silly and unsatisfactory to hold the robot responsible for an action (what's the point of punishing a robot?) (...)" - to be honest, I find it ridiculous.
    The main point of interest of self-learning systems (like machine learning) is to learn from appropriate samples, but also from mistakes.
    If we manage to teach an AI that an unacceptable moral decision is the wrong one, we will earn the right to judge it on the same basis as we judge children: forgiving at the beginning and punishing in the long term (which is appropriate if we intend to make AI-based robots/vehicles live among us as part of society).
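    A minimal sketch of that "forgiving at first, punishing later" idea, phrased as a training loop in which the penalty for excusing an unacceptable decision grows over time. Everything here is hypothetical - the data, the model and the schedule are invented to show the shape of the idea, not any real system:

    ```python
    import random

    def penalty_weight(epoch, total_epochs):
        # Forgiving early (weight near 0), strict later (weight near 1).
        return (epoch / total_epochs) ** 2

    # Fake training data: (situation features, morally acceptable? 1/0).
    random.seed(0)
    data = [([random.random() for _ in range(3)], random.choice([0, 1]))
            for _ in range(100)]

    weights = [0.0, 0.0, 0.0]
    LEARNING_RATE = 0.1
    TOTAL_EPOCHS = 50

    for epoch in range(1, TOTAL_EPOCHS + 1):
        strictness = penalty_weight(epoch, TOTAL_EPOCHS)
        for features, acceptable in data:
            score = sum(w * x for w, x in zip(weights, features))
            predicted = 1 if score > 0.5 else 0
            error = acceptable - predicted
            # Mistaking an unacceptable decision for an acceptable one is
            # corrected more and more strongly as training progresses.
            scale = strictness if (error != 0 and acceptable == 0) else 1.0
            for i, x in enumerate(features):
                weights[i] += LEARNING_RATE * scale * error * x
    ```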

    Replies
    1. First of all, thank you for the link to Moral Machine. It was fun, but the summary was not very accurate in my opinion (I didn't care about the gender or age of the victims, but it claimed that I did, a lot).
      Going back to the main question: can we teach robots ethics? The answer is: we can't. We can provide them with algorithms and data, and based on this information a robot can make some decisions. If it decides to do what the majority of people think it should do, that doesn't make its decision more or less ethical; it's just based on the data it was provided with, that's all. AI is just algorithms and data. If we want to talk about ethics, we can discuss the ethics of the developers behind those algorithms.

    2. "We can't. We can provide them with algorithms and data. Basing on this information robot can make some decisions." - but this is exactly how we learn. Does it mean we can't teach humans ethics? How could you tell the difference (provided we assume developed algorithm for self learning is flawless)?

    3. For me, the more doubtful part of the title question is not the second half but the first. Are WE qualified to teach anyone ethics if we ourselves don't know the answers to some morally tricky and ambiguous questions?

  5. In theory the car should make decisions similar to a human's. The car's AI would learn to make decisions from a group of people, probably an enormous group. The outcome should be the same as the decision taken by the majority of the group (see the sketch below).
    The real question is: would the accident really happen?
    I agree with Rafał. Remember, it's an AI which can compute faster than a human brain and has a faster reaction time. The article presents a problem with two choices: hit the motorcycle or the kids who rolled out of the grass. Well, I'd say hit no one and just turn and drive onto the grass the kids rolled out from. It's not a train; there are always more than two choices.
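    A crude sketch of that "decision of the majority" idea: aggregate many human judgements about the same situation and return the most common choice. The situation name, actions and votes are all invented for illustration:

    ```python
    from collections import Counter

    # Hypothetical crowd judgements: for each situation, the action each
    # surveyed person said the car should take.
    votes = {
        "kids_vs_motorcycle": ["swerve_to_grass", "swerve_to_grass",
                               "brake_hard", "swerve_to_grass",
                               "hit_motorcycle"],
    }

    def majority_decision(situation):
        # Pick whatever most respondents chose for this situation.
        tally = Counter(votes[situation])
        action, count = tally.most_common(1)[0]
        return action, count / len(votes[situation])

    action, support = majority_decision("kids_vs_motorcycle")
    print(f"Majority says: {action} ({support:.0%} of votes)")
    ```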

  6. This comment has been removed by the author.

  7. I think that in a few years self-driving cars will be common. This sounds great, but can you think of the dangerous situations (accidents, children playing football near a street, etc.)? The programmers should make the software responsible for detecting such situations. Do you know `the trolley problem`? I think it is the most important problem for such technologies.

    Please visit this page and play a simple game. It shows many moral situations on the roads.
    http://moralmachine.mit.edu

  8. I think that with robots we can talk more about probability than about ethics. In the case cited by the article, the computer will calculate the probabilities and select the least harmful variant. A human in the same situation may be completely unpredictable and may not even react at all.
    By 2027 computers will have enough computing power to quickly analyze all of a car's sensor data and make the best possible decision. A sketch of such a selection is below.
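    A minimal sketch of that "least harmful variant" selection: expected harm as probability times severity, with the lowest-scoring option chosen. All option names and numbers are invented for illustration:

    ```python
    # Hypothetical manoeuvres, each with an estimated probability of
    # causing harm and an estimated severity of that harm.
    options = {
        "brake_hard":     {"p_harm": 0.30, "severity": 2.0},
        "swerve_left":    {"p_harm": 0.10, "severity": 8.0},
        "swerve_right":   {"p_harm": 0.05, "severity": 9.0},
        "stay_on_course": {"p_harm": 0.90, "severity": 5.0},
    }

    def expected_harm(option):
        return option["p_harm"] * option["severity"]

    # Select the least harmful variant.
    best = min(options, key=lambda name: expected_harm(options[name]))
    for name, option in options.items():
        print(f"{name:15s} expected harm = {expected_harm(option):.2f}")
    print("chosen:", best)
    ```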

  9. I’m not a big fan of artificial intelligence and I don’t believe robots can change our lives in a positive way. It’s dangerous to be so sure when we talk about robot ethics. It doesn’t exist and it won’t! I have only one scene in my mind, and it’s from the film Terminator. It’s a big mistake to think artificial intelligence will make life better. One thing I’m sure about is that robots could some day become our enemies, and what helps us now may not in a few years. I know that people are various, so robots could be too, but I can’t stop feeling bad about robots. I have some suspicions about this. I wish I were wrong, but no matter what, some people will keep working on it; it’s too exciting for some of them not to develop it. Have you ever heard about Facebook’s bots which learned a new secret language? It’s a teaser. That is how the world will look: full of bots speaking secret languages. It’s paranoia!

  10. Oh good, we are slowly reaching this scary moment where a car, with premeditation, smashes into, for example, a tree because it's the "better decision".
    No, it can't happen. Some ethical decisions should be reserved only for humans.
    I also keep thinking about "bugs" in ethical code. Let's say a huge robot corporation introduces a new update with a coding problem. After that, robots make wrong decisions at unexpected times and consequently kill people.
    It looks like we've got a disaster :(

  11. There are Asimov's three laws of robotics:
    A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    If Elon Musk is scared of AI, we should be too.

    https://www.youtube.com/watch?v=03QduDcu5wc

    I don't think we can teach robots ethics. Only rules.
    Ethics=Rules?
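    If ethics really did reduce to rules, the three laws would be easy to encode as an ordered priority check. A toy sketch, where every predicate is a hypothetical placeholder:

    ```python
    # Toy encoding of the three laws as an ordered priority check.
    # The predicates below are trivial stand-ins; deciding what actually
    # counts as "harm" is the hard, unsolved part.

    def would_harm_human(action): return "harm" in action
    def is_human_order(action):   return "ordered" in action
    def endangers_robot(action):  return "self_destruct" in action

    def permitted(action):
        # First Law: an action that harms a human is forbidden outright.
        if would_harm_human(action):
            return False
        # Second Law: a human order must be obeyed unless it breaks the
        # First Law (already excluded above).
        if is_human_order(action):
            return True
        # Third Law: self-endangering actions are forbidden unless
        # required by the first two laws (also excluded above).
        if endangers_robot(action):
            return False
        return True

    print(permitted("ordered: fetch coffee"))   # True
    print(permitted("ordered: harm intruder"))  # False (First Law wins)
    ```

    Even in this toy, all the difficulty hides inside the predicates: the rules are easy, but deciding what counts as "harm" is the ethics part.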

    https://www.vice.com/en_us/article/ywwba5/meet-the-artist-using-ritual-magic-to-trap-self-driving-cars

  12. Reading the introduction, I recalled the (later presented in the text) famous trolley dilemma. It’s a thought experiment, a puzzle impossible to solve, where no answer is correct and none is incorrect either. Participants’ usual reactions reflect subconscious inconsistencies in humans’ logic systems. These moral ambiguities are still relevant - and presumably always will be. That’s just a part of being human. And that’s fine - or rather, it used to be, until now, when new moral dilemmas appear along with technological development. Are we in a position to teach morality? Is it even possible for it to be taught? If so, maybe we should create robots presenting different approaches - just as people differ from one another.

  13. I'm really scared about this topic.
    Teach ethics? Please. We can learn a song or a poem, but ethics is something much bigger: a complicated ideological schema with a lot of twists and turns. A human can decide some ethical problems in seconds; many times it comes straight from the heart.
    In my opinion a machine will never have that much feeling.
