Generally, AI is associated with walking, talking, working, even loving or evil robots similar to humans. That's why people ascribe many threats to AI. The great fear is that intelligent robots or systems will take control of human society. It could be noticed in one of Stephen Hawking's [1] recent interviews for The Independent (see the video and article in the bibliography).
Above all, I want to emphasize that AI is already a part of our daily life. Sophisticated AI
algorithms are used almost everywhere:
• Google uses AI for contextual search
• Medical diagnostics uses AI tools for diagnosis and medical image description (ultrasound, tomography, X-ray, etc.)
• Surveillance systems use it for simple tasks like raising an alarm when someone breaks into a critical area (e.g. a corridor), but also for more sophisticated problems like recognizing humans in a video stream, object tracking, and event and action recognition (walking, talking, meeting, etc.)
• Self-driving cars
• Computers winning at Jeopardy! and chess
• And many others.
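Taking the surveillance bullet as an example: at its simplest, "alarming when someone breaks into a critical area" can be done with plain frame differencing, long before any learned model is involved. The sketch below is only an illustration of that idea under my own assumptions; the function name, thresholds and synthetic frames are invented for the example, and real systems use far more sophisticated detectors.

```python
import numpy as np

def intrusion_alarm(prev_frame, curr_frame, roi, threshold=25, min_changed=50):
    """Raise an alarm when enough pixels change inside a critical region.

    prev_frame, curr_frame: 2-D uint8 grayscale images (numpy arrays).
    roi: (top, bottom, left, right) bounds of the critical area.
    threshold: per-pixel intensity difference treated as motion.
    min_changed: number of changed pixels that triggers the alarm.
    """
    top, bottom, left, right = roi
    a = prev_frame[top:bottom, left:right].astype(np.int16)
    b = curr_frame[top:bottom, left:right].astype(np.int16)
    changed = np.abs(b - a) > threshold       # pixels that "moved"
    return int(changed.sum()) >= min_changed  # True -> sound the alarm

# Two synthetic 100x100 "camera frames": a bright 20x20 object
# enters the critical corridor (rows 40-80, cols 40-80).
empty = np.zeros((100, 100), dtype=np.uint8)
intruder = empty.copy()
intruder[50:70, 50:70] = 255

print(intrusion_alarm(empty, empty, (40, 80, 40, 80)))     # False: nothing moved
print(intrusion_alarm(empty, intruder, (40, 80, 40, 80)))  # True: object entered
```

In practice the thresholds would be tuned per camera and lighting conditions; the point is only that the simplest surveillance trick is arithmetic on frames, while the recognition and tracking tasks mentioned above need real AI models.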
AI, and technology in general, creates new opportunities in the near and far future, both in the areas above and in new ones like:
• Space exploration
• Deep mining
• And others.
So, the big question is: "Is AI a danger to the human race, or does it create new opportunities?"
If we think about AI as a danger, we generally imagine that in the future we will create an intelligent robot which will enslave us. These are projections of our fears from history. Many flesh-and-blood people tried to do this, but they never succeeded in the long run; there was always a part of the world where they could not reach their goal (e.g. Hitler, Stalin, Alexander the Great). So we fear that a creature will appear (e.g. an AI robot or a computer system [2]) with such outstripping intelligence that it will reach the goal; after all, robots don't die.
Personally, I think this is a great subject for a book, a movie or a computer game, but I don't think it is a real danger, because we have no evidence that growing intelligence means growing "humanity". We can observe that relation in nature, but it concerns biological beings; we don't know if this rule will hold in the "silicon" world. In other words, the question is whether (a) the human soul grows from our biological structure, or (b) the human soul and body are two parts of humanity, separate but influencing each other. I believe that option (b) is real. It means that human feelings and desires are not the results of a sophisticated "computation unit" in our heads; they grow from our souls and are merely injected into our bodies (not only the brain but also other parts). So I believe that even a very, very intelligent robot or system wouldn't have feelings, desires and ambitions. And these properties of the human soul are necessary to enslave humanity.
I have to admit that I agree with Professor Hawking that there is a need for a discussion about the future of AI. However, I think AI is not a problem in itself but part of a wider one: the influence of technology on human society. We need to discuss several issues in the area of technology usage:
1. How strongly can we trust technology? (E.g., should we entrust driving a car or flying a plane to it at all, or should there be a human evaluating the work of such a system?) Technology can also make errors, as is shown in the movie linked in the bibliography.
2. Where are the borders between technology assisting human life and technology controlling it? E.g., should we construct an artificial nanny that will decide how to bring up and educate our children, or even substitute for parents completely? It is a great temptation to construct tools that decide instead of us.
3. How do we create laws that keep pace with technical capabilities?
4. Could a machine (or system) be an executor of the law? Could a machine (or system) be used for creating laws? Should we even be thinking about constructing such machines?
5. Could a human be subordinated to a machine or system? Maybe such a situation is arising right now? Systems in corporations and government units have an ever greater influence on human behavior. Is this a good direction for the human race?
Just try to answer these questions; I'm interested in your opinions. I've only mildly touched upon this interesting subject, but I count on an interesting discussion.
Bibliography:
http://bcove.me/alr0d1u4
http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html
http://www.youtube.com/watch?v=xJqhNKxWVVo
http://en.wikipedia.org/wiki/Stephen_Hawking
Notes:
[1] Stephen Hawking is one of the most famous contemporary theoretical physicists and cosmologists, working at the University of Cambridge. He has a motor neuron disease which has taken away almost all of his ability to move and to speak. He uses sophisticated equipment that allows him to move and to talk through a speech synthesizer.
[2] The same type of thinking underlies belief in extraterrestrial beings, i.e. UFOs.
Kamil
The video you put in your presentation does not work.
1. Our level of trust in advanced technology is presently too low, not too high. If we look at the facts, we can see how complex systems of the kind discussed in your introduction have made our lives safer over the past years.
The use of the Discovery Channel clip is at best unbalanced: as the Economist graph shows, modern air travel is safer today than ever, and high technology is one of the deciding factors behind it. Airlines are carrying more passengers than ever, and they have fewer accidents than ever. While there is no data there on the number of accidents caused by computers, other evidence tells us human error is the main factor behind the accidents. What's telling is that in this and many other comparisons (http://www.planecrashinfo.com/cause.htm), autopilot failure does not even have its own category. Finally, investigations into the rare accidents where the "autopilot system" was to blame point mostly to failures unrelated to the AI or the programming itself, but to some other mechanical problem within the system.
The statistics for the driverless cars pioneered by Google look even better so far, so where transport is concerned I trust our robotic overlords to come in and do our jobs just fine.
Questions 2 and 4 are connected, and concern our attitude towards ourselves more than the ability of computers to make these calls and perform these duties. It took Darwin, Huxley and Dawkins 200 years to convince us we're not as special as we thought ourselves to be, and I think it will be some time before we allow robo-nurses anywhere near our children.
With regard to the more complex issue of deciding right from wrong, legislating based on those decisions, and perhaps in the future executing the laws, the question is whether we can devise scientific methods of dealing with complex moral issues. As long as that is possible, it is probable that a computer algorithm of sufficient complexity will be able to deal with them in a less biased way than a person would. Perhaps in this there is a chance for people who are victimised by the current system, affected by factors such as ethnicity, social position, wealth and looks.
Sam Harris, in his book "The Moral Landscape", offers one way of looking at this problem. A very interesting TED talk can be found here: http://youtu.be/Hj9oB4zpHww
3. Laws concerning technical capabilities should be written carefully and in cooperation with people familiar with the matter. It is a specialist field, and members of the government are often out of touch with average household appliances and mobiles; they can't be expected to write good laws on advanced technology.
5. All men want to be free. That's not exactly true in practice, but it's a good approximation of the truth. Niccolò Machiavelli considered the forms of state and government, and while he didn't envision humanity having robotic overlords, I think people would still behave in a similar way whether their king/head of state/master was a human or a robot. This is to say that future robotic overlords would have to consider the same issues that different states, societies and leaders are occupied with today. It would actually be interesting to see whether they would choose to give some or all power to the people, hoping that in turn the populace would legitimise their rule, or whether they would rule with an iron fist. Machiavelli writes that he knows of no human tyrant capable of following through with all the toughest measures required to maintain a tyranny and prevent it from collapsing. Could a robot be this perfect tyrant? (This assumes our future robotic overlords do not hold maximising human wellbeing in any regard, and falls slightly into the cliché that all AI will inevitably rebel against mankind.)
Forgot to include the Economist graph ;) http://i.imgur.com/wSDBpU4.png
1. I think that it is easy to cross the border of trusting technology too much. It was shown in the movie: the pilots trusted technology so much that they "turned off" their own thinking. I think that is a big danger for the whole human race.
Ad 5. I don't think freedom is what people desire most; I think it is happiness. Another question is whether a human can be happy without freedom. I would say: it depends on the person's character.
Could a robot be a perfect tyrant? Well, I read about such a tyrant in one of Lem's novels. It reasoned that the country would be perfect if all people were identical and perfect. And what shape is perfect? A ball. So it simply turned all the people into balls, and everything was perfect :-)
I really believe AI is a chance. I don't think it could destroy humanity or the human race.
Everything might be dangerous, depending on how it is used and who is using it. You mentioned Hitler: maybe today it would be easier for him to do what he did, but still... he did it, even without the help of the tools available today.
I think AI is important and helpful. We can manage risk more easily; we can send a robot underground instead of a human. We can make things faster and get better results. But in my opinion that works because of the cooperation between AI and people: a machine can break down or make an error, and a human can make a mistake, but when we combine the two, our results are more trustworthy.
I don't believe in a world taken over by machines. I don't believe that people are able to create something smarter than they are; a machine needs to have something to learn from.
I agree with Agnieszka. We shouldn't get paranoid about world destruction, because it is unhealthy for us as a civilisation and for our progress. On the other hand, I think that human control over AI is inevitable.
When it comes to creating and executing laws, a machine can deliver useful information, but the final decision should be made by humans.
Today's technology is impressive and has huge capabilities, but we have to remember that we people stand behind it.
I'm of the same opinion as Agnieszka and Mikołaj. I don't see AI as a danger. I think we just need to control the way we use it and I believe we're able to do that.
AI can of course be a chance if used wisely. We should also remember that, according to Gartner reports (http://www.gartner.com/newsroom/id/2575515), lots of emerging technologies are related to human biology, e.g. 3D bioprinting, human augmentation, neurobusiness, biochips, brain-computer interfaces, etc. These are all forecasts for the coming 10-20 years... I think that AI development will go in line with these trends... Let's add the fact that most of the technologies we use were developed during a war or for military reasons...
Adding all these facts together lets me assume that sooner or later these technologies, AI included, will be used to create a perfect soldier: an augmented human... Wow, science fiction, isn't it?
Of course, as nowadays, there will also be plenty of technologies helping people, but there is a price to pay...
Yes, everything costs. But can we know, right now, whether this price won't be too high?
Kamil, it is a great topic. I agree with you 100% that intelligent robots will not be able to enslave us as long as they don't have desires. And that is not possible, because probably no one will manage to create or construct an AI with emotions comparable to a human's.
"Could a machine (or system) be an executor of the law? Could a machine (or system) be used for creating laws? Should we even be thinking about constructing such machines?" I think not. Of course, according to the law and its ordinances, a man who stole is a thief and should be punished. However, not everything is black or white, and the problems the justice system faces are complex. Because of this complexity, in many cases judges, lawyers and prosecutors are forced to make judgments based on their moral beliefs. Would a machine without morality be able to make the right decision using only a scientific algorithm? Would it take into account all the problems that could occur, as well as the consequences of a wrong decision for the person convicted?
"How strongly can we trust technology (e.g., should we entrust driving a car or flying a plane to it at all, or should there be a human evaluating the work of such a system)?" Our trust should be limited. Technology is created by people, who make a lot of mistakes, so there is no basis for assuming that technology is infallible and 100% dependable.
If I want to add a comment, I have to prove that I'm not a robot :-D
At present I see no threat to humanity from artificial intelligence.
But in the future, of course, the development of artificial intelligence may move significantly forward, and everything may change.
I think that at the moment artificial intelligence is not developed enough to have control over a human being.
The connection between these two things, the human being and artificial intelligence, gives very good results and makes many people's lives easier.
In my opinion, entrusting to a computer tasks responsible for human life or health is a big risk, unless another person watches over the work of the artificial intelligence.
This term we have many topics connected with IT: inventions, computer vision, virtual trips, and now this one...
I must agree with the preceding commentators. I do not think that we should be afraid of AI, for all the reasons already mentioned. Of course harm can be caused by mistake (https://www.youtube.com/watch?v=5IB019GSvXk) or by military use, like in the Terminator series (https://www.youtube.com/watch?v=R6wKoURGz_U), but I think that modern technology will help us even more in the future, not harm us.
Great topic.
Lots of people have mentioned that you cannot trust AI fully because it is created by the human brain, which tends to make mistakes. Don't you think that as soon as AI is created, it will take care of itself? It will self-educate, self-evolve and fix itself from human mistakes.
Also, there is an interesting idea that AI will try to emulate human emotions. Why? Simply because that's another way we communicate. It's true that our emotions are mainly related to our biology (hormones and so on), but AI (taking into account that it could learn, and know, everything) will pretty much be able to re-create the effect of biology, as those effects are anyway translated into signals that reach our brain.
So what if AI will not only think, but also feel? What then will be the difference between AI and human intelligence?
P.S. For those interested in this topic, I strongly recommend the following movie: http://www.imdb.com/title/tt1798709/?ref_=nv_sr_1
I hope AI will always assist humans, never harm them. At any rate, we must remember that AI is a human's work. Regarding the first question: I don't trust technology much. A machine can run with very high precision, but there is always a chance of errors. Personally, I'm a little freak of nature: I prefer to pay in cash. I don't like using credit cards. Maybe this is strange, but I don't feel safe holding only virtual money in my pocket. This is very common these days, but on the other hand it's a little irrational.
Hey, very interesting news! From my point of view, AI itself always was and always will be an opportunity, because it's proof that humanity develops and goes forward. Well, actually scientists do. The use of AI can be dangerous, though, and here we go: what's the answer? The real threat to a man is another man. The same question can be asked about anything else; for example, is a spoon a threat or not? You can still take a spoon and kill someone with it. The problem is that society gives freaks free access to everything.
I think that, like all human creations, technology is imperfect, just as people are imperfect. You cannot expect your creation to be flawless if you are not. We make mistakes even in simple things like writing or speaking, so we cannot say that our creations will be without any flaws. Maybe in some 1,000-10,000 years, if we do not kill ourselves in some war, there is a chance for us. If we try to make some AI earlier, I think it will be a disaster. Not at first sight: it will not destroy us in a physical way like in the Terminator movie, but in a mental way. We will stop using our heads, because people are generally lazy by nature. We do not like to do things if something can do them for us. So in time we will stop thinking and become ordinary, and probably there will be a backward evolution... So for now this is a threat to us. I do not think that creating an artificial nanny to decide about a child's education etc. is a good idea. As a small and limited aid, yes; otherwise I say strongly NO!
A machine or any system cannot be a law enforcer or executioner. This was very well presented in the movie "Elysium". A machine cannot reason about emotions, or think at all, so it cannot decide what someone is thinking, what a person's motive was, etc. Programming languages are still too primitive to create code good enough to build an AI that would be able to "feel" in some way. You should watch "Blade Runner"; it presents a lot of things the way we people would like to have or invent them. Once again I say this: we have flaws, so our creations will have them too. Following my old school teacher, who said that if you have flaws your creation will have even more of them, we should start paying more attention to improving ourselves than to creating more and more bad things.
That's a tough one. AI is such a big and sophisticated topic. I work in the CV industry, where I have non-stop contact with AI algorithms, and I still feel as if I knew completely nothing about it. I could write down my "opinions" about the topics presented in the article, but I doubt those opinions would come with any acceptable level of certainty. I mean... I know it's an English blog etc., but discussing whether AI can create an artificial human being is just inappropriate for me. That's a talk for the really big heads here :)
In my opinion, the continuous increase in the number of computers that control the space around us reduces the importance of humans. We are beginning to bring our problems to computers rather than to other humans. Our lives pass in front of computer monitors, away from the world. Is this normal? This question will be answered in a few years.
In my opinion, current software is stealing personal information from us. What will this lead to in a few years? How will our world look? How will people work?
Funny: if I want to add a comment, I have to prove that I'm not a robot. So maybe, in fact, on this blog we are afraid of robots?