Since the invention of the first computer, the main principles of giving computers commands have remained unchanged. In traditional coding, you write a very specific set of instructions, taking edge cases and other dependencies into consideration. Thanks to that, you know exactly how the computer will react in any given circumstance. However, with the rise of today's information era, this method has become too limited. We had to start coding in a completely new way.
[Image: https://blog.verseo.pl/wp-content/uploads/2017/01/ml.jpg]
In this approach, you give the computer a set of inputs and a corresponding set of outputs and let the machine write its own directions to follow. For example, you give the computer 1000 pictures of cats and 1000 pictures of dogs. Then the computer looks at them and tries to build a classification system. This is machine learning. It is the technology behind self-driving cars, facial recognition, and many other brilliant inventions. Google Translate used to be more than 1 million lines of code; currently, it is about 500 lines that simply call a machine learning model.
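To make this concrete, here is a minimal sketch of that cats-versus-dogs idea in Python. It is only an illustration of the workflow, not any real system: scikit-learn is an arbitrary choice, and random vectors stand in for real, flattened cat and dog pictures. The point is that the programmer supplies examples and labels, and the decision rules are fitted by the algorithm rather than written by hand.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for 1000 cat and 1000 dog pictures, each "image" flattened
# to 64 numeric features (real code would load and flatten actual photos).
cats = rng.normal(loc=0.0, size=(1000, 64))
dogs = rng.normal(loc=0.5, size=(1000, 64))

X = np.vstack([cats, dogs])
y = np.array([0] * 1000 + [1] * 1000)  # 0 = cat, 1 = dog

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "directions to follow" are learned here, not written by a programmer.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```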
At the same time, this technology is creating social and legal challenges. Most machine learning models are called "black boxes", which means we cannot interpret how they arrive at their predictions. And if the historical data used to create a model was biased, the model will repeat the same bias in its predictions.
[Image: https://upload.wikimedia.org/wikipedia/commons/a/a9/Amazon_logo.svg]
A recent example is Amazon's now-retired recruiting engine, which was trained on resumes spanning a ten-year history. Based on the data it was fed, the algorithm learned a distinct preference for male candidates. Unfortunately, the team assumed that their historical data was free of bias, which resulted in a perpetuation of inequality and a significant PR disaster.
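To see how this kind of bias propagates, here is a toy sketch with entirely synthetic data; it is not Amazon's actual system, and the hypothetical "hired" labels are constructed to be biased on purpose. One input column encodes a group flag (say, gender), the historical hiring decisions favour group 1 at equal skill, and the trained model then scores two otherwise identical candidates differently.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

skill = rng.normal(size=n)          # the feature that *should* matter
group = rng.integers(0, 2, size=n)  # an encoded group flag, e.g. gender

# Biased historical decisions: group 1 was favoured regardless of skill.
hired = ((skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# Two candidates with identical skill who differ only in the group flag:
candidates = [[0.5, 0], [0.5, 1]]
print(model.predict_proba(candidates)[:, 1])  # group 1 scores much higher
```

The model has done nothing wrong mathematically; it has faithfully reproduced the pattern in its training data, which is exactly why auditing that data matters.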
Here you can find another interesting example:
1. Have you ever heard about this problem? If yes, where?
2. In your opinion, in which applications of machine learning could this problem also occur?
I know what machine learning is, and I know some of its pros and cons, but I have never thought about the issue you have described.
It is scary that someone's inability to select the proper initial data can mean that the program will have errors throughout its life.
I believe that this problem could occur everywhere machine learning is used, because people may not be aware enough of the consequences of bad data, or of what bad data actually is; Amazon is a great example.
I heard about this topic a couple of years ago, from the case of Google's face recognition algorithm, which classified images of black people as gorillas. It's a difficult issue, as it concerns the ethical approach to ML. While AI should support and improve social systems, it may in fact reinforce existing inequalities. I suppose biased artificial intelligence may negatively affect banking apps that estimate whether a person is suitable for a loan by taking their gender or race into consideration.
Actually, I've never been much into machine learning, except for last semester's project at school, so I probably won't be able to name any cases that I might have heard about.
I think there are a lot of great applications that use machine learning as a basis, starting from product recommendations based on your age group, location, and basic preferences, through VOD services such as Netflix, Prime, HBO GO, etc. But that's not all; personally, I think an interesting idea would be to use machine learning for a time/party scheduler based on previous users' choices and plans, their age group, and our further choices.
I've heard about machine learning, and I think it will be a great technology in the near future. For now, though, computers often make mistakes (sometimes funny, sometimes scary, e.g. when painting a human face). Apparently, when asked about people, a computer once concluded they should go extinct. Have you heard about it?
I haven't heard much about it, but it seems very interesting to me. I think that every technology at its beginnings was something strange to people, raising questions and doubts, and the more advanced the technology, the more intense those doubts and the more caution it demands. Failures are inevitable in the improvement process, and I hope that machine learning will bring a lot of benefits in the future. I do not know in which areas it could be useful, but I would like to learn more; I'm particularly interested in the beginnings of this technology.
Machine learning is quite a buzzing topic nowadays. Almost everyone uses ML solutions day by day, even without being aware of it.
Machine learning provides us with plenty of benefits, for example, it allows automating difficult processes, making predictions, and supporting business decisions. Unfortunately, ML also has lots of disadvantages. As Magda said, there is a danger that ML can reinforce existing inequalities. The problem of biased predictions from ML algorithms also appears in an AI medicine project in the USA, where an algorithm predicts what disease a patient has based only on symptoms and historical data. You can guess that the diagnosis is not always correct.
I’ve never heard about this, but it sounds like something important.
The evolution of machines has come so far that they are slowly beginning to imitate people.
I think this technology can be very helpful in business. I'd like to learn more about it, but at the moment I’m not sure in what context it could be a problem.
I remember the case of Tay, a bot created by Microsoft based on machine learning, which was supposed to imitate a teenage girl on Twitter, but after a few hours it turned into a vulgar, racist cyber-creature, and after a dozen hours it was switched off. :) It was a PR disaster for Microsoft, a story similar to the Amazon ML defeat you mentioned. I probably read about it in a newsletter I received in my mailbox.
I think this problem occurs everywhere ML is used, but the thing that frightens me the most is the vision of fatal mistakes in autonomous car or airplane systems.
This topic is completely unknown to me; in my opinion, it is highly specialized. Honestly speaking, even after reading the above article, I can't understand the topic at all.
Is it just me, or is this case similar to the disclosure of data from Facebook a few years earlier? Although I may be wrong.
I find it difficult to comment on this topic. If I understand correctly, this technology can be used for social surveillance. That sounds very disturbing and scary; such solutions strip us of privacy and freedom.
If I am wrong, can someone explain it to me with simple examples? I am a bit curious about this topic.
Kinga, it's about machine learning - artificial intelligence. Do you remember our 'KCK' classes last semester? It's Mrs. Zawadzka's profession. She explained it to us. ☺️
And did you know that Professor Krzysztof Wołk is a specialist in this field? His research on statistical methods of machine learning has been recognized as among the most cited in the world! 🌏
Oh, Zuza, how do you know that? 🧐
This semester we have classes with Professor Wołk, and I wanted to check what he does, because a lot of our lecturers have taken part in interesting projects. I was surprised that his research area concerns machine learning. You can read more about it on Wikipedia, for example. ✅
Unfortunately, I've never heard about this problem. The subject you raised in the article is something new that I had no idea about. I will definitely try to find more information on the subject! What you mentioned is very interesting. Unfortunately, there are many areas we can say little about. This is definitely something for me!
Unfortunately, at the moment my knowledge of this subject is quite limited, so I can't say much about it.
Yes, I've already heard about this problem; I face it every day. For example, when browsing the Internet, we encounter various types of marketing offers. They are often mismatched to our preferences, and things are displayed that we simply do not want to buy. This shows that models can be wrong, which is why they have a misclassification rate: not all predictions are accurate.
I think this problem can also occur in applications connected with political issues. An extremely dangerous example is social scoring, as it may control someone's key life decisions.
Frankly speaking, I hadn't thought about this problem before. Many machine learning engineers say that the most important part of the process is data tuning; you wrote the same in your article. Another big risk is blind faith in what the algorithm suggests. Because the algorithms get better every day, we start to treat them like an oracle, which can have very negative consequences. Artificial intelligence is a fairly new field of knowledge, and that's one of the reasons we are so excited about it.
Have you ever heard about this problem? If yes, where?
To be honest, I've never heard about this problem before, but it's quite interesting. It really leads to the conclusion that we have to be aware of what kind of data we're operating on (especially since some of us might already work, or will work in the future, on AI solutions), not only from a cold validation perspective but also from a moral point of view.
In your opinion, in which applications of machine learning could this problem also occur?
It won't necessarily be application-related, but I know that during the previous US elections, AI studied voters based on their social media activity (content liked/shared) and digital footprints. It then set up a personalized ad strategy to show each receiver exactly what was most important to them from a political-preferences point of view. As we all know, it was hugely successful in convincing people who had been torn between the candidates.
But on the other hand, it's just the next level of advertisement, quite similar to what traditional media did 40 years ago.
I have never heard about this issue. To me it is something new, and it is actually hard to comment on. But as I read the last part, about Amazon, I thought that in the future things like this could cause people to get fired if the technology improves. As AI gets better and better, a lot of humans will become redundant in the work they do, so I am glad that the Amazon engine has now been withdrawn.
But as I said, I don't even know if I am making sense; if I am wrong, please don't judge.
I've never heard about this problem, but I'm glad I learned about it. Even though I know machine learning can be useful in many areas, I believe it's very overused and has recently become just a slogan.
It's difficult to point to such an application, but generally I think ML is sooner or later going to fail in areas that change dynamically and require some sort of intuition, for example the stock exchange. I also think it shouldn't be used as a replacement for humans in direct face-to-face conversations, since that will cause more trouble than good.
This is the first time I have read about this problem in such a detailed description. I had only heard about machine learning and knew at a high level what it is about. This topic is in my queue of things to get familiar with. I don't know of an application in which this problem won't occur, so every cloud application can be affected.
No, I haven't heard about this particular problem, but I've heard about other similar ones. There were a few of them that we could read about on the Internet.
As far as machine learning is concerned, this problem can occur in many of its applications. Selecting the input data set properly is crucial for proper application behaviour.
I haven't heard about this problem, but I participated in a project where we used machine learning, so I know that bad input can be catastrophic for the whole system. I think machine learning can be used in people recognition or in security systems; a system that has knowledge of many various incidents will recognize a new one in no time.
I know what machine learning is, and I like learning new things about it; I actually think it is our future.
I haven't heard anything about the problem you describe, but I can imagine problems that can occur nowadays. Generally, using AI, hackers can gain access to IT infrastructure and take control of whole systems. Machine learning also allows criminals to analyze large amounts of stolen data and identify potential victims.
I think the technology itself isn't bad, but it can become a bad thing in bad people's hands.
Pretty cool article, Jakub. As for your questions: yes, I heard about it from a data science article posted on Medium. There are a lot of sources showing how untrustworthy machine learning can be, and far fewer articles showing the benefits of data science. It's normal that negative news is more interesting, like this story about Amazon. Similar machine learning problems will occur in every situation where a computer has to judge people, which arouses people's bitterness, for example a program that has to rate how attractive the tested people look.
I think ML is definitely the direction of the future. All kinds of organizations and institutions are interested in being able to predict the future based on the past. I think that soon it will be widely implemented; every single product on a store shelf will be backed by computer calculation. The threat I notice is that this could lead to violations of privacy, as governments would try to dig deeper into our data. I would definitely be afraid that governments might want to gather information about who their potential voters could be, or that a big social platform might reveal sensitive data to a big company so it would know to whom to advertise its products.