Since the invention of the first computer, the main principles of giving computers commands have remained unchanged. In traditional coding, you write a very specific set of instructions, taking edge cases and other dependencies into consideration. Thanks to that, you know exactly how the computer will react in any given circumstance. However, with the rise of today's information era, this method started to become too limited. We had to start coding in a completely new way.
https://blog.verseo.pl/wp-content/uploads/2017/01/ml.jpg
In this approach, you give the computer a set of inputs and a corresponding set of outputs, and let the machine write its own directions to follow. For example, you give the computer 1,000 pictures of cats and 1,000 pictures of dogs. The computer then looks at them and tries to build a classification system. This is machine learning. It is the technology that lets us create self-driving cars, facial recognition, and many other brilliant inventions. Google Translate used to be more than 1 million lines of code; currently, it is about 500 lines that simply call machine learning models.
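To make the idea concrete, here is a minimal sketch of learning from examples rather than hand-written rules. It uses only the Python standard library, and the "images" are reduced to two entirely hypothetical numeric features (e.g. ear pointiness and snout length); real systems would learn from raw pixels with far more sophisticated models.

```python
# Toy illustration of supervised learning: instead of hand-coding rules,
# the program derives its own decision rule from labelled examples.
# Each "image" is a hypothetical 2-number feature vector (illustrative only).

cats = [(0.90, 0.20), (0.80, 0.30), (0.95, 0.25)]  # labelled cat examples
dogs = [(0.30, 0.80), (0.20, 0.90), (0.35, 0.85)]  # labelled dog examples

def centroid(points):
    """Average point of a class -- the 'rule' the machine learns from data."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

cat_center = centroid(cats)
dog_center = centroid(dogs)

def classify(x):
    """Label a new example by its nearest learned class center."""
    d_cat = sum((a - b) ** 2 for a, b in zip(x, cat_center))
    d_dog = sum((a - b) ** 2 for a, b in zip(x, dog_center))
    return "cat" if d_cat < d_dog else "dog"

print(classify((0.85, 0.30)))  # near the cat examples -> "cat"
print(classify((0.25, 0.90)))  # near the dog examples -> "dog"
```

Note that nobody wrote an explicit "if ears are pointy, it's a cat" rule: the decision boundary falls out of the labelled examples themselves, which is the core shift from traditional coding.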
At the same time, this technology is creating social and legal challenges. Most machine learning algorithms are called "black boxes", which means we can't interpret the way in which they make their predictions. If the historical data that was used to create a model was biased, the model will repeat the same bias in its predictions.
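The bias-repetition mechanism can be shown with a deliberately simple sketch. The data below is entirely fabricated for illustration, and the "model" is just historical frequency; it stands in for any model fitted to skewed outcomes, not for any real company's system.

```python
# Hypothetical historical hiring records: (gender, hired).
# The skew reflects past human decisions, not candidate ability.
history = ([("m", True)] * 80 + [("m", False)] * 20
           + [("f", True)] * 20 + [("f", False)] * 80)

def predicted_hire_rate(gender):
    """'Model': predicted hiring probability = frequency in past data."""
    outcomes = [hired for g, hired in history if g == gender]
    return sum(outcomes) / len(outcomes)

# Identical candidates get different scores purely because of the data.
print(predicted_hire_rate("m"))  # 0.8
print(predicted_hire_rate("f"))  # 0.2
```

The model has faithfully learned exactly what it was shown; the problem is that what it was shown already encoded the inequality.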
https://upload.wikimedia.org/wikipedia/commons/a/a9/Amazon_logo.svg
A recent example is Amazon's now-retired recruiting engine, which was trained on resumes spanning a 10-year history. Based on the data it was fed, the algorithm learned a distinct preference for male candidates. Unfortunately, the team assumed that their historical data was free of bias, which resulted in a perpetuation of inequality and a significant PR disaster.
Here you can find another interesting example:
1. Have you ever heard about this problem? If yes, where?
2. In your opinion, in which other applications of machine learning could this problem occur?