Artificial Intelligence, Machine Learning, Thoughts

Future and ethics. Artificial Intelligence.

The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time-frame. 10 years at most.

Elon Musk

This quote from Elon Musk shows the thinking of a man who truly understands and uses Artificial Intelligence every day. Do we seriously need to care about it? In my honest opinion, there is, and is not, a clear answer (as a good Galician, I will always say: it depends). Disclaimer: this blog post is based on my own personal thoughts. Thanks to my Mobgen colleagues for helping me with the review and feedback.

State of the art of A.I.

We are going through a time where A.I. is regarded as the next mainstream technology. Most companies are focusing on introducing A.I. into a wide variety of areas: decision making, fraud detection, Augmented Reality, or simple classification and recommendation problems. This is probably where most of us are. Nowadays, we still face problems that can be stated as social problems, since they affect most of us on a daily basis: from detecting our car's plate number in a parking lot, to the systems that assist banks in measuring the risk of an operation. All these algorithms have biases, in the same way that, unconsciously, our minds use biases.

Bias everywhere

Can it be worse? Indeed, it is even worse. I will not follow the topic of an apocalyptic future where AI turns against humanity with superhuman aim, weapons, and intelligence. Every time you ask for a loan, your whole financial profile passes through a set of algorithms that output the risk for the bank, the maximum amount of money you can get, or, simply, whether your loan request is declined… and all these algorithms include biases. It is worth reading Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, by Cathy O'Neil. There, Cathy exposes some interesting cases of biased algorithms used to evaluate teachers at US institutions; long story short, some teachers got fired only because the algorithm wrongly flagged them. In late 2015, something similar happened with students.
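As a toy illustration of how a bias can slip into such a scoring pipeline, consider a minimal sketch (entirely hypothetical weights and feature names, not any real bank's model) where a proxy variable like the applicant's postal code quietly penalizes people from certain neighbourhoods:

```python
# Toy loan-scoring sketch. All weights, thresholds, and feature names are
# invented; this only illustrates how a proxy feature can encode bias.

def loan_risk_score(income, debt, postal_code, risky_postcodes):
    """Return a risk score (lower = safer). Purely illustrative."""
    score = 0.5 * (debt / max(income, 1))   # debt-to-income ratio
    if postal_code in risky_postcodes:      # proxy variable: geography stands
        score += 0.4                        # in for historical bias
    return score

RISKY = {"1050", "2300"}  # hypothetical "high-risk" postcodes

# Two applicants with identical finances, different neighbourhoods:
a = loan_risk_score(income=30000, debt=9000, postal_code="1050",
                    risky_postcodes=RISKY)
b = loan_risk_score(income=30000, debt=9000, postal_code="4000",
                    risky_postcodes=RISKY)

print(a > b)  # True: same finances, worse score purely due to address
```

The applicant's numbers never changed; only the address did, yet one request ends up above a decline threshold. This is the mechanism behind the cases Cathy O'Neil describes, just stripped down to a few lines.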

Google being Asimov

Google recently released a set of rules/principles around A.I. that puts the spotlight on avoiding the development of any kind of A.I. that can hurt any kind of life. Something a bit vague, knowing the 3 laws of robotics written by Isaac Asimov. Google split their principles into 7 goals for AI:

  1. Be socially beneficial. Given what was exposed above, and the facts shared by Cathy in her book, W.M.D., clearly most institutions are not using A.I. for much more than their own business benefit.
  2. Avoid creating or reinforcing unfair bias. Aligned with point (1), this is not in Google's hands alone; it is a social problem that we need to solve via education.
  3. Be built and tested for safety. I can buy it. But think twice, look around for a second. Think how many things can hurt you without any intention…
  4. Be accountable to people. The funniest one; indeed, we are getting better ads.
  5. Incorporate privacy design principles. (Am I the only one who finds this one a bit odd?) I don't want to start with this one; I need another post for that.
  6. Uphold high standards of scientific excellence. I would recommend everyone to take a quick read of the Scientific Excellence paper here.
  7. Be made available for uses that accord with these principles.

And the other split… what we shouldn't use AI for; bottom line, Asimov's Laws.

The ideal world

One of the thoughts on this topic from the recently deceased Stephen Hawking was well known to many Reddit users, from when he was asked about "artificial intelligence unemployment":

“The outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution.”

Ideally, we would all like to live under this scenario. Machines would run businesses, paying their own taxes just like human-run ones. Human intervention would still be required, but insignificant compared with our era. Wealth equality becomes a must. Supply chains are open and trusted by all citizens. If the idea sounds good to you, think twice.

Artificial Intelligence helping humans in the Wall-E movie

Not that far :D. Something in between would be great. Shall we reach human perfection? NO, we shouldn't; being imperfect is part of humankind. Because of this, we will always have problems to solve, jobs to attend to…

The path to the future

Over the last few weeks we have seen Google canceling some military contracts, and a few more companies focusing on using technology to create a better future. We need to start accepting that Artificial Intelligence will replace our jobs; indeed, it is going to do it. From radiologists to manufacturers, passing through drivers and risk analysts (if they are still around)… Going even further, new kinds of jobs are appearing (Data Science Engineer is starting to be in huge demand), and sports are being transformed by technology; it might happen that some day we will compete against machines/robots in an AR/VR environment 10,000 km away from where we are.

What we shouldn't miss, and it is our responsibility, is to properly teach our fellows to be ready for this new era. They should be aware of what AI can do, and of how far technology has come, not only in computer science, but in chemistry, biology, and many other disciplines. Let me try a metaphor here with a table fork.

A fork, in cutlery or kitchenware, is a tool consisting of a handle with several narrow tines on one end. The usually metal utensil is used to lift food to the mouth or to hold ingredients in place while they are being cut by a knife. Food can be lifted either by spearing it on the tines or by holding it on top of the tines, which are often curved slightly. – Wikipedia

The fork was invented during the early stages of the Egyptian era. Since then, the fork spread around the world at the speed of those times (years and years). Nowadays we are taught what a fork is for: nothing else than assisting us while having a meal. It is a cultural fact that in many countries, such as China or Japan, the usage of the fork is close to zero. But no one is using a fork to harm anybody, or to threaten the bank worker who states that you can't get a loan because the computer doesn't allow him to grant it, due to some bias introduced in the risk algorithm (shit happens).

To close these first thoughts about the current state of the industry around AI, and with these first points exposed, my personal thought and conclusion is really simple:

We have enough knowledge to teach our fellows; to start laying the first bits of their path; to share with them the right values as global citizens. It is the only path for putting technology in everyone's hands.