From the Director’s Desk: Artificial Intelligence – With great power comes great responsibility.

Picture courtesy of Laura Cox, Senior Staff Writer, D/SRUPTION

I think the buzz around AI and machine learning is exciting and interesting. For those of us who have been close to this space for a while, this is not new; what is new is that people are starting to see the hazards of misusing big data in everyday life. I have written before about the use of big data for a myriad of decisions in work and life: who gets the loan, what sentence follows a conviction, who gets admitted into an academic program, and who gets the job.

Now it seems companies are starting to get ahead of the possible weaponization of AI. Google and other companies have established basic principles for data scientists to live by. Here is a summary of the principles Google adopted (partly in response to the reported use of its AI by the military to make drone strikes more precise).

The principles state that Google’s AI should:

  1. Benefit society
  2. Avoid algorithmic bias
  3. Respect privacy
  4. Be tested for safety
  5. Be accountable to the public
  6. Maintain scientific rigor
  7. Be made available to others in accordance with the same principles

The more fundamental premise, from Cathy O’Neil in her book “Weapons of Math Destruction,” is that data scientists and their algorithms often propagate the status quo: the inherent biases and misallocations in society today will likely be trained into tomorrow’s systems because of the nature of machine learning algorithms. See her TED talk on the topic.

All of these tools can be used properly and, in some cases, misused; that is not really an indictment of the technology but of the applications and the judgment used in developing the algorithms. Behind every algorithm is an assumed relationship between the variables, and O’Neil gives us all something to think about when developing the next big algorithm. She raises the issue of transparency, yet most algorithms are treated as proprietary and remain far removed from full transparency.

If we aren’t careful, big data and machine learning can become the hidden hand inside future societies, forming the next class of “chosen people” without anyone knowing the basis for the selection, and with the possibility of propagating biases from the past.