Disambiguating 'AI': Preparing for a Predictable Future in the Digital Age

Summary and Discussion of 3 articles about the state of AI in the field of Data Analytics

Richard Welsh

What to Expect From Artificial Intelligence, MIT Sloan

Summary

The primary value proposition of artificial intelligence is making prediction inexpensive. Finding a use case for AI data analysis techniques depends on being able to frame the problem so that prediction can solve it. Although the concepts AI operates on are not new, the underlying technologies have improved significantly, making more widespread and more powerful models possible at lower cost than ever before. Not only is it more economical than ever to take advantage of AI, but advances in data collection and the greater availability of training data make AI models better at prediction than ever before, and increased computing power means that more complex tasks can now be approached successfully with AI.

AI is especially useful for managing repetitive tasks where the response to a prediction does not require judgement. The less judgement the necessary action requires before execution, the more suitable AI is not only for predicting but for executing actions based on its predictions. As AI becomes more reliable at making predictions for humans, judgement will become the main contribution of the people working with it. A completely automated system would see AI working in harmony with sensors and robotics to handle everything from data collection to execution of action. Today the technology is not quite at a satisfactory level for such systems, although we are moving closer, as the advances in autonomous driving show. Moving forward, the key skills in managing AI will shift toward making good judgements (ethical, emotionally intelligent, and so on) to complement AI's advanced capability for making predictions.

Discussion

The core concept underlying AI today is using learning algorithms to make predictions at massive scale, based on massive amounts of data. The trend points toward a day when AI will reliably predict outcomes far better, cheaper, and faster than humans can, so prediction will likely become AI's responsibility at some point in the future. This leads to the question: what will our role become as AI is able to do more of the things that we can do? At this point, human intervention is still necessary to judge the best course of action in response to predictions about what will happen. That judgement, along with the element of human emotion in certain situations, will become the more primary role as more businesses adopt AI technologies.

My View

AI is not going to be able to do many of the things that humans are good at any time soon, maybe ever. An underlying theme in the field is the fear that AI will develop its own personality and become nefarious. A system comparable to a living human would involve many technologies beyond AI, such as sensors for data collection and robotic equipment to act physically on the environment. The sooner people stop worrying about this idea, the sooner they will see that AI will grow into a sort of extension of human intelligence rather than a replacement for it. The real concern is that as organizations develop more advanced machine learning and deep learning capabilities, soft skills such as ethics become more important than ever before. There are still philosophical details to work out in the budding field of AI, such as what should or should not be automated or predicted.

Expanding AI's Impact With Organizational Learning, MIT Sloan

Summary

Of the companies that employ AI technologies in their business processes, only about 10% are profiting from the venture today. Many companies may still be in the early stages of developing the technology, but deriving benefit from AI is not as simple as one would hope, even for an organization that already has advanced infrastructure. Deriving benefit requires significant change in the business process to leverage predictions effectively. The ideal system is a feedback loop, where the algorithms not only learn to predict from an organization's data but the organization also learns from the AI's predictions.

This process of mutual learning is what really gets things moving and allows an organization to derive significant benefit from AI. Data-driven companies are already familiar with the change in perspective needed to make decisions based on data rather than intuition alone, but adding AI models into that framework introduces additional layers of complexity and may require even more significant change than the shift to becoming data-driven. It is more difficult to learn with AI than without it, but more effective as well. Companies succeed with AI when they find a multitude of ways to work together with it effectively.

Discussion

AI is one of many tools in the data scientist's repertoire. Oversimplifying a bit, suppose a linear regression is the data scientist's hand saw: something most people familiar with the task of cutting things can use quite effectively. AI would then be the chainsaw: a tool that does the same kinds of things with much greater capability, but also with more danger and more skill required to master it. When businesses approach AI like any other analytics tool, they resemble the avid hand saw user who has just become aware of chainsaws and figures that using one will be about the same as using the hand saw. Of course, the early adopters that find success provide valuable insight to the rest of us by prototyping the path to success with the novel practice. This article provides a systematic approach to integrating AI analytics into existing business flows while providing much-needed lucidity about the adaptations that come with a radically new approach to strategic development.
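To make the analogy concrete, here is a minimal sketch (my own illustration, not code from the article) in Python with scikit-learn: it fits an ordinary linear regression and a more flexible gradient-boosted model to the same synthetic data. The dataset, model choices, and hyperparameters are all assumptions made for illustration; the point is only that the more powerful tool can capture structure the simple one misses, while demanding more skill to use safely.

```python
# Hypothetical sketch (not from the article): comparing a simple model
# with a more flexible one on the same synthetic, non-linear data.
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic regression data with non-linear structure.
X, y = make_friedman1(n_samples=2000, noise=0.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "hand saw": simple, transparent, easy to use correctly.
hand_saw = LinearRegression().fit(X_train, y_train)

# The "chainsaw": more capable, but its hyperparameters can cut the
# careless user (overfitting) if handled without skill.
chainsaw = GradientBoostingRegressor(
    n_estimators=300, learning_rate=0.05, max_depth=3, random_state=0
).fit(X_train, y_train)

print("linear regression R^2:", r2_score(y_test, hand_saw.predict(X_test)))
print("gradient boosting R^2:", r2_score(y_test, chainsaw.predict(X_test)))
```

On data like this, the boosted model typically scores noticeably higher on held-out data, which is the "greater capability"; the extra tuning knobs are the danger the analogy warns about.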

My View

While what AI actually represents in data analytics today is far removed from the image conjured in people's minds when they hear the word, AI technology is a powerful thing that requires extensive learning to master. People picking up the tool for the first time may be drastically less effective with it than with the tools they have already mastered, but the AI masters emerging today demonstrate that the potential power of using it effectively cannot be ignored. Part of the difficulty of learning to work with AI is, in a sense, existential: the suggestions AI provides will require us to face what we have not been doing well and change it, something that can be extremely uncomfortable.

Closing the Human Rights Gap in AI Governance

Summary

A number of people in the field of AI have suggested that AI must fall under the guidelines of the international human rights law framework. In late 2019 the report's author, Element AI, held a workshop with the Mozilla Foundation and The Rockefeller Foundation to discuss human rights as it relates to AI, and this report presents a set of recommendations on how AI needs to fit into the context of legislation and regulation. Element AI suggests that the government's role could be to ensure that human rights and freedoms are the focus of AI regulation, along with various incentives and disincentives to act accordingly.

Explainability and transparency are core aspects of human rights, but also a core problem for AI today. High-sensitivity, high-impact institutions that are beginning to adopt AI technology should be given the highest priority for applying frameworks, penalties, and assistance. These areas, such as the financial and criminal justice sectors, have the biggest potential for appreciable change, but also the largest risk of devastation if things go wrong.

Governance tools such as human rights due diligence and human rights impact assessments should be applied to AI while governments develop the inevitable regulations for the field. Focusing on rights, freedoms, and risk management makes AI governance a lasting approach, as these ideas are universal in nature and far less volatile than the technology itself. While government is still trying to catch up to the progress of AI, it is up to organizations to self-regulate their practices to maintain ethical and moral standards, a practice which also seems to be good for the business's bottom line.

Discussion

A major concern with digital technology is that, due to the rapid pace of development, the legal system struggles to govern it effectively. However, developing legislation for AI does not have to mean reinventing the wheel, as systems to prevent the misuse of analytics power already exist in the world today. Perhaps we will one day need to prohibit or encourage specific AI behaviors, but how can such a system be implemented without stifling the growth of the industry? This is not the first time AI has been in the spotlight, but perhaps now the critical underlying technologies are finally becoming powerful enough for AI to begin to live up to the hype. The guidelines set forth in this report are a commonsense approach to governing AI, and hopefully they will be reflected in the legislation we are likely to see.

My View

Not too long ago, I was a hearty proponent of the free sharing of data; whereas many people were defensive about their rights to data privacy, I thought that breaking down assumptions about how necessary privacy is to quality of life would allow a richer feed into the big data machines, helping to improve the models and letting organizations make life better than ever. That view was too optimistic, because I was not really considering the potential for bad actors to misappropriate the power of data analytics to cause harm, intentionally or otherwise. It is imperative that legislators gain a full comprehension of AI, and perhaps recruit some individuals talented with AI themselves, because the lack of mature regulation in the field means ample opportunity awaits those willing and able to use AI to the detriment of others; we are practically living in the 'Wild West' of AI. Beyond that, effective governance can also foster great growth, as best practices are encouraged and promoted in the way other valued practices and technologies have been treated by government in the past.

References

Agrawal, A., Gans, J. S., & Goldfarb, A. (2017, March 13). What to Expect From Artificial Intelligence. MIT Sloan Management Review, 58(3). https://sloanreview.mit.edu/article/what-to-expect-from-artificial-intelligence/
Dawson, P. (2019, November). Closing the Human Rights Gap in AI Governance. Element AI. https://s3.amazonaws.com/element-ai-website-bucket/whitepaper-closing-the-human-rights-gap-in-ai-governance.pdf
Ransbotham, S., Khodabandeh, S., Kiron, D., Candelon, F., & LaFountain, B. (2020, October). Expanding AI’s Impact With Organizational Learning. MIT Sloan Management Review and Boston Consulting Group. https://sloanreview.mit.edu/projects/expanding-ais-impact-with-organizational-learning/