AI Decision-Making: Advocating for a Human-in-the-Loop System
The sharp rise in the use of cognitive software has produced significant changes in business efficacy. This shift toward more automated features in the work environment is productive, especially for output. However, it raises a question: is AI decision-making, or cognitive software, benefiting industries, including their employees, shareholders, and customers? In industries such as telecommunications, such cognitive software is used for the purpose of decision-making. A recent case study, “Algorithmic decision-making? The user interface and its role for human involvement in decisions supported by artificial intelligence” [12], written by Verena Bader and Stephan Kaiser, examines the role of human involvement in algorithmic decision-making within the telecommunications sector. In weighing the conclusion reached by Bader and Kaiser, we must consider the larger implications and benefits of implementing a human-in-the-loop system rather than the current augmented-intelligence decision-making software, as suggested by Meredith Broussard (2018) [1] in her book Artificial Unintelligence: How Computers Misunderstand the World. To analyze the implications and possible benefits of cognitive software for businesses, we must first examine how industries that have implemented this technology, such as telecommunications, articulate algorithmic decision-making. We must then examine the software boundary where human involvement in algorithmic decision-making occurs, for it is there that we can arrive at a conclusive opinion about possible benefits.
Types of AI
There are three main types of AI used in industry: assisted intelligence, exemplified by the assembly line; augmented intelligence, which learns from human input and then recommends decisions based on that input; and autonomous intelligence, most notably self-driving cars.
Assisted intelligence is well suited to assembly work because it excels at repetitive tasks. In differentiating the three types of AI, it is important to note that assisted intelligence is rules-based. In customer-centered industries, this sort of AI manifests as applications that help businesses understand customer habits and output predictions based on a more complex model the software has built. The cognitive software examined by Bader and Kaiser is closer to augmented intelligence. While augmented intelligence is technologically impressive, assisted intelligence is what industries should strive for, because it hinges on a human-in-the-loop system in which the technology is merely a material technological object that provides aid, not a completely independent decision-making agent.
Decision Support Software
In Bader and Kaiser’s case study, “Part of that study was investigating the use of decision-support software in the call center of a large cable operator that has about 1500 employees and about 1.3 million customers” [12]. The cognitive software, IBM Interact, is rigid in its functions. In the call center, IBM Interact examines past and present customer data in order to provide the call-center representative with tailored sales or marketing options to present to the customer. This is a human-in-the-loop system: the representative must decide, based on the data IBM Interact has recommended, what to offer the customer in order to make a sale. It is the type of human-in-the-loop system many customer-centered industries should aim to acquire.
IBM Interact
The implementation of IBM Interact in the call center illustrates how a human-in-the-loop system can function. However, there is a caveat: by the very nature of this software, the user can become solely reliant on its recommendations and feedback when making decisions. For a human-in-the-loop system to be fully productive, and to ensure that the recommendations are not biased, industries and programmers need to account for exactly where the user will participate. With this in mind, there are four broad categories of cognitive software used in customer-centered industries.
First, marketing decision-making software creates a computational model and simulation based on the perceived personality of a customer and then recommends marketing decisions. “This predictive modeling considered internal and external data like the customer’s purchasing and surfing behavior, age, and residency and the marketing and sales departments’ requirements to predict the likelihood of customer churn and promote customized service and individualized offers” [12]. Second, and perhaps most commonplace, is customer relationship management (CRM) software, which models a customer’s personality and “lifetime value”. Third, there is problem-solving software, which typically provides data, assessment, and recommendations. Lastly, there is argument-analysis software, which deals with the analytic content itself. “By automating the demand analysis, the decision about what to offer the client was presented to the agent via the user interface of IBM Interact, a simple display that gave the agent little information about how the decision was made, as the agent was not involved in the data collection and analysis” [12].
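A minimal sketch of the predictive modeling the first category describes, assuming a toy logistic model: the feature names echo those the study mentions (purchasing and surfing behavior, age, residency), but the weights, thresholds, and offer categories are invented purely for illustration:

```python
import math

def churn_score(customer: dict) -> float:
    # Toy logistic model; coefficients are illustrative, not from the study.
    x = (
        -1.5 * customer["purchases_per_year"] / 12     # frequent buyers churn less
        - 0.8 * customer["site_visits_per_month"] / 30  # engaged surfers churn less
        + 0.02 * customer["age"]
        + (0.5 if customer["residency"] == "rural" else 0.0)
    )
    return 1.0 / (1.0 + math.exp(-x))  # churn probability in [0, 1]

def pick_offer(p_churn: float) -> str:
    # Map predicted churn risk to a customized offer category.
    if p_churn > 0.7:
        return "retention_discount"
    if p_churn > 0.4:
        return "loyalty_upgrade"
    return "standard_upsell"
```

In a human-in-the-loop deployment, the output of `pick_offer` would be presented to the agent as a recommendation, not executed automatically.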
Software Boundary
A key to locating that boundary is understanding where and how humans, the users, interact with the cognitive software, and their levels of detachment from and attachment to it. Antoine Hennion suggests that “Assessing attachment and detachment requires ‘social inquiries made on sensitive matters and things that count for people’ [13]. To get a sense of the call center workers’ attachment to decisions, we applied the commonly used qualitative methods of conducting interviews, making observations, and doing documentary research” [12]. We must also consider that cognitive software can aid in making decisions more quickly, in handling multiple inputs, and in mitigating worker fatigue.
These possible benefits of a human-in-the-loop system were substantiated by the following findings: “In particular, we noticed that the agents switched between either conducting the manual demand analysis, where they remained highly involved with the decision or adhering to the algorithmic decision where they withdrew their involvement with the decision. By doing so, we included the subtle constituents of detachment and low user involvement as well as attachment and high user involvement” [12]. In addition to accommodating and addressing the software boundary, we also need to consider non-intuitive predictions. In the Harvard Business Review article “What AI-Driven Decision Making Looks Like” (July 8, 2019), Eric Colson states, “Our brains are inflicted with many cognitive biases that impair our judgment in predictable ways” [14]. In Bader and Kaiser’s case study, “call center agents were confronted with prescriptive algorithmic decisions made from data they would not otherwise have had access to” [12]. This decision-making model makes the cognitive software, rather than human judgment, the primary processor.
AI Decision-Making
We now return to the question posed earlier: is AI decision-making, or cognitive software, benefiting industries, including their employees, shareholders, and customers? Bader and Kaiser’s analysis considered the interplay between human and algorithmic intelligence and how human involvement in decisions played out when the agents’ decisions were succeeded by choices presented via IBM Interact’s user interface [12]. This ought to be regarded as one of the most important considerations in arriving at conclusive benefits of implementing a human-in-the-loop system, and in doing so, we must account for the software boundary where human involvement in algorithmic decision-making occurs. I encourage a decision-making process in which humans are supported by cognitive software rather than solely reliant on it. We need a human-in-the-loop system in which the algorithm is a supplemental technological object and the human is the central processing component.
References
[1] Broussard, M. (2018). Artificial unintelligence: how computers misunderstand the world. MIT Press.
[2] Cheney-Lippold, J. (2018). We are data: Algorithms and the making of our digital selves. NYU Press.
[3] Chun, W. H. K. (2016). Updating to remain the same: Habitual new media. MIT press.
[4] Dabbaghian, V., & Mago, V. (2014). Theories and Simulations of Complex Social Systems (1st ed. 2014.). https://doi.org/10.1007/978-3-642-39149-1
[5] Debnath, L. (2011). Nonlinear partial differential equations for scientists and engineers. Springer Science & Business Media.
[6] Johnson, N., Manrique, P., Zheng, M., Cao, Z., Botero, J., Huang, S., … Johnson, N. (2019). Emergent dynamics of extremes in a population driven by common information sources and new social media algorithms. Scientific Reports, 9(1), 11895–11895. https://doi.org/10.1038/s41598-019-48412-w
[7] Martinez, Rebecca. Complex Systems: Theory and Applications. Hauppauge, New York: Nova Science Publishers, Incorporated, 2017. Print.
[8] Matei, S., Russell, M., & Bertino, E. (2015). Transparency in Social Media Tools, Methods and Algorithms for Mediating Online Interactions (1st ed. 2015.). https://doi.org/10.1007/978-3-319-18552-1
[9] Minai, A., & Bar-Yam, Y. (2008). Unifying Themes in Complex Systems IV Proceedings of the Fourth International Conference on Complex Systems (1st ed. 2008.).
[10] Strauss, W. A. (2007). Partial differential equations: An introduction. John Wiley & Sons.
[11] Zelinka, I., Sanayei, A., Zenil, H., & Rössler, O. (2014). How Nature Works Complexity in Interdisciplinary Research and Applications (1st ed. 2014.). https://doi.org/10.1007/978-3-319-00254-5
[12] Bader, Verena, and Stephan Kaiser. “Algorithmic decision-making? The user interface and its role for human involvement in decisions supported by artificial intelligence.” Organization 26, no. 5 (2019): 655–672.
[13] Hennion, Antoine. “Attachments, you say?… How a concept collectively emerges in one research group.” Journal of Cultural Economy 10, no. 1 (2017): 112–121.
[14] Colson, Eric. “What AI-driven decision making looks like.” Harvard Business Review (2019).