Unit 5 The AI Era: Industry Innovation and Ethical Reflections

Reading

AI’s Great Promise but Potential for Peril

Are you interested in cutting-edge developments in artificial intelligence and how they’re revolutionizing various industries? AI is quickly becoming a crucial part of many industries, including healthcare, banking, retail, and manufacturing, promising to deliver better business results by automating and optimizing tasks. Furthermore, the widespread application of commercial AI programs in nearly every aspect of our lives holds the potential to significantly enhance the overall human experience. AI-driven virtual assistants, language translation services, and personalized recommendation algorithms exemplify how these technologies can positively impact daily interactions, making them more efficient and tailored to individual needs. Despite these promising developments, concerns have arisen that these complex, opaque systems may do more harm than good to society, undermining their game-changing promise of reducing costs and improving efficiency. Private companies use AI software to make critical decisions about healthcare treatments, loan approvals, and even employment opportunities, but with minimal government oversight, there’s a risk that these programs may encode structural biases, leading to unfair outcomes for certain groups.

The Usefulness of AI

The growing appeal and utility of AI are undeniable. Almost all major companies now have multiple AI systems and consider the deployment of AI integral to their strategy. Early on, it was popularly assumed that the future of AI would involve the automation of simple repetitive tasks requiring low-level decision-making. However, AI has rapidly grown in sophistication, owing to more powerful computers and the compilation of huge data sets. One branch, machine learning, notable for its ability to sort and analyze massive amounts of data and to learn over time, has transformed countless fields.
Firms are now using AI to manage the purchasing of materials and products from suppliers and to integrate an enormous amount of information to aid in strategic decision-making. And because of their capacity to process data so quickly, AI tools are helping to minimize the time devoted to the pricey trial-and-error of product development―a critical advance for an industry like medicine, where it costs $1 billion to bring a new pill to market, explained Joseph Fuller, a professor of management practice at Harvard Business School.
Healthcare experts see many possible uses for AI, including billing and processing necessary paperwork. Medical professionals also expect that the biggest, most immediate impact will be in the analysis of data and diagnosis. Imagine, they say, having the ability to bring all of the medical knowledge available on a disease to any given treatment decision.
Rather than replacing employees, AI takes on the important technical tasks of their work. In employment, AI software processes résumés and analyzes job interviewees’ voices and facial expressions as part of the hiring process. In transportation and logistics, it provides routes for package delivery trucks, which potentially frees workers to focus on other responsibilities, making them more productive and, therefore, more valuable to employers. It allows employees to do more, and to do it better: they make fewer errors and can develop their expertise and disseminate it more effectively throughout the organization. Though automation is here to stay, the elimination of entire job categories is likely to be rare, according to Fuller.

Possible Risks of AI

Not everyone sees blue skies on the horizon, however. Many worry that the coming age of AI will bring new, faster, and frictionless ways to discriminate against certain groups and divide society. “Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice,” said Michael Sandel, a political philosophy professor at Harvard University. “But we are discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing replicate the biases that already exist in our society.”

Ethical Issues

AI presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination, and, perhaps the deepest, most difficult philosophical question of the era, the role of human judgment. “Debates about privacy safeguards and how to overcome bias in algorithmic decision-making in sentencing, parole, and employment practices are by now familiar,” said Sandel, referring to the conscious and unconscious prejudices of program developers, as well as those built into the datasets used to train the software. “But we’ve not yet wrapped our minds around the hardest question: Can smart machines outthink us, or are certain elements of human judgment indispensable in deciding some of the most important things in life?”
Panic over AI suddenly injecting bias into everyday life on a large scale is overstated, says Fuller. He argues that the business world and the workplace, which are full of human decision-making, have always involved “all sorts” of biases that prevent people from making deals or landing contracts and jobs. When calibrated carefully and deployed thoughtfully, résumé-screening software allows a wider pool of applicants to be considered than would otherwise be possible and should minimize the potential for favoritism that comes with human gatekeepers.
Sandel disagrees. “AI not only replicates human biases, but also confers on these biases a kind of scientific credibility. It makes it seem that these predictions and judgments have an objective status,” he said.
In the world of lending, algorithm-driven decisions do have a potential “dark side,” said Karen Mills, the former head of the U.S. Small Business Administration. As machines learn from datasets they’re fed, chances are “pretty high” they may replicate many of the banking industry’s past failings that resulted in the systematic disparate treatment of African Americans and other marginalized consumers. “If we’re not thoughtful and careful, we’re going to end up with institutional discrimination again,” she said.

Bias In, Bias Out

Just like humans, AI systems are expected to follow social norms and to be fair and unbiased. The issue of bias isn’t unique to AI models―humans have difficulty navigating bias as well. With AI, however, the potential outcomes of bias can have a massive impact. Bias in AI correlates strongly with input data: corrupted, unrefined, or flawed input data will distort the outcome. The important thing to grasp about bias is that navigating it ethically requires sensitivity, insight, and openness.
Humans ultimately control bias in AI―the users select the original input data, and that data can introduce bias that influences outcomes. For example, one major American company that receives a massive number of job applications decided to test applying AI to its recruitment process. When it did so, the company used the résumés of current employees as input data. So, what was the outcome? The company shared widely that, with this demographic sample, the results were biased against women. During testing, it was discovered that if the word “women” appeared anywhere on a résumé, that individual never got a call. The company realized the input data was part of the issue and never deployed the model for hiring managers.
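The mechanism behind this failure can be sketched in a few lines of code. The Python snippet below is a hypothetical, deliberately simplified illustration of the “bias in, bias out” pattern, not the company’s actual system: a naive screening model fit to historically skewed hiring decisions learns to treat the word “women” as a negative signal simply because past rejections contained it more often.

```python
from collections import Counter

# Hypothetical historical hiring data: (résumé text, was hired).
# The past decisions skew against résumés containing "women", so
# any model fit to this history inherits that skew.
history = [
    ("software engineer rugby team captain python", True),
    ("software engineer python distributed systems", True),
    ("data scientist machine learning python", True),
    ("software engineer women in tech club python", False),
    ("data scientist women coders mentor python", False),
]

def word_scores(examples):
    """Score each word by how much more often it appears in hired
    résumés than in rejected ones (a crude naive-Bayes-style signal;
    real systems are more complex, but the failure mode is the same)."""
    hired, rejected = Counter(), Counter()
    for text, was_hired in examples:
        (hired if was_hired else rejected).update(text.split())
    # Add-one smoothing keeps unseen words from dividing by zero.
    return {w: (hired[w] + 1) / (rejected[w] + 1)
            for w in set(hired) | set(rejected)}

def screen(resume, scores):
    """Multiply per-word scores; above 1.0 leans toward 'interview'."""
    result = 1.0
    for word in resume.split():
        result *= scores.get(word, 1.0)
    return result

scores = word_scores(history)
print(scores["python"])  # ~1.33: near-neutral, common to both classes
print(scores["women"])   # ~0.33: learned purely from the skewed labels
# A qualified candidate is penalized for a single word on her résumé.
print(screen("senior software engineer python women in tech club", scores))
```

Note that nothing in this sketch mentions gender explicitly; the skew comes entirely from the labeled history the model is fed, which is exactly why the company traced the problem back to its input data.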
Sharing this information and being sensitive to the results are essential as we continue discovering the best uses of this technology. Because bias is closely tied to intent, the example above should not be interpreted as a malicious use of AI. Instead, it demonstrates the necessity of introspection in the use of AI. Companies can correct outcomes by accounting for bias in the model, helping them achieve a more balanced result.
As previously stated, AI has very quickly become an essential part of business, and it should be expected that ethical issues such as bias will occur. The keys to overcoming bias are making sure the input data is as pure as possible and being willing to investigate unethical outcomes with openness and transparency, as the sketch below illustrates. In light of this, it will be necessary to consider how, and by whom, bias can be overcome.
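One concrete form such an investigation can take is an adverse-impact check on the training data itself. The sketch below applies the four-fifths rule, a standard screen used in U.S. employment-selection audits; the dataset and group labels here are illustrative assumptions, not details drawn from this article.

```python
from collections import Counter

# Toy audit data: (group label, was hired). In a real audit the group
# attribute would be properly collected demographic data, and the
# labels would come from the historical decisions used for training.
decisions = [
    ("men", True), ("men", True), ("men", True),
    ("women", False), ("women", False), ("women", True),
]

totals, positives = Counter(), Counter()
for group, hired in decisions:
    totals[group] += 1
    positives[group] += hired  # a bool counts as 0 or 1

rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # {'men': 1.0, 'women': 0.333...}

# The four-fifths rule flags a group whose selection rate falls below
# 80% of the highest group's rate as showing potential adverse impact.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Audit flag: investigate this dataset before training on it.")
```

Running a check like this before training, rather than after deployment, is one practical way to keep input data “as pure as possible.”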

Potential Regulators of AI

Given AI’s power and expected ubiquity, some argue that its use should be tightly regulated. But there’s little consensus on how that should be done and who should make the rules. Thus far, companies that develop or use AI systems largely self-police, relying on existing laws and market forces, like negative reactions from consumers and shareholders or the demands of highly prized AI technical talent, to keep them in line.
Firms already consider their own potential liability from misuse before a product launch, but it’s not realistic to expect companies to prevent every possible unintended consequence of their product. Few believe the federal government is up to the job or will ever be. “The regulatory bodies are not equipped with the expertise in artificial intelligence to engage in oversight without some real focus and investment,” said Fuller, noting that the rapid rate of technological change means even the most informed legislators can’t keep pace. Requiring every new product using AI to be prescreened for potential social harms is not only impractical but would also create a huge drag on innovation.
Jason Furman, a professor of the practice of economic policy at Harvard Kennedy School, agrees that government regulators need “a much better technical understanding of artificial intelligence to do that job well,” but says it would be possible. Existing bodies like the National Highway Traffic Safety Administration, which oversees vehicle safety, could handle potential AI issues in autonomous vehicles, for example, rather than a single watchdog agency, he explained. “I wouldn’t have a central AI group that has a division that does cars; I would have the car people have a division of people who are really good at AI,” said Furman.
Though keeping AI regulation within industries does leave open the possibility of biased decision-making, Furman said industry-specific panels would be far more informed about the overall technology of which AI is simply one piece, making for more thorough oversight.
Business leaders “can’t have it both ways,” refusing responsibility for AI’s harmful consequences while also fighting government oversight, Sandel maintains. “The problem is these big tech companies are neither self-regulating nor subject to adequate government regulation. I think there needs to be more of both,” he said, later adding, “We can’t assume that market forces by themselves will sort out the issues.”
“Companies have to think seriously about the ethical consequences of what they’re doing and we, as democratic citizens, have to educate ourselves about tech and its social and ethical implications―not only to decide what the regulations should be, but also to decide what role we want big tech and social media to play in our lives,” said Sandel. He believes doing so will require a major educational intervention. In other words, we all need to be educated enough about tech and the ethical implications of new technologies so that, whether we are working for or running companies in the future or acting as democratic citizens, we can ensure that technology serves human purposes rather than undermining a decent civic life.