Essays

The Moment to Set the Ethical Standard for AI is Now


By Rob Docters

Author of Ethics and Hidden Greed: Your Defense Against Unethical Strategies and Violations of Trust

The rise of students using generative artificial intelligence to write class assignments was the topic of a recent front-page article in The New York Times. ChatGPT, which OpenAI, an artificial intelligence lab, launched at the end of last year, allows students to machine-generate assigned papers, and allows fraudsters to crack passwords and tailor individualized fraudulent emails.


Various sources tell us that Artificial Intelligence (AI) is the harbinger of opportunity, productivity and, in the case of students assigned papers, free time to enjoy another beer. However, as with any technology, what we make of AI is the key ethical question, and what we make of it will shape our society. So far, AI does not look good ethically.


Sponsors of the technology tell us that AI is good because it allows robots to perform tasks that humans either cannot or should not perform. Examples include risky work in industrial settings (e.g., pouring steel and handling radioactive materials), or dull and repetitious work, such as administrative assistance or sorting parts. Those applications sound innocuous, but others that fall under this heading are not: robot warriors, crafting individualized fraudulent messages, or cracking security software. Robot warriors, to take one example, are already in development. They will be able to move rapidly, change course and tactics in an instant, and may be less vulnerable than human soldiers.

The chorus of AI and robotics developers who insist that AI will never harm humans is patently wrong, and developers know it. One motivation for the consistent lie is that watchdogs have suggested AI needs to be closely regulated and its applications proscribed. The AI community, especially the AI defense community, is on the defensive. The pressure comes not only from domestic critics but also from Russia and China, both of which have announced substantial AI development efforts.


What do humans want of AI and robotics? What is sought from AI ranges from the simple to the complex, and different levels of sophistication and capability are needed depending on the utility sought. Some of the dialogue around AI revolves around whether such programs can have human feelings. But there is good evidence that feelings are not an ethical criterion in the AI (and related robotics) debate—at least on the part of AI robots.


Humans seem prone to anthropomorphizing robots. At the low end, apparently several expensive sex dolls have been featured in “marriage ceremonies” and labelled “wives.” These dolls show little evidence of AI capabilities beyond a few phrases and facial expressions. It also seems unlikely there was a meeting of kindred spirits between “bride” and groom. But they seem to have served the purposes of buyers.

Of more concern are multi-million-dollar robots, funded by academia, companies and the defense establishment, such as DARPA (the Defense Advanced Research Projects Agency). These robots are capable of extended dialogues, can read some expressions, and can be quite realistic in their responsiveness to questions and human facial expressions. One AI robot, called “Sophia,” appears very realistic. However, a caution from the past is worth considering. People listening to voices on the first phonograph declared them indistinguishable from live voices. Today, no one would say that, and future generations might distinguish machine from human by identifying the use of long pauses and complete sentence structure.


In the same way that responsibility for the health impact of cigarettes lies with the companies who manufacture them, and the death toll from defective cars lies with the negligent car companies, creators of AI programs must be responsible for the harm resulting from their programming choices. Ethical responsibility for AI rests with its creators. Right now, there is no direct linkage between AI creators and the outcome of their work. In other words, if an AI program decides to cancel a flight, it is not always easy to find the basis of the action. That is not surprising: it took decades to link cigarette manufacturers to the health consequences of smoking, to cite just one example. The linkage between AI and human consequences could be hard to prove, and if the harm is the result of the combination of several algorithms, it may be impossible.

That brings us to the central ethical point regarding AI. Artificial intelligence has no “core” or awareness, which is something all living beings possess. The AI performs and improves its processes by adding layers of algorithms, which adjust the program’s output to new requirements, e.g., to parse emails for passwords, or to avoid bumping into tables when it roams a house. Useful, yes. However, the lack of centrality to the AI’s “thinking” is highlighted by the inability of AI programmers to localize and correct glitches in the program outputs.


The growing complexity of AI, and the increasing difficulty of assigning causality, make it important to act now in setting ground rules for liability and responsibility. Right now it is not hard to view AI and robots as glorified filing cabinets. But shortly that will no longer describe the problem. If we set the rules up front (which is actually only fair), then the standard of ethical AI can be incorporated into most AI development. An analogy is the Volcker rule for financial professionals: one needs to address the problem up front by setting the rules through incentives (bonus structures) so that potential malefactors will work to avoid problems before they arise, more effectively than any regulator could.


Like any new technology or practice, AI has the potential to go very wrong. The consequences can be large or small. Machine authorship of academic papers may not be the largest problem we face with AI. The moment to address the problem is now.


Rob Docters is Partner at Abbey Road, LLP, and leads their ethics practice. He is a former Senior Partner at Ernst & Young Canada; a former Lecturer in Management at the Yale University School of Management and the National University of Singapore; and led BCG’s Asia/Pacific pricing practice, based in Singapore. He is a member of the New York Bar. He was formerly Head of Market Innovation and Development at Bloomberg, LP, focused on the vertical markets, and previously a senior member of McKinsey & Company's Marketing and Sales practice. As Chief Strategist and Head of Pricing at LexisNexis, he was part of the senior team credited by the Wall Street Journal with the turn-around of that company. His operating experience includes running consulting practices at E&Y and Abbey, LLP, serving as shadow CMO of Telcel at its startup (now worth $35B), and acting as Market Manager of a $250M/year segment at Verizon.


He is the author of Ethics and Hidden Greed: Your Defense Against Unethical Strategies and Violations of Trust.
