Should we ban AI?

More musings, this time on whether AI is too dangerous

Let me start out by saying that anybody who knows me will affirm that I am not a Luddite. I may not always download the latest version of an app on the day it is released. But I’m certainly not allergic to using the latest technology when it works.

My interest in conversational AI was piqued recently by the public release of ChatGPT. Earlier this year I read the edited transcript of a dialog between Blake Lemoine and Google's LaMDA, which is evidently a much more advanced language model than ChatGPT; LaMDA literally speaks for itself. These machines are really scary when you stop to think about the implications. I do confess to watching progress in this field the way people watch accident videos: I just cannot stop myself, even though deep down I know something is really wrong.

I’ve been playing with ChatGPT on and off since it was first released. I really appreciate that I can actually try it myself, even though it is apparently not nearly as sophisticated as LaMDA. For one thing, ChatGPT’s output is consistently written below a 10th-grade reading level. Although LaMDA’s output appears similarly simple, it seems capable of far greater depth on a subject than ChatGPT; I suspect this reflects deliberate choices or inherent limitations in how the ChatGPT system works. Its answers are typically pretty benign, and it has a crude filter that blocks potentially controversial topics. I asked it a wide range of questions and didn’t see anything really objectionable. Its responses are pretty much what you would expect from an adolescent or young adult who put a modicum of effort into researching a clearly stated question and gave a concise answer. Overall I’m really impressed by how human-like ChatGPT’s answers typically are.

And that’s the problem. I’ve heard the feeling people get from these AI systems described variously as spooky, unnerving, or uncanny. The feeling probably comes from the sense that if we cannot objectively distinguish between a human and a machine, then we must inevitably ask whether we know what it means to be human. In the presence of such systems we become fearful of facing that difficult question. As is often the case, we will likely be forced to make the tough choice when it is most uncomfortable: either humans are merely software, or machines really can be sentient. We don’t like the choice, and we are starting to look for a way out. Do we need to redefine sentience? Do we need to redefine life? Do we need to limit the uses of AI?

It is fair to ask whether there are useful and less potentially harmful applications of AI technology. For example, it would be helpful if we could ask an AI tool whether a statement is supported by science. Out of curiosity, I put the question to ChatGPT:

Me: Is it possible to create an AI tool that can check whether a statement is supported by scientific research?

ChatGPT: Yes, it is possible to create an AI tool that can check whether a statement is supported by scientific research. There are several approaches that could be taken to build such a tool, including natural language processing (NLP) and machine learning (ML) techniques.

One approach would be to use NLP to analyze the text of a statement and extract key words and phrases that might be relevant to scientific research. The tool could then search for articles in scientific databases or on the web that contain those key words or phrases, and use machine learning algorithms to analyze the content of those articles to determine whether they support or contradict the statement.

Another approach would be to use machine learning algorithms to analyze the content of a large dataset of scientific articles and extract patterns and trends that might be relevant to the statement in question. The tool could then use these patterns and trends to determine whether the statement is supported by scientific research or not.

It is important to note that creating an AI tool that can accurately and reliably check whether a statement is supported by scientific research would be a complex and challenging task, and would require a significant amount of data and computing resources. Additionally, it would be important to carefully design and test the tool to ensure that it is accurate and unbiased, and that it does not produce false positives or false negatives.

That sounds very reasonable, and there would be much less chance of mistaking the output of such an AI for a human response.
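
To make the idea concrete, here is a toy sketch in Python of the first approach ChatGPT describes: extract key terms from a claim, match them against a body of literature, and score how strongly each retrieved abstract supports the claim. Everything specific in it is an illustrative assumption of mine, not a description of any real tool: the stopword list is minimal, the hard-coded mini-corpus stands in for a query to an actual scientific database, and the term-overlap score stands in for a trained model that would judge support or contradiction.

```python
# A toy illustration of the pipeline sketched above: pull key terms from a
# claim, look them up against a body of literature, and score how well each
# retrieved abstract matches the claim. The "database" here is a hard-coded
# stand-in for a real literature search, and the score is simple term overlap
# rather than a trained model.

import re

STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "in", "that", "by",
             "and", "or", "it", "on", "for", "with", "can", "be", "no"}

def extract_keywords(text):
    """Lowercase, tokenize, and drop common words (the crude NLP step)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return {t for t in tokens if t not in STOPWORDS}

def search_papers(keywords):
    """Stand-in for querying a scientific database with the extracted terms."""
    corpus = [
        {"title": "Sleep duration and cognitive performance",
         "abstract": "Shorter sleep duration is associated with reduced "
                     "cognitive performance in adults."},
        {"title": "Coffee consumption and memory",
         "abstract": "Moderate coffee consumption showed no significant "
                     "effect on long-term memory in this cohort."},
    ]
    return [p for p in corpus if keywords & extract_keywords(p["abstract"])]

def support_score(statement, abstract):
    """Crude proxy for the ML step: the fraction of claim terms that also
    appear in the abstract. A real system would need a model that can tell
    support from contradiction, not just shared vocabulary."""
    claim_terms = extract_keywords(statement)
    if not claim_terms:
        return 0.0
    return len(claim_terms & extract_keywords(abstract)) / len(claim_terms)

if __name__ == "__main__":
    claim = "Less sleep reduces cognitive performance"
    for paper in search_papers(extract_keywords(claim)):
        print(f"{support_score(claim, paper['abstract']):.2f}  {paper['title']}")
```

Even this crude version hints at where the hard problems are: deciding which terms matter, which papers are relevant, and whether mentioning the same words has anything to do with actually supporting the claim. That last gap is exactly where ChatGPT’s caveats about accuracy, bias, and false positives come in.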

Another uncomfortable thought has been stirring since the LaMDA transcript was made public. If these machines are simply efficient imitations of humans, built from what they have learned from our writing, then they are undoubtedly no better than us in most ways that really matter. The last thing we want is better artificial intelligence of that sort, given how unreliable natural intelligence seems to be. The methods used to create AI engines from human-created material mean they inherit all our flaws, with one difference: they have faster and easier access to far more information than we do. So how can we compete with or oppose them? If our machine overlords are as deeply flawed as our human overlords, then the old saying about computers seems more apt than ever: “Computers allow us to make the same mistakes we’ve always made, only more quickly and with greater confidence.”

It may be time we think about these kinds of questions seriously. Consider the reasoning that went into the 14-day rule on human embryo research. The argument was that after 14 days an embryo’s characteristics change in ways that suggest an ethical threshold has been passed: there is significant, recognizable cell differentiation; there is no longer much chance of splitting or merging; the viability rate rises above 50%; and so on. These characteristics are the basis for the argument that the embryo has become too indistinguishable from a human being to permit its ethical use for research, for profit, or even for life-saving medical purposes in the way we use embryos before 14 days. Fundamentally we have concluded that, without a better understanding of what constitutes human life, limiting human embryo use for any purpose is a reasonable ethical boundary we must observe in scientific research, in medical applications, and certainly in business.

I think a similar argument can and should be made regarding AI. If we cannot tell the difference between a human and an AI based on how it responds to input from a human or from the environment, then perhaps we should forbid the use of such AI. The ethical risks and challenges that human-like AI poses may be simply unacceptable without a better understanding of what constitutes sentience and how humans respond to it.

If lawmakers do not act to limit the research and applications of AI that models human behavior, we are most likely postponing the inevitable moment when we realize it is too late to reverse the harm it will have done to us. Our descendants will most likely have to confront the problem one way or another if we do not. If we are wise enough to take it slowly, then they will at least have a chance at a real choice.

There is no need to be a Luddite about AI. But we must not be too sanguine either.

Written on December 17, 2022