On the Value and Lack of Values of Artificial Intelligence

PDF version with page numbers and footnotes (not endnotes): On the Value and Lack of Values of Artificial Intelligence

On the Value and Lack of Values of Artificial Intelligence by Douglas R McGaughey is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

On the Value and Lack of Values of Artificial Intelligence[1]

Artificial Intelligence (AI) is developing by leaps and bounds, and it is accomplishing tasks once thought to be uniquely the domain of human intelligence. Some are raising concerns about the implications of this technology not with respect to what it can accomplish but with respect to what it means for understanding humanity. No less a public figure than Henry A. Kissinger has sounded an alarm in The Atlantic with “How the Enlightenment Ends”: “Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence.”[2]

One can respond to this “crisis” by assuming that AI exhausts what constitutes “reason” and then desperately seeking a niche for humanity, or one can question whether what is taken to be “Enlightenment reason” has so truncated the meaning of reason that reason is left impotent in the face of the power of AI.

The irony here is that Kissinger’s reflections demonstrate what happens when a reductionist, socially constructed understanding of reason is allowed to dominate the discussion of “intelligence.” The result is that the meaning of reason is readily reduced to pragmatic, instrumental reason informed by “analytical” critique of merely empirical data, to the exclusion of what makes humanity a “rational” being in the first place – because this reductionist version is the kind of reason that we have come to elevate above everything else when it comes to serving our interests. In short, a misanthropic understanding of reason is mistakenly viewed as a threat to human reason.

What follows proposes that we question the assumptions beneath the prevalent, anti-Enlightenment rhetoric concerning rationality, not to undermine the clearly productive benefits of AI but to retrieve a far broader and more beneficial understanding of reason. This kind of “critique” (not merely analytical criticism) allows the identification of limits to AI that, at least potentially, can illuminate a pathway through the thick and murky undergrowth of uncertainty and the fear that AI will one day replace humanity’s role in the hierarchy of being.

In other words, in the face of the purported collapse of Enlightenment Reason and its impotence over against the developments of AI, one can either join the dirge mourning the death of reason as an arrogant human triumphalism, or one can question whether what today is taken to be Enlightenment Reason adequately grasps the discussion of reason at the end of the 18th century. Succinctly, what “is” is not necessarily what “ought to be,” and the very exercise of investigating why “what is” is not necessarily what “ought to be” demonstrates the power of language both to confound and confuse as well as to illuminate and inform – even to empower humanity individually and corporately.