Research Article
The Ethical Crisis of Artificial Intelligence in Light of the Linguistic Controversy over ChatGPT
Shuijian Zheng*
Master’s degree, School of Philosophy and Social Development, Huaqiao University, China
Received Date: January 05, 2025; Published Date: January 21, 2025
Abstract
The emergence and significant achievements of large language models such as ChatGPT mark the beginning of a new era in the development of artificial intelligence. The 2024 Nobel Prize in Physics was awarded to the AI-related scientists Hopfield and Hinton, and Hinton has repeatedly criticized the renowned linguist Chomsky, triggering widespread attention and discussion in the field of linguistics. Considering the current practical issues of ChatGPT applications and the development trajectory of AI ethics over the past decade or so, we can begin to glimpse the reasons for the changes in the ethical boundaries and frameworks of AI, and the ethical crises that lie beneath them. The core issue is not the role of the technology itself but the understanding and consciousness of artificial intelligence, both of which are tightly bound to “language”; this forms the research background of this paper. To deconstruct this core, philosophical research needs to free itself from Chomsky’s universality of language and turn to Wittgenstein’s diversity and contextuality of “language games”, which better matches the current needs of AI ethical development. Examining current technological development and its applications from an ethical perspective, and facing the rapid advance of the technology, it may no longer be possible to draw a clear ethical boundary; yet AI ethics has not lost its significance but is becoming even more critical. By summarizing and tracing the past development of AI, reasonably predicting the future direction of the technology, and elaborating the urgent shifts needed in current ethical research, we can avoid the alienation and potential loss of control in the development of artificial intelligence.
Introduction
The emergence of large language models represented by ChatGPT has impacted the old framework of social ethics in unprecedented ways, changing how we learn, make decisions, and interact. This marks an important ethical turning point in the third wave of artificial intelligence, characterized mainly by deep learning. As AI technology grows at an exponential rate and shows vigorous momentum, scholars from the technical and philosophical fields, represented by Hinton and Chomsky respectively, have engaged in intense debate. Through their arguments and responses, combined with the practical issues currently faced by ChatGPT applications and the development path of AI ethics over the past decade or so, we can begin to glimpse the reasons for the changes in the ethical boundaries and framework of AI, as well as the ethical crises hidden behind them.
In the long history of human technology, ethical issues have always accompanied it. People of different eras formulated corresponding technical laws and ethical norms based on the characteristics of the technology of their time. However, no technology has ever had as profound an impact on the ethical order of human society as today’s artificial intelligence and genetic engineering; humanity faces the dual risks of technological crisis and social crisis [1]. According to Bostrom’s “Vulnerable World Hypothesis”, if we cannot achieve breakthroughs in technologies such as artificial intelligence or genetic engineering and apply their results properly to solve social crises, such technological progress has the potential to create uncontrollable risks. The problem is that the existing social structure and ethical framework may not be sufficient to address these risks, and humanity may face the risk of extinction. The reason contemporary artificial intelligence technologies, represented by ChatGPT, still pose significant ethical crises while attracting such high social attention and academic emphasis lies in core issues at the technical, philosophical, and reflexive levels respectively.
Main Contents
Ethical Challenges from Artificial Intelligence
Artificial intelligence and genetic technology differ from traditional mechanical technology. Not only are they emerging biotechnologies that shift the research focus from traditional biological objects to the human body itself, emphasizing the inseparability of technology and humans, but they are also “non-natural” technologies. This “non-natural” aspect manifests in their ability to contradict the established laws of traditional natural development and the universal rules of biological evolution since Darwin. Such technology breaks free from the shackles of naturalism, developing and “evolving” according to human will; in the process, humans become the new “creator”, hence the meaning of “Artificial” in “Artificial Intelligence”. The reason this new type of technology poses significant ethical risks often lies in the “ignorance” of humans as that “creator”. Technically speaking, we are capable of training and educating artificial intelligence on preprocessed data full of personal biases, and even of letting data models train autonomously according to algorithms and ethical safety principles of our own design; yet we are unclear about the ethical implications this holds for the future. Although we can technically pursue a certain neutrality and objectivity by diversifying training data, as Byung-Chul Han observed, in a society lacking “the other” the information we can access or provide is always relatively one-sided and homogeneous, owing to the information cocoon, the high homogenization of the information individuals encounter, and the industry’s intellectual property and privacy protections. What is more dangerous is that rapid technological advancement will never stop eroding the current ethical framework.
We stand at a crossroads of development where technology has finally been freed from the heavy shackles of naturalism and can create an “artificial evolution” according to human subjective wishes; yet we are clueless about the potential consequences and further constrained by the black box of data and the astronomical amount of information, unable to fully trace our technological implementation path and so risking a reproducibility crisis. Today’s technological developers are in a state in which they seem able to create everything according to their subjective will, yet they are neither aware of the possible “consequences” nor able to account for the “causes”, owing to technological black boxes [2].
Today, the intrusion of artificial intelligence into the nearly “digitally fluid” human is almost ubiquitous. This “intrusion” is not like the cyborg, a development and extension of the body, nor is it the model of the “control society” of the pre-artificial-intelligence era described by Deleuze. Rather, it is a form of dependency and decision influence generated by new methods of information interaction and human-machine interaction. The direct manifestation of this change is the alteration in how we acquire information and construct knowledge systems; the more profound impact behind it is the change in the subjective status of both humans and artificial intelligence in the age of artificial intelligence. This change makes it impossible to define the new relationship between humans and artificial intelligence within the old subject-object relationship, which indirectly shifts the ethical boundaries of intelligence and requires us to reconstruct its ethical framework when facing the ethical risks of the ChatGPT era. Perhaps only when that day truly arrives will we realize the true meaning of the statement, “Our next generation will truly be the last generation of ‘pure’ humans.” With the birth of ChatGPT, the third wave of intelligence has entered an important technological turning point, and this process has already accelerated [3].
Ethical Challenges Brought by ChatGPT
The development of human science and technology is driven by philosophical reflection, moving step by step from one paradigm to another. Yet no technology has ever been like the new era of artificial intelligence: so in need of philosophy and yet so estranged from it. Emerging artificial intelligence subverts the rules both technologically and philosophically. The main reason for this subversion, and for the “lag” of philosophy, is the rapid development of artificial intelligence and a speed of self-evolution that no older technology possessed. A prominent issue is that the philosophy and ethics of artificial intelligence are disciplines tightly tied to the development and practice of the technology itself. In reality, however, the academic world lags far behind industry, and philosophy lags behind the academic world, so ethical reflection and construction based on older AI technologies is always “outdated news.” Yet artificial intelligence is a discipline that sorely needs its ethical framework to evolve in step with it if irreparable consequences are to be avoided. Its embedding in today’s society is deeper than many anticipate: even someone who uses no artificial intelligence technology will inevitably be swept up as part of a dataset sample. This is determined by the “data fluidity” characteristic of today’s digital era. It was not ChatGPT that made this embedding a reality, but its actual effect in open, complex scenarios has made people deeply aware of it. The philosophical development of artificial intelligence has proven to lag far behind the reality of technological development. However, if a development framework is formulated in advance according to existing ethical templates, many believe it will not only hinder the development of the technology but also sidestep the ethical issues that actually exist.
After all, no one can clearly state where this technology, with so many “black boxes” and high risks of “uncontrollability”, will ultimately go. The development of ethical norms, unlike technological development, cannot be planned. This is why some scholars believe that today we can no longer clarify the ethical boundaries of artificial intelligence, and why some early studies came to be regarded by the academic community as empty slogans or concepts without practical significance. In such a rapidly evolving technological field, one that attracts the attention of every political entity, a lack of consensus means binding oneself. So is it true, as some contemporary philosophers say, that artificial intelligence technology no longer needs philosophy and ethics? The author believes that, on the contrary, the emergence of large language models such as ChatGPT has given us another opportunity to understand artificial intelligence, because it presents the competition and game between artificial intelligence and traditional biological “humans” in tangible form for the first time. This competition and its impact bring not only technical challenges but also philosophical and ethical ones [5].
The Philosophical Turn brought by ChatGPT - The Debate between Chomsky and Hinton
The large language models represented by ChatGPT have broken free from the constraints that the philosophy of language and the philosophy of mind imposed on traditional approaches to AI philosophy. Criticisms similar to Hinton’s critique of Chomsky abound: “Chomsky misled three generations of linguists. Linguists were misled for several generations by someone called Chomsky who actually also got this prestigious medal.” Within the traditional Chomskyan “Universal Grammar” paradigm, language possesses some form of innateness, which makes it impossible for artificial intelligence to achieve “understanding.” In this domain Chomsky is akin to the pope, with his triad being:
a) innate structure,
b) external stimuli, and
c) natural laws.
This description might be overly simplistic. The author understands it as follows: firstly, the “brain” is the organ of “cognition,” encompassing both conscious and unconscious mental states, i.e., “This approach stands in sharp contrast to the study of the shaping and control of behavior that systematically avoided consideration of the states of the mind/brain that enter into behavior, and sought to establish direct relations between stimulus situations, contingencies of reinforcement, and behavior. This behaviorist approach has proven almost entirely barren, in my view, a fact that is not at all surprising because it refuses in principle to consider the major and essential component of all behavior, namely, the states of the mind/brain.”; secondly, “the ability to learn language is innate,” meaning “language is not learned”; lastly, “patterns” are key features of the cognitive structure of the brain, and the communication and interaction between various subsystems in the brain constitute the “brain.” For artificial intelligence, the first and third points essentially pass a death sentence on AI’s “understanding” or “conscious thinking,” seemingly telling us that neural networks will never achieve “understanding.” The argument is that data models process natural language differently from us; they do not possess a biological brain, and understanding and learning are changes in brain states. The second point emphasizes the sanctity of language. In Chomsky’s view, “language is fundamentally a biological system” [6].
In response, Hinton believes that the hypothesis that language cannot be learned is too extreme: “Language is obviously learned. And now these big neural networks learn language. And they didn’t need any innate structure. They just started from random weights and a lot of data.” He also points out that Chomsky’s theory focuses more on syntax than on semantics. In the face of Chomsky’s claim that AI’s language understanding is essentially pseudoscience, Hinton holds: “I think the school is wrong and I think it’s clear now that neural nets are much better at processing language than anything ever produced by the Chomsky School of Linguistics”. To respond to Semantic Field Theory and Componential Analysis, two cornerstone theories of modern semantics, Hinton showed with a small language model that neural networks can integrate the two, and pointed out that large language models can handle exceptions that fall outside the rules. At the same time, regarding Chomsky’s accusations of pseudoscience (made, no doubt, from the perspective of the Popperian scientific paradigm) and of statistical games, Hinton pointed out that the predictions of neural networks come from transforming vocabulary into features, and that it is the interaction between these features that produces AI’s “understanding”: the capability rests not on storing text and auto-completing it, but mainly on the model’s self-learning.
Hinton’s critique of Chomsky directly addresses two key arguments of Chomsky’s linguistics: first, he holds that the technical core of AI’s “understanding” is not the mere probabilistic game Chomsky describes but the ability of language models to self-learn; second, he counters Chomsky’s theory that language is unlearnable by demonstrating instances in which linguistic semantics can be digitized. For ordinary people unfamiliar with the technical realities, the allure of universal grammar theory is strong: big data models, as Chomsky predicted, appear to lack the ability to form grammar, logical principles, or even parameter systems. Viewed externally, this appears as a “lack of understanding”; viewed internally, as a lack of “conscious thinking”, and the “stochastic parrot” critique makes precisely this point. Hinton’s rebuttal, however, presents his own viewpoint from a technical standpoint and thereby substantially challenges Chomsky’s assumptions [7].
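Hinton’s claim that prediction comes from turning vocabulary into interacting features can be made concrete with a deliberately tiny sketch. The three-dimensional vectors below are hand-made for illustration only (real models learn hundreds of dimensions from data); the point is merely that words sharing features land close together under a similarity measure:

```python
import math

# Toy word vectors, hand-picked purely for illustration.
# Dimensions (an invented scheme): [royalty, maleness, fruitness]
vectors = {
    "king":  [0.9, 0.9, 0.0],
    "queen": [0.9, 0.1, 0.0],
    "apple": [0.0, 0.5, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Words with shared features end up close; unrelated words do not.
print(cosine(vectors["king"], vectors["queen"]))  # high (shared royalty)
print(cosine(vectors["king"], vectors["apple"]))  # low
```

What supports the king-queen relation here is similarity between feature vectors, not any stored sentence; this is, vastly simplified, the intuition behind Hinton’s “interaction between features”.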
ChatGPT’s Impact on Human Subjectivity and the Practical Dilemmas of Intelligent Ethics
Artificial intelligence differs from previous technologies in that none has ever had such a profound impact on human subjectivity. This impact is twofold. On one hand, there is the extensibility of human subjects, as with cyborgs: our material and spiritual lives are increasingly “embedded” in artificial intelligence. The most direct manifestation of this “embedding” is the growing reliance on artificial intelligence in daily life and work. This all-encompassing dependence on AI for information acquisition, knowledge construction, and even decision-making has an unprecedented impact on the old subjectivity of humans, an aspect currently overlooked in AI ethics research. In the existing research paradigm, most philosophers emphasize the ethical boundaries of the use and development of technology, as if the technology itself posed no ethical crisis so long as it is used solely in the service of humans. This addresses only one aspect of technology ethics. On the other hand, technology experts see only the disorderly competition and uncontrollable risks of artificial intelligence, leading them to issue collective open letters demanding a “pause” and “mitigation.” They suggest that “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities”. This approach is certainly commendable.
The leaders of the tech world voluntarily slow down their projects out of morality and conscience, pursuing safety, stability, and consistency. However, if they do not consider the reflexive nature of artificial intelligence, what they seek is merely a “London Naval Treaty” or “Washington Naval Treaty” for artificial intelligence, which will not only fail to address the ethical crises we face but may also delay the construction of a new ethical framework. Unlike nuclear science, where scientists could distance themselves from moral responsibility, the ethical responsibility of artificial intelligence is tightly bound to these technological entities and experts.
Both the philosophical and technological communities have not recognized or have tacitly accepted the ethical implications brought about by the reflexivity of artificial intelligence. In this “vacuum” ignored by both sides, the ethical impact caused by the reflexivity of artificial intelligence is spreading at a geometric rate. What truly disturbs the author is our increasing and irreversible trust in artificial intelligence. This irreversible dependence is precisely the nature of biological humans, characterized by “human nature’s greed and absolute dependence on technology”. Compared to the much-publicized and feared change in the subjective status of artificial intelligence, the change in the subjectivity of biological humans is more unsettling to the author. This is not some moral degradation of humans, but rather a sign or symptom that we’re accelerating towards an intelligent age and a post-human society [8].
The Transformation of ChatGPT’s Own Subjectivity and Future Trends
Another aspect of reflexivity is the transformation of the subjectivity of artificial intelligence itself, a concern in AI ethics fueled by science fiction novels and films. If the AI models of the previous two waves were intelligent models trained through reinforcement learning under single scenarios and definite rules, namely “expert systems”, then the cracking of language by large language models represented by ChatGPT truly makes it possible for artificial intelligence to move from “mimicking humans” to “becoming humans”. The new generation of data models no longer needs to passively accept large amounts of existing data provided by humans; it possesses the ability to surpass its algorithmic design, to self-generate, and to self-learn, with stronger data mining and pattern recognition capabilities, extracting deeper information from limited data. If the subject-object relationship changes during human-machine interaction, what is first needed is the ability to “understand”, or to think consciously, the premise of which is language. Discussions of the subject-object status of artificial intelligence usually stop here, because in Chomsky’s theory language is not learned, and is moreover something that neural networks and large language models are wholly incapable of understanding; the difference between how artificial intelligence and humans process language is structural and also tied to external stimuli. In the author’s view, Chomsky has deified the human capacity for language acquisition: “When linguists seek to develop a theory for why a given language works as it does (“Why are these - but not those - sentences considered grammatical?”), they are building consciously and laboriously an explicit version of the grammar that the child builds instinctively and with minimal exposure to information.
The child’s operating system is completely different from that of a machine learning program.” Although this “myth” rests on his extensive case studies in language acquisition, and although he responded to the doubts raised by the emergence of ChatGPT, stating that “It is absurd beyond discussion to believe that any light can be shed on these processes by LLMs, which scan astronomical amounts of data to find statistical regularities that allow fair prediction of the next likely word in a sequence based on the enormous corpus they analyze”, changes have already taken place. ChatGPT has demonstrated an ability to learn autonomously and to generate without limit when faced with complex and diverse situations, surpassing its algorithms. By contrast, even granting the biological mysteries of the human brain, human energy and the brain’s development have their limits: the development of the limited potential of the biological brain is arithmetic, while the development of the unlimited potential of artificial intelligence is geometric. What technology developers can see in large language models represented by ChatGPT is, above all, the potential of a new generation of artificial intelligence to learn spontaneously beyond the preset framework of its algorithms. The biological human brain may have inherent grammatical rules, but there is nothing mystical about them; the linguistic talent of humans is itself the result of hundreds of thousands of years of evolution. This leads to what the author believes is the greatest significance ChatGPT holds for us: not that it breaks through the Turing Test, but that it demonstrates the possibility of artificial intelligence evolving toward “understanding” and “conscious thinking” by deconstructing “language”. This possibility is created by its autonomous learning, and this creativity is what truly touches the current ethical boundaries of artificial intelligence.
It implies a shift in the subjectivity of future artificial intelligence. Whether possessing this “understanding” will in turn conflict with the subjectivity of biological humans, and how such conflict can be avoided, are the questions that artificial intelligence ethics needs to study. This is also the significance of the existence of artificial intelligence ethics [9].
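Chomsky’s description of LLMs as engines that “find statistical regularities that allow fair prediction of the next likely word” can be illustrated, in its crudest possible form, by a bigram counter. The corpus below is invented for the example, and real models replace raw word counts with learned features and attention; the sketch only shows what “predicting the next word from regularities” means at its simplest:

```python
from collections import Counter, defaultdict

# A toy corpus, invented for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: the rawest "statistical regularity".
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict("the"))  # "cat": it follows "the" twice, vs once for "mat"/"fish"
```

An actual LLM differs from this in almost every technical respect, but the contrast makes Chomsky’s point legible: nothing here explains why a sentence is grammatical; the model only tallies what tends to come next.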
Chomsky firmly believes that human universal grammar and artificial intelligence remain forever distinct. Human universal grammar is a complex system expressed by an innate, genetically endowed operating system, and it is this language system that endows humans with the ability to generate and understand. “The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.” The author believes that although current intelligent models still lack understanding in Chomsky’s sense, and can only describe and predict without proposing hypotheses or creating explanations, the rapid development and demonstrated potential of generative artificial intelligence show that Chomsky’s viewpoint needs modification. Even if, in Chomsky’s view, such AI’s superficial and questionable predictions are the pseudo-science of a probability game (the implication being that scientists seek empirical confirmation, while current AI has neither “understanding” nor “experience”), facing an intelligent system capable of constant self-learning, we should not deny the potential it embodies. A question worth asking is whether an expression that is sufficiently realistic and highly natural is still a so-called “fake expression”? The encoding of high-level programming languages is essentially a mathematical expression of human language (a context-free, or Type-2, grammar in Chomsky’s hierarchy, in which the Type-0 grammar, completely unrestricted, corresponds to the Turing machine).
Coincidentally, it was Chomsky’s own linguistic theory that broke the hegemony of “behaviorism”, laid the foundation for the compilation of today’s high-level languages, promoted the development of symbolic artificial intelligence, and made the compilation of computer languages theoretically possible. His context-free (Type-2) grammars helped shift computers from binary programming to high-level programming languages, laying the theoretical groundwork for today’s rich variety of computer languages. This contribution to the development of computers is comparable to Pope Sylvester II’s advocacy of replacing Roman numerals with Arabic numerals; without such a transformation, artificial intelligence could never have developed to its present state in less than a hundred years. This should be a consensus among anyone who has learned assembly language. The author believes that the debate between Hinton and Chomsky contains the root cause of the current problems in AI ethics research, namely a shift in subjectivity. The reason the old ethical framework and its boundaries “fail” is that the old relationship between subjects and objects has changed. Whether “quasi-subjects” or “moral communities”, the emergence of such new concepts precisely reflects the collapse of the old subjectivity relationship in the post-ChatGPT era [10].
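The debt that high-level programming languages owe to Chomsky’s hierarchy can be made concrete with a minimal sketch: a recursive-descent parser for a tiny context-free (Type-2) grammar of arithmetic expressions, the same formal family that programming-language parsers build on. The grammar and code are illustrative only, not any particular compiler’s implementation:

```python
# A tiny context-free grammar, in the Type-2 family of Chomsky's hierarchy:
#   Expr -> Term ('+' Term)*
#   Term -> digit | '(' Expr ')'
# The nested parentheses require recursion: a finite-state (Type-3) machine
# could not recognize this language, which is exactly why compilers need
# context-free grammars.

def parse(tokens):
    pos = 0

    def expr():
        nonlocal pos
        term()
        while pos < len(tokens) and tokens[pos] == "+":
            pos += 1
            term()

    def term():
        nonlocal pos
        if pos < len(tokens) and tokens[pos].isdigit():
            pos += 1
        elif pos < len(tokens) and tokens[pos] == "(":
            pos += 1
            expr()
            if pos >= len(tokens) or tokens[pos] != ")":
                raise SyntaxError("expected ')'")
            pos += 1
        else:
            raise SyntaxError("expected digit or '('")

    expr()
    return pos == len(tokens)  # accepted only if all input was consumed

print(parse(list("1+(2+3)")))  # True: the string is grammatical
```

Every high-level language parser is, at heart, an elaboration of this pattern: rules naming rules, recursively, which is precisely the structure Chomsky’s formal work made available to computer science.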
The Ethical Research Turn brought by ChatGPT
Past research in the philosophy of artificial intelligence has been constrained by a paradigm in which the pursuit of the origin of “intelligence” always traces back, inadvertently, to a model of artificial consciousness. Whether through Descartes’s mind-body dualism, the modern triad theory, or Dennett’s physicalism and Multiple Drafts Model, the discussion moves from consciousness to its external manifestation, namely language; starting from Chomsky’s structural and universal account of language, it then shifts toward the philosophy of mind and of language, overlooking the practical significance of artificial intelligence in “multiple contexts.” This research approach faces a shift in today’s ChatGPT era, where the ethical crisis of artificial intelligence stems mainly from practical applications in complex scenarios. To address the immediate ethical issues faced by artificial intelligence and to re-endow AI ethics with meaning, I believe one possible direction is to return to Wittgenstein’s model of the “language game.” Instead of focusing excessively on the universality and structuralism of linguistic interpretation, deconstructing the “universal grammar” of language (through its commonality) so as to “understand”, we should admit the diversity of language use, emphasizing that language is rooted in social practice and life. Unlike Chomsky, Wittgenstein holds that language is not merely a tool for description and prediction; it is a dynamic practical activity. The “language game” emphasizes the diversity and multifunctionality of language, where different uses have their own unique rules. This aligns with the reality of the ethical issues arising from the practical applications of large language models. Lastly, and importantly, the conventional nature of language rules means that each language has its own unique, non-fixed, non-universal, agreed-upon characteristics.
Investigating whether these conventional linguistic rules influence the output of real-world artificial intelligence can help answer a practical question: how much is the output of an intelligent model trained on a single-language dataset influenced when it faces questions posed in other languages, and to what extent does this influence pose ethical problems? In Chomsky’s view, it is precisely because the program cannot deconstruct the rules of English grammar that the predictions of artificial intelligence remain forever superficial and dubious.
Wittgenstein’s “language games” and Chomsky’s “universal grammar” models are not inherently contradictory, but rather focus on two aspects of the language model. The differences and similarities between the two can be sufficiently illustrated by an example commonly cited by both Chomsky and Wittgenstein. Chomsky describes the child’s point of view when learning a language: “A child learning a language is unconsciously, automatically, and rapidly developing grammar from minimal data, which is an extremely complex system composed of logical principles and parameters. This grammar can be understood as an expression of an innate, genetically installed ‘operating system’ that endows humans with the ability to generate complex sentences and long trains of thought.” Wittgenstein, on the other hand, observed from the process of children learning language that the essence and rules of language understanding come from imitation and interaction with others. When a child repeatedly hears “apple”, he creates a reference for “apple” in his mind, which is essentially a “language game” based on agreed-upon rules in life. Wittgenstein emphasizes more that the meaning of language comes from its use, no longer mechanically viewing language as a tool to describe reality. Like Chomsky, he also believes that language can be a possible path to “understanding”, but the difference is that Wittgenstein places more emphasis on the diversity of language as an activity. Its essence is determined by the specific context it is in, and it is engaged in dynamic activity. Therefore, excessively pursuing the essence of language is not very meaningful. Chomsky believes that this diversity is just a superficial manifestation on top of the general rules of language. Wittgenstein found that language is rooted in social practice and is essentially a reflection of social interaction. 
Paying attention to the social function of language is a hallmark of Wittgenstein’s shift from early logical positivism to a later phase, while Chomsky separates his language research from social reality, overly focusing on the internal mental mechanisms of language and universal human innate characteristics. The current need for artificial intelligence requires us to pay more attention to diversity and practicality. Considering the practical needs of the development of AI ethics and the potential prospects of a future de-anthropocentrism, such a shift is beneficial and can give ethical research more practical significance [11].
Although I am critical of Chomsky’s exaggerated view of the language system as “innate”, I still agree with his insightful observations on ChatGPT in the realm of AI ethics. Chomsky does not deny the engineering value of big-data-driven deep learning: “deep learning approaches have been very useful in, uh, protein folding, for example, they’ve really advanced understanding there, it’s a good engineering technique, that is, I mean, I’m not at all critical of engineering.” Overall, Chomsky does not oppose the technological advances brought about by large language models, but he believes we overstate the role of deep learning and artificial intelligence. In his view, this overly optimistic and dependent attitude toward AI technology is dangerous and will ultimately degrade our moral standards: “Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. - machine learning - will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.” Regarding the future of artificial intelligence, he boldly predicts and warns: “ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences).” Whether or not it was a response to Hinton, Chomsky’s ethical perspective remains profoundly relevant to our time.
Conclusion
Human language acquisition involves both innate and acquired factors, whereas large language models represented by ChatGPT rely on massive data and unsupervised learning; the two learn language in different ways. This raises an issue, however. As ChatGPT’s imitation of humans becomes increasingly convincing, a question worth asking, as mentioned above, is whether an expression that is sufficiently realistic and highly natural is still a so-called “fake expression”. In my view, given the actual impact of ChatGPT and the potential ethical risks it poses, this is no longer a purely linguistic question but an ethically significant one. ChatGPT has demonstrated the feasibility of its underlying theory through its actual performance, which is the source of Hinton’s technical confidence. The point of contention between the two is not whether ChatGPT can reflect the unique mechanism of human language learning. ChatGPT can be called a victory for language technology based on large-scale corpus statistics, but that victory cannot be used to deny the rationality of Chomsky’s innatist theory of language.
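The contrast between statistical learning and innate grammar can be made concrete with a deliberately crude sketch of corpus-statistics language modeling: a bigram predictor that “learns” word order purely from co-occurrence counts, with no grammatical rules built in. The toy corpus and function names below are invented for illustration; real large language models use neural networks trained on vastly larger data, but the statistical spirit is the same.

```python
from collections import Counter, defaultdict

# Toy corpus, tokenized into words (invented for illustration).
corpus = ("the child eats the apple . the child sees the apple . "
          "the child eats the bread .").split()

# Count, for each word, how often each successor word follows it.
# No grammar is supplied: the model sees only raw co-occurrence.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))    # → child
print(predict_next("child"))  # → eats
```

Even this trivial model reproduces plausible word order from frequency alone, which hints at why large-scale corpus statistics can imitate linguistic behavior without settling whether anything like universal grammar underlies human learning.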
Regarding the debate between Chomsky and Hinton discussed above, I believe the significant difference in their views stems from their technical backgrounds and their differing perspectives on the new generation of artificial intelligence represented by ChatGPT: Chomsky is a symbolist, while Hinton is a connectionist. That Chomsky seems to be at a disadvantage in the debate is largely a matter of current technological reality: the massive data of the Internet era and the powerful computing capabilities behind large language models such as ChatGPT provide data-driven support for connectionism, while symbolism’s answer to the roots of knowledge remains at the stage of self-evident assumptions. Nevertheless, Hinton’s criticism of Chomsky is undoubtedly too harsh. The universal grammar Chomsky proposed still has strong practical significance for the study of natural language, and his ethical questions about ChatGPT are real issues we need to confront and think through over the long term. Chomsky set an ethical “prison” before Hinton’s view of intelligence, and Hinton has not offered a systematic theoretical response on ethics. At this point their dispute ceased to be a purely metaphysical and epistemological issue within linguistics and became genuinely related to the dimension of intelligent ethics. Many people treat their debate as a merely linguistic issue. It is true that linguistics, faced with this technical predicament, needs to expand its scope to machine language, moving from the description of phenomena within a single language to higher-level theoretical explanation across languages and through history, in order to meet the current needs of technological development. I, however, pay more attention to the new directions and ideas for intelligent ethics research brought about by the shift in linguistics that ChatGPT has induced, as manifested in this debate.
Chomsky believes the era in which artificial intelligence surpasses human intelligence has not yet dawned, but I think its signs have already begun to emerge in the open-endedness, generativity, and even creativity displayed by large language models. To address the challenges and crises that ChatGPT poses to traditional ethical frameworks, research in AI ethics needs to shift toward more practical responses to the crises of situation and subjectivity that have already arrived. Technological advances make it impossible to redefine fixed boundaries for AI ethics. The only way to fundamentally prevent the risk of AI spiraling out of control is to focus on the humanization of AI: to direct research toward “language games” in complex situations, to take humanism rather than anthropocentrism as the ethical foundation, and to attend to the regulation of technology and its impact on human thinking and life.
Acknowledgement
None
Conflict of Interest
No conflict of interest.
References
- Bostrom N (2019) The Vulnerable World Hypothesis. Global Policy 10: 455-476.
- Tang Daixing (2023) From AlphaGo to ChatGPT: Where are the Ethical Boundaries of Artificial Intelligence? Philosophical Analysis 14(6): 120-138, 193.
- Geoffrey Dache (1994) Post-Human. In: Xu Mingqing (Ed.), Art World, Issue 3.
- Geoffrey Hinton (2024) Ulysses Prize. Contemporary Linguistics 26(4): 489-495.
- Zhang Jialong (2001) Critique of Wittgenstein's Anti-Essentialism Program: "Language Game" Theory and "Family Resemblance" Theory. Philosophical Research (7): 47-53, 60-81.
- Chomsky N (2023) ChatGPT and Human Intelligence: Noam Chomsky Responds to Critics. Ramin Mirfakhraie (Ed.), Interview.
- Chomsky N (1999) On the Nature, Use, and Acquisition of Language. In: Ritchie WC, Bhatia TK (Eds.), Handbook of Child Language Acquisition, Academic Press, San Diego, CA, pp. 33-54.
- Yang C, Crain S, Berwick RC, Chomsky N, Bolhuis JJ (2017) The growth of language: Universal Grammar, experience, and principles of computation. Neuroscience & Biobehavioral Reviews 81: 103-119.
- Future of Life Institute (2023) Pause Giant AI Experiments: An Open Letter.
- Chomsky N (2023) Noam Chomsky: The False Promise of ChatGPT. The New York Times.
- Kemp G (2013) What is This Thing Called Philosophy of Language? New York: Routledge.
Shuijian Zheng*. The Ethical Crisis of Artificial Intelligence in light of the Linguistic Controversy over ChatGPT. Iris On J of Arts & Soc Sci. 2(3): 2025. IOJASS.MS.ID.000537
Keywords: Artificial Intelligence; Linguistics; ChatGPT; Language games; Ethical challenges
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.