In 1950, the Religious of the Sacred Heart of Mary—a small consecrated order founded in 1849 by Fr. Jean Gailhac (1802–90), who had previously run a shelter for women devastated by prostitution—established a two-year women’s college just outside Washington, D.C. Like other colleges set up by the RSHM, this one was named after Mary, the mother of Jesus. Thus it was called “Marymount University.” In 1973, it became a four-year institution, and, a few years later, the school began to add graduate programs. By the late 1980s, Marymount was fully coeducational and boasted an active Division III sports program, whose teams competed under the name “Saints.” This theologically tinged moniker was a fitting choice. Not only was the university a part of the RSHM’s network of “Marymount colleges,” but the school motto, “Direct Us by Thy Light” (Tua Luce Dirige), had biblical associations (Isa. 60:19) and was once adopted by Cardinal János Csernoch, who served as primate of Hungary from 1912 until his death in 1927. Marymount’s founding, then, was about more than scholarship. Yes, the university was to provide a viable education, but this task was secondary to its stated purpose—to promote sainthood, which is not a mere human work but a gift of God’s gracious illumination.
Times change, however. Early in February 2023, Irma Becerra-Fernandez, the seventh president of Marymount University, presented a plan to the university’s Board of Trustees, recommending that the school eliminate bachelor’s degrees in theology, philosophy, mathematics, art, history, sociology, English, economics, and secondary education. Master’s programs in English and the Humanities were placed on the chopping block as well. Already known for embracing market-driven initiatives, Pres. Becerra-Fernandez framed Marymount’s plan in terms of supply and demand:
Universities that will thrive and prosper in the future are those that innovate and focus on what distinguishes them from their competition. Digital disruption, economic conditions, and the explosion of low-cost, online course providers have put pressure on universities to reinvent their institutions in order to compete. Students have more choices than ever for where to earn a college degree, and MU must respond wisely to the demand.
Pres. Becerra-Fernandez’s proposal received backlash in certain quarters. And yet, on February 24, Marymount’s Board of Trustees unanimously voted (20-0) to approve it. The thinking, it seems, is that students will still be exposed to, say, the metaphysics of Aristotle and the poetry of Shakespeare via the university’s “core curriculum.” At the same time, however, any expectation that these subjects can inspire years of in-depth study must be abandoned. In fact, the very nourishment of such an expectation is improvident. The task of a university, says Pres. Becerra-Fernandez, is to grow, and this can only be accomplished by dispensing what society seems to desire. “[It] would be irresponsible to sustain majors [and] programs with consistently low enrollment, low graduation rates, and lack of potential for growth,” she explains.
Hence, in the span of about 75 years, Marymount has essentially inverted its telos: once established to edify students, it now aims to accommodate them. “The customer is always right,” as the saying goes, and Marymount is simply instantiating this logic. There is, in truth, a certain honesty to Pres. Becerra-Fernandez’s admission that Marymount is no longer driven by the cultivation of virtue, whether intellectual or spiritual. The goal of utility is both easier and more enticing. It is in this regard that Marymount’s decision dovetails with one of the other major stories in higher ed—the burgeoning popularity of ChatGPT on college campuses.
For those who don’t know, ChatGPT is an artificial-intelligence chatbot. In a sense, then, ChatGPT is simply software designed to hold conversations online, not unlike those annoying customer-service “assistants” that pop up on various retail websites. However, it takes this form of functionality to a higher level. As a “generative pre-trained transformer” (GPT), which “learns” by transferring information from previous tasks to later ones, ChatGPT is capable of approximating more advanced human language and thought. Thus it can be used to mimic conversation, write essays, answer test questions, and play games. Unsurprisingly, then, ChatGPT has already found its way into education. Students are using ChatGPT to carry out writing assignments and, in the process, sparking complex debates about critical thinking and plagiarism. At least one professor has asked students who utilize ChatGPT to rewrite and resubmit their written work, but it’s possible that such measures will not hold up. In a recent article in Wired, an administrator at Brown University concedes that students who use ChatGPT are not exactly plagiarizing: “If [plagiarism] is stealing from a person, then I don’t know that we have a person who is being stolen from.” Moreover, students are already arguing that ChatGPT is principally a matter of expediency. As one sophomore at Brown puts it:
Calling the use of ChatGPT to pull reliable sources from the internet “cheating” is absurd. It’s like saying using the internet to conduct research is unethical. To me, ChatGPT is the research equivalent of [typing assistant] Grammarly. I use it out of practicality and that’s really all.
While there is a plethora of unexamined assumptions in this student’s assertion—for example, what exactly constitutes a “reliable” internet source?—it is hard to blame him for viewing the situation in these terms. After all, his rationale for using ChatGPT corresponds with that of the decision-makers at universities such as Marymount, who also view education in terms of efficiency and utility. How can ChatGPT be deemed “unethical,” he rightly asks, when education itself is a matter of “practicality”?
Perhaps this is why some institutions are “leaning in” to the use of ChatGPT on campus. After all, so the thinking goes, “nearly 30% of U.S. professionals say they have already used AI in their work.” This rate will no doubt increase in the future. According to Mihir Shukla, CEO of Automation Anywhere and an “agenda contributor” to the World Economic Forum (WEF), ChatGPT is but an early example of a coming technological revolution; he notes that up to 70% “of all the work we do in front of the computer could be automated.” At Boston University, a course in Computing and Data Sciences sought to develop a “blueprint” for AI in the classroom—a task designed to help students think constructively about ChatGPT and its ilk. As the course’s instructor explains, “[Students] need to figure out how to master these tools and integrate [them] into our toolkit.” At Texas Woman’s University, faculty have organized workshops about integrating ChatGPT into assignments. According to one English professor, ChatGPT is best seen as a facilitator, rather than as a hindrance, to education: “I could see it becoming a new feature of our published writing. We could cite it. It’s not going to replace all writing. It could replace the boilerplate, the sort of clear-your-throat writing that we don’t like doing in the first place.” Such a perspective may seem cutting-edge, but no less a mainstream authority than the New York Times has endorsed it. As Kevin Roose, technology columnist for the Times, argues:
Creating outlines is just one of the many ways that ChatGPT could be used in class. It could write personalized lesson plans for each student (“explain Newton’s laws of motion to a visual-spatial learner”) and generate ideas for classroom activities (“write a script for a ‘Friends’ episode that takes place at the Constitutional Convention”). It could serve as an after-hours tutor (“explain the Doppler effect, using language an eighth grader could understand”) or a debate sparring partner (“convince me that animal testing should be banned”). It could be used as a starting point for in-class exercises, or a tool for English language learners to improve their basic writing skills.
When a high-school teacher recently asked him, “Am I even necessary now?” Roose was unmoved. Sure, he’s “sympathetic” to such concerns, but they’re ultimately beside the point. Getting rid of ChatGPT, he insists, is simply “not going to work.”
In the end, then, Roose’s technological determinism mirrors that of Irma Becerra-Fernandez. That the humanities belong to an ancient tradition of Western education; that AI bots such as ChatGPT often provide false information; that professors and teachers may be put out of work—these and other related problems are not denied. However, they are deemed irrelevant in the face of an inexorable techno-calculus. One might almost conclude that this calculative mindset is divine, so inevitable does it seem. Yet, in actual fact, it is human, all too human—an outcome of a particular mode of Western life and thought. Consider, for example, that ChatGPT was developed by a California-based company called OpenAI. It started as a nonprofit enterprise, but an influx of venture capital transformed it into a multibillion-dollar, for-profit corporation. Many of the world’s biggest and wealthiest investors—Jeff Bezos, Elon Musk, Peter Thiel—backed OpenAI from the beginning. Moreover, in January 2023, Microsoft expanded its previous relationship with OpenAI, announcing “a multiyear, multibillion dollar investment to accelerate AI breakthroughs to ensure these benefits are broadly shared with the world.” One of the first manifestations of this deepened partnership was the launch of an upgraded version of Bing—Microsoft’s web search engine. The AI-powered Bing will cost users nothing extra, but, needless to say, it will feature an abundance of advertisements. No wonder that OpenAI itself is now valued at roughly 30 billion dollars and is expected to hit 1 billion dollars in annual revenue by 2024.
AI, then, is good for business, precisely because businesses (and consumers) prize efficiency and utility. The new Bing, for example, has been promoted as “faster, more accurate and more capable.” Similarly, Microsoft boasts, AI makes sure that “even basic search queries are more accurate and more relevant.” Of course, these goals, in and of themselves, are not “bad.” Yet, as they have crept into every facet of Western life, other ways of being-in-the-world have been lost. Marymount’s dissolution of the Humanities is just an obvious example of this trend—one, moreover, that parallels the rationale for embracing ChatGPT in the classroom. “There is no growth potential in these fields.” “Writing papers needs to be streamlined.” “AI is irresistible.” Since so many areas of contemporary life are now considered “post-truth”—that is to say, reducible to individual preference and emotional appeal—market logic remains the last verity. If we can’t agree on topics as diverse as the validity of Christian doctrine or the origin of COVID-19, then we can at least unite over the need to make money. Why bother with anything else?
Is this, then, the “end of the humanities”? Marymount seems to think so. The sudden rise of ChatGPT suggests the same. I myself am ambivalent. On the one hand, I’m not confident that the Humanities (broadly speaking) will persist as a major university subject. There is simply no reason to expect most universities, sensitive as they are to financial concerns, to advocate for programs that do not prioritize market relevance. Lip service, of course, will be paid to the importance of a “well-rounded education.” But students are adept at reading between the lines; they realize that Humanities courses are essentially prerequisites for taking a degree in a more “practical” subject. On the other hand, I’m certain that the questions posed in and through the Humanities are perennial. Growth curves and AI chatbots can obscure, but never annihilate, human creativity, meaning, and hope. The study of history, literature, philosophy, and theology will always beckon human beings, even if institutional support continues to diminish.
Still, one must choose a response. This is probably the hardest part. In A Literary Review (En literair Anmeldelse, 1846), Søren Kierkegaard envisioned a coming time in which persons would be confronted with a terrible decision: either conform to the calculus of mass society or “leap” into a new life dedicated to the service of God and neighbor. Kierkegaard viewed this choice as religious in nature, but his insight is applicable to education as well. As the “leveling” of academic subjects to a utilitarian arithmetic develops apace, people will have to muster the courage to be different. But in what sense? Kierkegaard posited that, in “the present age” (Nutiden), true religiousness will necessitate a willingness to be “unrecognizable” (ukjendelige). With this in mind, it’s fair to wonder if the dedicated study of the Humanities today requires something similar—a daring will to be useless.
The death of the humanities is a tragedy of major proportions. Hollowness is beating its chest in victory.
“This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.”
Well done, Chris. So much I could pick up on for comment, but suffice it to say I'm moved by this presentation of the topic. What I found perhaps most disclosive of the heart of the issue was the Brown student's comment on how the use of ChatGPT is simply supported by what's already assumed to be practical, useful, and good. The logic supporting ChatGPT's usefulness is just an extension of Grammarly, just an extension of the internet, of computing itself. I feel the weight of that.
My reading right now has been in Gadamer's REASON IN THE AGE OF SCIENCE, a work that's perhaps adjacent to Heidegger's "The Question Concerning Technology," but one that strikes some more humanistic and Aristotelian notes than MH. In the essay "Hermeneutics as Practical Philosophy," Gadamer makes a succinct distinction between what he calls the practical philosophy of Aristotle [and by that I think he means to group together the Ethics and the Politics and the Rhetoric], and the know-how of technicians and experts:
"[Practical philosophy] has to be accountable with its knowledge for the viewpoint in terms of which one thing is to be preferred to another: the relationship to the good. But the knowledge that gives direction to action is essentially called for by the concrete situation in which we are to choose the thing to be done; and no learned and mastered technique can spare us the task of deliberation and decision. ... What separates it [practical philosophy] from technical expertise is that it expressly asks the question of the good - for example about the best way of life or about the best constitution of the state. It does not merely master an ability, like technical expertise, whose task is set by an outside authority: by the purpose to be served by what is being produced."
That last distinction in HG's quotation I found to be stunning. In terms of your post, the "outside authority" could dimly be named as capitalism, the bottom line, but I think it grows more ominous the more one asks about it: as you suggested, but also drew back from, in your post, it may be another kind of god.