ChatGPT: What Do We Stand to Lose?

Talk to anybody who knows about the notorious artificial intelligence (AI) platform ChatGPT, developed by OpenAI, and you'll be met with a flurry of dichotomous opinions. Students marvel at its ability to write essays, and professors lament its potential for plagiarism (if that's what you'd even call AI-generated work). Some have concerns about equitable access given OpenAI's announcement of a $20-per-month premium subscription service called ChatGPT Plus. I, however, have a broader question: how will ChatGPT advance (or set back) the way we work?
I first heard of ChatGPT while mindlessly scrolling through TikTok in mid-December. The creator—whose video I cannot relocate amidst the never-ending sea of TikTok content—explained, with awe, the incredible speed with which ChatGPT was spreading globally. Within five days of its debut, the AI platform reached an impressive one million users. By contrast, it took Netflix three and a half years, Facebook 10 months, and Instagram two months to reach the same number of users after their respective launches. Perhaps ChatGPT has apps like TikTok and Instagram to thank for its immediate success.


Like many other students at WashU and other higher education institutions, I've put ChatGPT to the test. And I have to give it to OpenAI: they have developed an impressive piece of software. It is surprisingly good at writing fiction, acting as a search engine, and providing advice. This is unlike any type of AI we've seen before.


As a result, many of my spring semester syllabi included provisions for safeguarding against ChatGPT: in-class handwritten essays have replaced Canvas quizzes, drafts must now be submitted before final papers, and warnings against using the platform are woven into academic integrity statements.
What does this mean for academia? While ChatGPT's capabilities still verge on rudimentary, it is only improving as more people experiment with it. Computers have become so ubiquitous in academia that it's difficult to imagine completing a university-level course without one. The online era prompted by COVID-19 only deepened our reliance on technology.


Not to be too dystopian, but I foresee the bursting of the digital age bubble. When one contemplates the consequences AI generators have for our institutions, including academic institutions, it can be difficult to avoid conjuring up scenes that would make a compelling episode of “Black Mirror.”
Beyond academia, ChatGPT raises a number of questions about our labor force. For years, robots have been coming for people's jobs, mostly in industrial sectors. This can be seen as a positive thing: why should humans continue to work in sub-ideal, let alone dangerous, conditions for subpar pay when robots can bear the burden?


A 2020 study from researchers at MIT found that wages dip 0.42% for every robot added per 1,000 workers in the U.S. That also corresponds to a 0.2% decline in the employment-to-population ratio — a loss of about 400,000 jobs.


According to the report, robots are most likely to replace workers in routine manual positions, particularly middle-class, blue-collar workers such as assemblers, machinists, material handlers, and welders. The study's authors also acknowledge that while laborers of all education levels are affected by robots, those without college degrees suffer the most.
ChatGPT, on the other hand, is a different kind of robot that threatens a different sector of the labor force: the upper class. Traditionally secure white-collar workers, such as writers, academics, and graphic artists, now find themselves in the sights of ChatGPT and similar AI platforms.


It’s no wonder the alarm bells are ringing.


Don’t get me wrong. I am just as nervous as the other aspiring writers, lawyers, and artists attending academic institutions that serve as pipelines into white-collar careers.
But as the saying goes, modern problems require modern solutions. AI was developed by humans, and while it has benefits, the problems this technology poses can also be solved by humans.


Safeguards are already being developed, like an app created by a Princeton student that can detect whether an essay was written by ChatGPT. Will this be enough to stop a platform that's already seeing major investments from companies like Microsoft?


The future of AI will require an extensive cost-benefit analysis. In the end, we have to weigh the benefits of AI (efficiency, creativity, a reduction in labor costs) against the costs. These costs are not only tangible ones, like unemployment, but also more abstract ones: the value we place on human-produced art, how we protect certain professions, and how we define intellectual property.


We survived for millennia with simple pens and paper. Maybe the digital-bubble burst is what will save us.