Since the debut of OpenAI’s ChatGPT in late 2022, experts in computer science, business, and other fields likely to be impacted by the disruptive communication and programming abilities of large language models (LLMs) have shared their reactions, hopes, and concerns. Here, we’ve assembled 10 quotes, with their sources, from the sweeping conversation about the future of science and education in the age of conversational AI.

ChatGPT and the Latest LLMs

(1) Brown University: “Are AI chatbots off the rails or doing just what they were designed to do? Brown expert explains” [source]

On whether chatbots like ChatGPT are sentient or self-aware:

“Current programs either have broad exposure to human knowledge but no goals, like these chatbots, or they have no such knowledge but limited goals, like the sort of programs that can play video games. No one knows how to knit these two threads together.” — Michael Littman, computer science professor at Brown University and director of the National Science Foundation’s Division of Information and Intelligent Systems

(2) The Conversation: “ChatGPT is great – you’re just using it wrong” [source]

“A language model like ChatGPT, which is more formally known as a ‘generative pretrained transformer’ (that’s what the G, P and T stand for), takes in the current conversation, forms a probability for all of the words in its vocabulary given that conversation, and then chooses one of them as the likely next word. Then it does that again, and again, and again, until it stops.

So it doesn’t have facts, per se. It just knows what word should come next. Put another way, ChatGPT doesn’t try to write sentences that are true. But it does try to write sentences that are plausible.” — Jonathan May, research associate professor of computer science at the University of Southern California
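
May is describing, at bottom, an autoregressive sampling loop. Here is a minimal toy sketch of that loop in Python, with a tiny hand-set vocabulary standing in for a real trained model (the words and scores below are invented purely for illustration):

```python
import math
import random

def next_word_scores(conversation):
    """Stand-in for the transformer: returns one raw score per
    vocabulary word given the conversation so far. These numbers
    are hand-picked for illustration, not learned."""
    scores = {"the": 0.2, "cat": 1.0, "sat": 1.0, "<end>": 0.5}
    if conversation and conversation[-1] == "cat":
        scores["sat"] = 3.0  # "sat" is highly plausible after "cat"
    return scores

def sample_next(conversation):
    """Turn scores into a probability distribution (softmax), then
    draw one word: choosing what is plausible, not what is true."""
    scores = next_word_scores(conversation)
    total = sum(math.exp(s) for s in scores.values())
    weights = [math.exp(s) / total for s in scores.values()]
    return random.choices(list(scores), weights=weights)[0]

# "Again, and again, and again, until it stops."
conversation = ["the"]
while conversation[-1] != "<end>" and len(conversation) < 10:
    conversation.append(sample_next(conversation))
print(" ".join(conversation))
```

Each pass through the loop conditions on everything generated so far; nothing in the process checks the output against facts, only against plausibility.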

(3) Salk Institute for Biological Studies: “AI chatbot ChatGPT mirrors its users to appear intelligent” [source]

On how LLMs can further our understanding of the human brain:

“Language models, like ChatGPT, take on personas. The persona of the interviewer is mirrored back… For example, when I talk to ChatGPT it seems as though another neuroscientist is talking back to me. It’s fascinating and sparks larger questions about intelligence and what ‘artificial’ truly means.” — Terrence Sejnowski, Salk professor, Francis Crick Chair, and UC San Diego distinguished professor

(4) Johns Hopkins University: “Where is ChatGPT taking us? And do we want to follow?” [source]

On how the technology could evolve:

“Current models, such as ChatGPT, don’t perceive their environment. For example, they can’t see where my phone is or how tired I am. Soon we will see ChatGPT with eyes. These models will use different modalities of data (text, visual, auditory, etc.), which are necessary for them to serve us daily. This will lead to self-supervised robots based on the data of their physical environments, including physical objects, humans, and their interactions. The impacts here will be enormous. In less than 10 years, any physical appliance we use daily—car, fridge, washer, etc.—will become conversational agents you will talk to.” — Daniel Khashabi, assistant professor of computer science at Johns Hopkins University

Impacts on Science

(5) Nature: “ChatGPT: five priorities for research” [source]

“We think that the use of this technology is inevitable; therefore, banning it will not work. It is imperative that the research community engage in a debate about the implications of this potentially disruptive technology.” — Authors Eva A. M. van Dis, Johan Bollen, Willem Zuidema, Robert van Rooij, and Claudi L. Bockting

The Nature commentary outlines five scientific priorities for LLMs:

(1) human fact-checking and verification of model output;
(2) rules for accountability, such as authors acknowledging when they used AI tools;
(3) investment in open-source LLMs for transparency;
(4) integration of AI into the research and publication process in ways that speed innovation while retaining skills essential to human researchers; and
(5) immediate conversations within the scientific community, from research groups to international stakeholders.

(6) Northwestern University: “When ChatGPT writes scientific abstracts, can it fool study reviewers?” [source]

In a study led by Northwestern University, human reviewers identified ChatGPT-generated scientific abstracts only 68% of the time, and they incorrectly flagged 14% of real abstracts as AI generated. The study also found that AI output detectors were better at catching the fake abstracts than traditional plagiarism-detection tools.

“We found that an AI output detector was pretty good at detecting output from ChatGPT and suggest that it be included in the scientific editorial process as a screening process to protect from targeting by organizations such as paper mills that may try to submit purely generated data.” — Dr. Catherine Gao, Northwestern Medicine physician-scientist

(7) Columbia University Data Science Institute: “Columbia perspectives on ChatGPT” [source]

On what applications ChatGPT might have in data science:

“Researchers often want to disentangle their data into a couple of distinct clusters to understand what differences exist between the groups. This often takes a lot of time, but instead, you tell ChatGPT there are two distinct groups in this data and ask it to tell you what differences there are between them. We can also see ChatGPT being used in the labor-intensive process of labeling data.” — Amir Feder, postdoctoral research scientist
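
Feder’s labeling idea can be sketched in a few lines. The `ask_llm(prompt)` helper below is a hypothetical stand-in for whichever chat-completion API you use; the label set and prompt are likewise illustrative, not a real library interface:

```python
LABELS = ["positive", "negative"]

def ask_llm(prompt: str) -> str:
    # Hypothetical stub: plug in a real chat-completion call here.
    raise NotImplementedError

def label_records(records):
    """Ask the model to assign one label from a fixed set to each
    record. Outputs should still be spot-checked by a human."""
    labeled = []
    for text in records:
        prompt = (
            f"Classify the following text as one of {LABELS}. "
            f"Answer with the label only.\n\nText: {text}"
        )
        answer = ask_llm(prompt).strip().lower()
        labeled.append((text, answer if answer in LABELS else "unsure"))
    return labeled
```

The appeal is that the labor-intensive step, reading and tagging each record, moves to the model, while the surrounding code stays ordinary, auditable Python.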

“In general, if you’re trying to understand a topic, and ChatGPT can find the relevant data sets, you can imagine asking it freeform questions, such as ‘What’s the trend for how housing prices look like in Nebraska?’ You can also imagine programs incorporating ChatGPT as one of many steps. You might want to create a custom application for searching through sports information; a data scientist could combine a piece of code for performing arithmetic calculations with another piece of code for searching stats in sports databases. ChatGPT could stitch these interesting data sets together to perform useful tasks.” — Eugene Wu, associate professor of computer science at Columbia Engineering
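
One way to read Wu’s “many steps” idea is as a tool-dispatch pattern in which the model decides which piece of ordinary code to run. The sketch below again uses a hypothetical `ask_llm` stub, and `search_sports_stats` is a made-up placeholder rather than a real API:

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical stub: plug in a real chat-completion call here.
    raise NotImplementedError

def search_sports_stats(query: str) -> list[float]:
    # Placeholder for a real sports-database search.
    return [31.2, 28.4, 27.9]

def average(values: list[float]) -> float:
    # The arithmetic piece: plain code, no model involved.
    return sum(values) / len(values)

def answer_question(question: str) -> str:
    # The model stitches the pieces together by routing the question.
    tool = ask_llm(
        f"Question: {question}\n"
        "Reply with exactly one word: search or chat."
    ).strip().lower()
    if tool == "search":
        stats = search_sports_stats(question)
        return f"Average over recent games: {average(stats):.1f}"
    return ask_llm(question)  # Fall back to a freeform answer.
```

The division of labor matters: the database search and the arithmetic are deterministic code a data scientist can test, and the language model only handles the fuzzy step of deciding what the user is asking for.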

Impacts on Education

(8) Duke University Provost’s Forum: “ChatGPT is here to stay. What do we do with it?” [source]

On using generative AI in education:

“There are ways to use GPT tools to integrate [them] into the learning process. We’ve done this with other tools previously. Should you stop students from using spell check? During a spelling test, yes, but if they’re writing an essay, why would you cut them off from using that technology? The question for educators is to decide when the work is a spelling test and when it is not, when you can use GPT and when you can’t.” — Casey Fiesler, technology ethicist at the University of Colorado Boulder

(9) University of Colorado Boulder: “5 burning questions about ChatGPT answered by humans” [source]

On ChatGPT and LLMs as communications tools:

“At its core, it’s a communications model. Anytime human beings are communicating with each other, we’re going to see opportunities for using artificial intelligence tools to help them communicate faster… It will make it easier to communicate fast, which kind of scares me. Communication should be insightful and valuable or make us feel better. But it should not be more frequent. I don’t know about you, but I get enough emails.” — Kai Larsen, associate professor at the Leeds School of Business

(10) The Conversation: “AI and the future of work: 5 experts on what ChatGPT, DALL-E and other AI tools mean for artists and knowledge workers” [source]

On the potential gains to creativity and the risk of losing skills:

“Large language models are making creativity and knowledge work accessible to all. Everyone with an internet connection can now use tools like ChatGPT or DALL-E 2 to express themselves and make sense of huge stores of information by, for example, producing text summaries. …While there are significant benefits to opening the world of creativity and knowledge work to everyone, these new AI tools also have downsides. First, they could accelerate the loss of important human skills that will remain important in the coming years, especially writing skills.” — Lynne Parker, associate vice chancellor at the University of Tennessee