AI in academia… how big a ‘problem’?*

*This text was not written with AI.

Been silent for a while. Everyday stuff, meetings, meditation(?)… you all know what I mean.

 

However, what got me thinking lately is the advent of LLMs and their impact on academia. I’ve heard a lot, and read some more, about their grey aspects: students writing essays with them, fellowship candidates preparing applications with ChatGPT, or even full papers being drafted by AI. The challenges are clear for teachers, journal editors and PIs, who have (too strongly?) relied on the writing skills of students and researchers as a proxy for their capacity to do good science. But is AI bad for science?

 

Less often do we hear about the positive aspects that such an amazing technological development can offer academia. For starters, it speeds up access to code that may be needed for statistical analyses, model building, larger programming tasks or drawing neat figures. Nevertheless, it is in writing that I think there is the most to gain. One aspect beyond question is how it shortens the path from ideas and experiments to the manuscript stage. I think we all agree that communicating our findings quickly is important. Close your eyes for a minute and imagine Charles Darwin, returning from his voyage on the Beagle and using a chatbot to draft “On the Origin of Species”.

 

We are all very much aware that English is the dominant language of science communication and that it can be a hurdle for many non-native speakers. Some may build more or less complex arguments about why that should not be the case, but the sad truth is that it is really hard, wherever you live or work, to succeed in academia with a limited grasp of English. We need to write papers, and to do so, we need to read papers. And the vast majority of these are in English.

 

Here, LLMs can be immensely helpful. Having a paper summarized in your native tongue (although we still do not trust them entirely) improves access to the literature (assuming, naturally, that you have access to the literature in the first place, which is another issue altogether). They can also help us write in English. Asking a chatbot to improve our writing is a good thing. If it’s all about clear communication of our science, who could argue against that?

 

Clearly, AI can help lower language barriers. We’ve talked a great deal about this, and we worry a lot about biases against scientists from the Global South and non-native English speakers. Interestingly, in a recent informal survey I carried out among ecologists from Latin America, of 81 respondents (mostly ECRs and postdocs), nearly 90% were more worried about publication costs than about language-related issues. Importantly, again, 90% of them had used or would use AI to improve their writing.

 

We may need to think differently and embrace the positive aspects of LLMs. How we recruit students and evaluate their performance in the courses we teach calls for imaginative ideas. For example, could students’ ability to use LLMs, and to quality-control their output, be a skill we would like to see in them? Or is it now time to seriously consider dropping h-indices when recruiting staff, something that has been on everyone’s mind for some time (ages now?), and shouldn’t interviews take precedence?

 

A different picture arises when it comes to editorial work. No doubt papers will be better written, but we are still very much bothered by a paper not written by its (human) lead author. Should acknowledging AI be a solution? Will submissions be screened by AI to see how much of them was conceived by AI (I know this sounds weird, but it is a thing!)? How is responsibility for authorship assigned when ethical issues arise? This and more are being discussed, for sure, by scientific societies, publishers and the community in general.

These are very challenging times, which, together with the clear efforts towards Open Science and Open Access, decolonizing science and improving equity, to mention a few, speak of the tectonic movements that academic publishing is facing. I believe that all of these, if to different degrees, are paving the road to more robust (earthquake-sound, even) foundations for science.

 

Although we human beings are irremediably attracted to apocalyptic views when it comes to technological development (e.g., TV will kill radio, DVD players will end cinemas, etc.), I believe LLMs overall are going to be good for academia. I hope I’m not wrong. And of course, we are all aware that, aside from the changes this will imply for all of us in everyday life, we should keep an eye on how it affects the environment.
