RonO
2024-08-10 21:32:02 UTC
https://phys.org/news/2024-08-junk-ai-scientific-publishing.html
The article gives several examples of scientists using AI to write papers,
with AI-generated mistakes that passed peer review. I noted before that
ChatGPT could be used to write the introductions of papers, sometimes
better than the authors had done. One example of figure manipulation
indicates that some authors are using it to present and discuss their
data. That seems crazy. ChatGPT doesn't evaluate the junk that it is
given; it basically just summarizes whatever is fed into it on some
subject. I used a graphic AI once. I asked it to produce a picture of
a chicken walking towards the viewer. It did a pretty good job, but
gave the chicken the wrong number of toes facing forward. Apparently
junk like that is making it into science publications.
With these examples in mind, one of the last papers that I reviewed
before retiring earlier this year may have involved AI. It had a good
introduction that cited the relevant papers and summarized what could be
found in them, but even though the authors had cited previous work doing
what they claimed to be doing, their experimental design was incorrect
for what they were trying to do. The papers they cited had done things
correctly, but they had not. I rejected the paper and informed the
journal editor that it needed a substantial rewrite so that the authors
would state what they had actually done. What might have happened is that
the researchers had an AI write their introduction, but it described
what they wanted to do, not what they actually did. English was
likely not the authors' primary language, and they may not have
understood the introduction that was written for them. If they had
understood the introduction, they would have realized that they had not
done what they claimed to be doing. Peer review is going to have to deal
with this type of junk. The last paper that I reviewed, in March, came
with instructions that the reviewers were not to use AI to assist them
with the review, but it looks like reviewers are going to need software
that will detect AI-generated text.
Ron Okimoto