I made a weird picture with AI for my thesis, and it actually got accepted.

Haebom
One of the most hotly debated topics in the scientific community these days is the publication of AI-generated content in academic journals. A recent case, where a paper featuring obviously unrealistic images and diagrams created by AI passed peer review, has drawn particular attention. This incident raises serious questions, not just about the credibility of academic journals, but also about the trustworthiness of the scientific community as a whole.
According to a user on X (formerly Twitter), a recent paper published in the journal 'Frontiers in Cell and Developmental Biology' contains a slew of blatantly unrealistic images and diagrams generated by AI. For instance, one image depicts a rat with anatomy completely divorced from reality, in a way no scientific explanation could justify. What's especially striking is that the images are labeled with nonsensical terms like 'testtomcels', 'retat', and 'dck'.
This raises several important issues.
First, it points to the need for discussions on how content created using AI technology should be regarded within the academic community.
Second, it invites questions on what made it possible for such cases to pass peer review and, by extension, challenges the efficiency and reliability of the peer review process.
Third, there is criticism that the journal 'Frontiers' is a so-called 'predatory journal'—a journal that collects publication fees from researchers without conducting proper peer review. We should once again consider the existence of such journals and the negative influence they have on the scientific community.

Solutions?

Strengthening the peer review process: Set firm standards for evaluating the reliability of AI-generated content during peer review and make sure they are rigorously enforced.
Improving journal accreditation: Establish clear definitions and criteria for distinguishing 'predatory journals,' and put in place a system to help researchers more easily identify reputable journals.
Exploring appropriate uses of AI technology: While acknowledging the positive role AI can play in academic research, we need to set out proper guidelines to ensure its use does not compromise the credibility of research.

But actually...

In truth, there are a lot of dubious journals out there. Some of them don't even bother with peer review and will publish your paper or register a DOI as long as you pay. In Korea, we use classifications like SCI or KCI, but strange journals like this still slip into SCI as well. As mentioned previously, 'Frontiers in Cell and Developmental Biology,' the journal at the center of this case, hasn't enjoyed much trust in its field for some time.
Personally, I feel this should have been screened out in the initial submission check, before it ever reached peer review; the biggest issue is that it went straight to peer review and was accepted without any such filtering. (On top of that, the journal charges over $3,000 in publication fees.)
The gap between these journals' supposed authority and their actual practice is stark: they don't conduct proper peer review and can't even catch bizarre AI-generated images. It makes me increasingly skeptical of human-managed journals altogether...
I'll wrap up by sharing a typeset I've been personally enjoying.