Discussion about this post

Roy's avatar

Hi Daniel,

I am one of the authors of this article, and I agree with Claude...

As we wrote in our rapid response, the main goal of this article was to show the absurdity of treating LLMs as humans, as has happened in too many other articles that were published outside the Christmas issue and had a serious tone.

The BMJ states in its headline that it is intended for healthcare professionals.

I doubt that there are many doctors who read the phrase "As the two versions of Gemini are less than a year apart in 'age,' this may indicate rapidly progressing dementia" and took it seriously.

Still, even if there are doctors who took this interpretation literally, I believe it is quite harmless. However, I am truly concerned about what will happen the next time they read an article about a new medication funded by the drug company. Will they prescribe it to their patients without a basic sense of criticism towards the interpretation of the findings? What if these doctors get some of their information from LLM hallucinations, without properly reading the references?

Anyway, while I regret that some websites wrote about the article without mentioning it is from the Christmas issue, I am glad that it sparked this debate regarding the role of LLMs in medicine.

Luxorion's avatar

Hi,

It is important to note that these articles, published in the BMJ around Christmas time, are often written with a touch of humor or satire, aiming to entertain while addressing serious topics.

Although applying tests intended for humans to intelligent machines seems incongruous, or even a serious error of judgment (see below), these results are useful: AI still too often gives rise to a wave of enthusiasm from our politicians and even from certain companies developing LLMs, as well as to justified fears about the reliability of AI.

- Thierry, computist, AI tool user, Luxembourg

