GPT-4V, OpenAI's image interpretation model, lacks proficiency in medical image interpretation and diagnosis. While it can identify and explain medical images, its diagnostic accuracy and clinical decision-making abilities are poor and could pose risks to patient safety. Therefore, caution should be exercised when using it for clinical decision-making. https://arxiv.org/abs/2403.12046 You have to be super stupid to believe an LLM is a replacement for a trained doctor.
@nixCraft It makes no decisions. It's merely putting together patterns. Some patterns occur more often than others. It doesn't select between them based on knowledge but on frequency matching plus a random seed. With zero decision-making actually taking place, it can never make a good decision by definition. It lacks, and cannot ever have, understanding.
The sad thing is, LLMs could instead have been used to complement actual experts, as a way of narrowing down possibilities to be double-checked.