
How AI is Revolutionizing Healthcare: Challenges Ahead

Although the current hype may make AI look unstoppable, the field still faces many challenges. Applying AI to healthcare is especially difficult because of the high stakes involved in the sector.

I divide these challenges into three categories: technical, ethical, and legal.

As I wrote in my previous article, the general trend is towards AI augmenting healthcare, rather than replacing people outright.

Technical Challenges

Much of the rhetoric surrounding AI suggests that it will only get better and better, with no end in sight. But this is very much an unproven assumption, and in fact there are concrete obstacles ahead.

Part of the “inevitable growth” mindset comes from the tech industry’s decades-long experience with Moore’s Law. It was an incredible phenomenon while it lasted, but it is now running into hard physical limits at the quantum scale. Some consider it dead already.

Unfortunately, there’s no inherent reason to believe that anything in AI is guaranteed to follow a Moore’s Law-like pattern.

Take large language models (LLMs), for example. They work so well because they’ve been trained on an enormous amount of text. But believe it or not, we could run out of fresh text to train them on within just a few years. At that point, someone would have to design a cleverer use of the data already available.

There’s also the possibility of such programs getting worse as the internet fills up with AI-generated content. What happens if you train an LLM on content generated by an LLM that was itself trained on content generated by an LLM?
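
Researchers call this degradation “model collapse.” Here is a toy sketch of the dynamic, under heavy simplifying assumptions: a Gaussian stands in for a language model, and each generation is trained only on the previous generation’s output. All numbers are invented for illustration.

```python
import numpy as np

# Toy sketch of recursive training (a hypothetical illustration, not a
# real LLM pipeline): each generation fits a simple model (a Gaussian)
# to samples produced by the previous generation, then generates anew.

rng = np.random.default_rng(0)
n = 100  # small sample size per generation makes the effect visible

# Generation 0: "human-written" data with a healthy spread.
data = rng.normal(loc=0.0, scale=1.0, size=n)

for generation in range(1, 201):
    mu, sigma = data.mean(), data.std()   # "train" on the previous output
    data = rng.normal(mu, sigma, size=n)  # the next generation's "content"
    if generation % 50 == 0:
        print(f"generation {generation:3d}: std = {sigma:.3f}")

# On average, the spread shrinks with every generation: each model sees
# only what the last one produced, so the tails of the original
# distribution are gradually forgotten.
```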

Ethical Challenges

Because AI technology is so new, it raises many ethical questions. These questions are a major obstacle for the industry, and an important one to be mindful of.

The situation is similar to another life-and-death domain: self-driving cars.

The main reason there are practically no self-driving cars on the road is not that the technology doesn’t exist, but that the cost of failure is far too high.

Suppose I told you that a new AI program had a failure rate of 1%. If all the program did was draw silly cat pictures, that 1% failure rate would mean nothing. But if it drove an SUV, that 1% failure rate could kill somebody.
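
To make the asymmetry concrete, here is a back-of-the-envelope sketch. Every volume and severity weight below is an assumption invented for illustration:

```python
# Hypothetical sketch: the same 1% failure rate weighted by the severity
# of a single failure. All numbers here are illustrative assumptions.

failure_rate = 0.01          # the 1% from the example above
decisions_per_day = 100_000  # assumed daily volume for either system

# Rough "severity" of one failure, on an arbitrary scale.
severity = {
    "cat-picture generator": 0.01,  # a bad drawing: a shrug
    "SUV autopilot": 10.0,          # a driving error: potentially fatal
}

for system, harm_per_failure in severity.items():
    expected_harm = failure_rate * decisions_per_day * harm_per_failure
    print(f"{system}: expected harm per day = {expected_harm:,.0f}")

# Identical failure rates, wildly different stakes: risk is the product
# of how often a system fails and how much each failure costs.
```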

With healthcare, people’s lives are also at risk. But the situation tends to be more forgiving, because AI-assisted healthcare decisions don’t usually happen in real time. If an AI program generates a treatment plan, a doctor will read it before giving it to a patient.

Fortunately, there are legal safeguards in place. Computers cannot be licensed as doctors; consequently, they can’t make the final decision on healthcare-related matters.
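
In code, that safeguard amounts to a human-approval gate. A hypothetical sketch of the pattern (every type, field, and ID below is invented for illustration, not a real system):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical "doctor in the loop" gate: the AI only drafts a plan;
# nothing is finalized until a licensed clinician signs off.

@dataclass
class TreatmentPlan:
    patient_id: str
    recommendation: str
    ai_generated: bool = True
    approved_by: Optional[str] = None  # license ID of the approving doctor

def finalize(plan: TreatmentPlan, doctor_license_id: str) -> TreatmentPlan:
    """Only a human doctor's sign-off turns an AI draft into a real plan."""
    plan.approved_by = doctor_license_id
    return plan

draft = TreatmentPlan("patient-001", "Metformin 500 mg twice daily")
# The draft stays inert until a doctor reviews and approves it:
final = finalize(draft, doctor_license_id="MD-12345")
print(final)
```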

For now, ethical problems are mostly kept in check by legal standards. But that could change.

Legal Challenges

Medicine is a tightly regulated sector, and for good reason: you want to know for sure that everyone involved in your health is well-trained and competent.

But that regulation also limits the involvement of AI to what has been thoroughly proven to work. Doctors are sworn to “first, do no harm,” and if there’s a reasonable chance of AI causing harm, they should not use it.

Who exactly should be held accountable if an improperly trained AI program detects a disease that isn’t there, leading a doctor to prescribe medication improperly?

As I mentioned above, legal safeguards prevent AI tools from making final decisions, and those safeguards place a hard limit on AI’s role. But by now you might be wondering whether the laws could change. After all, didn’t ChatGPT pass a medical exam?

Yes, sort of. But that doesn’t mean it has real medical expertise or the right to be licensed.

ChatGPT’s output is probabilistic, meaning it won’t say the exact same thing every time. While it may have passed a medical exam once, that doesn’t mean it will always do so. Whoever gave it the exam may have tried several times and cherry-picked the best run.
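
You can see this nondeterminism directly. Here is a minimal sketch using the OpenAI Python SDK, assuming the openai package is installed and the OPENAI_API_KEY environment variable is set; the prompt and model name are illustrative:

```python
from openai import OpenAI

# Ask the same question several times. With a nonzero temperature, the
# model samples from a probability distribution over tokens, so the
# wording (and sometimes the substance) varies from run to run.

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = "In one sentence, what causes type 2 diabetes?"

for attempt in range(3):
    response = client.chat.completions.create(
        model="gpt-4",    # illustrative; any chat model shows the effect
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # > 0 means sampling, not a fixed answer
    )
    print(f"attempt {attempt + 1}: {response.choices[0].message.content}")

# A single exam "pass" under these conditions says little about whether
# the next attempt would pass too.
```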

Furthermore, exams usually rely on stock, repetitive language. It’s quite possible that multiple years’ worth of nearly identical exams were in the GPT-4 training data. In that case, ChatGPT’s success doesn’t imply any actual medical understanding. It just means that standardized tests are predictable, unlike the human body.

Most importantly, many of the skills that make doctors valuable are not book knowledge. Consider using a stethoscope, performing an eye exam, or consoling a parent about the death of their child.

The Future Is Bright, But Unclear

I have little doubt that AI will continue to impact many industries, including healthcare. There are just too many useful applications to pass up.

But adoption will continue to be careful and deliberate, especially in healthcare. There is too much to lose when people’s lives are at risk, and AI still brings more questions than answers.

Expect to see AI programs augmenting existing healthcare processes rather than replacing anything outright. Ideally, doctors will use AI to improve the accuracy of their diagnoses, prescribe more personalized treatment plans, and spend less time on repetitive tasks.
