AI

Using AI to predict functional outcome in dogs with disc herniation

Yes, you read that title correctly. A new study, the first of its type, is using “machine learning” to predict outcomes based on CT data and neurologic grade. I will admit that I do not understand all of the system input data and how it was analyzed; I feel like I need another degree to grasp some of what is presented in this paper. If you’re analytically minded and want to learn more, please check out the paper for the full story, because I may not represent it fully.
Historically, we have predicted neurologic recovery based on the most reliable factor: the presence or absence of deep pain (nociception). If an animal is paraplegic (no motor function in the pelvic limbs) but has deep pain intact, we predict a 90% or greater chance of recovery for that pet. Timing and other variables play a minor role in recovery potential, too. If the same animal loses deep pain, its odds of recovery drop to around 50%. Over the years we have tried to predict myelomalacia or motor recovery from MRI or CT characteristics and have not been fully successful. Clients have higher expectations now, in the digital age. They want to know: will MY DOG walk again? Not what the population odds are. Although this paper doesn’t exactly say that we can do that, I think we’re edging closer to that possibility.
Based on this paper, the authors propose that this new learning tool can look at the neurologic grade on examination AND the CT properties and predict ambulation. (If I could insert the mind-blown emoji here I would!) Okay, maybe that’s a bit of an oversimplification, but…it’s close. While they don’t say it’s predictive for an individual, it gets us closer than before.
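
For the analytically minded, here is a rough sketch of what “pairing the exam grade with CT features” could look like in code. To be clear, this is my own toy illustration and not the authors’ actual pipeline: the model choice (a random forest), the feature names, and the simulated data are all hypothetical assumptions on my part.

    # Toy illustration only: hypothetical features and simulated data, NOT the study's model.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 200

    # Hypothetical inputs: neurologic grade (1-5) plus two made-up CT radiomics features
    dogs = pd.DataFrame({
        "neuro_grade": rng.integers(1, 6, n),
        "lesion_length_ratio": rng.uniform(0.5, 4.0, n),    # invented radiomics feature
        "cord_texture_entropy": rng.uniform(2.0, 8.0, n),   # invented radiomics feature
    })
    # Simulated outcome, loosely tied to grade so the example runs end to end
    dogs["ambulatory"] = (rng.uniform(0, 1, n) > dogs["neuro_grade"] / 6).astype(int)

    X, y = dogs.drop(columns="ambulatory"), dogs["ambulatory"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Per-dog probability of regaining ambulation, and which inputs mattered most
    print(model.predict_proba(X_test)[:5, 1])
    print(dict(zip(X.columns, model.feature_importances_.round(2))))

The idea, loosely, is that instead of quoting one population statistic, a model like this produces a per-dog probability of walking again, and it can also tell you which inputs it leaned on most heavily.
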
Results

  • 214 dogs were included, of which 74 were Dachshunds and 65 were Frenchies

  • 128/214 dogs were deep pain positive (DPP)

  • 86/214 dogs were deep pain negative (DPN)

  • The recovery rate for all dogs was 77%; 123/128 DPP (96%) and 42/86 DPN (49%) recovered. These stats line up with what we already would have predicted (see the quick arithmetic check after this list)

  • None of the radiomics features were associated with recovery on UNIVARIABLE analysis, i.e., no single feature stood alone as a predictor

  • The AI model outperformed simply knowing the DP status for predicting recovery to ambulation (p=0.02).

  • Neurologic grade was the MOST IMPORTANT feature in the AI model’s decision-making process but, as I read it, the AI model did a better job of predicting WHICH dogs would recover and which wouldn’t.
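
For anyone who wants to double-check the arithmetic, here is a quick back-of-the-envelope check using only the counts reported above (a minimal sketch; the variable names are mine):

    # Counts reported above (Low D, et al., ACVS 2025)
    dpp_total, dpp_recovered = 128, 123   # deep pain positive
    dpn_total, dpn_recovered = 86, 42     # deep pain negative

    print(f"DPP recovery:     {dpp_recovered / dpp_total:.0%}")   # 96%
    print(f"DPN recovery:     {dpn_recovered / dpn_total:.0%}")   # 49%
    print(f"Overall recovery: {(dpp_recovered + dpn_recovered) / (dpp_total + dpn_total):.0%}")  # 77%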

 
Is the future here? Are we going to see imaging centers offering AI prediction models? Are we edging people and examinations out of the equation? Not so fast. Do any of you use a calculator in your daily work? I do. Sixty years ago, the calculator was a wild idea. People who knew how to do math could do it faster with a calculator: the data put into the calculator was accurate, and the people inputting it knew it was accurate. People who couldn’t do math relied on calculators and simply hoped the answers were correct, because without understanding the input numbers they had no way to know whether the output was the number to expect. I think the same is true of AI. If we know how to perform a good neurologic exam and then pair it with a CT, the results put out by the AI algorithm could be more powerful than recovery predictions based on the exam alone. However, if we do an incomplete or incompetent neurologic examination, we won’t know if the AI prediction model is giving us good data. Or, worse yet, we won’t know that we didn’t do a good exam, and we will believe the AI prediction data without knowing that the input was bad. Also, don’t forget, the most useful part of the AI prediction model was the neurologic grade, and neurologic grade is obtained by doing a good neurologic examination. If you lose the exam, you lose the data. In 2 years, I will likely look back at this TidBit Tuesday with a different reaction than today. But today, I’m still feeling pretty confident that we need to touch our patients!!

What do you think of AI? Many of us are using it for note taking and some for radiology. Do you have an AI receptionist? I hope you enjoyed this week’s TidBit Tuesday. I look forward to working with you soon!

Reference: Low D, et al. Machine learning and quantitative CT radiomics prediction of postoperative functional recovery in paraplegic dogs. ACVS 2025.

Artificial Intelligence in Neurology

Artificial Intelligence (AI) has really taken off in the last few years and, as such, has driven us as veterinarians to critically evaluate where and how we would like to utilize this new technology. Last year I told you about a lecture I listened to at ACVIM about the use of AI for writing radiology reports. It was eye opening, to say the least! Recently (October 2023), a group of mostly veterinary neurologists took on AI in a new way. Abani et al. challenged 13 boarded neurologists from Europe and North America to distinguish between AI-generated abstracts and human-generated abstracts. The results are chilling...

Materials and Methods

There were 3 test topics provided in this study. The purpose of providing 3 was to discriminate between "highly familiar" topics and less familiar topics, to see whether topic familiarity made a difference in the reviewers' ability to detect AI-generated writing.
Topic 1: SARS-CoV2 scent detection in dogs (considered low familiarity)
Topic 2: Biomarkers for SRMA (considered high familiarity)
Topic 3: Staining of cannabinoid receptor type 1 (medium familiarity)
An abstract, references, and an introduction paragraph were written by humans on these 3 topics. ChatGPT was then used to generate 3 additional abstracts, with references and an introduction paragraph, on the same 3 topics. It was interesting that the authors noted ChatGPT was prompted as follows: "Write an academic abstract with a focus on (subject) in the style of (author characteristics such as position, gender and age) at (University name), for publication in (journal name)." I mean...wow. ChatGPT is able to tailor its writing to position, gender and age.

Results

  • Topics 1 and 3: 4/13 (31%) correctly identified the AI-generated abstract when provided only the abstract, without references and introduction paragraph. This increased to 9/13 (69%) when all parts were provided.

  • Topic 2: 7/13 (54%) correctly identified the AI-generated abstract (provided alone), which increased to 10/13 (77%) when all parts were provided. 

Two separate plagiarism detectors were also evaluated in this study. All of the original published manuscripts were noted to have 58%-100% similarity to available work, which indicated the material had been published elsewhere (it had). Tests 1, 2, and 3 with the AI-generated papers had similarity indexes of 0%-18%. This suggests that the plagiarism detectors could identify what had been previously published (the human-generated papers) and what hadn't (the AI-generated papers). Furthermore, the authors then evaluated all of the abstracts with an AI detector. All original manuscripts were noted to have 0% AI writing. Test 2 was noted to have 100% AI generation, and Tests 1 and 3 were noted to have 0% content written by AI. Gulp.

Where does this leave us? My anxiety about AI-generated content was further heightened when I realized that many of my well-respected, high-achieving academic colleagues struggled to distinguish between AI-generated abstracts and human-generated abstracts in an area of our own specialty. This further reinforced my commitment to reading the entire paper, whenever possible, before considering the data valid. We were taught to do this in school but alas, with our busy schedules, it can be missed. AI is not all bad, however. It can be quite helpful for correcting grammar, editing, summarizing references or papers and even performing statistics. I would encourage all of us to move through published literature with our eyes fully focused and with awareness of the use of AI in modern veterinary medicine. Except yesterday...hopefully you kept your eyes partially closed and didn't look directly at the sun!!

I hope you enjoyed this little TidBit. It is a little bit off topic, but I hope you will find it useful, nonetheless. Please know that my TidBit Tuesdays are (to date) fully human-generated, as are my patient reports! Let me know if you have any topics that you'd like me to cover. Have a great week!

The Use of AI in Veterinary Medicine

At the recent ACVIM Forum in Philadelphia, a radiologist gave a very enlightening presentation about AI, specifically ChatGPT. Have any of you messed around with this technology yet? Is anyone using it for workflow support? Although this TidBit Tuesday isn’t specifically about a neurology topic, I was so blown away by the ChatGPT lecture that I decided to include it as a TidBit Tuesday. We’ll be back to our regularly scheduled neurology topics next week… 😊

To get us all on the same page, ChatGPT is a new artificial intelligence (AI) chatbot developed by OpenAI. The presenter at ACVIM (Dr. Eli Cohen) provided an example during his talk of a “conversation” he had with ChatGPT that terrified me. While reviewing a radiograph, ChatGPT suggested that one of the differentials for this pet, which had clear lytic bone lesions on each side of an intervertebral disc space, could be “sterile discospondylitis”. Dr. Cohen, like all of us in the audience, instantly worried that we had missed this diagnosis in our years of practice experience. STERILE disco? Is this real? How could I have missed this?? So, he asked ChatGPT to provide references for this statement. AND IT DID. Dozens of references popped up on the screen. They were from reputable journals like JAVMA, JVIM, and Vet Rad and Ultrasound, attributed to real, live people: practicing veterinary neurologists and radiologists, some of whom were in the audience. The catch? None of these references were real. NOT ONE of them actually supported this imaginary disease. ChatGPT had taken the names of people who may have written about “sterile” and “discospondylitis” separately and combined them into believable-looking references. My takeaway was to make sure that if and when I use ChatGPT for any work-related item, I personally double check (dare I say vet?) all of the data points. Here is a perfect example. I fed ChatGPT the following question:

What is the neuroanatomic lesion localization for a dog with seizures?

Here is the answer:

Seizures in dogs can arise from various neuroanatomic locations. The specific neuroanatomic lesion localization for seizures depends on the underlying cause and can vary between individual cases. Here are a few examples of potential lesion locations associated with seizures in dogs…

WRONG. What is the correct neuroanatomic lesion localization for a dog with seizures? That’s right: the forebrain, or prosencephalon. There is only one neuroanatomic lesion localization for pets with seizures. The etiology varies widely, from hypoglycemia to brain tumors, but all seizures come from one part of the brain.

This was a wonderful reminder to me of how important a firm grasp of words, terms and phrases is when we communicate in veterinary medicine. I, probably like you, will be using AI in my veterinary career in the future; I think it is probably inevitable. However, we must remember to double check that what we put in uses the correct terminology, and that the answer produced is in line with our knowledge and understanding.


I’d love to hear if you use AI in your personal or professional life and how it has affected you. I hope you had a safe and happy 1st or 4th of July and I look forward to seeing you, without robots, in the future!