Artificial intelligence used to detect risk of heart attack

The Canary

Artificial intelligence can identify people most likely to have a deadly heart attack five years before it strikes, scientists have found.

Researchers from the University of Oxford used machine learning to develop a new biomarker that detects changes to blood vessels supplying the heart.

Described as a fingerprint, it can identify “biological red flags” such as inflammation, scarring and new blood vessel formation, which are pointers to a future heart attack.

Currently, scans known as coronary CT angiograms (CCTA) check the coronary arteries for any narrowed or blocked segments, but doctors have no routinely used methods to spot these other, underlying characteristics.

Half of all heart attacks happen in people who do not have significant narrowing of the arteries.

Used alongside these scans, the new fat radiomic profile (FRP) can analyse changes to the fat surrounding the heart vessels to calculate how likely someone is to have a heart attack.

The researchers plan to make the technology available to health professionals over the next year, and hope it will be included in routine NHS practice within two years.

Professor Charalambos Antoniades, professor of Cardiovascular Medicine at the University of Oxford, said: “Just because someone’s scan of their coronary artery shows there’s no narrowing, that does not mean they are safe from a heart attack.

“By harnessing the power of AI, we’ve developed a fingerprint to find ‘bad’ characteristics around people’s arteries.

“This has huge potential to detect the early signs of disease, and to be able to take all preventative steps before a heart attack strikes, ultimately saving lives.

“We genuinely believe this technology could be saving lives within the next year.”

The findings, presented at the European Society of Cardiology (ESC) Congress in Paris, are published in the European Heart Journal.

The team compared the CCTA scans of 101 people who went on to have a heart attack or cardiovascular death within five years of the scan with scans from people who did not, to understand which changes indicate that someone is at increased risk.

They tested the performance of the new fingerprint in 1,575 people and found that the technology could predict heart attacks with greater accuracy than the tools currently used in clinical practice.

In the groups tested, the technology was 80-90% accurate at predicting the patients’ risk of a future heart attack.

The fingerprint became more accurate when more scans were used, the research, part-funded by the British Heart Foundation (BHF), found.

Professor Metin Avkiran, BHF associate medical director, said: “This research is a powerful example of how innovative use of machine learning technology has the potential to revolutionise how we identify people at risk of a heart attack and prevent them from happening.

“This is a significant advance. The new ‘fingerprint’ extracts additional information about underlying biology from scans used routinely to detect narrowed arteries.

“Such AI-based technology to predict an impending heart attack with greater precision could represent a big step forward in personalised care for people with suspected coronary artery disease.”

    1. Perhaps this ‘science’ is not as grand as it seems.

      In essence, ‘machine learning’ as currently undertaken does not equate to intelligence in the ‘machine’, nor does it require much of the savant operating it.

      There is a parallel with a set of statistical techniques encapsulated under the heading ‘linear regression’. These provide a ‘best fit’ statistical model of a set of data. The model is represented as a linear combination of the input variables, each weighted by a calculated coefficient.

      In the right hands, regression analysis provides insight into the relative impact of the (independent) variables included in the linear expression on predicting the value of the ‘dependent’ variable, e.g. risk of heart attack.

      A properly conducted analysis entails exploring different subsets of the independent variables, and perhaps examining how interactions between variables influence the fit (predictive ability) of the model.

      The resulting model is strictly only a parsimonious expression of information inherent in the set of data used. It does not purport to represent a causal mechanism. Its predictive value lies only within the ranges of the variables considered and the happenstance of the particular set of data gathered. Nevertheless, depending on context, the analysis can provide useful insights into the role of variables, which may be explored in further studies using appropriate experimental designs. If the rationale is to devise a pragmatic tool for prediction, then its performance must be tested on other sets of data.
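      The workflow described above — fitting a linear model to one set of data, then testing its predictive value on fresh data — can be sketched as follows. The variable names and coefficients are purely illustrative, not taken from the Oxford study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two hypothetical risk factors plus noise.
# True relationship: risk = 0.5 * factor_a + 2.0 * factor_b + noise
n = 200
X = rng.normal(size=(n, 2))
y = 0.5 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Fit the 'best fit' linear combination on half the data ...
X_fit, y_fit = X[:100], y[:100]
coef, *_ = np.linalg.lstsq(np.c_[np.ones(100), X_fit], y_fit, rcond=None)

# ... and, crucially, test predictive performance on the other half
X_test, y_test = X[100:], y[100:]
pred = np.c_[np.ones(100), X_test] @ coef
r2 = 1 - np.sum((y_test - pred) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
```

      The fitted coefficients summarise only this data set; a high score on the held-out half is evidence of pragmatic predictive utility, not of a causal mechanism.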

      The process described can be automated (sadly, it too often is), but input of human intelligence is necessary for determining a plausible, helpful model and interpreting its import.

      By comparison, ‘machine learning’ is a black box. Data are fed repetitively into the ‘machine’. The ‘machine’s’ ‘knowledge’ is tested against the original data and must then be re-tested against fresh data in order to determine its utility.

      ‘Machines’ generally contain simulated neural networks. Learning determines the weights given to the transmission of gobbets of information along differing pathways. The process and the resulting weighted network are open to inspection by the ‘scientists’ involved. Yet studying the final network offers no insight into how or why it arrives at particular predictions. That is, it literally is a ‘black box’.
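      The inspectable-but-uninterpretable point can be made concrete with a toy one-hidden-layer network. The weights below are random stand-ins for whatever a training procedure would have learned; nothing here reflects the actual model in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy network: 3 inputs -> 4 hidden units -> 1 output score
W1 = rng.normal(size=(3, 4))   # input-to-hidden weights
W2 = rng.normal(size=(4, 1))   # hidden-to-output weights

def predict(x):
    hidden = np.maximum(0.0, x @ W1)   # ReLU activations
    return (hidden @ W2).item()

x = np.array([0.2, -1.0, 0.5])
score = predict(x)

# Every weight is open to inspection ...
print(W1)
print(W2)
# ... yet nothing in these numbers explains *why* this input maps
# to this particular score: the network is a black box.
```

      Multiply the toy's 16 weights by several orders of magnitude and the inspection problem only gets worse; utility on fresh data, not interpretability, is all such a model can offer.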

      Machine learning may be a useful tool for predicting outcomes but, just like regression analysis, the model (in this case an occult one) is entirely atheoretical. Its predictions do not test a theory (the basis of science). Justification for its existence rests upon utility.

      Hence, much-hyped AI is not yet anything of the sort. The ‘machine’ can offer no insight into the mechanisms (biological or physical) by which variables, each representing a process, interconnect to produce predicted outcomes.

      That said, ‘machine learning’ and AI are excellent trigger words in grant applications.
