44% of digital health companies score zero on “clinical robustness”? US journal’s survey results are “worth pondering”

According to a recent study published in the Journal of Medical Internet Research (JMIR), many venture-backed digital health startups fall short clinically, judging by their regulatory filings and clinical trials.

The study drew on sources including the Rock Health Digital Health Ventures Database; FDA 510(k), De Novo, and premarket approval filings; and the number and type of clinical trials listed in the U.S. clinical trials registry (ClinicalTrials.gov). Using these, it reviewed a group of digital health startups headquartered in the United States that had raised at least one round of venture funding of more than $2 million since 2011.

The researchers assigned each company a “clinical robustness” score based on its regulatory filings and completed clinical trials: the sum of the two counts.

Of the 224 startups included in the study, 98 (44%) had a clinical robustness score of 0, while 45 (20%) had a score of 5 or above.

The average score across the 224 companies was 2.5, made up of 1.8 clinical trials and 0.8 regulatory filings on average; the median score was 1.
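As a rough illustration of this scoring, here is a minimal sketch with hypothetical companies and counts, assuming the score is the simple sum of the two tallies (the paper’s underlying data is not reproduced here):

```python
from statistics import mean, median

# Hypothetical per-company counts, for illustration only; the study's
# actual data came from ClinicalTrials.gov and FDA filing databases.
companies = {
    "ExampleDx":      {"completed_trials": 4, "regulatory_filings": 3},
    "ExampleTherapy": {"completed_trials": 1, "regulatory_filings": 0},
    "ExampleWell":    {"completed_trials": 0, "regulatory_filings": 0},
}

# Clinical robustness = completed clinical trials + regulatory filings.
scores = {name: c["completed_trials"] + c["regulatory_filings"]
          for name, c in companies.items()}

print("scores:", scores)
print("mean:", mean(scores.values()), "median:", median(scores.values()))
zero = sum(1 for s in scores.values() if s == 0)
print(f"share scoring 0: {zero / len(scores):.0%}")
```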

Companies whose main business is disease diagnosis had the highest average score, at 2.8; companies focused on disease treatment followed with an average of 2.2; and companies focused on disease prevention scored lowest, with an average of just 1.9.

Startups whose clients are employers achieved an average clinical robustness score of 3.1, while companies selling to providers, consumers, and payers averaged 2.7, 2.2, and 2.0, respectively.

Beyond the very low mean clinical robustness, there were also significant differences between clinical domains (e.g., high scores in cardiovascular disease and nephrology, low scores in oncology and primary care).

This may reflect the differing technological maturity of digital health solutions across clinical disciplines.

It has also been documented previously that well-funded clinical areas (e.g., diabetes) and under-funded areas (e.g., reproductive and maternal health) differ significantly in technological maturity.

The researchers also examined the public statements the 224 companies made on clinical, economic, contractual, and other matters: companies made an average of 1.3 such statements, and 43% made none at all.

The study also noted that startups selling to employers make more public statements than companies selling to other customer types, such as consumers, providers, and payers.

Overall, the study found no correlation between clinical robustness and the number of public statements a company made, between clinical robustness and total funding raised, or between clinical robustness and the number of years a company had been in business.
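As a hedged sketch of how such a correlation check might look (the paper’s exact statistical method is not stated here, so a Spearman rank correlation on hypothetical data is assumed purely for illustration):

```python
from scipy.stats import spearmanr

# Hypothetical paired observations, for illustration only: each company's
# clinical robustness score alongside its total venture funding (US$M).
robustness = [0, 0, 1, 2, 5, 7, 0, 3]
funding_musd = [120, 15, 40, 8, 22, 60, 200, 35]

rho, p_value = spearmanr(robustness, funding_musd)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A rho near 0 with a large p-value would be consistent with the study's
# finding of no correlation between robustness and total funding.
```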


Of course, the study also acknowledged some limitations of the analysis, such as the fact that only companies that had raised more than $2 million were included.

The authors suggest that future research could use condition-specific measures of efficacy, standardized within each clinical domain, to provide a clearer picture of impact.

This analysis did find that 20% of startups received a score of 5 or higher, indicating that their core products have been substantially tested.

However, the sheer number of companies with low scores is evidence that many venture-backed startups still lack clinical validation.

“While a small fraction of these companies’ scores may indicate advances in medical technology, nearly half of digital health companies (44% with a clinical robustness score of 0) still lack meaningful clinical validation,” the researchers said.

“The lack of an overall correlation between the total amount of venture capital invested in these companies and their clinical robustness scores suggests a significant asymmetry in the potential value of companies in today’s market. However, the amount of funding may reflect expected future value, not current value.”

In fact, as early as 2019, a South Korean research team had raised the issue of inadequate clinical validation of artificial intelligence algorithms. Leifeng.com’s “Medical Health AI Nuggets” had previously covered the topic in “South Korean research team: over 90% of medical imaging AI papers have not been rigorously validated in a clinical setting.”

In 2019, several physicians, including Dong Wook Kim of the Taean County Health Center in South Korea and Hye Young Jang, Kyung Won Kim, Youngbin Shin, and Seong Ho Park (corresponding author) of the University of Ulsan College of Medicine’s radiology research center, published a paper evaluating the design characteristics of studies that assess the performance of AI algorithms providing diagnostic decisions from medical imaging.

Findings showed that of the 516 eligible published studies, only 6% (31 studies) had external validation.

The research team ultimately concluded that nearly all published evaluations of medical imaging AI algorithm performance during the study period were designed to verify technical feasibility as proofs of concept, and did not rigorously validate the algorithms’ performance in real clinical environments.

If China’s domestic medical AI companies underwent the same kind of scrutiny, what would the results be?

Sources:

https://ift.tt/tMV5jAz

https://ift.tt/dmERKyO

This article is reproduced from: https://www.leiphone.com/category/healthai/T4CzQfqgbhqLkMif.html
