2020
"Understanding ROC (Receiver Operating Characteristic) Curve | What is ROC?"
"从价格双轨制,推演公立的衰落"
"沈向洋博士:三十年科研路,我踩过的那些坑"
"Making Your Neural Network Say “I Don’t Know” — Bayesian NNs using Pyro and PyTorch"
"Computer vision news December 2020"
AI applications in ultrasound: select clips with good quality; keep only the important clips; sort clips according to which view and cross-section of the heart they show; detect features on the clips; take measurements; predict certain pathologies. (A minimal sketch of one such step, view classification, follows below.)
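As a rough illustration of the view-sorting step, here is a minimal PyTorch sketch of a frame-level echo view classifier. All names (EchoViewClassifier, NUM_VIEWS, the chosen view set, input size) are assumptions for illustration, not details from the article.

```python
# Hypothetical sketch: classify which standard cardiac view a frame shows,
# assuming clips are already split into grayscale frames resized to 224x224.
import torch
import torch.nn as nn

NUM_VIEWS = 4  # e.g. apical 4-chamber, 2-chamber, PLAX, PSAX (assumed set)

class EchoViewClassifier(nn.Module):
    def __init__(self, num_views: int = NUM_VIEWS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_views)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) grayscale ultrasound frames
        feats = self.features(x).flatten(1)
        return self.head(feats)  # unnormalized view logits


if __name__ == "__main__":
    model = EchoViewClassifier()
    frames = torch.randn(8, 1, 224, 224)        # dummy batch of frames from one clip
    view_probs = model(frames).softmax(dim=1)   # per-frame view probabilities
    # A clip-level decision could simply average frame probabilities (one heuristic).
    print(view_probs.mean(dim=0))
```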
Before training a deep learning model, it is vital to understand your data. Is it sufficiently variable to cover the application being developed? There are so many factors that come into play here, including the quality of the ultrasound and characteristics of the patient, such as age, gender, and BMI, which can all affect the size of the organ being imaged.

Once the data is ready, you need to work with your echo specialist or radiologist on the best labeling and annotation procedure. It is an iterative process which is partly about the algorithms and partly about the data. Try one approach, train a model, and then give the data back to the echo specialist, who will look again and may make further suggestions.
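One concrete way to approach the "understand your data" step is to summarize how the patient metadata covers the intended population before training. The sketch below is a hypothetical example; the metadata.csv file, its column names, and the chosen bands are assumptions, not something described in the article.

```python
# Hypothetical sketch: check whether the dataset's patient mix (sex, age, BMI)
# actually covers the population the application targets.
import pandas as pd

meta = pd.read_csv("metadata.csv")  # assumed layout: one row per ultrasound study

# Bucket continuous attributes so sparse regions of the population stand out.
meta["age_band"] = pd.cut(meta["age"], bins=[0, 40, 60, 80, 120])
meta["bmi_band"] = pd.cut(meta["bmi"], bins=[0, 18.5, 25, 30, 60])

# Count studies per (sex, age band, BMI band) cell; empty or tiny cells flag
# parts of the intended application that the dataset does not yet cover.
coverage = (
    meta.groupby(["sex", "age_band", "bmi_band"], observed=True)
        .size()
        .rename("n_studies")
        .reset_index()
)
print(coverage.sort_values("n_studies"))
```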
Data is a key requirement for deep learning algorithms to work, but for ultrasound, data availability can be limited. Also, most applications involve heavy human interaction, whereas in computer vision, you work with a picture or video. "I personally see there's a traditional image-guided interventional subfield and a robotic field," Yipeng explains. "The robotic field is assuming robots will be controlling all medical devices in the future. With that as our end goal, we try to make our algorithms as automated as possible. We're still very close, but these two fields will merge at some point."