Welcome to MIT's Koch Institute for Integrative Cancer Research. We're delighted you're all with us this evening. And welcome to our program, which is SOLUTIONS with/in/sight. It's a series that we conduct. And tonight, our title is Using Artificial Intelligence to Make Mammography Smarter. So this is a series, and it's a series that attempts to explain to people where the excitement is, the excitement at the various convergences: the convergence between biology and engineering, the convergence between academic studies and clinical practice. The thesis of the Koch Institute is that, through these various intersections, we can actually accelerate progress on cancer. And so tonight's program will give you a little sense of that.

We have a really remarkable trio of women presenting tonight, and I know this is going to be a really extraordinary conversation. Behind many new innovations, many new discoveries, there is, invariably, a personal story. And there is a personal story here between Connie, Regina, and what they have put together to make mammography a whole lot smarter. And it's a very special treat to have Linda Pizzuti Henry as our moderator tonight. She's going to conduct the conversation and then take questions from you afterwards. So if you have questions, hang on to them, because we'll turn to the audience.

So I want to make a few just very brief introductions to let you know who these people are. Regina Barzilay is the Delta Electronics Professor at MIT. She's a member of the Department of Electrical Engineering and Computer Science. She is also a member of the Koch Institute. She is a member of the Computer Science and Artificial Intelligence Laboratory. Lots of words. Simply put, Regina Barzilay is one of the world's leaders in machine learning, and she has committed herself to using her insights into machine learning to make clinical care better.

Joining her is Connie Lehman, who is the chief of breast imaging at Massachusetts General Hospital. She's also a professor of radiology at Harvard Medical School, and she's the co-director of the Avon Comprehensive Breast Evaluation Center at MGH. Connie has devoted her clinical practice to improving the health of our communities by delivering the highest quality patient-centered care in a setting of active innovation and education. And you're going to see how the two of them have put their minds together and their talents together.

Now, let me just say a word about our moderator, Linda Pizzuti Henry. I have to first say we're very proud that she's an MIT alumna. Hurray. She's the managing director of The Boston Globe. She's co-founder of HubWeek, which is an annual collaboration among regional institutions that explores the intersection of art, science, and technology. She's founder of the Boston Public Market. And I have to tell you, there are dozens and dozens of community-focused activities that she has made happen and fostered. She has been a real champion of our community.

So it's an incredible privilege to have these three amazing women with us tonight. I'm looking forward to their conversation. I know you're looking forward to their conversation. And at the bottom, it is: how do we make mammography's diagnostic power even greater, so we can save lives?
So with that, I'm going to hand it over to Linda, Regina, and Connie. Come on up. Get started. [APPLAUSE]

Well, good evening. And thank you so much for coming. And thank you so much for having me tonight. I'm so excited about this time that we have together. We have an incredible problem with breast cancer. It is amazing and staggering when we look at the numbers. I'm sure, if I asked for a raise of hands of how many of you have been touched by breast cancer, either yourself or a family member, a friend, the hands would all go up. We all have. And out of the 2 billion adult women who are of screening age, over 2 million are diagnosed every single year worldwide, over 2 million women every single year being diagnosed with breast cancer. And globally, over 600,000 die year after year after year. It is really staggering. We all agree on a lot. There are some areas that we get into, especially when we're trying to be innovative, where people have a lot of disagreements. But every person that I talk to emphasizes that we cannot continue to look at late-stage disease treatment. It is both costly and ineffective. We must be better at identifying breast cancer early, when it can be cured.

But we are missing two key ingredients to that paradigm of early detection and cure of breast cancer. We simply don't have accurate tools to predict who will and won't get breast cancer. And we don't have accurate tools to identify cancer early. Now, we think that we do. We talk about all the different tests that we have out there: the genetic tests, the screening mammography, the new tools, automated ultrasound, MRI. But I want to share a little bit about how these are failing, what we need to do, and the power of AI to address our greatest challenges.

Breast cancer impacts so many women. And it has been doing this for decades and decades: young and old, all races, all walks of life, an incredible array of women. And when you look at them collectively, you realize how far we need to go to change the face of breast cancer. Each of these faces is so unique. And what we're also discovering is that every mammogram of every woman is unique. When we look at mammograms, we can tell, as if it were that woman's thumbprint, so much about that individual. And we've never leveraged this before AI. We've never actually taken all of that digital information in every woman's mammogram and leveraged it to predict the future and to assess more accurately: does this woman have cancer? What type of cancer is it? Will she have cancer in the future?

And this is something that all of us have talked about in the clinical setting. If I'm reading mammograms and a technologist accidentally puts another woman's mammogram up in a collection of my patient's, I'll know that it's not the same patient. I can look at the different ones and say, well, this belongs to someone else. So we've sort of known that. But we didn't have the tools to really extract that incredibly rich data out of the mammogram.

People have noticed this for a long time. Someone that Regina and I like to read about is John Wolfe. He's still alive, so I really think we need to invite him out for lunch or dinner, talk to him, and say how much we appreciate that, early on, he was noticing this. He was writing about it, and everyone thought he was crazy. I remember when I was in my early training, I asked about Wolfe and his patterns of the mammogram. They're like, oh, that was all debunked; he didn't know what he was talking about. But what he said in 1967 was that normal and abnormal parenchymal elements are noted. And he was really looking at all of the data, besides just, is there cancer or not: these alveolar tissues and ducts, what's their distribution?
And he said this material was coded and later subjected to analysis by computer. And I would love for him to see just how forward-thinking that was and what we're doing now in this same domain. We could also laugh with John Wolfe, because Regina will talk a little bit about some of the challenges we've had in getting our innovative thinking published, and the rounds that we've done with the journal Radiology, which is considered the highest-impact journal within the imaging sciences. If you can't get into Radiology, you go to AJR. So I know, when I see that this first paper, in the '60s, was in Radiology and this next one, in 1976, was in AJR, that Radiology did not accept this paper, which they thought was crazy, because what he said was, maybe breast patterns can be an index of risk for developing cancer. He thought he was seeing something with his eye. He thought he was noticing something in his practice. And everyone thought he was crazy.

Maybe one of the reasons why they thought he was crazy is because they were so excited about the technology. And it was like an arms race: who can build the better, faster, smarter mammography machine? And we saw incredible advances, going from xeromammography (I'm so glad I don't have to read the mammogram on the far left, but that's what people were reading) to film-screen mammography to digital mammography. But what we found was that our advances in imaging technology outpaced our human ability to process the information that was provided. And yet, we just kept collecting more data with our equipment. So now we have tomosynthesis. Any single woman could easily have 200 images that, as a breast imager, I need to sort through looking for a cancer. There are thin slices through the breast; each view of the breast is about 50 images in an average-sized-breast woman. So think about the challenge that we present to humans: finding six cancers out of more than 200,000 images. Now, I say it's challenging, but it's actually ridiculous. This is not a feat that we should have humans be doing.
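The arithmetic behind those numbers can be made concrete. Here is a minimal sketch, under assumptions drawn from the talk: four standard screening views per exam, roughly 50 tomosynthesis slices per view, and on the order of six cancers found per 1,000 screening exams.

```python
# Back-of-the-envelope tomosynthesis reading workload.
# All figures are the rough numbers cited in the talk, not exact clinical data.
VIEWS_PER_EXAM = 4          # standard screening views
SLICES_PER_VIEW = 50        # thin tomosynthesis slices per view
EXAMS = 1_000               # a batch of screening exams
CANCERS_PER_1000_EXAMS = 6  # approximate screen-detected cancer rate

images_per_exam = VIEWS_PER_EXAM * SLICES_PER_VIEW   # ~200 images per woman
total_images = images_per_exam * EXAMS               # ~200,000 images
print(f"{images_per_exam} images/exam; {total_images:,} images "
      f"containing roughly {CANCERS_PER_1000_EXAMS} cancers")
```

In other words, on the order of one cancer per tens of thousands of images, which is the needle-in-a-haystack rate the talk describes.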

This is a feat that we should have highly intelligent computers doing. This first level of screening mammograms, those 200,000 images: let's have computers read that. And let's have humans work on the conversations with patients, calling them back, biopsying them, talking them through their cancer diagnosis, doing the things that physicians can do well, and allowing computers to do the things that they can't.

And we've all known this to be true. I did a very large study with the Breast Cancer Surveillance Consortium: hundreds of millions of mammograms, thousands of radiologists. And what we find is that there is wide human variation in mammographic interpretation, such that we know 40% of US-certified breast imagers are not meeting the criteria that are set. And we give people a lot of latitude. In fact, the requirement for cancers diagnosed per 1,000 mammograms is only that you're finding 2 and 1/2 cancers per 1,000 and above. So on these graphs, what I'm showing is, you have radiologists that can read thousands of mammograms and find six cancers out of every 1,000, or five. That's pretty average. But it's OK if they're only finding two. And in fact, some were reading here, in this huge study that we did, and they would find one. In other words, they're missing a lot of cancers. There's no single performance of mammography; there are these wide error bars. We can talk about recall rates or sensitivity. How good is mammography at picking up cancers? It depends on the human. And we want to change that. We don't want this to be tied to that human variation. The average sensitivity of mammography is 80%, but we have people for whom it's 30%, and we have others for whom it's 95%: these wide error bars in performance in this domain.

And we talk about it amongst ourselves. We talk about the fact that, in about 30% of cases, when you identify a cancer, you can see it in the year before when you look back. We talk about our false negative mammograms, when the mammogram is read as negative, but some time during that year the woman feels a lump or identifies something, goes in to the doctor, and has cancer diagnosed. When we look back at that negative mammogram, again, in 30% we'll say, we actually can see something. This is an example. This was read as negative, but in hindsight, we can see this spot. Now, why did a human miss this? Because for breast imagers, you look at this, and you say, actually, I can see that cancer; it's that little white spot. There's a human reading hundreds of mammograms, going through, who gets distracted. It looks like the other tissue. Again, hundreds of thousands of images, looking for six cancers, is not something that's well suited to humans.

So the government has tried, as much as they can, to help us, with very good intentions. We have a Mammography Quality Standards Act to improve the performance by radiologists. The standards that are set for us are higher than in any other field in radiology. We have to read a certain number, which is not required in other areas. We have to take CME. We have to audit our practices to know: how good are we at what we do? How many times do we miss cancers? How many times do we have false positives? And then, more recently, the government also said, let's share information with women; let's make sure they know their breast density. So what about that? I'm going to now shift over and sort of bring us to the transition over to Regina by asking the basic question: can computers read mammograms better than humans?
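The audit metrics described above reduce to simple ratios over counted outcomes. A minimal sketch with invented counts follows; these are illustrative numbers, not the consortium's data.

```python
# Screening-audit metrics from raw counts. The counts below are made up
# for illustration; only the 2.5-per-1,000 floor is a figure from the talk.
exams = 10_000          # screening mammograms read
cancers_found = 45      # screen-detected cancers (true positives)
cancers_missed = 11     # interval cancers after a negative read (false negatives)

cancer_detection_rate = cancers_found / exams * 1_000     # per 1,000 exams
sensitivity = cancers_found / (cancers_found + cancers_missed)

print(f"Cancer detection rate: {cancer_detection_rate:.1f} per 1,000 "
      f"(floor discussed in the talk: 2.5)")
print(f"Sensitivity: {sensitivity:.0%}")
```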
We think that they can. In fact, we know that they can. The first question, and this is going to be audience participation. I like that people raised their hands before, so pay attention, because this is all the education you're going to get. These are mammograms of different density. White means dense, and the grayish is not so dense. And these last two are very dense. It turns out that radiologists don't do a very good job of this. We did a large study of 83 radiologists. At one end of the spectrum, a radiologist said, I think 6% of mammograms are dense. And at the other end, they thought that 85% were dense, and everything in between. So this is the information that the government is asking that we share with all women.

So we developed a model that had 97% agreement with expert readers and 94% agreement in the clinic. This has been implemented, and we're using it now. And every patient that comes through Mass General for her mammogram has her density assessed with this MGH-MIT deep-learning model.

This is the sorting that one would do. You have all these different mammograms, and we're asking the models to sort them out depending on how they were assessed. I'm putting the fatty ones, the gray, to the left, and the dense ones to the right. This is a confusion matrix, and I want you just to look at these patterns. Reader one sorted these mammograms into columns; reader two sorted them into rows. So look at these patterns. And which look more similar to you, the columns or the rows?
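For the mechanics, here is a hedged sketch of how such a reader-agreement table can be tabulated once two readers' labels are recorded. The labels below are invented for illustration; they are not the MGH data, and they do not give away which reader is which.

```python
# Tabulating agreement between two density readers (for example,
# a radiologist and a model) as a confusion matrix. Labels are invented;
# the four category names are the standard BI-RADS density categories.
from collections import Counter

reader_one = ["fatty", "scattered", "scattered", "extremely dense", "fatty"]
reader_two = ["fatty", "scattered", "heterogeneously dense", "extremely dense", "fatty"]

matrix = Counter(zip(reader_one, reader_two))   # (column label, row label) counts
agreement = sum(a == b for a, b in zip(reader_one, reader_two)) / len(reader_one)

for pair, n in sorted(matrix.items()):
    print(pair, n)
print(f"Percent agreement: {agreement:.0%}")
```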

Which do you think was the machine? And which do you think was the human? This is where there is agreement: both reader one and reader two said that the upper left was fatty and the lower right was dense. And that makes sense; that's what they look like. But these are disagreements: one said dense, and the other one said, I don't think it's dense. So the model is the columns; that's the computer. And the rows are the radiologists. You can see the variation by the humans, and the consistency by the models.

It was giving this feedback to our radiologists that had them accept using AI in the clinic, because there is a lot of resistance. The more they got this feedback, the more they realized, oh, there is human variation. I don't know what I was thinking that day, but I was wrong. Maybe my fingers slipped. I don't know. And so that is why, at MGH, our radiologists are supported, both by the infrastructure that we've built and by the tools that we have, to integrate the AI assessments seamlessly into their clinical practice. It doesn't increase their time. They're not working at separate computers. It's been integrated into the standard clinical flow. We're now ready to launch this into other areas. Regina is going to talk about our other models. But we've built a platform where all images at MGH are pulled into a PACS, or an archiving system, run through an IT application, through a model, and then back through into our Epic and our reporting system.

And finally, the final question, the first step to independent reading, where we actually get to the point where it's computers reading our mammograms, not humans, is our one-year triage model. There we are showing, right out of the gate, giving this model very little information (not the prior mammograms, not the advanced tomosynthesis mammograms, just the basic 2D mammograms), that we can identify those mammograms with a high likelihood of having cancers, and we can identify those mammograms that were human-read false positives. We are already at a point with our computers where we are performing better than many, many radiologists certified in the US to read mammograms. And we're truly just getting started.

So I couldn't be luckier: to have been recruited to Boston, to have one of the men at Mass General who recruited me say, you have to meet my patient who's just finished her treatment; her name's Regina Barzilay, and she's spectacular. And then I'm like one little T stop away on the Red Line from Regina, her lab, all that MIT has to offer, something that was beyond my wildest imagination when I made this move from Seattle to Boston. And I couldn't be more proud, also, of my team back home at MGH, which is open and excited, enthusiastic about how we can do more for our patients in the future. So I'm going to turn it over to Regina. And thanks for your time. [APPLAUSE]

Can you hear me?
Hi. [INTERPOSING VOICES] Yeah. OK, I will try to shout. I am an MIT professor, so I can try to shout. I'm teaching one of the biggest machine learning classes, and it's a big class. So, at any rate. It's really my pleasure to be here. And I remember, a few years back, at the time that Connie and I met, I also came to this particular place, met a number of people, and told them, you know, guys, you don't know what you're doing; I'm going to show you the way. So you can see that I was totally obnoxious. I came just from across the building, and I started my journey into this area. And while Connie focused on talking about how a machine can help radiologists do a better job than they are already doing, the question that I personally cared about is using AI to solve questions that humans cannot really do.

So for instance, here you can see two breasts. Both of these breasts do not have cancer, OK? And I don't know if there are any more radiologists besides Connie in the audience, but we tried it on many expert breast radiologists: they cannot say which one of these breasts is likely to get cancer. They can say there is no cancer now, but they cannot really assess what's to come in the next two, three years. And this is the question that I really cared about.

So now let's think about how humans are thinking about risk. The way humans think today about risk, and there is a lot of research in this area, is they start by thinking, what can be correlated with the risk of cancer? So for instance, for breast cancer, it will be maybe your family history, your BRCA status, your breast density. So you're going to think about various factors, combine them with a statistical model, and you're going to get a prediction. Now, I'm sure not many of you remember what the area under the curve is, which is, in this case, 0.6.
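For readers who want the mechanics, the area under the ROC curve (AUC) is computed from predicted risk scores and observed outcomes. A minimal sketch with invented scores and labels, assuming scikit-learn is available:

```python
# AUC from predicted risk scores and observed outcomes.
# Scores and labels below are invented for illustration only.
from sklearn.metrics import roc_auc_score

labels = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]   # 1 = later developed cancer
scores = [0.1, 0.4, 0.35, 0.2, 0.8, 0.3, 0.15, 0.45, 0.25, 0.5]

auc = roc_auc_score(labels, scores)
print(f"AUC: {auc:.2f}")   # 0.5 = random guessing, 1.0 = perfect ranking
```

Intuitively, the AUC is the probability that a randomly chosen woman who develops cancer is ranked above a randomly chosen woman who does not.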

But it pretty much tells you, how accurate is the model? Let me give you the ranges. If you make totally random guesses, you're going to get 0.5. If you are perfect, you're going to get 1. So you can see that the models that are currently used today for predicting who needs extra screening, who can be on chemoprevention, are closer to random than to 1. And the question is, can we actually do a better job?

And I know that many women in this room who are getting their mammography yearly are getting this letter that comes back home and tells them, you have high density, you're at increased risk of breast cancer, and so on and so forth. And now it became a federal law. Now let's look at how predictive this indicator is, because we are telling somebody, you should be worried. If you're looking at non-dense women, if you look at the cohort of mammograms, you would find six cancers out of 1,000. If you are a dense woman, the one who got the letter, it's eight out of 1,000. Today, we are sending 42% of patients the letter saying that they should be worried.

Now I will tell you my personal story. So this is me, diagnosed in 2014. And this is me in 2013. And this is me in 2012. And you know what's interesting about 2012? When I went to do my first mammogram, they saw that I have a dense breast. And they told me, you know, you shouldn't worry about it. You're going to get this letter, but it doesn't matter, because 40% of women are getting these letters. So this is the type of information, the risk assessment, that we're giving to the woman.

And the question that I thought about is: forget about using humans to predict the risk factors. Our capacity to identify patterns is very limited. Just as dogs have a better sense of smell than we do, machines can remember many more pixels than we would ever be able to. So the question for me was, if I am going to get a very large cohort of mammograms, let's say for 50,000 patients or 100,000 patients, and I would know for each one of these images what happened to this woman in two years or in five years, can a machine discriminate the pattern that a human eye cannot really discriminate?

And the technology that we use is deep-learning technology. It's the same technology that recognizes your face when you have the new iPhones. So this is a deep-learning model. It pretty much takes an image, I think it's a dog here; it takes the image of a dog, which the machine sees just as a matrix of zeros and ones, and applies many different nonlinear transformations to this image in such a way that it correctly predicts the label. It has the image, it has the label, and it tries to adjust all these wires to predict the correct assessment. And what's interesting about these models, in contrast to what we, as humans, are thinking: when we are thinking, we need to tell the machine what the right pattern is. That's what Dr. Wolfe was trying to do. He was saying, if you see more white or dense white, it's predictive of future cancer. Here, we are not telling it to the machine. We're giving the machine the input and the output and letting it figure it out.

And if you're not convinced, I want to give you an example from another area, which has nothing to do with breast cancer. Whenever you are training the machine to recognize faces, again, you give it an image and the label. And what happens, you would see, is that, without ever teaching the machine explicitly about the eyes and the ears and the nose, through the layers it can identify increasingly complex patterns. The first layers of the network, the ones at the very bottom, would recognize very simple lines. When you go one layer higher, the machine would identify bigger parts, ears and nose and so on, until it recognizes the whole face. So this is quite remarkable. So we want to take all this power and use it to identify these very subtle patterns in the breast tissue to predict future risk.

So we collected with Connie (Connie led this part of the work) all the patients that we had for whom we knew the outcomes. And we trained two models: one model took the breast and predicted the five-year outcome, and the other model used the image and some risk factors, so for instance, BRCA. And this is kind of surprising, because you say, what can you learn from the image? Can you actually learn something, even if you don't tell the model how old the woman is, whether she ever had cancer in her family, and so on? So let's see how it works. You can see the green one is the image alone, and the yellow one is the image plus risk factors. So you can see it helps, but the image alone already does pretty reasonably. Now, I will tell you more about what it means, but I just want to show you one graph that was the most striking graph for me and for Connie when we did this research.

We actually wanted to stratify the population and to see how the model does for different classes of subpopulations. And what we have seen here is that our model, the yellow one, sustained good performance for different races. What you can see is that the model called Tyrer-Cuzick, which is currently in clinical use in many hospitals in the city, actually performs below random on the African American population, but not far enough below random that you can actually flip it. And Connie and I met Dr. Cuzick a few months ago, and we asked him about it: do you know that your model, which is currently used, and according to which you can get tamoxifen as a chemopreventative, performs so badly on such a big subset of the population? He said, yes, of course I know, because this model was developed on white women from 50 to 70 in London. So you can really see that these models, not only are they not very powerful, they also are quite biased.

And coming back to Connie's point: what we've discovered is that, of course, there is a correlation between the risk that our model predicts and the density, but they are not the same. You can see that there are some women who have high risk and low density, and women who have low risk and high density. So they don't totally and fully correlate. So that's why the recent law that was just developed, actually accepted and signed by our President Trump, goes based on the science that was developed in 1967; that's when the Wolfe density paper came out.

At any rate, I want to give you some more intuitive understanding of how this model works. And I showed you the model for two years out. So what you can see here, these are the deciles. So for instance, this is the top 10% of the patients that the model predicted as high risk. You can see that 40% of the cancers fell into this category. If you look at the top 20%, 60% of the cancers happen to be here. At the same time, if you look at the bottom 40%, these women have a very, very low chance of getting breast cancer. So you can imagine what may happen when we move forward: we can do the first mammogram, assess the risk, and then create personalized screening, depending on your particular tissue. Either you look really suspicious, and you may need an MRI and other things, or maybe it's OK if you're coming in every two or three years for screening.

And what I wanted to say, I don't know how many of you know this story. This is a woman called Nancy Cappello from New Jersey. In 2003, she came and did her mammogram. Her mammogram was fine. Within, I think, a month, she found a lump, and she was diagnosed with metastatic breast cancer, when her mammogram had come out fine. And at that moment, this woman realized that there is something wrong with the process. She had high density, and nobody told her that her mammogram may not be very predictive and that maybe she needed additional screening. So what she did, she actually decided to reach out to her state government and to change the law, which now requires the providers to tell women that they have high density. And in states like Connecticut and New Jersey, you can even get supplemental screening from your insurance based on this information. And I really applaud her for what she did, despite her own unfortunate circumstances. But what I feel we should be doing now is move to the best science to help women. And today, the deep learning models, like the one we developed with Connie, actually are much more accurate, they're bias-free, and they are consistent. And those are the papers.

Now I want to take three minutes and tell you how I got to this research, because we had a conversation with Linda prior, to prepare for this event, and she told me one should share stories. So I'm going to tell you some stories. So I got tenure at MIT when Susan was the president. And my core research is in natural language processing, and I work on a variety of topics. For instance, we developed the first model which took a dead language like Ugaritic and fully automatically deciphered it using the Hebrew Bible. We taught the machine to play Civilization better by reading the manual. So, a variety of fun things, OK? So I had fun. And then in 2014, in April, actually the April five years ago, at the end of April, I was diagnosed with breast cancer. And this is the spring vacation when my son and I went to Florida, and within a month I was diagnosed. And this is him in July, when I was already doing my chemo, and he had to cut my hair because I couldn't do it myself.

At any rate, I went through the treatment. All was fine. And then I came back to MIT. And when I came back to MIT, after really seeing human suffering at MGH and rethinking completely whether it is the best use of my time to do Ugaritic or Civilization or other things, I was totally confused. And I remember this was the biggest confusion of my life. You see my hair is curly, because that's how it comes back after chemo. And I was just thinking, what can I do to change it? Because what I felt was that we have such amazing technology that we are developing in computer science and at our lab here, but it really doesn't go to MGH. And I was trying to find a place where I could contribute. And I went from office to office. Part of it was coming to Koch and saying that they don't know what they're doing. But the second part, I was going from office to office at MGH and saying, I don't need any money. I can do machine learning. Can I help you? And in most cases, the answer was, thank you very much, we're doing great. And then I found Connie, and then we started our work.

And I remember that it was very funny. I had to give a talk, a big plenary talk in my area, at a big conference. It's like once in your lifetime you give this talk. And I was thinking, what do I want to talk about to my community, to my research community? And I cannot talk about anything else: I want to tell them that they have to do what I am trying to do, to use technology to help patients. And I gave this talk. And it was kind of amazing. People were saying it was such a great talk. I was so happy. And I said, wow, this is great.

Then the reality hit. So we submitted some proposals to NCI and other agencies. And I wrote in the proposal, I'm a breast cancer survivor. I know what needs to be done, you know? And I want to make sure that the technology comes to the patients. So I was sure that I can write proposals; my proposals are almost never rejected. I wrote this plea, and I was sitting there waiting for millions to come. And then all my proposals were rejected. Connie and I were trying to get access to data. Typically, I never wear a suit, because definitely nobody in a computer science department wears suits. So I was wearing a suit and meeting a lot of different officials at MGH. And we couldn't get the data. This was just terrible. And I remember I was reading these reviews, and we're talking about 2016, and people were asking, why are you using neural deep-learning models? Why are you not using something else?
I was thinking, if these people just read the New York Times, they would know the answer. So it was just unbelievable. And then I realized that this is a bigger problem, because if you look, for instance, at this DOD breast cancer panel: who is missing here? There is no computer scientist. Computer science, three years ago, and AI were not part of this story at all. So obviously, these people cannot read my proposals.

At any rate, and I don't know how many of you have read this book, Americanah. I highly recommend it. And there is this place where this woman describes how she [INAUDIBLE]. And she goes from place to place, and everybody hears her, and it sounds good, and nobody gives her money. This was me. I mean, I read this book, and this is me.

And then, what we've done, so Connie and I didn't stop. This was the point where we said, we're not stopping here. So I gave humongous amounts of talks all over the world. I met with any person who said they wanted to talk to me. I said, fine, I will find time for you. We wrote a lot of nonfederal proposals. And we tried to get some publicity while we continued doing our research. And I should say that I'm really grateful to the people from Koch who really helped us. Among the first grants that we got was a Bridge Project grant from Koch. Susan and Phil really helped us to negotiate a variety of different complex interactions with various agencies and MGH. So there were people who started helping us and started moving things along.

And I have to tell you the last funny story. So I was participating in some podcast at the Washington Post. And when they were putting on, like, makeup and beautifying me, there was some guy sitting near me. And the guy asked me, what are you doing? I said, I'm a professor of computer science. And he asked, why are you here? Because it was about cancer. So I told him, you know, I came to tell this whole room of people that they really should be using machine learning. They really don't know what they are doing. People at NCI are clueless. And I was going on and on and on. They finished my makeup. And at this point, I said, oh, I'm really sorry, it's that I come from MIT; we're very, very intense. I said, I'm really sorry. I'm sure you have nothing to do with it. OK, who can recognize the guy? I mean, it's... yes? He's the director of the NCI. Exactly. So he told me he actually has a lot to do with it, so we took a picture. But it was really funny. And anyway, he's still the director of the NCI. And then, at some point, we got our funding to do our research. So we're doing well. I am now on the data science advisory board to the NCI director. We have images from multiple hospitals, and we are really spreading it across the country. And the part that I am really most excited about is that whatever tools we develop, besides writing papers, we have really clinically implemented them at MGH. So everybody who does their mammograms at MGH has them read by our tools. And I hope that we will spread them to Partners and other institutions. Thank you very much. [APPLAUSE]

Thank you. Thank you. Those were both extraordinary presentations. Can't hear you. No? Am I on now? Sorry. Those were both extra... can you hear me? We're good? Those were both extraordinary presentations. And there are so many things about it that I love, about how you're changing things, but also that this is an unusual collaboration that you are making happen. I love that we're sitting here in the Koch Institute talking about it, because that's so much of what the Koch Institute stands for: this integrative approach to finding ways to battle cancer. And you are a living example of what the Koch Institute stands for.

So thank you for having us here. This is a perfect setting for this conversation. So we heard your side of the story, about her saying that everybody in this industry is doing it wrong. How did you feel when she approached you?

I was incredibly excited, because my career had always been seeing the problem, chipping away at it, making a little bit of a dent, whether the work in Uganda or other countries where we just don't have radiologists, or the patients that we have that would come in with a palpable lump after a negative mammogram. And so there were just all of these problems where we were chipping, chipping away. And then I had Regina come in, saying, I have these tools I think can help. I have something I think could be really powerful. And once we started the discussion, it also really impressed me. I've always said, if you get like-minded people that have the same vision, the same mission, the same tenacity, they can do great things. And that's what impressed me most about Regina. She was in it for all the right reasons. She wanted to help patients. And that was pretty extraordinary. Of course, trainees and residents often say, now tell me the difference between a sponsor and a mentor. And I was like, OK, look up sponsor; you're going to see Susan Hockfield, because she knows what that means. When she says, I see these two women, I want to support them, I want to help them, and I'm going to figure that out. So it was really a spectacular process.

And so what you walked us through here was really incredibly persuasive. I'm pretty sure everybody in this room would prefer to have any sort of screening read by AI, to just have this database. One of the things that you talked about is how you have the training from looking at thousands and thousands of images over your career, that things pop out to you, that you were able to see it. The same thing happened with the woman who discovered the CTE injuries in the brain; she was just able to see these patterns. You heard this, and you said, that's not good enough.

I do appreciate that Connie reads my mammograms every year, together with the machine. But I think that it's not about who is better. We have a really challenging task in front of us, trying to predict.
And, I want to reiterate the statistics from Connie's presentation: 30% of breast cancers were already growing in the breast a year earlier and were not identified. And we're even talking about top institutions. So the question is, can we put machine and human together to do it better? And we all know, and I've been to the room where Connie reads these mammograms, it's eight hours. You're just looking at the images. And I'm sure each one of you can think about a part of the day when you're distracted, or you want to make a phone call, or you need a cup of coffee, and you just lose concentration. So if a machine can help a human do their job better, I think we all will benefit. And furthermore, we can ask the machine questions that humans cannot answer, like predicting risk.

And the question that I'm asking myself, and Connie's asking the same question: let's say now we are putting this model that predicts risk into clinical implementation. The model can actually tell you what it looked at when it identified risk. But the pattern is so complex that a human cannot comprehend it. So how do you work in this world where machines, in some ways, are smarter than us? Because it's not that you can't open the black box. You can. You can make it a white box. It will show you what it's seeing. But the human still may not be able to understand it.

And even with AI, there is still a really important human element, which I've heard you identify, Connie, where there is human trust. And you yourself played a really important role in showing that computer-aided, what is it called? CAD, computer-aided detection, was not effective, because the element of human trust is so important. So can you tell us a little bit about that story and what you learned from it to help with what you're working on today?

Right. And so one of the things that Regina and I are excited about is not repeating the mistakes of the past. So computer-aided diagnosis, computer-aided detection, has been around in mammography for decades. It was a very different approach from the one Regina explained, where you give lots of images, state the outcome, and the deep learning is going to learn that. Instead, we would segment the breast. We would say, this is what a cancer looks like. These are what calcifications look like. This is what a mass looks like. And we would teach those features and then run the mammograms through, and it would flag and highlight: look at this, look at that, what about that? Well, it turned out, when those products came out, and the companies were really rushing to get them to market, they put marks all over the mammograms. So we had radiologists with lots of false positives saying, oh, maybe that's something. Maybe that's something. So then the companies said, well, let's try to filter that down. Don't make so many marks. If it's really obvious, we won't have to put a mark on it. When we actually studied what happened when radiologists used that all across the country, they performed worse with this assistance than without.

And that was something that people were pretty upset with me for publishing, because a whole industry had been created around this. People were billing for it. They were collecting for it. And so that was an unpleasant story. It was an uncomfortable story. But it is the power of science and research and information to guide people's best practices. So we don't want to repeat that. We know that a lot of the study designs that were done, although they were accepted in journals that would reject us, were pretty sloppy. They were reader studies. They used highly, highly biased data samples that wouldn't translate over into routine clinical practice. So we're paying a lot of attention to those lessons, and we're really determined not to repeat them.

And, Regina, you've talked about how there is another aspect of medicine that you think is broken, which is the clinical trials. Can you talk about why you think they're broken and what we can do about it?
So thank you for bringing it up. This is my other favorite topic: the question of how information is used today in medicine. Because if you go to Amazon, to any company that deals with data, they would use every single piece of data that they can put their hands on. Now, if you look at cancer care today, and I took this statistic from the American Society of Clinical Oncology, if you want to check it: they claim that, today, all clinical decisions are based on the 3% of the population that participate in clinical trials. So it's not random. It's not random. It's not random, and it's only 3%. And it's just, why not the 97%? Each one of us went to the doctor. Something happened to us. There is a clear outcome. Given how different we all are, machine or human should be learning from all the information, not only from a small, non-randomly selected subset. And one of the shocks for me, when I started understanding how it is done in the cancer clinic at MGH, was the fact that, if you want to use the retrospective data about the patients, there is actually a fellow, who is a clinician, who needs to sit down to extract this information by hand into the database and then run the statistics. Clearly, you cannot do that on thousands of people. They can write a paper which will have 200 samples of retrospective data, and it just looks, to me, totally bizarre, because in no other industry that I work with do people have this type of practice.
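As one hedged illustration of the kind of automation being pointed at here, below is a minimal, made-up sketch of pulling one structured field out of free-text pathology reports with a regular expression. The reports and the pattern are invented; real clinical NLP pipelines use trained models and are far more involved than this.

```python
# Toy extraction of estrogen-receptor (ER) status from free-text
# pathology reports. Reports and pattern are invented for illustration;
# production systems use trained NLP models, not a single regex.
import re

reports = [
    "Invasive ductal carcinoma. ER positive, PR negative, HER2 negative.",
    "Biopsy shows DCIS. Estrogen receptor: negative.",
]

ER_PATTERN = re.compile(
    r"(?:ER|estrogen receptor)\s*:?\s*(positive|negative)", re.IGNORECASE
)

for report in reports:
    match = ER_PATTERN.search(report)
    status = match.group(1).lower() if match else "not stated"
    print(status)
```

The point of the sketch is scale: a script like this covers every report in an archive in seconds, where manual abstraction caps a study at a few hundred cases.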

There are tools to do it automatically. Why are we not doing it automatically? Cancer registries in the whole country are done by hand. They're not done automatically, as it's done, again, everywhere else. And there is really not much value in doing this by hand. It delays it. It introduces mistakes. It increases cost. So how I envision it moving forward is that we can really utilize the data for every single patient that went through the clinic, and we can adjust their treatment accordingly.

So this gives us an opportunity to fix things. You talked about how the model didn't work for African-American women. But there's a real opportunity here with AI, because you're dealing with such a huge amount of data and so many more points of information, of correcting this. What are some of the ways that you anticipate this correcting the discrimination that's been built in in the past?

I think one of the... we had a lot of highlights. We've had some highs and lows on this whole journey. But one of the highest was, we were invited to give the keynote addresses at a national conference for breast cancer centers. And we shared our experience, what we've discovered and what we're doing. And it was a little bit like being at a rock concert. I mean, people came up afterwards. They were hugging us. They were thanking us. They were like, what can we do to be part of this? We just had a phone call the other day with a group in Chicago and a group in Wisconsin that heard us there and said, we have over 200,000 mammograms that we run every year. We want to provide this. We want to study. We want to be part of this. What can we do? One of my former trainees in Seattle now runs a gigantic, many-hundreds-of-thousands mammogram program in California. He's like, we're all in. We're in this. We want to do this. And it feels like there's a real groundswell of people saying, we know it's been broken. We know we've tried to say, well, there is a better technology coming out, or now we're going to do something a little bit different, sort of these incremental changes. But they know it's broken. And now they see something that they could be part of that would really be a lot larger. So I think that's what's going to happen. I think people are going to want to be part of this, and they're going to see the value of it. We're going to work through all the challenges. But I think there are so many people out there that want to do this. And those that don't, or those institutions that can't figure it out, they won't. I think there will be so many that will, that we'll move forward.

What's extraordinary is, you were talking a little bit about the number of mammograms out there. According to the FDA, there are more than 39 million mammograms performed annually. And so if you think about the number of radiologists, and we've been excited, you showed the progression of the quality of the imaging, the technology side, but we've really just been relying on the training of the radiologists for this. And the potential, if you have 39 million every year to really synthesize and learn from, brings me to this question: there's a full spectrum when we think of cancer. There's predicting risk. There is the early identification. And there's treatment. Where do you think AI has the biggest potential impact on this whole spectrum?
We see the full spectrum: from identifying those at risk, to being able then to predict not just any cancer but the specific types of cancer, and then to having prevention strategies that work. So for example, if we can predict that this woman is at risk for an ER-negative tumor, why put that woman on tamoxifen, which will block ER-positive receptors, with all the morbidity associated with that? There would be a different prevention strategy based on the type of cancer that woman is at risk for. So there's a whole domain in there. We've shown that: early detection, more targeted therapies, more personalized care. And what we wanted to do was, in some ways, have proof of principle early on. So we got engagement from clinicians saying, we see the value. We want to use this. We have it in our clinics. Now let's build upon this, because we felt that would be the biggest hurdle: just being comfortable with having it in the clinic as part of routine clinical care.

Susan has talked about the importance of bringing down barriers to getting these things forward. You're really good at identifying barriers. We talked about clinical trials. But you've identified the FDA as another barrier.

So I think the FDA, in some cases, is still learning what the right way is to engage and how these tools need to be regulated. The place where I see one of the biggest barriers to bringing in AI, and I'm not talking about, like, MIT, I'm talking about the whole army of people who are getting into AI, is the lack of data. You just mentioned the number of mammograms available in the country; yet I think the biggest collection of mammograms that is available for public use was collected before digital mammography. It was films. And it was done, I think, in the '80s or early '90s. And it hit maybe 70,000 mammograms. So you can ask, how come all this data is getting generated and we don't have one collection on which everybody, any hacker in this country, can just try to build the models? Again, while we are obnoxious MIT professionals, I can totally recognize that there are people who, in this area, will be much better than me and can do better stuff or different stuff. And as we've both said, we can ask different questions. But to date, this resource just doesn't exist. And it's not only the case for breast cancer. It's the case for many, many other cancers. Today, the data is held within the institutions, and nothing is shared with researchers who actually can benefit and build better models. To me, this is the biggest hurdle today.

Well, for AI to get broadly adopted, we need to have the right regulatory environment, which is what you're talking about, but we also need the right pay structure. How do we change the pay structure to incentivize clinics to adopt this?

I think that this is one of the barriers to a more rapid implementation. So there are companies working in AI and health care, and they're looking at different models, expecting that we would have left fee-for-service behind us. But we haven't. So if you're not in fee-for-service, you want to have the highest quality, most efficient care possible. If you're in fee-for-service, you want to make money every time you do an exam, whether it improves the life of the patient or not. I think AI can be helpful in both domains. If I'm running a large hospital on fee-for-service, I'm not going to be able to keep up with my competitor across the street that has an AI reading mammograms rather than paying those expensive radiologists to read all those mammograms. So that is going to be one method. And then if I'm a Kaiser Permanente that's looking to be more targeted, more precise, so that these women should get mammograms every two years, these should get them every year, these need an MRI, that's going to help me as well with my higher quality, lower cost.

How, though, the government starts to handle the CPT codes and the billing and all of that with each of these, that's going to be something we'll have to keep working on. We can again look back at the history of CAD. Lobbyists successfully convinced Congress to pay $18 every time we pushed a button to have a CAD overlay on the mammogram. It didn't have value, but everyone was doing it, because they could push the button and make $18. Over time, that dropped and dropped and dropped. Now there's really no added payment for CAD. It's all bundled into the mammogram. But that was a 20-year story of, from my perspective, people figuring out how to get paid for it. Did it really provide value?
It didn't. So we'll have to figure that out.

I have a lot more questions, but I want to share this with the audience. I think we have some runners here with microphones, and other people in the audience, I'm sure, have questions. There are some on this side of the room. One here. Oh, one here.

This is fantastic research. I was wondering how applicable this has been to other populations, such as younger women, older women, men who might get breast cancer, and how it might be applicable to other kinds of cancers that might have patterns that could be recognized, and whatnot.

Thank you. We've been very excited about taking exactly this model and applying it to lung cancer. For us, that would be the next step, because the domain is the same. You have CT screening that has been shown to reduce lung cancer mortality and all-cause mortality. So you have images, you have different risk factors that you can assess, and then you can have algorithms trained to do a better job of going through those hundreds of thousands of CT slices and images, trying to find those small cancers. There are other domains as well: in some ways, pap smears and cervical cancer, and, of course, there's a whole domain where vaccinations are going to have dramatically reduced cervical cancer. But in that domain, they already have been having computers read the pap smears. And in some ways, we can look back and say, well, we had that model. We used to have humans looking at the pap smears, and then computers did. So how do we just keep doing that with any of the imaging that we have?

Let me just add: in addition to stratifying the model by race, we actually stratified it for a variety of different areas, age and menopausal status, and the models seem to be pretty robust across different subpopulations. And in addition to just predicting cancer, Connie and I are now starting a collaboration in the area, may I say, it's not cancer, right, even though we're in Koch, of actually predicting heart attacks, because this is actually the killer of women, way before breast cancer. And apparently, looking at the mammogram, you can predict the risk of heart disease because of the calcifications. So we have a collaborator who has studied this question. And moving forward, you don't only want to do a single read on the mammogram; you want to predict all the other conditions whose signs are there. So the way I envision myself, health care, MGH, Partners, and other places moving forward is that whenever you do any scan or any test, you're not only evaluating for the disease that initiated the scan; you're updating the probability distribution over all different diseases.

And in addition to lung, I don't know if Phil Sharp is still here, but he actually was one of the initiators of a program, Stand Up to Cancer, to do a similar type of analysis for pancreatic cancer screening. So we collected a very large set of scans of people who were incidentally scanned for, say, a bellyache or an accident or whatever, and some of them developed pancreatic cancer later. So the question is, can you detect early enough who is likely to develop the disease? We are in very early stages. But we can imagine what kind of potential this could have for this disease, which, today, is pretty much deadly by the time it is diagnosed.

Oh, I imagine there's some kind of tension between building a really complicated neural net that treats everything as a special case and just sort of a more general neural net. But if you don't want to make some crazily complicated neural net, just a reasonable neural net to analyze this, is 70,000 images enough? Do you get a lot better with 200,000 images, with a million images? Or are these just complicated problems, and the area under the curve will never be over 60%, 70%, 80%?

So I think that we have, I would say, a not overly complicated neural net. It's a fairly standard neural net. I mean, we do different things which are specific to this domain, but it's not something totally crazy and unique. There is science to be done, and we've done it. I think the question that you are asking is, how much data do we need? And if we're increasing the data, are we going to get better performance? And I think it really depends on the task. So for instance, breast density, which was our first product: you can train it well even on 15,000 images. If you're looking at cancer detection, there we do see improvement. Because even if you have 200,000 mammograms, how many cancers would you have? 2,000 cancers, maybe?
So you have a highly imbalanced data set. And now we're getting 86%, 87%. I believe that if we can increase the size, we can do even better. I think that totally outperforming the radiologists is just a matter of getting a bigger data set. The place where I am not quite sure that even an increase in size can improve things is what I demonstrated with the AUC for, let's say, two-year prediction, where it's in the high 70s. I don't think that even... a perfect risk model cannot be perfect. Because within two years, you may change your lifestyle. You can go on medication. You can lose weight. You can change a lot of things. So we're just talking about probabilities. So I think there, it's not a matter of adding more images. It's a matter of adding other information, for instance, sequencing data, blood work, a variety of other things. And the beauty of these models is that they can actually combine all of this information to make predictions.

There was another question.

So MGH is using your tools on all the scans?

Breast. Breast. All the breast scans.

Are there other hospitals that are going to start using the tools as well?

So currently, we are in the process of transferring the tool. We tried it on data from Newton-Wellesley. And we're moving now Partners-wide; it was Newton-Wellesley and all the others outside of Boston. As Connie mentioned earlier, we're talking, and they're moving forward with this network: Advocate, Lutheran Advocate, which is a pretty big network in the Midwest; in Detroit, Henry Ford; and Montefiore Hospital. So this is our immediate network for expansion. And the way we're selecting the hospitals is, first of all, they need to be large, so we have a lot of test ground. They need to have a reasonable IT system. And we want hospitals which are really diverse in terms of their population.
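The class imbalance described in this exchange (roughly 2,000 cancers in 200,000 exams, about 1%) is commonly handled by reweighting the training loss so that the rare positive class is not drowned out. A minimal sketch of one standard approach, using those rough counts from the talk in a generic PyTorch setup; this is not the MGH model, just an illustration of the technique:

```python
# Weighting the loss to compensate for class imbalance. The counts are
# the rough figures from the talk (~2,000 cancers in 200,000 exams);
# the training setup is a generic sketch, not the actual clinical model.
import torch
import torch.nn as nn

n_negative, n_positive = 198_000, 2_000
pos_weight = torch.tensor([n_negative / n_positive])   # ~99x weight on cancers

loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.tensor([0.3, -1.2, 2.1])    # raw model outputs (illustrative)
targets = torch.tensor([0.0, 0.0, 1.0])    # 1 = cancer
print(loss_fn(logits, targets))
```

Without such a correction, a model can score 99% accuracy by predicting "no cancer" for everyone, which is exactly why metrics like AUC, rather than accuracy, are quoted throughout this discussion.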

Very good. It's very exciting. Thank you. Yep?

Do you update the tool that you're using at MGH with a retrained data set, or is it just static right now?

So right now, for breast density, for instance, we're already outperforming humans. And whenever humans disagree with the system, the experienced human actually sides with the machine. So for breast density, I don't see that there is a point in even improving it further. Now, in terms of the cancer prediction and triaging, we are retraining the model pretty regularly. And as we move forward with risk assessment, when new data becomes available, we retrain the model. So it's just a matter of the amount of compute; it runs on the side. It's all automated. And that's one of the parts...

OK, so what's the protocol for introducing a retrained model into actual clinical use?

So in this case, I want to remind you that it's not instead of the radiologist. It's just information that is given to the radiologist; they then need to accept or reject it, and we record all of this information. At the end, it is the radiologist's responsibility to make the final read. So we didn't have to follow that exact procedure. It's a different level of control that you need when you're taking the radiologist out of the reading versus when they are collaborating together.

I don't know where the microphone is. You can just talk loudly. Sure. So the question I had was: you had the earlier slide where you showed a neural net doing face detection, and you showed how, from layer one to layer two, it started detecting eyes. Is there a way you could interrogate your model to see what kind of patterns it's detecting in the images? In other words, is there new knowledge that we can learn from these models about what they're picking up?

So this is a great question. In terms of what the model does, and again, I want to tell you, it depends on the type of question. If you train the model to predict density, it will zoom in on the white pieces and do what you expect it to do. If you ask the model to predict cancer, it can, again, zoom in on certain parts of the breast where there is cancer. The pattern there is clear. So whatever the machine shows you, it's called an attention mechanism, where the model attends when it trains; human and machine can understand this. It's great. The really gray area is when we are asking the machine to make predictions that a human cannot make. And I am pretty sure that even if you show it to the human and say, OK, there is this pattern, since our, I would say, I'm not sure what is politically correct, cognitive capacity here is a bit limited in which patterns we can visually recognize, the fact that the machine illustrates it to you is not going to be really informative. And there is a lot of work in AI about how to open black boxes. It's a very valid question. And in some cases, when we can understand the explanation, of course, it's worth providing that explanation. But there will be cases like this one, when it's not clear what the explanation would be.

We have time for one more question.

Hi. I'm Robbie Kahn. I'm a former microfabrication researcher here at MIT. And I just wanted to speak to the relevancy of what you're doing, on behalf of a stranger I met a couple of years ago at the Boston Beer Works. We were both having a burger and a beer. I was going to go to a seminar in an hour. She was going to go downtown to the hospital to find out if the treatment for her stage four metastatic breast cancer had been effective. And what was even more impactful than that was that she told me she had been going for regular screenings every year, and the way that she detected this cancer was just through her own sense of touch. So on her behalf, I just wanted to mention the relevancy of what you're doing.

Oh, thank you. That was very kind. And there are so many stories like that.

You two are doing something really remarkable. Thank you. Thank you for your ingenuity and for what you're really making happen. Thank you to the Koch Center for having us here today. And thank all of you for coming and learning and having this discussion with us. I'm going to end on one thing. Is there anything, what can we in this room do to help this? Is there anything we can do?

Wow. One, the engagement is fantastic for us. We certainly have benefited, as Regina said, when we needed to pursue other avenues that weren't the traditional NCI and Department of Defense. I can't tell you what it meant to me for my chairman and my department to see that people were interested and wanting to support what we were doing, and to be able to start that up. That has been really incredible. We can't wait for the next step. Because the next step in having computers read mammograms that we're very excited about is sequential mammograms and tomosynthesis. We need bigger servers. We need bigger storage. We're working through all those pieces. But we're barely scratching the surface of what we can accomplish, and, to your point, not just in breast but in other cancers as well.

So I should say that I was an outsider, which started my second career. I was really grateful for all the support that I got from Koch. And to many, I looked like the crazy person trying to break into clinical care. But people at Koch actually did seed and support me, as much as they could, in all different ways. And I'm really grateful that this environment exists, and for everything that we can do to keep this environment going and to support all types of different research, not all the same, but really different approaches, people coming together and making the change.

And then there is something that I want to ask of you as patients, because all of us, at some point, are going to be patients. The question is when and how, but we're going to be patients. I am thinking about this woman Nancy, who changed the laws. She took her own sorrow and used it as a specific case to change the law. And I think there will be a lot of exciting things going on in AI, in breast cancer and in all different diseases. And I think we, as patients, deserve to get the best technology. And it shouldn't take 60 years to translate scientific insight into clinical practice. So for each one of you who is a patient or going to be a patient: we all need to think very creatively, what is the path we can take to bring the science into clinical care for all of us?

That's great. Wonderful. Thank you. Thank you all so much. [APPLAUSE]
