seat. Our session will begin soon

>> Welcome to Google I/O 2018. We welcome you to celebrate platforms at Google. By now you have checked into registration to receive your badge. You need it to ride the I/O shuttles and to enter the after-hours activities.

Throughout the event, make sure to visit google.com/io or download the mobile I/O app for the most up-to-date schedules and event information. For questions or assistance, stop by the information desk located across from the Codelabs building, or chat with staff. Sessions will take place in the amphitheater and at the eight stages throughout the venue. If you haven't already, reserve a seat for your favorite sessions on the I/O website or mobile app. And if you miss a session, don't worry: recordings will be available online. Be sure to visit the Codelabs building for hands-on experience with the code kiosks and self-paced tutorials. Google staff will be on hand for helpful advice and to provide direction. Make time to visit the office hours to meet one-on-one with Googlers and ask all your technical questions. If you are looking to network with Googlers and fellow developers, join one of the several meetups hosted during the event. Finally, we would like to invite you to visit the Sandbox domes, where you can explore and interact with our latest products. After sessions are done for the day, stick around for food, music, drinks, and fun. Be prepared for surprises along the way. We would like to take this opportunity to remind you that we are dedicated to providing an inclusive experience for everyone. By attending Google I/O you agree to our code of conduct, posted throughout the venue. Your opinion is valuable to us: after the event, look for a feedback survey. Thanks for attending, and have a wonderful time exploring Google I/O.

#io18 >> Welcome. Thank you for joining.

Our session will begin soon. Welcome. Thank you for joining.
>> Hi, guys.
>> All right. So, we're going to cover what is new in Chrome DevTools.
>> PAUL IRISH: And we're going to be recapping what has been going on with the Chrome DevTools team in the past year, the features and functionality.
>> PAUL IRISH: We're going to be covering a few different things. We're going to go through authoring, accessibility, performance, and JavaScript, the feature areas of the talk.
>> PAUL IRISH: To kick things off, we want to look at authoring. Authoring is really the experience of us crafting a nice user experience for everyone who is enjoying our web content, and this is creating a nice feel, a nice look,

and there are a few possible ways to do it. The first possible way is the classic, reliable way: I work in my editor, I load it up, I tweak things in DevTools, copy-paste what I like, bring it back to my editor, save and refresh. That old save-and-refresh cycle. Good. Now, at previous I/O's we've also talked about Workspaces. This is a way where we take your entire project, like your git checkout essentially, drag it into DevTools, and DevTools can now work with that and save to disk. You have access to the entire folder there, and it really makes for a fast workflow.
>> PAUL IRISH: This is great if it is your project, but there is also this other case of: I want to make changes, and I don't necessarily have, like, a git checkout of it, but I still want to try things out. And that is what Local Overrides is for. So, like, Jason, if you wanted to make changes on paulirish.com,
>> JASON MILLER: Right.
>> PAUL IRISH: so far you would not have good access, so that works for me. But Local Overrides would work for your defacement. I'm going to show a little bit of how this works in actuality. Let's switch over to my screen for a second. So, we have here the Google I/O Extended site. Nice little page, but I feel like making some changes, and it's not my site. So, the first thing is you head over to the Sources panel, and in this little dropdown we go to Overrides. The first thing that we have to do, just once, is set this up, so we just need to pick a folder. I'm just going to go right in here; you just need to put this someplace on disk. Really anywhere. And we're going to be using the disk to kind of back all these changes. So, once you select, you click Allow so that DevTools has permission to work with this folder.
>> JASON MILLER: It only gets permission inside that folder, in that root?
>> PAUL IRISH: Yeah. Yeah. So, I think that is all good. And now, let's make some changes. This guy right here, actually.
>> JASON MILLER: Something nice.
>> PAUL IRISH: Let's go with hot pink. Hot pink is a good choice. I like hot pink.
>> JASON MILLER: Hot pink.
>> PAUL IRISH: Wow.
>> PAUL IRISH: Probably needs
>> JASON MILLER: Yeah, some
>> PAUL IRISH: All right.
>> PAUL IRISH: Now, I don't know if you saw that. Let me just do that again. Refresh. Oh, sticking around post-refresh. I like that. Now, the cool thing here is that I can make changes to kind of anything. Say, for instance, this text beneath here: "hosted 534 viewing parties." Cool. I think that number needs an upgrade. I don't know exactly where this text is; it could be in a JavaScript file, maybe the HTML. So, I'm going to select and copy it. Command-Shift-P brings up this menu.
>> PAUL IRISH: Nice. And I'm going to choose Search. This will search across all the files that are loaded into the page. And, okay, so here we have just the HTML. All right. Easy enough. We can make a change to that.
>> PAUL IRISH: Good. We hit Control-S. And you can see this little purple dot. That is telling me that this file is linked to the network file. And this also means, if I right-click this and open it, this HTML file that I just changed is just a regular old file on disk. I can drag this into my text editor and make changes there, and they will instantly be reflected if I just refresh. Over 9,000.
>> PAUL IRISH: Nice. So, this enables a lot of cool things, and you can make quick changes, prototype. If you get to a point where you're like, okay, I made some changes, they feel good, but what were they all again? There is another little pane that you can access to summarize everything that has happened. Again, you can use Command-Shift-P, or you can just open up the little menu down here at the bottom and go to Changes. The Changes view will tell you what changed. Now, in this case it is minified CSS, so it is not entirely useful to look at the diff, but then again I could open this file and pretty-print it, and at that point I
(Laughter)

>> PAUL IRISH: All right. Let's go back to the slides. All right. So, you saw in my demo search across all files, and we've been looking at other places to surface more content via search.
>> JASON MILLER: Yeah. So, one of the ways that we're doing that is through improvements to the network search. So, in the Network tab we've had search for a while. It's just like a bar that shows up at the bottom, and that bar is only searching URLs. So, it's useful, you know, if you're looking for something that is in the URL, but not quite so useful if you're looking for something that isn't in the URL. So, now there is a new icon in here,
>> PAUL IRISH: Yeah, Control-F. Yeah.
>> JASON MILLER: and that opens the search sidebar, which searches through headers and their values in addition to all those other pieces of information. Let's say I want to search for Cache-Control headers. That is going to tell me whether the browser is going to cache stuff locally, and I get an overview of what we're sending down. If I hit Enter on this, it will show me an aggregated list of all the Cache-Control headers being sent down. You can click them on the left and see the details of those on the right. It is very nice. There are still the familiar controls in the header for case-sensitive search and regex-based search. If you want to search for Authorization headers to know which of your requests are authenticated,
>> PAUL IRISH: Or if you're auditing specific domains and looking up what CSP headers they send.
>> JASON MILLER: Yeah. So, we're trying to show you kind of an at-a-glance overview in this interface, and there are other places in DevTools where we have kind of similar problems, like huge lists of CSS variables. Raise your hand if you use CSS variables, custom properties. Good stuff. Yeah, I love them, too. We use them all the time in Chrome DevTools itself, and we've been looking at it and thought they could probably be improved.
>> PAUL IRISH: So, if you have seen CSS variables before, they might look a little something like this.
>> PAUL IRISH: Good. But then you're like, what was that again? So, today you'll see something a little bit different. You will just see a little color swatch placed there. We recognize that this variable is resolving to a color value, and showing the color swatch seems good. You can also just hover on that variable name, and the tooltip will resolve the actual value. It works with colors; it works with other values, too.
>> PAUL IRISH: But with colors there was one more thing that was pretty cool. If you open up the color picker, down at the bottom is this kind of color palette. And this is the Material Design color palette. A lot of nice, tasteful choices.
>> PAUL IRISH: But there are a few other palettes available, and a brand new one is CSS Variables. You click into this, and this is listing the CSS variables that apply to this element right here. So, while there may be many, many variables in the page, these are only the variables that apply to this specific element via the cascade.
>> JASON MILLER: Right. Right.
>> PAUL IRISH: So that is kind of cool. Right now we're hovering over text-link-color. If I select this, the cool thing is we apply a var(--text-link-color).
>> JASON MILLER: So it doesn't use the hex, it uses the actual variable.
>> PAUL IRISH: Exactly.
>> JASON MILLER: That is pretty cool.
>> PAUL IRISH: When you choose colors, it is important you choose something that looks good, tasteful, but also accessible. You want to make sure that what you have is an accessible choice. So, we've introduced some features to make that a little bit easier, and we're going to walk through this. So, I'm looking at this page and I want to tweak the foreground color of this text. So, I bring up the color picker, and it says "contrast ratio" right there. It says it is not very good right now. But as I drag the color around, the contrast ratio is going to update. It's kind of cool.
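The contrast ratio shown in the picker follows the WCAG 2.0 definition, which can be sketched in a few lines (the function names are my own, not DevTools internals; only the formula is from the spec):

```javascript
// WCAG 2.0 contrast ratio between two sRGB colors, given as [r, g, b]
// arrays with channels in 0-255.
function relativeLuminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map((c) => {
    c /= 255;
    // Linearize the gamma-encoded channel per the WCAG definition.
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

function contrastRatio(fg, bg) {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)]
    .sort((a, b) => b - a);
  // Ranges from 1 (identical colors) to 21 (black on white).
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255])); // 21 (passes AA and AAA)
console.log(contrastRatio([119, 119, 119], [255, 255, 255])); // ~4.48, just under the 4.5:1 AA bar for normal text
```

Dragging the foreground color around in the picker is effectively re-running this computation against the detected background color.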
And if you open that section up, you'll now get this line in the color picker. One side of the line is bad, and the other side is good. You get ratings beneath on what the current ratio is and whether it meets the recommended levels. Now, it currently tries to identify what the background color is, but there is a little eyedropper there if you want to manually select your background color.
>> JASON MILLER: Cool. Did I notice that you were editing your Lighthouse score with that tool?
>> PAUL IRISH: I was in Lighthouse with DevTools. I wasn't editing my score, but my score could use some improvement.
>> JASON MILLER: Right. We should probably tell them about the new stuff.
>> JASON MILLER: Seems like a good time. Yeah, last year at Google I/O we announced that we were bringing Lighthouse into DevTools. This was huge because it meant everybody who had Chrome obviously had DevTools, and everybody who had DevTools, which is the Chrome users, now had Lighthouse. Nothing to install. No extensions. None of that business. Now, that kind of started us thinking: was there anything else we could do to make Lighthouse more helpful to people? We're always thinking of that, but moving into Chrome was sort of a turning point here. We want Lighthouse to act like a real lighthouse. We want it to kind of keep you away from the rocks and guide you towards other meaningful information that exists. So, this year I'm excited to announce that we are introducing some improvements that we hope will bridge the gap between Lighthouse and DevTools. And I'm just going to show you my favorite new feature of all time, which is View Trace, to stretch the analogy to the point of breaking: it lets you dig in.
>> JASON MILLER: Yeah.
>> JASON MILLER: Lighthouse records a trace of your application as it runs. It loads your application, reloads it, grabs a trace, which is going to be details about JavaScript, network stuff, all those kinds of underlying pieces of information. We thought, why not let people dig into the trace? We have it. It is sitting there. And, it turns out, this is a really nice way to see why you got a Lighthouse score. I'm on paulirish.com, my favorite website, running just the performance audit for now. So, behind the scenes this is going to be reloading the page, gathering a trace, analyzing it, and coming up with a pretty page and a lovely score, I think. There, yeah. But you see now there is a View Trace button.
Clicking this, it is actually going to take me right over into the Performance tab, and I can dig down into the trace that powered that Lighthouse score. And this is awesome, because that trace was run on an emulated 3G network on a slow device.
>> JASON MILLER: And you can kind of figure out what exactly those audits were derived from. Let's say you had a critical request chain for CSS or something. You can see that in the Performance tab blocking the rest of your page rendering, with screenshots. So, to me this is huge.
>> PAUL IRISH: I'm looking at this, and in a way you're calling me out. You're calling my site out, at least. I was wondering, maybe we should spend a few minutes and see if we can improve it.
>> JASON MILLER: I think we can.
>> PAUL IRISH: We can make it faster. To make it faster, let's bring back one of the features we were talking about earlier. Okay. First up, let's reload the page to capture a trace of the page load. All right. Now, what we want to do is measure from the very beginning of the HTML load, right there, to when the web fonts finished loading.
>> PAUL IRISH: At the bottom. Yeah. Now, in the green right there: .woff files. These are my web fonts. Big gap before they start loading in. So, we can fix this. This is a bit of code; I'm going to copy it to the clipboard. We're going to come over here and use Local Overrides. We paste it in.
>> JASON MILLER: Ignore the multi-cursor magic.
>> PAUL IRISH: Reload. I think that is link rel=preload. Again, it was 2,000 milliseconds before. Let's reload with the Local Overrides changes. Now we are preloading.
>> PAUL IRISH: Yeah. Yeah. The fonts are at the top. The green. Yeah.
>> PAUL IRISH: And our total time is, yeah, 1.3.
>> JASON MILLER: A 1.33.
>> PAUL IRISH: Uh-huh.
>> PAUL IRISH: That is pretty fast.
>> JASON MILLER: Very impressive. So, while we're talking about performance, there are a few other changes. You might see something like this if you're looking at a trace.
You have the main thread. You may have iframes that are using site isolation, and those are going to be represented on a separate track. The nice thing now is that as you select each track, the summary, the bottom-up view, the call tree, all that stuff at the bottom is going to be updating to match.
>> JASON MILLER: Very nice.
>> PAUL IRISH: Kind of cool. It also enables another feature that I like. Say, for instance, you're looking at a trace of your web app. You see a flame chart like this. Now, in the flame chart there is a lot of stuff going on. There are function names like completeWork. createElement. mountClassInstance. You're like, yeah, mountClassInstance.
>> JASON MILLER: Good old mountClassInstance.
>> PAUL IRISH: Great.
>> PAUL IRISH: A number of frameworks also use User Timing to say when things started and ended. And usually this is very meaningful, because it is talking about your components: when your components were mounted and rendered. So, you can select the User Timing track and now see that data summarized.
>> PAUL IRISH: This is really great. Having this structural profiling data come in via User Timing saves time, because otherwise you would be writing all these start and end marks yourself. It is a big time sink to do all this. Here DevTools is doing a lot of the lifting.
>> JASON MILLER: Right. Paul, you mentioned sinks and lifting.
>> JASON MILLER: So, one of the things I wanted to cover is actually related to that, and it's async.
>> JASON MILLER: I think I was
>> JASON MILLER: This is the last
>> JASON MILLER: No. There are two. There were two.
>> JASON MILLER: There are a couple.
>> PAUL IRISH: Is that a
>> JASON MILLER: It is a
>> JASON MILLER: So, anyway. What I was going to say is, it is 2018 and it is the year of async/await. Async/await is everywhere in JavaScript at this point. We added support for it in DevTools. Amazing. This year we're taking things a few steps further. If you're not familiar with async/await, it kind of looks like this. You have a wrapper function, very important, with the async keyword. Inside the body of this function you can use the await keyword to await the resolved value of a promise. It looks like synchronous code; it is promises under the hood, but it is going to run kind of line by line for you.
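The shape being described, an async wrapper function awaiting a promise, looks roughly like this (the network call from the talk is replaced with a stand-in promise so the sketch is self-contained):

```javascript
// Stand-in for a network call: resolves asynchronously, like fetch would.
function fakeFetch(url) {
  return Promise.resolve({
    json: () => Promise.resolve({ url, items: ['a', 'b', 'c'] }),
  });
}

// The wrapper function with the async keyword.
async function load() {
  // await unwraps each promise; execution reads top to bottom.
  const res = await fakeFetch('https://example.com/api');
  const data = await res.json();
  return data.items.length;
}

load().then((n) => console.log(n)); // logs 3 once the promises resolve
```

The console feature shown next removes the need for the wrapper entirely: you can type the two `await` lines directly at the prompt.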
It will run asynchronously when it is called. This is fine, certainly fine, for application development, but one place where this is actually kind of annoying is down in the console.
>> JASON MILLER: So, we are happy to announce that today the console has top-level await.
>> JASON MILLER: No wrapper. No shenanigans. So, I will walk through an example. In this case, I'm going to await the response from this fetch call, which is going to connect to a GitHub API that I googled. So, we're going to search for Workbox. We're going to stick that in res. Then we're going to await, no wrapper, no promises, the value of res.json(). This responds, and you notice it dumps it straight into the console.
>> JASON MILLER: Exactly. No having to wait and expand it to get the value. Finally, I'm going to use a DevTools feature.
>> JASON MILLER: This is the previous result from the console. So, I can just do $_.items.map, map them over the name, and I have the list of the 30 most popular results. So, I've been able to step through, line by line, in a discoverable way, an async flow. I haven't had to use wrappers, promises, callbacks, none of that stuff. The console supports this out of the box. The interesting thing is we're bringing this async mentality to the debugger. One of the big changes you'll notice here is when stepping into async functions. Until now, if you stepped into an async function like a setTimeout or whatever, we would pretty much just step over it.
>> JASON MILLER: So, let me show you. In DevTools last year, this is what stepping into setTimeout looked like. If I click Step Into on this setTimeout call, step into next function call,
>> JASON MILLER: Right. Now in DevTools, if I click Step Into on setTimeout, I can step through the rest of the function. So, it is kind of an async-aware step into. This is just one type of async, though. So, a lot of people right now are spreading their work out on the main thread, or the alternative is you push your work off the main thread into a background thread using a web worker, let's say. One of the hardest parts of web worker development is debugging them. These are separate threads.
>> PAUL IRISH: It is a little painful.
>> JASON MILLER: It's painful. You're switching the context of the debugger going from one thread to the other, the sidebar is changing, you have to wait for postMessage calls to resolve, and you can't really do anything. We thought, what about making all of that a little bit easier? So, now let's say we have some main thread code, right, running on the main thread. With the new async-aware step into, I can step into the worker,
>> PAUL IRISH: Into the creation?
>> JASON MILLER: and it drops me off on the first line of the worker that I created.
>> JASON MILLER: Yeah.
>> JASON MILLER: So we can step across a thread boundary in DevTools now. This is really cool. Another case would be: let's say I've created my worker, I've got this w variable with the worker in it, and I've got a breakpoint there. I can step into the postMessage call to the worker, and it will step across the thread boundary and into the worker. It will drop me off on the first line of the message handler.
>> PAUL IRISH: What about the postMessage from the worker back to the main thread?
>> JASON MILLER: It's funny you should mention that.
>> JASON MILLER: We can step into the postMessage call and get back to the main thread like there was no thread boundary at all, and it drops us off in the message handler. This is super cool. You're not debugging across the worker boundary blind.
>> PAUL IRISH: That's rad. All right. So, before we finish up, I do want to bring it back to the console for a second.
>> JASON MILLER: Got you.
Got you.
>> PAUL IRISH: There is some stuff going on in the console. In the console there are functions that are available there and nowhere else. We call them the Command Line API. First up, as spotted before: copy.
>> JASON MILLER: Right.
>> PAUL IRISH: Take anything you want, throw it in the copy method, and it copies it to the clipboard. It's so nice. If you pass it, like, an element, it will give you the outer HTML; anything else, it will stringify it as JSON.
>> JASON MILLER: I did not know about that.
>> PAUL IRISH: It's been there a while.
>> JASON MILLER: Longer than I have.
>> PAUL IRISH: But, anyway. Just to refamiliarize. There are some other methods in there that have now just gotten an upgrade, too. The debug method has been around; what you can do is pass a function to it, and it will essentially pause right inside of the function that you passed. But now you can pass in native functions. So, document.querySelector would be an example, and what happens is it will pause as soon as any JavaScript is about to call that function.
>> PAUL IRISH: You can also do it with alert, too, if you want to find out what is calling it. Similar story with monitor. It doesn't pause, but instead logs to the console a stack trace and also the arguments that are being passed in. With this you could say, tell me about all of the code that is using setTimeout.
>> JASON MILLER: So we would have seen, in my previous example, the function I passed and all this.
>> PAUL IRISH: Exactly.
>> PAUL IRISH: It's really nice. There is one more that is actually brand new, which is queryObjects. It is kind of cool. So, let's start this out. We have a class. We create two instances of the class. To the second one we just add an extra property, so it is unique. Now, later on, we're like, ah, queryObjects. We pass in that class, and we're going to get in the console an array of all of the instances of the class.
In this case pretty straightforward, but the really powerful thing here is that it is looking across the entire JavaScript heap and reporting back all the instances that it knows about. It is like taking a big heap memory snapshot and summarizing it down to just what you asked for. Now, this is cool, because it's not just what is currently available in JavaScript scope right now. It is the entire heap. You can look at, for instance, custom element constructors. Or: what are all my canvas contexts that are out there somewhere? And get them dumped out. Okay. I think that is it. To wrap up, we covered authoring, accessibility, performance, and JavaScript.
>> JASON MILLER: Paul, didn't I see you literally backstage before we started?
>> JASON MILLER: Is there something you wanted to tell the people?
>> PAUL IRISH: Yes, there was.
>> JASON MILLER: Oh God, are we
>> PAUL IRISH: I think so.

>> JASON MILLER: We're getting a
>> PAUL IRISH: You guys want to
>> JASON MILLER: We're getting a
>> PAUL IRISH: I got one more.
>> PAUL IRISH: Jason, as you know, the web platform evolves quickly. As it evolves, we as developers need to feel comfortable working with these new APIs, these new objects, and make sure that our code runs well against all of them. And the console is our home base for this exploration. So, we were thinking that the console could use some more power here, so we're introducing a brand new feature that we think revolutionizes how we work with the console. We call it eager evaluation.
>> PAUL IRISH: No other way to do it than with a demo. Let's switch over to my machine. We'll just undock this and go full screen.
>> JASON MILLER: Does anybody else
>> JASON MILLER: He adds them as
>> PAUL IRISH: Undock.
>> PAUL IRISH: All right. Here in the console I have a little line of code, right. We have a regular expression, and we're going to call exec on it and pass it a string.
>> JASON MILLER: There is a music
>> PAUL IRISH: All right. Let's finish this off. Thanks. Now, as I finish this expression, check out what happens. We get the result of this evaluation positioned right beneath, and I did not hit Enter or anything.
>> JASON MILLER: What?
>> PAUL IRISH: The cool thing here, too, is this is going to update as I type. So, if I come back, let's say: does this regex work if I have parens around it?
>> PAUL IRISH: No, it doesn't. Regex: a little brittle. Could use some improvement. One thing I do see
>> JASON MILLER: Yeah.
>> PAUL IRISH: All right. We can use a digit instead. We do have multiple of these, so we could just multi-select and \d
>> PAUL IRISH: Matches, too. Good. Now, one last thing, Jason, I wanted
>> JASON MILLER: You're sure this wasn't just a reason to show off your typing?
>> PAUL IRISH: While we have these multiple cursors here, oh, I like that, what we're going to do is add some
>> PAUL IRISH: All right. All right. Yes.
Exactly. And so, let's pull out the, is it 1? 1. Boom. So, in a few short moments, with not too much work, we managed to explore the API, find the bug, make an improvement, and get the result that we wanted. And this speed of iteration is really nice. No going back in your history, changing that one thing.
>> JASON MILLER: We should do a
>> JASON MILLER: You still have
>> PAUL IRISH: Yeah. We're still connected to the page with this.
>> JASON MILLER: That guy never
>> PAUL IRISH: All right. How about, okay.
>> PAUL IRISH: Let's grab all of
>> JASON MILLER: Like you're a
>> PAUL IRISH: Yeah, like I'm a
>> PAUL IRISH: Okay. So, what do we need?
>> JASON MILLER: querySelector.
>> PAUL IRISH: All right.
>> PAUL IRISH: Okay. We have them now. I don't know, we'll grab the text with innerText or something. So, we'll map over that. Map
>> JASON MILLER: It is a NodeList.
>> PAUL IRISH: Right.
>> PAUL IRISH: How do we change a
>> JASON MILLER: What is the newest
>> PAUL IRISH: Audience participation suggestions? I like it. I like it. Yes.
>> PAUL IRISH: Array literal with spread.
>> PAUL IRISH: Yeah.
>> JASON MILLER: Did that even work in Chrome?
>> PAUL IRISH: I love this one. Now we've got an array. Map the thing. Map the thing. All right.
>> PAUL IRISH: So, let's grab the
>> PAUL IRISH: Let's clean this up.
>> JASON MILLER: That is a lot of
>> PAUL IRISH: Wow. Let's get rid of that.
>> JASON MILLER: Holding up tight
>> PAUL IRISH: The classic one.
>> JASON MILLER: I like it still.
>> PAUL IRISH: Trim that. Trim. No.
>> JASON MILLER: Some type, it
>> PAUL IRISH: Go.
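The kind of expression being iterated on in this demo can be sketched like this. The actual regex and page text are not visible in the captions, so these values are stand-ins; `$$('h2')` in the DOM half is the DevTools console shorthand for `document.querySelectorAll`:

```javascript
// Iterating on a regex the way eager evaluation encourages: each tweak
// of the expression immediately shows its result beneath the prompt.
const text = 'Session starts at 1:30';
const match = /(\d+):(\d+)/.exec(text);
console.log(match[1]); // "1"
console.log(match[2]); // "30"

// The DOM half of the demo, sketched (browser-only, so commented out):
// $$('h2') returns a NodeList, so spread it into an array literal
// before mapping over innerText and trimming.
// const titles = [...document.querySelectorAll('h2')]
//   .map((el) => el.innerText.trim());
```

With eager evaluation on, none of the `console.log` calls are needed: the value of the expression under the cursor is previewed as you type.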

>> PAUL IRISH: All right. Nice.
>> PAUL IRISH: So, I was able to craft that, not mess up too much, and we got there.
>> JASON MILLER: Who knows what
>> PAUL IRISH: The really powerful thing here is that eager evaluation is done in a way that we can guarantee there are no side effects. And we did this by introducing a new mode into V8 where we attempt to evaluate code, but if we know it is about to cause a side effect that changes the outside state, your application, the page, other parts of the DOM, we just bail and stop evaluating.
>> JASON MILLER: I tried for, like, an hour to break it. You can't. I am going to try more, though.
>> PAUL IRISH: The other nice thing is that eager evaluation, in addition to giving you this result right underneath what you type in the console, enables some new completions in the console. So, a completion is just a type of
>> JASON MILLER: Right.
>> PAUL IRISH: But there are some tricky things here. Completions normally, as soon as you introduce, like, some parens, you are out of luck. Completions no longer work. You just have to figure it out yourself. But now we can run eager evaluation on that same string and power more completions.
>> PAUL IRISH: So, to give you a sense of how this kind of works,
>> JASON MILLER: You're going to show off our thumb war stats.
>> PAUL IRISH: Okay. We've been playing thumb wars.
>> PAUL IRISH: I've been keeping score.
>> PAUL IRISH: Touch. Going. All right. 5.
>> PAUL IRISH: A nice thing is, I'm keeping this here, but we were thinking it would make more sense if we
>> JASON MILLER: Right, we're going to
>> PAUL IRISH: Thumb war stats.
>> PAUL IRISH: If we pulled that out
>> PAUL IRISH: So let's make this
>> JASON MILLER: That is a bit
>> PAUL IRISH: Yeah. And we need to turn it back into an object, right, guys? All right.
>> JASON MILLER: My namesake, JSON.
>> PAUL IRISH: Now take a look. I'm going to hit dot, and what is in the completion list?
>> JASON MILLER: What?
>> JASON MILLER: We have a cameo.
>> PAUL IRISH: Check us out. The cool thing is, this is being evaluated under the hood, and it was determined to have no side effects, so now the completions are upgraded.
>> PAUL IRISH: Really powerful stuff. This is just a taste of some of the things you will be seeing. What you can do is grab today's Chrome Canary, pop into the console settings, right up here, turn on eager evaluation, and it will enable all this stuff. All right, Jason. I think that's it. Let's go back to the slides.
>> JASON MILLER: I think we're good.
>> PAUL IRISH: Thanks, guys.
(Applause)
>> Thank you for joining this session. Brand ambassadors will assist with directing you through the designated exits. We will be making room for those of you who have registered for the next session. If you have registered for the next session in this room, we ask you to please exit the room and return via the designated line. Thank you.

#io18
>> All right. Hey folks, my name is Addy. This is Ewa. We work on Chrome. And today we're going to give you a whirlwind tour of tips and tricks.
>> EWA GASPEROWICZ: The Internet gets heavier and heavier every year. If we check on the state of the mobile web, we can see that the median page on mobile weighs about 1.5 MB, with the majority of it being JavaScript and images. Apart from the sheer size, though, there are other reasons why a web page might feel sluggish and heavy: third-party code, network latency, CPU limitations, parser-blocking patterns. All of these contribute to the complicated picture. We have been pretty busy over the past year trying to figure out how to fit this rich content of the mobile web into the constraints of mobile devices. In this talk, Addy and I are going to show you what we can do in this regard today, but we will also take a peek into the future and give you a highlight of what might be possible really, really soon.
>> ADDY OSMANI: So, I'm not the biggest fan of suitcases. The last time I was at the airport, waiting at the baggage carousel for my bags, I looked around for my suitcase and all I could find was this.
>> ADDY OSMANI: I, too, laughed the first four times this went around. I was really happy to get it back. Now, the experience that I had at the baggage carousel is a lot like the experience our users have when they're waiting for a page to load. In fact, most users rate speed as being at the very top of the UX hierarchy of their needs, and this isn't too surprising, because you can't really do a whole lot before a page is finished loading. You can't derive value from the page. Now, we know that performance matters, but it can also sometimes feel like a secret discovering where to start optimizing. So, we've been working on trying to make this problem a little easier for a while, and today we have an exciting announcement to share with you.
>> EWA GASPEROWICZ: We're happy to announce an expanded set of performance audits in Lighthouse. Lighthouse is part of the developer tools that allows you to run an audit of your website and also gives you hints on how to make it better. It's been around for a while, but as some of you have heard this morning, it's also been actively worked on. And today I'm happy to share with you some of the newest audits that landed in it. We'll try to show them to you in action during the rest of the talk. As the Internet meme goes, fixing web performance is as easy as drawing a horse, right?
You just need to follow some steps carefully. But, even though there is some truth in the picture above, the good news is that you can get pretty far by following some simple steps. In order to show you the steps, we built a little app of our own. >> ADDY OSMANI: So, Ewa and I are really big fans of Google Doodles. Google has been doing them since 1998, so we built a little theater app showcasing some of our favorite interactive doodles over the years. (Video) >> ADDY OSMANI: So, we kick it off and, as you can see, we can start browsing through the application, look at doodles over the years, and click through them. We're not going to ruin the surprise; you can go and play with some during your break. So, let's begin our journey into the wild west of web performance, and it all begins with one tool. >> EWA GASPEROWICZ: Of course, Lighthouse. The initial performance of our app, as you can see on this Lighthouse report, was pretty terrible. On a 3G network, the user needed to wait 15 seconds for the first meaningful paint, or for the app to get interactive. Lighthouse highlighted a ton of issues with our site, and the overall performance score of 23 mirrored exactly that. The page weighed about 3.4 megs. This started our first performance challenge: finding things that we can easily remove without affecting the overall experience. So, what is the easiest thing to remove? Usually, nothing. And, by nothing, I mean things that do not really contribute to the code, like whitespace or comments. Lighthouse highlights this opportunity in the Minify CSS and Minify JavaScript audits. In order to get minification, we simply used the Uglify plugin. Minification is a common task, so you should be able to find a ready-made solution for whichever build process you use. Another useful audit in that space is Enable text compression. There is no reason to send uncompressed files, and most CDNs handle this for you out of the box. We were using Firebase Hosting to

host our code, and Firebase actually enables gzip by default. So, by the sheer virtue of hosting our code on a reasonable CDN, we got compression for free. And, while gzip is a very popular way of compressing, other mechanisms are getting traction as well. Brotli enjoys support in most browsers these days, and you can use binaries to pre-compress your assets. Okay. In these two ways we made sure that our code is nice and compact and ready to ship. Our next task was to not send it twice, if not necessary. The Inefficient cache policy audit in Lighthouse helped us notice that we could be optimizing our caching strategies in order to achieve exactly that. By setting a max-age expiration header on our server, we made sure that on a repeated visit the user can reuse the resources downloaded before. You should aim at caching as many resources as possible, for the longest period of time, and providing validation tokens for efficient revalidation. All the changes we made so far were very straightforward, right. They required no code changes whatsoever. It was really low-hanging fruit with very little risk of breaking anything. So, remember: always minify the code, preferably automating it with build tools. Compress your assets, by using the right CDNs or by adding optimization modules to your own servers, and use efficient cache policies to optimize repeat visits. Okay. This way we removed the obvious parts of the unnecessary downloads, but what about the less obvious parts? As it happens, unused code can really surprise us. It may linger in the dark corners of your code base, idle and long forgotten, and yet making it to the user's bandwidth each time the app is loaded. This happens especially if you work on your app for a longer period of time: your team or your dependencies change, and sometimes a file gets left behind. At the beginning, we were using the Material Components library to quickly prototype our app. In time we moved to a more custom look and feel, and of course we forgot entirely about that library.
Fortunately, the code coverage feature in DevTools helped us find it. You can check your code coverage stats in DevTools, both for the run time and the load time of your application. You can see the two big red stripes in the bottom screenshot: we had over 95% of our CSS unused. Lighthouse also picked up this issue in the Unused CSS rules audit, and it showed a sizable potential saving. So, we went back to our code and kicked out both the JavaScript and CSS parts of that library. This brought our CSS bundle down 20-fold, which is pretty good for such a tiny change. Of course, it made our performance score go up, and the time to interactive also got much better; however, with changes like this, it's not enough to check your metrics and scores alone. Removing actual code is never risk free. Remember that our code was 95% unused? Well, there is still this 5%, right. One of our components was still using styles from that library, so we had to go and manually incorporate those styles back into the buttons. So, if you remove code, just make sure you have a proper testing workflow in place to help you guard against potential visual regressions. So remember: code coverage in DevTools and in Lighthouse is your friend when it comes to spotting and removing unused code. Check it regularly throughout the development of your app to keep your code base clean and tidy, and test your changes thoroughly before shipping. Well, so far so good. All of those changes made our app a little bit lighter, but it was still too slow for Addy, so he took it a bit farther. >> ADDY OSMANI: It went so, so good. Some light pages are like heavy suitcases. You have some stuff that is important, and then

you have crap, and even more crap. We know that large resources can slow down web page loads, they can cost our users money, and they can have a big impact on their data plans, so it's really important to be conscious of this. Now, Lighthouse was able to detect that we had an issue with some of our network payloads using the Enormous network payloads audit. Here we saw that we had over 3 megs worth of code being shipped down, which is quite a lot, especially on mobile. At the very top of this list, Lighthouse highlighted that we had a JavaScript vendor bundle that was 2 megs of uncompressed code we were trying to ship down. This was also a problem highlighted by webpack. As we like to say, the fastest request is the one that is not made. Ideally you should be measuring the value of every single asset you're serving down to your users, measuring the performance of those assets, and making a call on whether it is actually worth shipping down with the initial experience, because sometimes these assets can be deferred or lazily loaded. In our case, because we're dealing with a lot of JavaScript bundles, we were fortunate, because the JavaScript community has a rich set of bundle auditing tools. We started off with webpack-bundle-analyzer, which informed us that a dependency called unicode was 1.6 megs of parsed JavaScript, so quite a lot. We then went over to our editor and, using the Import Cost plugin for Visual Studio Code, we were able to visualize the cost of every module we were importing. This allowed us to discover which component was including code that was contributing to this bloat. We then switched over to another tool, BundlePhobia. This is a tool that allows you to enter in the name of any NPM package and see what its minified and gzipped size is estimated to be.
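The analyzer step mentioned above can be wired into a build roughly like this. This is a sketch of the standard webpack-bundle-analyzer plugin usage; the options shown are illustrative, so double-check them against the plugin's README:

```javascript
// webpack.config.js (sketch): webpack-bundle-analyzer renders a treemap
// of what actually ends up inside each emitted bundle.
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  // ...existing entry, output, and loader configuration...
  plugins: [
    new BundleAnalyzerPlugin({
      analyzerMode: 'static', // write report.html instead of starting a server
      openAnalyzer: false,    // don't pop a browser window during CI builds
    }),
  ],
};
```

Running the build then produces a report showing each dependency's share of the bundle, which is how oversized modules like the one above stand out.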
We found a nice, lightweight alternative for our slug module that only weighed 2 kilobytes, so we switched to it. This had a big impact on our performance. Between this change and discovering other opportunities to trim down our JavaScript bundle size, we saved 2 megs of code. We saw 65% improvement overall once you factor in the gzipped size of these bundles, and we found that this is really worth doing. So, in general, try to eliminate unnecessary downloads in your sites and apps. In the case of the Oodle Theater, an app that has games and a lot of interactive multimedia content, it was important for us to keep the application shell as lightweight as possible, so inventory your assets regularly. Although large payloads can have a big impact, there is another thing that can have a really big impact, and that is JavaScript. We all love JavaScript, but as we saw earlier, the median page includes a little bit too much of it. JavaScript is your most expensive asset. On mobile, if you're sending down large bundles of JavaScript, it can delay how soon your users are able to interact with user interface components. That means they can be tapping on UI without anything meaningful actually happening. So, it's important for us to understand why JavaScript costs so much. This is how a browser processes JavaScript: we first of all have to download the script; we have a JavaScript engine which then needs to parse that code, compile it, and execute it. Now, these phases don't take a whole lot of time on a high-end device like a desktop machine or a laptop, maybe even a high-end phone. But on a median mobile phone, this process can take anywhere between 5 and 10 times longer. This is what delays interactivity, so it is important for us to try to trim this down. To help you discover these issues with your app, we introduced a new JavaScript boot-up time audit to Lighthouse. It told us that we had 1.8 seconds of time being spent in JavaScript boot-up. What was happening was that we were statically importing all of our routes and components into one monolithic JavaScript bundle. One technique for working around this is using code splitting.
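The static-to-dynamic import switch can be sketched in a few lines. In a real app the specifier would be a route file such as `./routes/about.js`; here a `data:` URL stands in for that module so the example is self-contained and runs anywhere:

```javascript
// Self-contained sketch of loading a route module on demand with dynamic
// import(). A bundler such as webpack turns each dynamic import() into a
// separately fetched chunk, loaded only when this code path runs.
const aboutModule =
  'data:text/javascript,export const render = () => "about page";';

async function showAbout() {
  // Nothing is fetched for this module until the user actually navigates.
  const { render } = await import(aboutModule);
  return render();
}

showAbout().then((html) => console.log(html)); // logs "about page"
```

With static imports, the `render` function would be bundled and parsed up front whether or not the user ever visits that route; with `import()` the cost is deferred to the navigation itself.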
Code splitting is this notion of, instead of giving your users a whole pizza's worth of JavaScript, what if you only gave them one slice at a time, as they needed it? Code splitting can be applied at a route level or a component level. It works great with React and React Loadable, Vue.js, Angular, Polymer, and others. So, we incorporated code splitting into our application. We switched over from static imports to dynamic imports, allowing us to asynchronously lazy-load code in as we needed it. This ended up both shrinking down the size of our bundles and decreasing our JavaScript boot-up time. It took it down to 0.87 seconds, making the app 50% faster. In general, if you're building a JavaScript-heavy experience,

send code to the user only as they need it. Take advantage of code splitting, explore ideas like tree shaking, and check out the repo we have with a few ideas for how you can trim down your library size if you happen to be using webpack. Now, as much as JavaScript can be an issue, we know unoptimized images can be too. So, over to Ewa to talk about image optimization. >> EWA GASPEROWICZ: Images. The Internet loves images, and so do we. In the Oodle app, we're using them in the background, the foreground, and pretty much everywhere. Unfortunately, Lighthouse was less enthusiastic about them than we were. As a matter of fact, we failed on all three image-related audits: we forgot to optimize our images, we were not sizing them correctly, and we could also get some gain from using other image formats. So, we started optimizing our images. For one-off optimization rounds, you can use visual tools like ImageOptim or XnConvert. A more scalable approach is to add an image optimization step to your build process, with libraries like imagemin. This way you make sure that images added in the future get optimized automatically. Some CDNs, for example Akamai, or third-party solutions like Cloudinary or Fastly, offer you comprehensive image optimization solutions. So, to save yourself some time and headache, you can simply host your images on those services. If you don't want to do that because of cost or latency issues, projects like Thumbor or Imageflow offer a self-hosted alternative. Here you can see a single image optimization outcome. Our background PNG was flagged in webpack as big, and rightly so. After sizing it correctly to the viewport and running it through ImageOptim, we went down to 200 kilobytes, which is acceptable. Repeating this for multiple images on our site allowed us to bring down the overall page weight significantly. That was pretty easy for static images, but what about the animated content?
As much as we all love GIFs, especially the ones with cats in them, they are expensive. GIF was never intended as an animation platform in the first place; therefore, switching to a more suitable video format offers you large savings in terms of bytes. In the Oodle app, we were using the GIF you saw earlier. According to Lighthouse, we could be saving over 7 megabytes by switching to a video format. Our clip weighed about 7.3 megs, way too much for any reasonable website. So, instead, we turned it into a video element with two source files: an MP4 and a WebM, for wider browser support. Here you can see how we used the FFmpeg tool to convert our animated GIF into the MP4 file. The WebM format offers even larger savings, and image optimization services can do the conversion for you. We managed to save over 80% of the original weight. This brought us down to around 1 megabyte. Still, 1 megabyte is a large resource to push down the wire, especially for a user on restricted bandwidth. Luckily, we could use the Effective Connection Type API to realize they're on low bandwidth and give them a much, much smaller JPEG instead. This interface uses the effective round-trip time and downlink values to estimate the network type the user is on. It simply returns a string: slow-2g, 2g, 3g, or 4g. Depending on this value, if the user is on below 4G, we could replace the video element with the image. It does remove a little bit from the experience, but at least the site is usable on a slow connection. Last, but not least, there is the common problem of off-screen images. Carousels, sliders, or really long pages often load images even though the user cannot see them on the page right away. Lighthouse reflects this behavior in the Offscreen images audit, and you can also see it for yourself in the network panel of DevTools.
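The connection check described a moment ago can be sketched as follows. This is a minimal sketch using the Network Information API (`navigator.connection.effectiveType`); the API is not available in every browser, so it feature-detects and falls back to the rich experience, and the element handling at the end is hypothetical:

```javascript
// Connection-aware media: serve a lightweight JPEG instead of the video
// when the estimated network type is below 4G. When the Network
// Information API is missing entirely, assume a fast connection.
function shouldServeLightweightMedia() {
  const connection =
    typeof navigator !== 'undefined' ? navigator.connection : undefined;
  if (!connection || !connection.effectiveType) return false; // assume fast
  // effectiveType is one of 'slow-2g', '2g', '3g', '4g'.
  return connection.effectiveType !== '4g';
}

// Usage sketch (element swap is hypothetical):
if (shouldServeLightweightMedia()) {
  // replace the <video> element with a small poster <img> here
}
```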
If you see a lot of images being loaded while only a few are visible on the page, it means that maybe you could consider lazy loading them. Lazy loading is not yet supported natively in the browser, so we have to use JavaScript to add this capability. Here you can see how we used the lazysizes library to add lazy loading behavior to our Oodle covers. lazysizes is smart, because it does not only track the visibility changes of

the element, but it also proactively prefetches elements that are near the viewport. It also offers an optional integration with IntersectionObserver, which gives you very efficient visibility lookups. As you can see, after this change our images are being fetched on demand. Okay. That's a lot of good stuff about images. So, just remember: always optimize images before pushing them to the user. Use responsive images techniques to serve the right size of image. Use lighter formats wherever possible, and, finally, lazy load whatever you can. If you want to dig deeper into that topic, here is a present for you: a very handy and comprehensive guide written by Addy, which you can find at the images guide URL. >> ADDY OSMANI: Cool. Let's talk about resources that are actually critical. Now, not every byte that is shipped down the wire to the browser has the same degree of importance, and the browser knows this. A lot of browsers have heuristics to decide what they should be fetching first. Sometimes they will fetch CSS before images or scripts. Now, something that could be useful is us, as authors of the page, informing the browser about what is actually important to us. Thankfully, over the last couple of years, browser vendors have been adding a number of features to help us with this: things like link rel=preconnect, preload, or prefetch. These help the browser fetch the right thing at the right time, and they can be a little bit more efficient than custom loading-logic-based approaches. So, let's see how Lighthouse actually guides us towards using some of these features. The first thing Lighthouse tells us to do is avoid multiple costly round trips to any origin. In the case of the Oodle app, we're heavily using Google Fonts. Whenever you drop a Google Fonts style sheet into your page, it is going to connect to up to two subdomains. What Lighthouse is telling us is that, if we were able to warm up that connection, we could save up to 300 milliseconds in our connection time.
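Warming up those connections is a one-line resource hint per origin. For Google Fonts, the two hosts involved are the style-sheet origin and the font-file origin:

```html
<!-- Warm up the two Google Fonts origins before any font CSS is parsed.
     crossorigin is needed on the font-file origin, since fonts are
     fetched in anonymous CORS mode. -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
```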
Taking advantage of preconnect, we can effectively mask that connection latency. With something like Google Fonts, where our font-face CSS is hosted on googleapis.com and our font resources are hosted on gstatic.com, this can have a big impact. So, we applied this optimization. The next thing Lighthouse suggests is that we preload key requests. Preload informs the browser that a resource is needed as part of the current navigation, and it tries to get the browser fetching it as soon as possible. Now, here Lighthouse is telling us that we should be preloading our key web font resources. Preloading a web font looks like this: specifying rel=preload, you pass in as=font, and then you specify the type of font you're trying to load, such as woff2. The impact this can have on your page is quite stark. Normally, without using preload, if web fonts happen to be critical to your page, what the browser has to do is first fetch your HTML, then parse your CSS, and somewhere much later down the line it will finally fetch your web fonts. Using link rel=preload, as soon as the browser has parsed your HTML it can start fetching those web fonts early on. In the case of our app, this was able to shave a second off the time it took for us to render text using our web fonts. Now, it's not quite that straightforward if you are going to try preloading fonts using Google Fonts. There is one gotcha: the Google Fonts URLs that we specify in the font-faces in our style sheets happen to be something that the fonts team updates. These URLs can expire or get updated, so what we suggest doing, if you want complete control over your font loading experience, is to self-host your web fonts. This can be great because it gives you easy access to things like link rel=preload.
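A font preload of the kind just described is a single hint in the document head. The file path below is hypothetical, standing in for a self-hosted font:

```html
<!-- Preload a self-hosted web font (path is illustrative).
     crossorigin is required for font preloads, even same-origin ones,
     because fonts are fetched in anonymous CORS mode. -->
<link rel="preload" as="font" type="font/woff2"
      href="/fonts/oodle-sans.woff2" crossorigin>
```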
We found the google-webfonts-helper tool really useful in helping us self-host our web fonts. Now, whether you're using web fonts as part of your critical resources, or it happens to be JavaScript, try to help the browser identify your critical resources as soon as possible. If you are connecting to multiple origins, consider using link rel=preconnect. Use link rel=preload for critical resources. If you have a good sense of the next navigation, consider using prefetch, which has good support in browsers now. Now, we've got something special to share with you today. In

addition to features like resource hints, as well as preload, we have been working on a brand new experimental feature we're calling priority hints. This is a feature that allows you to hint to the browser how important a resource is. It exposes a new attribute, importance, with the values low, high, or auto. This allows us to convey lowering the priority of less important resources, such as non-critical styles, images, or fetch API calls, to reduce contention. We can also boost the priority of more important resources. In the case of our Oodle app, this led to one practical place where we could optimize. So, before we added lazy loading to our images, what the browser was doing is: we had this image carousel with all of our doodles, and the browser was fetching all of the images at the very start of the carousel with a high priority, early on. Unfortunately, it was the images in the middle of the carousel that were most important to the user. So, what we did was set the importance of those background images to very low and the foreground ones to very high, and what this had was a two-second impact over 3G on how quickly we were able to fetch and render those images. So, a nice positive experience. We're hoping to bring this feature to Canary in a few weeks, so keep an eye out for that. The next thing I want to talk about is typography. Typography is fundamental to good design. If you are using web fonts, you don't want to block rendering of your text, and you definitely don't want to show invisible text. We highlight this in Lighthouse now, with the Avoid invisible text while web fonts are loading audit. If you load your web fonts using a font-face block, you are allowing the browser to decide what to do if it takes a long time for that web font to fetch. Some browsers will wait anywhere up to three seconds before falling back to a system font, and they will eventually swap to the web font once it is downloaded. We're trying to avoid this invisible text.
We wouldn't have been able to see the classic doodles if the web font had taken too long to load. Thankfully, with a new feature called font-display, you get a lot more control over this process. font-display helps you decide how web fonts will render or fall back based on how long they take to swap in. In this case we're using font-display: swap. swap gives the font-face a zero-second block period and an infinite swap period, which means the browser is going to draw your text pretty immediately with the fallback font if the font takes a while to load, and it's going to swap it once the font-face is available. In the case of our app, this allowed us to display text very early on and transition over to the web font once it was ready. In general, if you happen to be using web fonts, as a large percentage of the web does, have a good web font loading strategy in place. There are a lot of web platform features you can use here; also check out Zach Leatherman's work on web font loading strategies. Next up is Ewa, to talk about render blocking. >> EWA GASPEROWICZ: Displaying text is very important, but we can go further than that. There are other parts of our application that we could push earlier in the download chain to provide at least some basic user experience a bit sooner. Here on the Lighthouse timeline strip, you can see that during these first few seconds, when all the resources are loading, the user cannot really see any content. Downloading and processing external style sheets is blocking our rendering process from making any progress. Well, we can try to optimize our critical rendering path by delivering some of those styles a bit earlier. If we extract the styles that are responsible for this initial render and inline them in our HTML, the browser is able to render them straightaway, without waiting for the external style sheets to arrive. In our case, we used an NPM module called critical to inline our critical content into the HTML in a build step.
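The critical build step might look roughly like the following. This is a sketch using the critical npm package with options from 2018-era versions (option names like `dest` have changed over time, so check the project README for the current API), and the viewport numbers are illustrative:

```javascript
// build-critical.js (sketch): extract above-the-fold CSS from the built
// page and inline it directly into the HTML.
const critical = require('critical');

critical.generate({
  base: 'dist/',       // directory containing the built site
  src: 'index.html',   // page to analyze
  dest: 'index.html',  // overwrite it with the critical CSS inlined
  inline: true,
  width: 411,          // viewport used to decide what is "above the fold"
  height: 731,
});
```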
While this module did most of the heavy lifting for us, it was still a little bit tricky to get it working. The truth is, if you are not careful, or your site structure is really complex, it might be really difficult to introduce this type of pattern if you did not plan for it from the very beginning. This is why it is so important to take performance considerations into account early on. If you don't design for performance from the start, there is a high chance you will run into issues later. In the end, our risk paid off. We managed to make it work, and the app started delivering content much earlier, improving our first meaningful paint significantly. So, to sum up: to unblock the rendering process, consider inlining critical styles in the document, and for non-critical scripts, consider marking them with the async or defer attribute,

or lazy loading them. Okay. So, that's the whole story of how we drew the horse and tried to put it into a suitcase. >> EWA GASPEROWICZ: Let's take a look at the results. This is how our app loaded on a medium mobile device on a 3G network, before and after the optimization. All of this progress was fueled by us continuously checking and following the Lighthouse report. All of the hints you have seen today are linked in the Lighthouse tool, so you can check them in the context of your own application. If you would like to check out how we technically implemented all of the changes, feel free to take a look at our repo. The performance score went up from 23 to 91. That's pretty nice, right? However, our goal was never to make Lighthouse happy; we wanted to make the user happy. And, the high-level metrics included in the report, like time to interactive or perceptual speed index, are a good proxy for that. Also, the snapshot timeline gives you nice visual feedback on how much shorter the waiting time for our users got. This is the full story of our little Oodle app. Now let's take a look at some real-world examples. >> ADDY OSMANI: Ewa and I had the benefit of being able to think about performance very early on, but how does this apply at scale to much larger sites? Let's take a look at Nikkei. They are Japan's largest media company. They have a site with 450 million users accessing it, and they spent a lot of time optimizing their old mobile site, turning it into a new PWA. The impact of the performance optimizations was huge: they were able to see a 14-second-faster time-to-interactive, and their daily active users and page views also went up. What did they do to optimize their performance? If you take a look at what Nikkei did, these things will look familiar; that's because many of them we covered today. Nikkei optimized their bundles and shrank them down by 43%. One big change they made was using webpack to optimize their bundles, both their first-party and third-party ones. They were also able to use link rel=prefetch, improving next-page load performance by 75%.
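A next-page prefetch hint like the one just described is a single line of markup (the URL is illustrative):

```html
<!-- Hint the likely next navigation so the browser can fetch it at idle
     priority and have it cached before the user clicks. -->
<link rel="prefetch" href="/next-article.html">
```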
In addition to this, they also took a tip that Ewa was just walking through: critical-path CSS optimization. This is an optimization where, since you're sending down your content in the first 14 kilobytes, if you're able to squeeze enough styles in there, you can reduce your render-blocking work by quite a lot. Here, they were able to shave a second off their first meaningful paint. Finally, Nikkei took advantage of the PRPL pattern. PRPL is a pattern first discovered by the Polymer team; it stands for Push, Render, Pre-cache, and Lazy-load. What they're doing is pushing their critical resources, and they're rendering their main article content quite quickly. They're pre-caching their top stories using a service worker, which gives them offline access to read articles as well. And they're lazy loading code, and also using skeleton screens to improve the perceived performance. So, we talked about how performance helps businesses give users a better experience, but there is one more thing that we think could help with the future of loading on the web. >> EWA GASPEROWICZ: Well, as you heard during the keynote, we believe that machine learning represents an exciting opportunity for the future in many areas. So, what if we could take these two worlds of machine learning and web performance and blend them together? Maybe it could lead us to some really, really interesting solutions. Today we want to tell you about our experiments with machine learning and explain why we think it has large potential here. Here is an idea that we hope will spark more experimentation in the future: that real data can really guide the user experiences we're creating. Today, we make a lot of arbitrary decisions about what the user might want or need, and, therefore, what is being preloaded or prefetched. If we guess right, we are able to prioritize a small amount of resources, but it is really hard to scale this to a whole site. At the same time, we have a wealth of data about typical

user behavior readily available. >> ADDY OSMANI: So, we actually have data available to better inform our optimizations today. Using the Google Analytics Reporting API, we can take a look at the next top page for any URL on our site. In fact, we have a little tool for this that anybody can check out. Here it is for developers.google.com/web. As we can see here, a lot of the users that land on the Chrome DevTools documentation actually end up going over to Lighthouse, so we could potentially prefetch that page. And, we could use this data to improve our page load performance. This gives us the notion of data-driven loading for improving the performance of websites, but there is one piece missing here. Having a good probability model is important, because we don't want to waste our users' data by aggressively over-fetching content. We can take advantage of that Google Analytics data and use machine learning to implement such probability models. This is a lot less subjective and error-prone than manually deciding what we should be prefetching or preloading. We can then wire this all up using link rel=prefetch inside our sites, so that as the user browses through the site, we're able to fetch and cache the things they'll need. We can go further than this. Earlier we talked about code splitting and lazy loading, but in a single-page app we're dealing with routes, chunks, and a bundler. So, instead of prefetching pages, we can go more granular: what if we could prefetch a number of chunks? Well, across all of these ideas, we've been focused on trying to make some of them a little bit more low-friction for web developers to adopt. Today, we're happy to announce a new initiative called Guess.js. Guess.js is a project focused on data-driven user experiences for the web. We hope it is going to inspire exploration of using data to improve web performance and beyond. It's all open source and available on GitHub today.
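A minimal Guess.js setup, sketched as announced at the time; the `GA` view ID below is a placeholder, and the plugin API may have evolved, so check the guess-js GitHub repo for current usage:

```javascript
// webpack.config.js (sketch): the Guess.js webpack plugin trains on
// Google Analytics navigation data and wires up prefetching of the
// chunks a user is most likely to need next.
const { GuessPlugin } = require('guess-webpack');

module.exports = {
  // ...existing entry, output, and loader configuration...
  plugins: [
    new GuessPlugin({
      GA: 'XXXXXX', // placeholder Google Analytics view ID
    }),
  ],
};
```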
This was built in collaboration with the open-source community, by Minko Gechev, Kyle Mathews from Gatsby, Katie Hempenius, and others. Let's take a look at what Guess.js gives you out of the box. The first thing is a Google Analytics module. It has a parser for popular frameworks, allowing us to map those URLs back to routes. We then have a comprehensive webpack plugin that is able to do machine learning training on some of that data to determine the probabilities. We're able to bundle the JavaScript for your routes into chunks, and we're then able to wire it all together for you, so we can prefetch those chunks while the user navigates through your site. Now, that is enough talk. What about showing you how this works in practice? So, here we have a demo of Guess.js in action. What we're first going to do is load up the app. We're in the DevTools, with low-end mobile emulation over slow 3G. This app renders very quickly, and what we can see in the network panel is the number of routes that are already starting to be prefetched, because we have high confidence they are going to be used. We can toggle on a visualization of this. In pink, we have pages with high confidence that the user is going to want them. In green we have low confidence, and in yellow we have mild confidence. We can actually start to navigate through this application. Let's say we go to cheese. We can visualize this again and see that cheesecake is a page that users will often navigate to. As you can see, it loaded instantly, because it's already in the user's cache. Contextually, we're able to display in this demo all the visualizations of the confidence levels we have for different pages as we navigate through the site. Even this last page, the custard one, loaded really, really quickly using these ideas. Now, prefetching is a great idea in practice, but we want to be mindful of the users' data plans.
And, this is where we use navigator.connection.effectiveType to make sure that we're only prefetching things when we think the connection can afford it. So, this is our demo of Guess.js using Gatsby. By the way, this also happens to be a PWA with a great performance score, so thank you to both Minko and Kyle for working on it. Check out Guess.js on GitHub. Today we talked about quite a few things, but at the end of the day, performance is about inclusivity. It is about people. It is about all of us. We've all experienced slow page loads on the go, but we've got an opportunity today to consider giving our users more delightful experiences that load really, really fast.

We hope you took something away from this talk. Remember, improving performance is a journey. Check out some of the things we talked about today, and talk to us in the web Sandbox if you have got any questions. That's it from us. (Applause) (Cheers)

Building the Future of Artificial Intelligence. Stage 2. Greg Corrado, Diane Greene, Dr. Fei-Fei Li. (Real-time captioning on this screen)

>> At this time, please find your seat. Our session will begin soon.

>> DIANE GREENE: All right. Hello.

>> DIANE GREENE: Who is ... >> DIANE GREENE: Me, too. Me three. So, I'm the moderator today. I'm Diane Greene, and I'm running Google Cloud and I'm on the board, and I am going to briefly introduce the really amazing guests we have here. I also live on the Stanford campus, so I've known one of our guests a long time. So, let me just introduce them. First is Fei-Fei, Dr. Fei-Fei Li. She is the Chief Scientist for Google Cloud; she also runs the AI lab at Stanford University, the vision lab; and she also founded SAILORS, which is now AI4ALL, which you will hear a little bit about later. And, is there anything you want to add, Fei-Fei? So, then the other, so now we have Greg Corrado, and actually, there is one amazing coincidence. Both Fei-Fei and Greg were undergraduate physics majors at

Princeton, together at the same time, and they didn't really know each other. >> FEI-FEI LI: We were studying physics. >> GREG CORRADO: It was kind of surprising to go to undergrad together, then neither of us went into Computer Science, and then to rejoin later, only once we had both come to AI and neural networks. >> DIANE GREENE: Anyhow. So, Greg is a principal scientist in the Google Brain group. He co-founded it. And, more recently, he has been doing a lot of amazing work in health with neural networks and machine learning. He has a Ph.D. in neuroscience from Stanford, so he came to AI in a very interesting way, and maybe he'll talk about the similarities between the brain and what we do with artificial neural networks. Would you like to add anything else? >> GREG CORRADO: No. >> DIANE GREENE: Okay. So, I thought, since both of them have been involved in the AI field for a while, and it's only recently become a really big deal, it would be nice to get a little perspective on the history, you know, yours in vision and yours in neuroscience, about AI and how it was so natural for it to evolve into what it is today. >> FEI-FEI LI: I guess I'll start. So, first of all, AI is a very nascent field in the history of science, of human civilization. This is a field of only 60 years of age. It began with a simple quest: can machines think? And, we all know thinkers and thought leaders like Alan Turing challenged humanity with that question. So, about 60 years ago, a group of very pioneering scientists, computer scientists like Marvin Minsky and John McCarthy, really started this field, John McCarthy being the one who founded Stanford's AI lab. So, where do we begin to build machines that think? Humanity is best at looking inward into ourselves and trying to draw inspiration from who we are, so we started thinking about building machines that resemble human thinking.
And, when you think about human intelligence, you start thinking about its different aspects: the ability to reason, the ability to see, to hear, to speak, to move around, to make decisions, to manipulate. So, AI started from that very core foundational dream 60 years ago and began to proliferate as a field of multiple subfields, including robotics, computer vision, and natural language processing. And, there is a very important development that happened around the '80s and '90s, which is that a sister field called machine learning started to blossom. That's a field combining statistics with computer science, and by combining the quest for machine intelligence, which is what AI was born out of, with the tools and capabilities of machine learning, AI as a field went through an extremely fruitful, productive, blossoming period of time. In fact, fast forward to the second decade of the 21st century, and the latest machine learning boom that we are observing is called deep learning, which has deep roots in neuroscience, which I will let you talk about. So we are combining deep learning, as a powerful statistical machine learning tool, with the quest of making machines more intelligent, whether it is to see or to hear. And, lastly, I just want to say that three critical factors converged around the last decade, at the beginning of the 2010s, which are the three computing factors. One is the advance of hardware. Second is the emergence of Big Data, powerful data that can drive the

statistical learning algorithms. And I was lucky to be involved myself. And the third one is advances in machine learning and deep learning algorithms. So, this convergence of three major factors brought us the AI boom that we're seeing today, and Google has been investing in all three areas, honestly ahead of the curve. Most of the effort started even in the early 2000s, and as a company we're doing a lot of AI work, from research to products. >> GREG CORRADO: And, it's been really interesting to watch the divergence and exploration in various academic fields and then the reconvergence as we see ideas that are aligned. So, as Fei-Fei says, it wasn't so long ago that fields like cognitive science, neuroscience, Artificial Intelligence, even things that we don't talk about much anymore, like cybernetics, were really all aligned in a single discipline, and they have moved apart from each other and explored these ideas independently for a while. And, then, with the renaissance in artificial neural networks and deep learning, we're starting to see some reconvergence. So, some of these ideas that were popular only in a small community for a couple of decades are now coming back into the mainstream of what Artificial Intelligence is, what statistical pattern recognition is, and that has really been delightful to see. But, it's not just one idea, it's actually multiple ideas that were maintained for a long time in fields like cognitive science that are coming back into the fold. So, another example beyond deep learning is reinforcement learning.
So, for the longest time, if you looked at a university catalog of courses for any mention of reinforcement learning whatsoever, you were going to find it in a psychology department or a cognitive science department. But, today, as we all know, we look at reinforcement learning as a new opportunity, as something that we actually look at for the future of AI, something that might be important to get machines to really learn in completely dynamic environments, in environments where they have to explore entirely new stimuli. So this convergence has happened in the direction from those ideas into mainstream computer science, and I think there is some hope for exchange back in the other direction. So, neuroscientists and cognitive scientists today are starting to ask whether we can take the kind of computer vision models that Fei-Fei helped pioneer and use those as hypotheses for how it is that neural systems actually compute, how our own biological brains see. And, I think it's really exciting to see this kind of exchange between disciplines. >> DIANNE GREENE: You know, one little piece of history I think is also interesting is what you did, Fei-Fei, with ImageNet, which is a nice way of explaining, you know, building these neural networks, where you labeled all these images and then people could refine their algorithms. Go ahead and explain that. >> FEI-FEI LI: Sure. About ten years ago the whole community of computer vision, which is a subfield of AI, was working on the holy grail problem of object recognition: you open your eyes and you see the world full of objects, like flowers, chairs, people, and that's a building block of visual intelligence and intelligence in general. And, to crack that problem, we were building, as a field, different machine learning models.
We were making small progress, but we were hitting a lot of walls. And, when my students and I started working on this problem and thinking deeply about what was missing in the way we were approaching it, we recognized this important interplay between data and statistical machine learning models. They really reinforce each other in very deep mathematical ways. That realization was also inspired by human vision. If you look at how children learn, it's a lot of learning through Big Data experiences and

exploration. So, combining all that, we decided to put together a pretty epic effort: we wanted to label all the images we could get on the Internet, and of course we Google-searched a lot, and we downloaded billions of images and used crowdsourcing technology to label them, organizing them into a data set of 15 million images in 22,000 categories of objects, and that's the ImageNet project. And, we democratized it to the research world and released it openly. And, then, starting in 2010, we held an international challenge for the whole AI community called the ImageNet Challenge. And, one of the teams from Toronto, which is now at Google, won the ImageNet Challenge. >> DIANNE GREENE: Yeah. Yeah. >> FEI-FEI LI: With a deep learning convolutional neural network model, and that was the year 2012. And, a lot of people think the combination of ImageNet and the deep learning model in 2012 was the onset of this boom. >> DIANNE GREENE: A way to compare. >> FEI-FEI LI: Exactly. >> DIANNE GREENE: And it was really good. So, Greg, you've been doing a lot of brain-inspired research, very interesting research. Could you tell us a little bit about that? >> GREG CORRADO: Sure. So, I mean, I think, you know, the ImageNet example actually sets a playbook for how we can try to approach a problem. The kind of machine learning and AI that is most practical and most useful today is the kind where machines learn through imitation. It is an imitation game where, if you have examples of a task performed correctly, the machine can learn to imitate them, and this is called supervised learning. So, what happened in the image recognition case is that, by Fei-Fei building an object recognition data set, we could all focus on that problem in a really concrete, tractable way in order to compare different methods.
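The benchmark idea described here, a fixed labeled data set on which any method can be scored so that methods become directly comparable, can be sketched in a few lines. Everything below is a toy stand-in: the data, the feature tuples, and the two "models" are hypothetical, not ImageNet or real classifiers.

```python
def top1_accuracy(predict, labeled_examples):
    """Fraction of examples whose predicted label matches the ground truth."""
    correct = sum(1 for x, y in labeled_examples if predict(x) == y)
    return correct / len(labeled_examples)

# A toy benchmark: inputs are feature tuples, labels are object names.
benchmark = [((0.9, 0.1), "cat"), ((0.2, 0.8), "dog"),
             ((0.8, 0.3), "cat"), ((0.1, 0.9), "dog")]

def baseline(x):
    # Always guesses one class; no learning at all.
    return "cat"

def learned(x):
    # A hypothetical "trained" model: thresholds the first feature.
    return "cat" if x[0] > 0.5 else "dog"

print(top1_accuracy(baseline, benchmark))  # 0.5
print(top1_accuracy(learned, benchmark))   # 1.0
```

Because both methods are scored on the same held-out examples, the numbers are comparable head-to-head, which is the role the ImageNet Challenge played for the vision community.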
It turned out that methods like deep learning and artificial neural networks were able to do something really interesting in that space that previous machine learning and Artificial Intelligence methods had not: they were able to go directly from the data to the predictions, breaking the problem up into smaller steps without being told exactly how to do that. Before, we were trying to engineer features or cues, things that we could see in the stimuli, that we would then do statistical learning on. With artificial neural networks and deep learning, we're learning to do all those things together. And, this applies not only to computer vision, but to most things that you could imagine a machine imitating. So, for the kinds of things that we've done, like Google Smart Reply and now Smart Compose, we're taking that same approach: if you have a lot of text data, which it turns out the Internet is full of, what you can actually do is look at the sequence of words so far in a conversation or an email exchange and try to guess what comes next. >> DIANNE GREENE: You know, I'm going to interrupt here a little bit. So, you're talking about, you know, neural-inspired machine learning and so forth, and, you know, this Artificial Intelligence is kind of bringing into question what we humans are, and then there is this thing out there called artificial general intelligence, AGI. What do you think is going on here? >> GREG CORRADO: I really don't know. So, there is a variety of opinions in the community, but my feeling is that, okay, we have finally gotten artificial neural networks to be able to recognize photos of cats. Right. That's really great. >> DIANNE GREENE: Fei-Fei, was that you? >> FEI-FEI LI: No. >> GREG CORRADO: So the kind of thing that is working well right

now is this sort of pattern recognition, this immediate response where we're able to recognize something kind of reflexively, and we now have, I believe, machines that can do pattern recognition every bit as well as humans can. That's why they can recognize objects in photos, that is why they can do speech recognition, and that is why they can win at a game like Go. But that is only one small sliver, a tiny sliver, of what goes into something like intelligence. Notions of memory and planning and strategy and contingencies, even emotional intelligence, these are things where we haven't even scratched the surface. So, to me, it is really a leap too far to imagine that, having finally cracked pattern recognition after some decades of trying, we are therefore on the verge of cracking all of these other problems that go into intelligence. >> DIANNE GREENE: Although we have gone way faster than either of you ever expected us to go, I believe. >> FEI-FEI LI: Yes and no. Humanity has a tendency to overestimate short-term progress and underestimate long-term progress. So, eventually we will achieve things that we cannot dream of today, but, Dianne and Greg, I want to just give an example. So, the definition of AGI, again, sits in the spectrum of what humans can do. I have a two-year-old daughter who doesn't like napping. And, I thought I was smart enough to scheme to put her in a very complicated sleeping bag so that she couldn't get herself out of the crib, and just a couple of months ago I was on the monitor watching this kid, two years old, where for the first time, as I was training her to nap by herself, she was very angry.
So, she looked around, figured out a weak spot on the crib where she might be able to climb out, figured out how to unzip the complicated sleeping bag that I thought I had schemed up precisely to prevent that, figured out a way to climb out of a crib that is way taller than she is, and managed to escape safely (Laughter) without getting hurt. >> DIANNE GREENE: Okay. Okay. How about AGI equivalent to my cat, or equivalent to a toddler? >> FEI-FEI LI: If you're shifting the definition, sure. But even with a cat, I think there are things that the cat is capable of. >> GREG CORRADO: So, I do think that if you look at an organism like a cat at a behavioral level, how cats behave and how they respond to their environments, you could imagine a world where you have something like a toy, you know, for entertainment purposes, that approximates a cat in a bunch of the sorts of behaviors that humans observe: it walks around, it doesn't bump into things, it meows at me. I think that is plausible. But what you can't do is take that robot, dump it in the forest, and have it figure out what it needs to do in order to survive and make its way. >> FEI-FEI LI: But it's a goal. >> DIANNE GREENE: It's a healthy goal. And along the way, like, we all three agree that AI's capacity to help us solve all our big problems is going to outweigh any kind of negative, and we're pretty excited about that, I guess. Like in Cloud you're doing some cool things with AutoML and so on. >> FEI-FEI LI: Yes. So, we talk a lot, Dianne, about the belief in building benevolent technology for human use. Our technology reflects our values.
So, I personally, and I know Greg's whole team, are working on bringing AI to people and to the fields that really need it, to make a positive difference. At Cloud we're very lucky to be working with customers and partners from all kinds of vertical industries, from healthcare, where we collaborate, to agriculture, to sustainability, to entertainment, to retail, to commerce, to finance, where our customers bring some of their toughest problems and pain points and we can work with them. For example, recently we rolled out AutoML. That came from recognizing the pain of entering machine learning. It's still a highly technical field. The bar is

still high. Not enough people in the world are trained experts in machine learning, and yet our industry already has so much need to, you know, tag pictures and understand imagery. So, we've worked hard and thought about a suite of products called AutoML, where we lower the entry barrier for customers by relieving them from coding custom machine learning models themselves. All they have to do is provide the kind of data and concepts they need. Here is an example of a ramen company in Tokyo that has many ramen shops, and they want to build an app that recognizes the ramen from their different stores. They give us the pictures of ramen and the concepts of their stores, store one, store two, store three, and what we do is use a machine learning technique that Google and many others have developed, called learning to learn, to build a customized model for the customer that recognizes ramen from their different stores. And, then the customer can take that model and use it. >> DIANNE GREENE: You know, I can write a little C++, maybe some JavaScript. >> FEI-FEI LI: Absolutely. Absolutely. We're working with teams that don't have even C++ experience, and we have a drag-and-drop interface, and you can use AutoML that way. >> GREG CORRADO: That is important, because I really believe that, you know, there are so many problems that can be solved using this technique that it's critical that we share as much as possible about how these things work.
I don't believe that these technologies should live in walled gardens; instead, we should develop tools that can be used by everyone in the community. That is why we have a very aggressive open source stance for our software packages, particularly in AI. And, that includes things like TensorFlow, which is available completely freely, and it includes the kinds of services available on Cloud to do the kind of compute, storage, and model tuning and serving that you need. And, I think it is amazing that the same tools my applied machine learning team uses to tackle problems we're interested in are accessible to all of you, as well, to try to solve the same problems in the same way. And, I've been really excited with how great the uptake is and how we're seeing it expand to other languages. You mentioned JavaScript, so a quick plug for TensorFlow.js. >> DIANNE GREENE: You should. >> GREG CORRADO: Yes. Exactly. >> DIANNE GREENE: It does give a sense. So, you're building, I mean, with machine learning we're bringing it to market in so many ways, because we have the tools to build your own models, TensorFlow. We have AutoML that brings it to any programmer. And, then, what is going on with all the APIs, and how is that going to affect every industry, and what do you see going forward? >> FEI-FEI LI: So, Cloud already has a suite of APIs for a lot of our industry partners and customers. >> DIANNE GREENE: Which are based on your models. >> FEI-FEI LI: For example, Box is a major partner with Google Cloud. They recognized a tremendous need for organizing customers' imagery data to help customers, so they actually use Google's Vision API. >> DIANNE GREENE: Yeah. >> FEI-FEI LI: And that's a model easily delivered to our customers. >> DIANNE GREENE: Yeah. It is pretty exciting. Greg, how do you think that is going to play out in the health industry? I know you have been thinking about that. >> GREG CORRADO: Yeah.
So, healthcare is one of the problems that a bunch of people are working on at Google, and a lot of people are working on it outside, as well, because I think there is a huge opportunity to use these technologies to expand the availability and the accuracy of

healthcare. And, part of that is because doctors today are basically trying to weather an information hurricane in order to provide care. So, I think there are thousands of individual opportunities to make doctors' work more fluid, to build tools that solve problems they want solved, and to do things that help. >> DIANNE GREENE: I mean, I think you were telling me that so many doctors are so unhappy because they have so much drudgery to do. >> GREG CORRADO: Yeah. Absolutely. You know, when you go to a doctor you're looking for medical attention, right, and right now a huge amount of their attention is not actually focused on the practice of medicine, but on a whole bunch of other work that they have to do that doesn't require the kind of insight and care and connection the real practice of medicine does. So, I believe that machine learning and AI are going to come into healthcare through assistive technologies that help the doctors do what they want to do. >> DIANNE GREENE: By understanding what they do and assisting them. Speaking of human-centered AI, Fei-Fei, do you want to talk a little bit about why you've been so focused on that? >> FEI-FEI LI: Yeah. Thank you. So, if we look at the history of AI, we've entered Phase II. The first 60 years were AI as more or less a niche technical field where we were still laying down scientific foundations, but from this point on, AI is one of the biggest drivers of societal changes to come. So, how do we think about AI in this next phase?
What frame of mind should be driving us has been on top of my mind, and I think deeply about the need for human-centered AI, which, in my opinion, includes three elements. The first element is really advancing AI to the next stage, and here we bring our collective background from neuroscience and cognitive science. You know, whether we're getting to AGI tomorrow or in 50 years, there is a need for AI to be a lot more flexible and nuanced, to learn faster, in more unsupervised, semi-supervised, one-shot learning ways, to be able to understand emotion, to be able to communicate with humans. So, that is the more human-centered technology itself. The second element of human-centered AI technology and application is, and I love what you're saying, that there is no substitute for humans. This technology, like all technology, is to enhance humans, to augment humans, not to replace humans. We'll replace certain tasks, or take humans out of danger, or do tasks that we cannot perform, but the bottom line is we can use AI to help our doctors, to help our disaster relief workers, to help decision makers. So, there is a lot of technology in robotics, in design, in natural language processing that is centered around humans. The third element of human-centered AI is really to combine the thinking about AI as a technology with its societal impact. We are so nascent in seeing the impact of this technology, but already, like Dianne said, we are seeing the impact in different ways, ways that we might not even predict. So, I think that is really important, and it's a responsibility of everyone, from academia to industry to government, to bring social scientists, philosophers, law scholars, policy makers, ethicists, and historians to the table and to study more deeply AI's social and humanistic impact.
Those are the three elements of human-centered AI. >> DIANNE GREENE: That's pretty inspiring, and I think we here at Google are working as hard as we can on it. You know, you mentioned what we need to be careful about out there with AI and regulatory concerns. What are some of the barriers? You know, I think every

company in the world has a use for AI in many, many ways. I mean, it's just exploding in all the verticals, but there are some impediments to adoption. For example, the financial industry needs to have something called explainable AI. Could you just talk about some of the different barriers you see? >> FEI-FEI LI: We should start with you. >> GREG CORRADO: I think there are a bunch of really important things to consider. So, one of them is, of course, that we want machine learning systems that are designed to fit the needs of the folks who are using and applying them. And, that can often include not just giving you the answer, but telling you something about how that answer was derived, so, some kind of explainability. In the healthcare space, for example, we've been working on a bunch of things in medical imaging, and it is not acceptable to tell the doctor only that something looks fishy in this X-ray or this pathology slide or retinal scan. You have to tell them what you think is wrong, but more importantly, you actually have to show them where in the image you think the evidence for that conclusion lies, so that they can then look at it and decide whether they concur or disagree, or say, oh, well, there is a speck of dust there and that's what the machine is picking up. And the good news is that these things actually are possible. There has kind of been this unfortunate mythology that AI, and deep learning in particular, is a black box, and it really isn't. We didn't study how it worked because, for a long time, it really didn't work that well. But now that it is working well, there are a lot of tools and techniques for examining how these systems work, and I think explainability is a big part of making these things available for a bunch of applications.
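One simple family of techniques for showing where in an image the evidence lies is occlusion testing: mask one region at a time and measure how much the model's score drops. The regions whose removal hurts the score most are where the evidence sits. The sketch below is a toy illustration, with a hypothetical scorer over a one-dimensional "image", not a real medical imaging network.

```python
def score(image):
    # Hypothetical model: responds to bright pixels (value > 0.5).
    return sum(1.0 for p in image if p > 0.5)

def occlusion_map(image):
    """Score drop caused by zeroing out each pixel in turn."""
    base = score(image)
    drops = []
    for i in range(len(image)):
        occluded = list(image)
        occluded[i] = 0.0          # mask this position
        drops.append(base - score(occluded))
    return drops

image = [0.1, 0.9, 0.8, 0.2]       # the "evidence" sits at positions 1 and 2
print(occlusion_map(image))        # [0.0, 1.0, 1.0, 0.0]
```

A doctor reading such a map can check whether the highlighted region is genuine pathology or an artifact like a speck of dust, which is exactly the kind of concur-or-disagree workflow described above.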
>> FEI-FEI LI: So, in addition to explainability, I would add bias. I think bias is an issue we need to address in AI, and from where I sit I see two major kinds of bias we need to address. One is in the pipeline of AI development, from bias in the data to bias in the outcome. And, we have heard a lot about this: if the machine learning algorithm is fed data that does not represent the problem domain in a fair way, we will introduce bias, whether it is missing a group of people's data or bias related to a skewed distribution. These are things that can have deep consequences, whether you are in the healthcare domain or finance or legal decision making. So, I think that is a huge issue, and very nicely, Google is already addressing it. We have a whole team at Google working on it. >> DIANNE GREENE: Yeah. This is true. >> FEI-FEI LI: And, another bias I think is important is in the people who are developing AI, and the lack of diversity there. >> DIANNE GREENE: It is so important, and that kind of brings me to, well, we're getting close to the end, but, you know, where is AI going? I mean, how prevalent is it going to be? We look at universities, and these machine learning classes have 800 people, 900 people. There is such a demand. Every Computer Science graduate wants in. Where is it going? I mean, will every high school graduating senior be able to customize AI to their own purposes? And, what does it look like five, ten years from now? >> FEI-FEI LI: So, from a technology point of view, I think that because of the tremendous investment in resources, both in the private sector and in the public sector, many countries are now waking up to investing in AI, and we are going to see huge continued development of AI technology.
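The skewed-distribution kind of data bias raised earlier can be made concrete with a small check that compares group frequencies in a training set against the population they are supposed to represent. The group names and expected shares below are purely illustrative.

```python
from collections import Counter

def representation_gap(samples, expected_shares):
    """Observed share minus expected share for each group."""
    counts = Counter(samples)
    total = len(samples)
    return {g: counts[g] / total - share
            for g, share in expected_shares.items()}

# Hypothetical training data that under-represents group "B".
samples = ["A"] * 90 + ["B"] * 10
gap = representation_gap(samples, {"A": 0.5, "B": 0.5})
print({g: round(v, 2) for g, v in gap.items()})  # {'A': 0.4, 'B': -0.4}
```

A large negative gap for a group flags exactly the "missing a group of people's data" problem: a model trained on these samples would see ten times fewer examples of group "B" than fairness to the domain requires.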
I am most excited, at Cloud, and seeing what Greg's team is doing, about AI being delivered to the industries that really matter to people's lives and work quality. But, Dianne, I think what you are also asking is how we are educating more people. >> DIANNE GREENE: Both making it easier to use and educating

them, and what is it going to look like? >> FEI-FEI LI: That's a really tough question, because at the core of today's AI is still calculus, and that is a high bar. >> GREG CORRADO: So I think that, from the tech industry perspective, or from the computer science education perspective, we're going to see AI and ML become as essential as networking is, right? Like, no one really thinks, oh, well, I'm going to write some software, and it's going to be standalone on a box, and it's not going to have a TCP/IP connection, right? We all know that you're going to have a TCP/IP connection at the end of the day somewhere, and everyone understands the basics of the networking stack, and that is not just at the level of engineers; that is the level of designers, of executives, of product developers and leaders. And, the same thing, I think, is going to happen with machine learning and AI, which is that designers are going to start to understand how they can make a completely revolutionary kind of product that folds in machine learning the same way that we fold networking and Internet technologies into everything we build. I think we are going to see tremendous uptake, and these technologies will become a pervasive background. I think in that process the ways we are using AI are going to evolve. Right now you're seeing a lot of things where AI and machine learning add some spice, some extra little coolness, to a feature, and I think that what you're going to see over the next decade is more integration into what it means for the product to actually work. And, I think that one of the great opportunities there is actually going to be the development of artificial emotional intelligence that allows products to have much more natural and much more fluid human interaction. We're beginning to see that in the Assistant now with speech recognition, speech synthesis, and understanding dialogues and exchanges.
I think this is still in its infancy. We're going to get to a point where the products we build interact with humans in ways most humans find natural. >> FEI-FEI LI: I spend a lot of time with high schoolers. I really believe in the future. We always talk about AI changing the world, and I always say the question is who is changing AI. And, to me, bringing more human mission and thinking into technology development and thought leadership is really important, not only for the future of our technology and the values we instill in it, but also in bringing a diverse group of students and future leaders into the development of AI. So, you know, at Google we all work a lot on this issue, and personally I'm very involved with AI4ALL, which is a non-profit that educates high schoolers around the country from diverse backgrounds, whether they're (?) or students of underrepresented minority groups, and we bring them onto university campuses and work with them on AI. >> DIANNE GREENE: And, at Google we're just completely committed to bringing all our best technologies to everybody in the world, and we're doing that through the Cloud. We're bringing these tools, these APIs, the training and the partnering and the processors, and we're pretty excited to see what all of you do. (Applause)

Design Actions for the Google Assistant

: Beyond Smart Speakers, to Phones and Smart Displays. Thank you for joining. Our session will begin soon.


(Realtime viewing screen) >> Hi. My husband and I have been in a somewhat long-distance relationship for a few years now. We talk on the phone and message a lot. But perhaps what is most enjoyable for me is that, when I visit him, I leave little notes all over his apartment. And, sometimes when I'm home alone, he'll start changing the color of the smart lights in my living room to let me know. Our human conversations take a lot of different forms. We don't just talk verbally; depending on where we are and what we're trying to say, we might write, hold hands, or even... It's human conversations like these that have inspired us to create conversations with technology. It's no surprise, then, that we're starting to incorporate

more modalities. But, with this increased number of devices and complexity of interactions, it might feel overwhelming to design for the Google Assistant. If you look at the Google Assistant today, I can talk to it not just through the speaker in my living room, but also through my car or my headphones. I can tap on my phone or my watch. But how do we design for this increased range? I'm Saba Zaidi, and I'm an interaction designer on the Google Assistant team. I will be talking about how you can design actions across surfaces, and I'll give you some frameworks and some tips that you can use in your designs. Before we get started talking about how to design across surfaces, we need a better sense of the experiences today. Let's start with a journey through a user's day. You wake up in the morning to the sound of an alarm ringing on your Google Home. Without even getting out of your blanket you can say, hey Google, stop. You get up, get ready, and as you're about to head out you want to make sure you don't need an umbrella, so you turn to the Smart Display in your hallway and you ask, hey Google, what is the weather today? You're able to hear a summary and also see the details on screen. As you're walking to your car, you take out your phone and tap on it to ask the Google Assistant to order your favorite from Starbucks. It is able to connect you to Starbucks. As you're driving, you want to listen to your favorite podcast or the news, and you can ask the Google Assistant for help in a hands-free way, and it can connect you, for example, to the NPR news update. You go about your day, and when you come back home it's time to make dinner for your family. You turn to the Smart Display in your kitchen and you ask, for example, Tasty for help with a recipe, like, in this case, pizza buns.
Tasty comes back, and you can hear and see on the screen step-by-step instructions. After dinner, it's time to unwind with your family in the living room. You decide to play a BuzzFeed personality quiz, and it's a great way for your family to get together around a shared activity. And then you head to bed. You say, hey Google, good night, and it's able to start your custom bedtime routine, which includes, for example, setting your alarms or telling you about tomorrow. As you can see, users were able to interact with the Google Assistant in ways similar to human interactions, in that there were many modes of interaction. And although there were a lot of different devices, there are some overarching principles. First, the experience was familiar. Whether it was morning, evening, or the commute, users can access their favorite Google Assistant actions whenever and wherever. The system was available in different contexts. You saw it being used on the go and at home, up close and from a distance, in private and shared settings. So, as you're thinking about your actions, think about all the different contexts they might be used in. And lastly, different devices lend themselves to different modes of interaction. Some are voice only, some are visual only, and some are a mix of both. And, we'll talk a bit more about the strengths and weaknesses of each of these. First, let's take a deeper look at a couple of those devices and see how these principles apply. You have already heard about Smart Displays. They were announced early this year, and even though it is a new device, users can expect it to feel familiar. It is essentially like a Google Home with a screen. It is designed to be used at home, from a distance, and as a shared device. And, as you can see in this recipe inspiration action example, even though there is a screen, users still interact with the device through voice. They don't have to tap through complex app navigation. Instead, the visuals are designed to be seen at a distance. And, of course, the user can walk up to the screen and touch it if they want. Next, let's take a quick look at phones.
One thing to note here is that we're making phones more visually assistive as well, similar to Smart Displays, allowing for a greater focus on the content. These devices, as you know, are great for use cases on the go, up close, and in a private

environment. And users can interact with the Google Assistant on the phone through both voice and touch. So, hopefully that gave you a better sense of what the experiences on the Google Assistant look like. Now, in order to design for so many devices, it helps to have a vocabulary to categorize them. At Google we use a design framework called the multimodal spectrum. It helps us categorize devices based on their interaction types. On one end you have voice-only devices, like the Google Home and other smart speakers. On the other end you have visual-only devices, like a phone or a Chromebook that is on mute, and most watches; you have to look at these devices to interact with them. And in the middle, you have what we call multimodal devices. Cars and Smart Displays, which rely primarily on voice but have optional visuals, are known as voice-forward devices. Phones and Chromebooks with the audio on, which can use a mix of both voice and visuals, are known as intermodal devices. So, now we have a vocabulary for categorizing these devices, but before we can start designing for them it helps to understand the strengths and weaknesses of each modality. Let's talk about voice first. Voice is great for natural input. We've been using it for millennia; whether you're a kid, a senior, or someone who is not tech savvy, it is still really intuitive. It is great for hands-free, far-field use cases, like setting a timer in the kitchen. And it helps reduce task navigation. So, for example, if you were out on a run, you could ask your favorite fitness action about your workouts, instead of having to pull out your phone, navigate to that app and search. Similarly, you could ask the Google Assistant to play the next song without having to mess with any controls. So, voice has a lot of benefits. It's great. But it does have some limitations. Think about the last time you were at a cafe. You probably walked past all the pastries, looked at the menu and made eye contact with the cashier.
Think about how difficult that interaction would be if it was just through voice. Have a listen to what the menu would sound like. >> Espresso, latte, vanilla latte, cappuccino, mocha, Americano, flat white, hot chocolate, black coffee and tea. >> SABA ZAIDI: That feels pretty overwhelming, right? It is like watching all the options go by and trying to catch the right one. Voice is ephemeral and linear, and that makes it very difficult to hold a lot of options in your head. By contrast, the menu is a lot easier to scan if it looks like this. You can imagine that the problem gets compounded if you also need to compare prices and calories. So, visuals are great for scanning and comparing. We also use them a lot to reference objects in the world. So, I can look at all the baked goods and then point to the one that I want, instead of, for example, having to hear or say out loud something like "a small sugar cookie with a chocolate drizzle." So, voice and visuals both have their benefits, and it often is useful to use both. In this example, we usually prefer to look at the menu but then talk to the cashier. Similar benefits to using both voice and visuals exist in the digital world as well. And that is what makes multimodal devices, like phones and Smart Displays, such a unique opportunity: by leveraging the best of both voice and visuals, they're able to provide really rich interactions. One thing to keep in mind, again, is to start with a human conversation. You might have an app already, but avoid the temptation to duplicate it. Instead, try to observe a relevant conversation in the real world, or role-play it with a colleague, and write down that dialogue. You'll realize that not everything that is in your app works well as conversation, or vice versa. Instead, think of your action as a companion to your app. I

won't go into detail on how to write good dialogue or create personas, but I highly recommend you check out our brand-new conversation design website at that link there. It goes into great detail on how to get started. So, we've learned that we need to create spoken dialogue and add visuals to it. Let's take a look at an example of how to do that and how that helps us. If you haven't already, I would encourage you to check out the Google I/O 2018 action to help you learn more about this event. We started by writing a spoken dialogue for a voice-only device like a Google Home, and it includes turns like this one: a user can say "browse sessions" and we respond with a spoken response like "here are some of the topics left for today," and so on. Now, in order to take this dialogue and scale it, we need to take every turn like this and think about all the ways we can incorporate visual components into it. This would include, for example, display prompts, cards, and suggestions. In our example, we can accompany that spoken prompt with a display prompt like "which topic are you interested in?" This helps carry the written conversation on a screen. We can add a list of sessions as a card, which a user could tap on, for example, and we could have a suggestion chip like "none of these." This helps a user know how to follow up or pivot. Once we've constructed our response to have spoken and visual elements, we can then map that response to the multimodal spectrum from earlier. So, depending on whether the device has visual or audio capabilities, or how important voice is, we can choose the right components. You already saw what our response would look like on a Google Home: we would simply have the spoken prompt. Let's take a look at what the response looks like on a Smart Display. A Smart Display is a voice-forward device, so we still need the spoken prompt, and it has to carry the whole conversation. We don't really need a display prompt anymore, especially if we're going to have richer visuals, like the list and the chips.
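The turn just described (a spoken prompt, a display prompt, a tappable list, and a suggestion chip) can be sketched as the raw response JSON an Actions on Google webhook returns. This is a hand-built sketch of that shape rather than the talk's actual code; in the actions-on-google Node.js client library the talk references, the `SimpleResponse`, `List`, and `Suggestions` classes produce this JSON for you. The session titles and option keys are made up.

```javascript
// Sketch of one conversation turn for a hypothetical "browse sessions" intent,
// written as raw Actions on Google response JSON.
function browseSessionsResponse() {
  return {
    expectUserResponse: true,
    richResponse: {
      items: [{
        simpleResponse: {
          // Spoken prompt: carries the conversation on voice-only devices.
          textToSpeech: 'Here are some of the topics left for today.',
          // Display prompt: carries the written conversation on a screen.
          displayText: 'Which topic are you interested in?',
        },
      }],
      // Suggestion chip: shows the user how to pivot the conversation.
      suggestions: [{ title: 'None of these' }],
    },
    // Tappable list of sessions (titles and keys are illustrative).
    systemIntent: {
      intent: 'actions.intent.OPTION',
      data: {
        '@type': 'type.googleapis.com/google.actions.v2.OptionValueSpec',
        listSelect: {
          items: [
            { optionInfo: { key: 'ml' }, title: 'Machine learning sessions' },
            { optionInfo: { key: 'web' }, title: 'Web sessions' },
          ],
        },
      },
    },
  };
}
```

On a voice-only surface the Assistant reads only `textToSpeech`; on a silent phone the visual parts carry the whole turn, which is exactly the mapping onto the multimodal spectrum described above.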
A phone, on the other hand, is an intermodal device, and we need both the voice and the visuals to carry the conversation. In this case, you might notice that we shortened the spoken prompt, because we can direct the user to look at the screen for more details. And Ulas will talk more about how you can do that. And, of course, the rest of the visual components stay. And finally, if your phone was on silent, we would simply drop the spoken prompt, and the visual components are able to carry the complete conversation. So now we've learned that in order to scale our dialogue, we need to write spoken prompts and add visuals to them. But how do we know what kinds of visuals to add? I would like to leave you with five tips for how you can incorporate visuals into your dialogue. For that, let's take this made-up assistive action called National Anthem Player on a Smart Display. As the name suggests, the user can ask for a country and it will come back with the national anthem for that country. When you invoke this action, it gives you a welcome message that sounds like this. (Audio clip: Welcome to National Anthem Player. I can play the national anthems from 20 different countries, including the United States, Canada, and the United Kingdom. Which would you like to hear?) >> SABA ZAIDI: As you can see, the device is currently writing on the screen exactly what it is saying out loud. This is a missed opportunity, especially since by now we've learned that visuals have some strengths over voice, and that Smart Displays are great for them. So, tip No. 1 is to consider cards rather than just display prompts. In this case, we've swapped out the words for a carousel.
Users can quickly browse through the list and select the country that they want. You'll notice this is kind of similar to the menu example we looked at in the cafe, where visuals are helping someone scan. Additionally, things like maps, charts, and images are also great on visual devices, because they're difficult to describe through voice, similar to the menu. Second, consider varying your spoken and your display prompts. This is particularly useful for intermodal devices that might have a display prompt next to a card, where some of that information might be redundant. So, in this case, we're stripping out the examples of the countries from the display prompt, because the card already shows them. Third, consider visuals for suggestions.

Here we know that the user is a repeat user, so we're reordering the list so that their most frequently visited countries show up first. We're also using suggestion chips to show the user how they can follow up or pivot the conversation. This kind of discovery can be quite powerful. Next, you can use visuals to increase your brand expression. We used to allow you to change your voice and choose a logo, but now we're also going to allow you to choose a font and a background image, and Ulas will talk more about how you do that. As you can see here, the experience looks a lot more custom and immersive. Lastly, visual devices are great for carrying conversations that started on a voice-only device. So, for example, if I use this National Anthem action on a Google Home and I wanted to see the lyrics, the action can send a notification to my phone, and I can take out my phone and read them there. So, hopefully those five tips will help you in incorporating more visuals into your actions. Let's summarize what we've learned so far on how to design actions for the Assistant. First, users interact with the Google Assistant in a variety of different ways and contexts. This could include at home or on the go, up close or at a distance. In order to design across so many modalities, it helps to keep in mind the multimodal spectrum and think of your responses as having spoken and visual components. And lastly, learn and leverage the strengths of each of these modalities. We learned, for example, that visuals are great for scanning, brand expression and discovery. So, instead of just showing on the screen what you're saying out loud, try to use cards instead. All right. Now I'm going to hand it over to my colleague Ulas, who is going to talk about how you can develop these actions. >> ULAS KIRAZCI: Thank you, Saba. As we said, the Assistant runs on many devices today, and in the future it will run on many others.
It will run well on all these devices today, as well as all the devices in the future. To walk you through this, I'm going to use a test action I created called California Surf Report, which gives wave height and weather information for beaches in California, for surfers. So, currently I only have spoken responses, no visuals yet. Let's hear what this sounds like. (Audio clip: Surf looks fair for most of today. Waves will be from two to three feet in the morning to three to four feet in the afternoon. Expect waist-high swell in the morning with northwest winds, shoulder-high surf in the afternoon.) >> ULAS KIRAZCI: Great. Pretty informative. Now let's take a look at what this sounds like and looks like on a voice-forward device, like a Smart Display. Switch to the demo, please. Okay, Google, talk to California Surf Report. (Google: Okay. Let's get the test version of California Surf Report. Welcome to California Surf Report.) >> Tell me the surf report for Santa Cruz. (Google: Surf in Santa Cruz beach looks fair for most of today. Waves will be from two to three feet in the morning to three to four feet in the afternoon.) >> ULAS KIRAZCI: I cut it short. As you can see, spoken responses are a good way to get started and work well on many devices, but when we have a screen we can make it a lot better. One of the best visuals we can add, and the easiest one to add, is a basic card. Here is an example from the Node.js client library of how to add a basic card to your response. We start with the spoken prompt, as usual, and the second ask statement adds a basic card. A basic card can have a title, subtitle, body text and an optional image. So, let's see what this looks like on our Smart Display again. Let's switch to the demo, please. Okay, Google. (Google: Surf in Santa Cruz beach looks fair for most of today. Waves will be from two to three feet in the morning to three to four feet in the afternoon.

Expect waist-high swell in the morning with northwest winds, shoulder-high surf in the afternoon with southwest winds.) >> ULAS KIRAZCI: Okay, great. Now it looks much better; we have a nice visual rather than just speech. And here are some other kinds of cards you can use. There is a carousel list card that allows you to display a set of things that the user can choose from. There is also a newly introduced table card. Another great way to add visuals to your action is to use suggestion chips. Suggestion chips let the user understand what they can do at this turn in the conversation, and they also simplify user input. You can learn more about responses at the link shown. And, by the way, all of this, like I promised, works equally well on an intermodal device, like a phone. As you can see, we formatted the font sizes and layout to fit the intermodal surface. Okay, great. So, next, maybe what we want to do is shorten the spoken response a bit, because it's a bit repetitive with the information that is already on the card; users can just look at the display for those details. So, how do we do this? We have a feature in the API called capabilities. Instead of thinking "if Google Home, do this; if Smart Display, do that," think about what capabilities the surface the user is interacting with you on has. Does it have a screen? Can it output audio? The capabilities of the device are reported to you in every webhook request, so you know what they are on every conversation turn. And here is a sample list of capabilities that we support; you can learn more at the link. So, in our use case, what we're looking for is the screen output capability. This indicates that the user's device has a screen, so we can show them a card. By the way, if you don't want your responses to differentiate between devices with displays and ones that are voice only, you can always add a card and we will strip it out for you silently.
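A minimal sketch of the two ideas above: branch on the `SCREEN_OUTPUT` capability reported with each request, and attach a `basicCard` (the JSON shape the client library's `BasicCard` class produces) when a screen is present. The copy, image URL, and function name are illustrative, not the talk's actual code.

```javascript
// Choose the spoken prompt based on the device's reported capabilities, and
// include a basic card only when a screen is available.
const SCREEN_OUTPUT = 'actions.capability.SCREEN_OUTPUT';

function surfReportResponse(deviceCapabilities) {
  if (!deviceCapabilities.includes(SCREEN_OUTPUT)) {
    // Voice-only device: the spoken prompt must carry the full report.
    return {
      speech: 'Surf in Santa Cruz beach looks fair for most of today, with two to ' +
              'three foot waves in the morning and three to four in the afternoon.',
    };
  }
  // Screen available: shorten the prompt and lead the user to the card.
  return {
    speech: 'Surf in Santa Cruz beach looks fair for most of today. Here is the report.',
    items: [{
      basicCard: {
        title: 'Santa Cruz beach',
        subtitle: 'Fair conditions',
        formattedText: 'Morning: 2-3 ft waves, northwest winds. Afternoon: 3-4 ft waves.',
        image: { url: 'https://example.com/surf.jpg', accessibilityText: 'Surf conditions' },
      },
    }],
  };
}
```

Note that, as the talk says, you could also skip the branch entirely and always attach the card: on voice-only surfaces the Assistant strips it out silently.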
This makes it easy for you to get started. And here is, again, a Node.js client library snippet that shows how to use this. In the first if statement, we determined that the user's device does not have a screen, so we put the full content in the spoken response. In the else clause, we know that there is a screen, so we shorten the spoken response and end it with a phrase like "here is the report" to lead the user to the screen, and then we append the basic card. So, let's see how this looks. Okay, Google, show me the shortened report. (Google: Surf in Santa Cruz beach looks fair for most of today with two to three foot waves in the morning and three to four foot waves in the afternoon. Here is the report.) >> ULAS KIRAZCI: So that sounds a lot better. Another way you can use capabilities is to require that your action only run on devices that have certain capabilities. This is what we call static capabilities, and you can configure these through the Actions on Google console, as you can see here. But only use this if your action absolutely makes no sense without that capability. So, for example, the National Anthem Player action that Saba talked about would not make sense on a device without audio, so that would be a good place to use it. However, the surf report action works equally well on voice-only and display-only devices, so it wouldn't be a good place to use this. You can configure all of this in the console. Another high-quality and easy way to target multiple surfaces is to use what we call helpers. So, I've been asking California Surf Report for the surf report with the beach name, but if I don't say the beach name, it doesn't tell me what I can say. It doesn't tell me which beaches this action actually supports. We can fix that with a helper called askWithCarousel. What askWithCarousel does is present the user with a list of options to pick from, and associate visuals with each item.
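Under the hood, the carousel helper described above boils down to asking the `actions.intent.OPTION` system intent with a `carouselSelect` payload; Google then matches the user's utterance against each item's synonyms. Sketched here as the raw JSON rather than the library call; the beach names, keys, and image URLs are made up.

```javascript
// Sketch of a carousel prompt: each item carries synonyms that Google matches
// the user's spoken reply against, plus a visual so the user knows what
// they're choosing.
function beachCarousel() {
  return {
    intent: 'actions.intent.OPTION',
    data: {
      '@type': 'type.googleapis.com/google.actions.v2.OptionValueSpec',
      carouselSelect: {
        items: [
          {
            optionInfo: { key: 'santa-cruz', synonyms: ['Santa Cruz', 'the boardwalk'] },
            title: 'Santa Cruz beach',
            image: { url: 'https://example.com/santa-cruz.jpg', accessibilityText: 'Santa Cruz beach' },
          },
          {
            optionInfo: { key: 'mavericks', synonyms: ['Mavericks', 'Half Moon Bay'] },
            title: 'Mavericks beach',
            image: { url: 'https://example.com/mavericks.jpg', accessibilityText: 'Mavericks beach' },
          },
        ],
      },
    },
  };
}
```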

In addition, when the user utters a query to select an item, Google does the matching of the query to the item, so we can deal with variations in how people say things. So, let's make our prompt better with the askWithCarousel helper. And again, here is the Node.js library snippet. We start with the spoken response with the prompt, and we add a carousel to it. A carousel is made up of items, and each item has a list of phrases that you think the user might say to match that item, plus visuals associated with each item so the user can understand what they're choosing. So, let's switch to the demo and see what this looks like. Okay, Google, show me the surf report. (Google: Which beach do you want to see the report for?) >> ULAS KIRAZCI: So this is the example where I'm a little confused as to what to say. Okay, Google, show me the beach list. (Google: Which beach would you like to know about?) >> ULAS KIRAZCI: Now it is great. It lets me know what the possible options are, and I can even tap on one if I want. And we continuously improve the experience with these helpers; that is one of the advantages of helpers, that we continue to refine them over time. Now, since we launched the card API last year, we've come out with Smart Displays. As you noticed, on Smart Displays each conversation turn takes up the entire screen. So, given this fact, maybe we can make our visuals more branded and give them a little bit more flair. So, we're introducing styling options this year. Here is how it works. Let's switch to the demo, please. Here is a new tab in the Actions on Google console called theme customization. You can modify the background color, the primary color (that's the font color of the text), the typography, and even set a background image. Let's say we want to make this cursive, and let's add a background image. What do we have here?
So, this is the landscape image, and then we want to add a portrait image as well. All right. Now all we have to do is save, and then we click test right here to update our test version. All right. Now let's see what this looks like on the demo. Okay, Google, show me the surf report. (Google: Surf in Santa Cruz beach looks fair for most of today with two to three foot waves in the morning and three to four foot waves in the afternoon.) >> ULAS KIRAZCI: I think now that looks really beautiful. Smart Displays are coming out later this year; however, you can start building your action against these visuals today using the updated simulator. We've added a new simulator device type for Smart Displays, as you can see here. And we've also added a display tab, which shows you the full-screen version of what you would get on a Smart Display. And on the left side, as usual, you have the spoken responses. One last thing. We said the Assistant is in many places. So, if the user is interacting with your action using a voice-only device, maybe they also have a device that has a display on it, for example, a phone. So, what if, in your current turn in the conversation, you really want to have your response display something? For example, in the surf report action, the user might

ask us for the full report, and we want to return the hour-by-hour wave height graph. So, how do we do that? There is a feature in the API called multi-surface conversations, and here is how it works. In each webhook request we report not only the capabilities of the device that the user is currently using, but also the union of the capabilities of all the devices the user owns. In this example, what you see is that the current user device only has the audio output capability and has no screen. But in available surfaces we can see the screen output capability, so the user seems to have another device with a screen on it. So, how do we use this? Again, in the client library, we have a function to help you inspect whether the user has a certain capability. Now we've determined that the user has a device with this capability; how do we transfer the user to the other device? We have a function, askForNewSurface, that does this. You can give it a notification that will appear on the target device, in addition to the list of capabilities that you require for continuing your conversation. I'm not going to demo this, but here is what it looks like. Let's say the user said "show me the full report" and they're talking to you on a Google Home. You would call the askForNewSurface function that I showed you earlier, and we ask the user for permission to send the conversation over to the user's phone. If the user accepts, then a notification is sent to the new device, the conversation ends on the current device, and when the user taps the notification, they resume the conversation from where they left off. Like this. Note that this is not just for single responses; this works equally well when you want to continue the conversation. We bring the full context over.
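The multi-surface decision just described can be sketched as a small function: consult the current device's capabilities and the union across the user's devices, and request the `actions.intent.NEW_SURFACE` transfer when only another device has a screen. The notification title and context text are illustrative, not the talk's actual code.

```javascript
// Decide where the hour-by-hour report can be shown: here, on another device
// the user owns, or nowhere.
const SCREEN_OUTPUT = 'actions.capability.SCREEN_OUTPUT';

function fullReportResponse(currentCapabilities, availableCapabilities) {
  if (currentCapabilities.includes(SCREEN_OUTPUT)) {
    // The current device has a screen: just show the report here.
    return { speech: 'Here is the full report.' };
  }
  if (availableCapabilities.includes(SCREEN_OUTPUT)) {
    // Another of the user's devices (e.g. a phone) can display it:
    // ask permission to move the conversation over.
    return {
      systemIntent: {
        intent: 'actions.intent.NEW_SURFACE',
        data: {
          '@type': 'type.googleapis.com/google.actions.v2.NewSurfaceValueSpec',
          capabilities: [SCREEN_OUTPUT],
          context: 'The hour-by-hour wave height graph needs a screen.',
          notificationTitle: 'Your full surf report',
        },
      },
    };
  }
  // No screen anywhere: fall back to speech only.
  return { speech: 'Sorry, the full report needs a screen.' };
}
```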
So, please use them. We make it so that we take your responses and optimize them as best as possible for all these surfaces, and for surfaces in the future, without extra work. And if you want to customize your responses, always think in terms of capabilities, not individual device types. This way we can run your action on new devices without any extra work from you. >> SABA ZAIDI: Thank you. I would just like to end with an invitation for you. Next time you order coffee at a cafe or give a presentation like this one, start to notice all the different modes of interaction you use every day, and let the richness of those human conversations inspire how you design actions for users, and help us evolve what it means to have conversations with technology. Here are some links to the resources we mentioned, and how to give feedback. Also, come talk to us. We will be with our team at the Assistant office hours and Sandboxes, ready to answer your questions and show off some of the devices. >> (Applause)


Introducing AIY: Do-it-yourself Artificial Intelligence

>> At this time, please find your seat. Our session will begin soon

>> You can see, right?

>> Good afternoon, everyone. Welcome to the AIY session. How is everybody enjoying I/O so far? Everybody had a good lunch today? Yes? Okay. As you can see on the screen, the name of the session is Introduction to AIY. What is AIY? AIY is a line of products Google makes for developers like you, makers all around the world, to make applications for Artificial Intelligence by yourself. That's the name AIY, and where it came from

Today joining me are my colleagues in Developer Relations. My name is Bill Luan; Sebastian is joining me, and Dushyantsinh is joining me. Before we begin, I will introduce my colleagues. >> DUSHYANTSINH JADEJA: Hello, everybody. My name is Dushyantsinh. I am excited to show demos of AIY. Before we start, I would like Sebastian to give you a quick overview of what AIY is, and to set the context for the talk. >> SEBASTIAN TRZCINSKI-CLEMENT: Thank you. I'm Sebastian, with this very complicated last name. I joined Google recently, just about 11 years ago. With my teams, what we do around the world, especially in major markets, is help developers and startups build better mobile and web applications. Today, I want to talk about AIY. Before that, let's talk about DIY, do-it-yourself. Do you know what this is? It is not a UFO. It is actually a musical instrument which makes nice sounds. But no, it was not invented in prehistoric times; it was actually created in the year 2000 in Switzerland. Switzerland is the country where I am officially based, although I am often on one of the other UFOs, planes, because I travel quite a bit to see my team. This is also a window into DIY culture. I will take you through a brief history of humankind's appetite for DIY. If you have read the book "Sapiens: A Brief History of Humankind," you will know what I am talking about. Our human ancestors were building tools, weapons, and instruments using stone and wood. If we skip a few generations of humans, we're still using wood to create things. This is one of my brothers, Matthias, and my sister Lidia, 30 years ago. I'm sure you and your siblings also tinkered with stuff, whatever you could lay your hands on. Let's fast-forward to the electronics age. Do you remember, in the '90s, desktops (not even laptops, desktops) were so expensive. Show of hands, who built computers? Am I the only one who is old enough for this?
In this animated image, you can see how I used an overhead projector and a dismantled LCD screen to build a cheap homemade projector. My mom was annoyed that everything was all over the living room. She said, go outside, get some fresh air. So that is me outside, reusing garbage bags, $1 for a pack, to build a balloon, almost strong enough to lift me up. My other brother, Lauren, started tinkering with 3D printers. That's my other brother on the right; on the left is a photo in a museum. As I said, DIY is in our DNA from early on. It is also in Google's DNA. Four years ago, you may remember, an engineer from the Paris office (I am actually French) came here to I/O to show the VR platform he had built using cardboard during his 20% innovation time. This became the Daydream VR platform. Here's what's interesting. On one hand, it is objects; on the other, culture, this DIY and hacker culture. You may have makerspaces around

you. If there are none in your city, then you should start a makerspace. The great thing about the maker culture is that you learn through doing with your hands. You make mistakes; so what! It doesn't matter. You figure out new applications that you can create with that technology. It doesn't matter what your background is. In fact, it is better when you can mix and match different domains to find new applications of the technology. Let's take a look at what has happened in the past few years. You now have microcontrollers and small computers like the Raspberry Pi. And did you know that costs have decreased for lithium batteries? And there is the cloud, and Artificial Intelligence. All of this sounds complicated. It doesn't have to be. This is where AIY comes in: do-it-yourself Artificial Intelligence, which we introduced last year, so you can have fun. The idea of this maker culture is, before anything else, to have fun, using, in this case, Artificial Intelligence. We want to put AI in your hands so you can solve real problems. The AIY kits are open source, with hardware and software integrated directly on the device itself. The essence, as I mentioned previously, is to keep it low cost, using readily available components you can find almost anywhere on the planet. The kits are easy to assemble. Dushyantsinh, how long does it take to assemble? >> DUSHYANTSINH JADEJA: 15 to 20 minutes. >> SEBASTIAN TRZCINSKI-CLEMENT: I will say not more than an hour to assemble a kit. Dushyantsinh and Bill will demo the kits, the voice and vision kits, so stay with us. And in the coming months there will be more to announce in that space. Let's take a look, and I will finish up with this, at what some of you have created. I will show you, in a few seconds, how a 16-year-old with no programming experience came up with the following use case. Play the video for about 20 seconds. >> Shop for New York Yankees baseball hats. What is actually happening now? >> Done. >> She said done? >> Yeah.
What happened is, when it said done, it sent links to my e-mail after finding the item on eBay. It will find results online and send links in an e-mail to me, so I can quickly click and buy something. >> SEBASTIAN TRZCINSKI-CLEMENT: What happened here with the high school kid was shopping for a product. I don't know if you heard that at the beginning: I'm shopping for baseball caps. Searching on eBay, and e-mailed when the device was done. Let me take you through the code. Just a few lines of code; not a hundred or a thousand lines, just 20 or 30. I want to show you that even if you are not an experienced programmer, you can learn from this. I will keep it simple. The first line you can see here: this action is triggered when the user presses the button on top of the device. Let's examine what happens next. We take the words and store them in the local variable called keyword, at the bottom. What happens next? We import a range of libraries. You can be like me: you don't have to understand the libraries. Copy and paste them, it works, and you try to understand afterward what library does what. It is learning by example. When I said learning by doing, this is what I mean. Copy and paste the libraries you need, the few lines here. After that, we call the eBay API; this is the line in the middle. With that line, we get results back from eBay. We then read the results, as you see at the bottom. Finally, you may remember that in the video all the results are sent by e-mail. Here's the code. We go through the results, the first three lines of code up there, and format all of this into an e-mail. You can recognize, even if you are not an experienced programmer, the from, to, and subject line. That's it, nothing more. The last thing we do is connect to the e-mail server, send the e-mail, and get the device to say "done." That's it, nothing more. You can literally copy and paste that code.
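The flow just walked through can be condensed into a short sketch. The real demo is 20 to 30 lines of Python on the voice kit; this JavaScript version only mirrors its steps, with `searchEbay` and `sendEmail` as hypothetical stand-ins for the eBay API call and the SMTP send.

```javascript
// Condensed sketch of the voice-shopping demo: button press -> recognized
// words -> eBay search -> e-mail of result links -> say "done".
// searchEbay and sendEmail are injected stand-ins, not real APIs.
function shoppingFlow(spokenWords, searchEbay, sendEmail) {
  // 1. The recognized words arrive once the user presses the button and speaks.
  const keyword = spokenWords.replace(/^shop for\s+/i, '').trim();

  // 2. Call the shopping API with the keyword (the line "in the middle").
  const results = searchEbay(keyword);

  // 3. Format each result as a link line and assemble the e-mail
  //    (from, to, subject, as in the talk).
  const body = results.map((r) => `${r.title}: ${r.url}`).join('\n');
  sendEmail({
    to: 'me@example.com',
    subject: `eBay results for ${keyword}`,
    body,
  });

  // 4. Finally, the device says "done".
  return 'done';
}
```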
That code will work. It is your turn to do things with AIY. If you are not inspired enough, I will ask Dushyantsinh now to come on stage and do a demo with the voice kit. Thank you for listening. Welcome back on stage, Dushyantsinh. (Applause) >> DUSHYANTSINH JADEJA: Thank you, Sebastian

My name is Dushyantsinh Jadeja, from the developer relations team at Google. Like everybody, we are all excited about the possibilities of AI in front of us, correct? This only becomes more meaningful and more impactful when all of us have access to this wonderful platform. At Google, we have been sharing our work and advancement in the field of Artificial Intelligence by building products and sharing research papers, and now, with the AIY kits, we are taking it to even more people in a more playful way. Yet it is powerful enough for you to solve some really good problems. Let's see how we can solve some of those things. When we were thinking about this kit, we were asking, you know, what makes sense? What would be the right way of putting this in front of people? And of course, voice was one of the things that was very evident. Right? We all like talking. Imagine we have the power of playing around with voice; how much fun it would be. It was mentioned in the keynote yesterday, WaveNet: you can create different voices. So in this demo, I'm just going to talk about the capabilities of the AIY kit and the Google Assistant, but of course, there is much more than this. The voice kit was launched sometime last year, I believe. It got a lot of attention. The voice kit is a combination of some software and some hardware. It was marketed along with the (?) Foundation to see what feedback we could get from the existing community and try to improve our offering. We launched the second version of the voice kit in January, incorporating the feedback and making it an all-in-one-box component kit. This is what you get today: if you go to the AIY website and look at the voice kit, this is what you will probably find there. But what does the kit contain?
From a hardware point of view, it has, you know, a Raspberry Pi Zero, which does the processing for you. The voice bonnet has a mic, a microcontroller, and general-purpose input/output, to basically give you a visual indication of what is happening with your voice kit at any point in time. And from a software point of view, it basically runs on Raspbian Linux. This is just scratching the surface of the voice kit. How do you get started? Step one, assemble the kit. Step two, set up the device. Step three, now that you have set up everything, you can build a solution by yourself. So if you open the kit, this is what it looks like. First, ensure you have all the right components: the cardboard and all the different hardware mentioned on the website. Just verify that you have all of the components ready with you before you start assembling. Once you verify, the next step is to build the engine; think of it as building a car. Building the engine means putting the Raspberry Pi and the voice bonnet together. Then you build the chassis, you know, a structure for your car, or in this case for the voice kit: you take the cardboard and give it the structure of a speaker. Once you build the cardboard, you put the engine inside it: the Raspberry Pi and the voice bonnet. That's it. Super easy, super fun. For me, when I tried it the first time, it took somewhere around 15 to 20 minutes; I'm sure you are much, much more advanced than me, and you can probably do it even faster. The key is not to be fast, but to do it in a nice way and have fun with it. Yeah, that's it. You can assemble your voice kit. Step two, set up your device. Now that I have the device, and the voice kit is in a nice shape and form,

let me give a voice to it and see if it is working fine. I have an already built voice kit here. I will switch to my voice kit and see if everything is working fine there. Can we switch to the demo? Give it a second. Oh, sorry for the glitch. I think I am booting up my machine here. Yes, and it's live. So before you start, you want to ensure that you have everything running for you. Things like, let's say, being connected to the network. This could be a wired network, a wireless network, or tethering to your phone. The voice kit needs to be connected to the Internet; it is using the Google Assistant and Speech APIs for recognition purposes. So let's see. This is my Wi-Fi, but I'm not using Wi-Fi. Let me see if I am online. Good, it looks like I'm all set from an Internet point of view. I will see if the speakers, if the audio, is working fine. The kit audio is working fine. >> Front, center. >> SEBASTIAN TRZCINSKI-CLEMENT: Seems okay. >> DUSHYANTSINH JADEJA: Let me see if it recognizes my voice. Testing, one, two, three. >> Testing, one, two, three. >> DUSHYANTSINH JADEJA: Seems to be okay. I am almost set. The good thing is, the voice kit comes with some prebuilt demos as well. It is good to see if those are also working fine before you start building yourself. So go to the Pi directory and look at the voice examples, which are here, and I'm just going to try this library demo. Ok Google, what's the time right now? Okay. Let me try one more time. Or let me try with this one. Hey Google, what's the time right now? >> SEBASTIAN TRZCINSKI-CLEMENT: Just a bit shy. >> DUSHYANTSINH JADEJA: Okay. We'll try one more time; otherwise, I'll walk you through what it does. Whenever I say something, okay. Hey Google, what's the time right now? >> GOOGLE: It's 1:50. >> DUSHYANTSINH JADEJA: Thank you. So you can try a bunch of things.
What I was doing, I was just testing if things are working fine. Now, of course, at this point, you can ask a bunch of other questions, like, you know, hey Google, what's the distance to the moon? Okay. But at this point in time, what it would have done is

it would have, you know, looked at some of the services available online, tried to fetch an answer for you, and displayed it to you. Things seem to be okay. Let me see if it can build something. I will go back to the presentation. Can we go back to the slides? Yeah, it is always good to see if the device is working fine before you start building by yourself. I have built a typical shopping experience. Hopefully it should work fine, but let's see, given my setup was not optimal. But let's give it one more shot. Can we switch to the demo? The keynote was showing, mostly, you giving a command to Google Assistant saying shop for something. It was trying to identify the text, calling the API from eBay, I believe, searching the results, and sending them as an e-mail. I tried to mimic similar behavior. Let's see if the demo gods are with me. I will ask it to shop for something. Mother's Day is coming up, I won't be in town, and I want to send something to my mom. I was thinking, what is an interesting thing? Smartwatches are a big thing. I thought maybe I will give her a smartwatch. Let's see if I can find a smartwatch using my voice kit. Hey Google, shop for smartwatch. Cool. >> GOOGLE: Done. >> DUSHYANTSINH JADEJA: If you look at what it did, it recognized my voice, or my command, connected to the particular service, and displayed some of the smartwatches available for me to purchase. At this point, I could take this output and send an e-mail, like what Sebastian mentioned, or if I had a smart display device, I could basically, you know, post it there, or give a nice visual list: okay, these are some of the things. Or engage more in relation to that. Can we move back to the slides?
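The shopping demo above boils down to recognizing an utterance and routing a "shop for ..." command to a service. As a minimal Python sketch of just the command-parsing step (this is illustrative logic, not the AIY or Assistant API; the function name and hotword handling are assumptions):

```python
def parse_shop_command(transcript):
    """Pull the product query out of a recognized 'shop for ...' utterance.

    Returns None when the utterance is not a shopping command.
    """
    # Normalize: lowercase and strip trailing punctuation from each word.
    words = [w.strip(",.?!") for w in transcript.lower().split()]
    # Drop an optional hotword prefix such as "hey google" / "ok google".
    if words[:2] in (["hey", "google"], ["ok", "google"]):
        words = words[2:]
    if words[:2] == ["shop", "for"]:
        return " ".join(words[2:]) or None
    return None
```

With a query in hand, the rest of the demo is an ordinary HTTP call to the shopping API and formatting of the results.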
What I was trying to emphasize is that there are a lot of things possible with voice that you can play around with. You can also not just look at the software as one side of it, but also experiment with some things on the hardware. Let's say, we have seen people taking old toys, putting in the voice kits or other AIY kits, and reviving them, bringing the toys to life. We have seen people customize voice kits; there are people building gimmicks, giving a voice to a Roomba vacuum, telling it don't go to this side of the house, but go to a different section. Or completely custom voice actions. Tons of things are possible. If you want to know more, you can follow us on Instagram or Twitter, or follow the discussion on Reddit. With that, I will invite Bill to tell you more about what is happening with the vision kit. (Applause) >> BILL: Thank you. I will continue with the second part, which is the vision kit. The vision kit is a new product. It was released at the end of last year, and we had a new version

update in the first quarter of this year. The latest version is 1.1. It launched in December of last year. It has the latest Raspberry Pi Zero, and it is on the board; you don't have to solder it yourself like with the early version. It is much easier. It comes with the Raspberry Pi camera version 2 and the Google-made vision bonnet. That is where it is all put together in the product. One of the unique things about this particular product is that it doesn't require you to connect to the Internet. You can work with the vision recognition software in the box, by itself, alone. Before we start, let's go through the list of materials, like Dushyantsinh did with the voice kit. It is a similar thing. It has the Raspberry Pi Zero. And Google has this vision bonnet; it contains the Intel vision recognition processor, which has the power to help you do vision recognition. In addition to that, there is the camera support, the connection to the Raspberry Pi cable connector, and the general-purpose input/output, GPIO, connector to allow you to do more things, which I will cover shortly. Also, it has a crypto chip to help you encrypt and add more security in terms of the application. It also has the button, and the cardboard form to allow you to fold the device. On the software side, the operating system is the same: it runs on the Raspbian Linux system. It includes TensorFlow, with the Inception and MobileNet models. It is on the device, which allows you to build AI models to work with the device and do applications in AI. It has built-in software for facial recognition and general object recognition. All of the software is on the device, ready to use. In terms of its components, let me get into more detail, because this is a relatively new product.
On the hardware level, at the bottom is the Raspberry Pi Zero. As I said, it has built-in Wi-Fi and Bluetooth support, so you can connect to the Internet if you want, without a cable, over Wi-Fi. It has a GPIO connection which allows it to connect directly, with the flex cable, to the Google vision bonnet board. That connects to the camera and the accessories, like the LED, push button, and buzzer. There is a piezo buzzer on the device, so as your application requires, you can have it generate sound. On the software side, it runs on the same Raspbian Linux system. It has the Python interpreter on top of that. The software Google put together contains three different modules, if you will, in terms of the vision process: the TensorFlow module, which does the Inception processing, then the facial recognition software, and then the object recognition software. With the software and hardware, you have the application interface with the components. This is how everything is put together. Okay? Then let me go through the same process Dushyantsinh mentioned with the voice kit. First you assemble it. In terms of assembly, it is very similar. You have the box; it has parts, cardboard, whatever. As you can see on the screen, in terms of the build process, it is easy. You hook up the Raspberry Pi with the vision bonnet cable, hook them together, stick them together, fold the cardboard box, almost like the engine in a car, hook up the button, and this is it. The first time I built it, it took 40-something minutes. The second time, only 20 minutes. Very simple. I have a product for myself. I made a video tutorial on YouTube for how to assemble it. For those who don't want to read the instructions, you can watch

the video. But it is very simple, very easy. Okay? So in less than an hour you can assemble a device like that. Okay. In terms of setup, this is very simple. As I said, this does not necessarily need to hook up to the Internet, so all you need is a power supply. Have a power supply connected to it, and you are ready to go. And because the facial recognition software is built into the device, after you power it up, it will automatically run the facial detection software. Right now, on the desk, I have this vision kit with me. As you can see, the top LED, the button, is lit up blue. If I point this at my face, as you can see, the blue light indicates the camera is capturing my face. If I show the camera a smiley face, a happy face, the color of the LED will change to yellow. And if I am showing a frowning face, it will go back to blue. Let me demonstrate this. Can everybody see? It changes the color. Let me say, this is not some kind of magic. It is AI, and it is working. What happens is, there is software inside, with an AI TensorFlow model for facial recognition that I will get to in a minute, that makes facial recognition work just like that. It doesn't need much of a setup. Just power it on, hook up the power, and you can do facial recognition. This demo is called joy detection. It is part of a product we ship with the device. Setup is easy; now, how do you make your own solutions? There is a lot you can do with the software. Let me do another demo. Please switch the screen to the output of my other device. Do we have output? We lost the signal. All right. I apologize for the connection glitch. As you can see now, I am running the Raspbian software system; there is a window showing the software. Let me start with a simple demo, which is object classification.
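The joy-detector behavior described above, an LED that turns yellow for a smile and blue otherwise, can be sketched as a tiny per-frame loop. This is illustrative logic only, not the AIY Python API; the class name, threshold, and smoothing factor are assumptions:

```python
class JoyIndicator:
    """Toy sketch of the joy-detector loop: smooth per-frame 'joy' scores
    (0.0 to 1.0) from the face model and pick an LED color."""

    def __init__(self, threshold=0.5, smoothing=0.3):
        self.threshold = threshold
        self.smoothing = smoothing
        self.score = 0.0

    def update(self, frame_score):
        # An exponential moving average keeps the LED from flickering
        # when a single frame is misclassified.
        self.score = (1 - self.smoothing) * self.score + self.smoothing * frame_score
        return "yellow" if self.score > self.threshold else "blue"
```

In the real kit, the color returned here would drive the button's RGB LED each time the model reports a new face score.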
What you can see on the right-hand side is the video image captured by the camera. And on the table, I have an apple, a banana, and a Coke can, random objects I put together. You can see on the left side of the screen the output streaming out, printed on the screen by the object classification software that we put on the device. I'm going to explain all of that in a minute. First let me do a demo. I will point this at the banana. You should see on the left-hand side of the screen that it detects this object, a banana, I hope. Right? Now, if I point this at the apple, not only will it recognize it's an apple; in fact, it should say Granny Smith. It actually recognizes the type of the apple. And I tried it earlier: I pointed it toward the Coca-Cola can, and it said punching bag. It is the red color; it looks like that. Let me stop the application and make this window bigger. As you can see on the left side, the first item is the object recognized; it says banana. The number after that is the confidence score: how confident the AI thinks this is a banana, or similarly an apple, which is a Granny Smith apple. This confidence level tells you, for the recognition part, whether it is

truly the object it detects. These numbers, this feedback from the software, you can use to make a lot of applications. Okay? Let's switch back to the slides. In this image on the screen, you can see something very similar: it recognized the object along with a confidence score. You can use this number to help you design your applications. So exactly what can you do with a vision kit? Number one, you can see it already does object detection. And it has facial detection; it is all software built into it, so through the APIs you can leverage these. It takes a photo and sends the output. It has the ability to tell the difference between an apple, a banana, and a bunch of other stuff. Most importantly, as demonstrated, you can run your own AI machine learning software on this device by building your own customized TensorFlow model. Some may think, how do I do that? There is a bit of confusion with the TensorFlow model; it is powerful stuff, but how do you do it? Let me introduce to you the process of building your own customized model. First of all, number one, every machine learning model you build in TensorFlow, you build to train. Specifying and training your own model is the first step. Second, just like with any other TensorFlow model, when you get the end result, you export the model to a so-called frozen graph. That is TensorFlow's own format, and it is not understood by the vision bonnet hardware. Okay?
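The confidence scores described above lend themselves to simple thresholding when you build an application on top of the classifier. A minimal Python sketch (illustrative only; the function name, the (label, confidence) pair format, and the threshold are assumptions, not the kit's API):

```python
def best_label(predictions, min_confidence=0.6):
    """predictions: (label, confidence) pairs like those printed by the
    classifier demo, e.g. [("banana", 0.97), ("plantain", 0.02)].

    Returns the top label if it clears the confidence bar, else None,
    so the application can ignore uncertain frames.
    """
    label, confidence = max(predictions, key=lambda p: p[1])
    return label if confidence >= min_confidence else None
```

An application would call this per frame and only act (light an LED, sound the buzzer) when a confident label comes back.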
So somehow we need to make a match. The next step is the so-called compiler. Google provides this model compiler. You take the frozen graph binary, run it through the compiler, and you generate your own customized compute graph. The artifact, the end result of going through the three steps, is a binary file. That file defines your machine learning model. On the hardware side, we have the Raspberry Pi and the vision bonnet. In the vision program, you use the API to send the graph to the vision bonnet. Next step, write code; you can look at our tutorials and example models. Coding a model function basically sends this compute graph into the vision bonnet. At that point, the compute graph and the vision bonnet match; they understand what is going on. Then you have the camera coming in as a stream of inputs. The inputs come in as binary numbers. Because of the compute graph, we arrange these inputs into whatever the machine learning model expects, which is basically a bunch of multidimensional arrays. So the next step is to match this tensor, a bunch of arrays, to the image with your model. This is what we call the computing step. Basically, you are writing a program; you have this vision bonnet hardware, it does the calculation, and it will send the signal to the output. Here's the code. How do you define your face object, as you can see? The face object is defined with the bounding box and the score. You write the argument code. You send it as a structure and say, in my model, I need these four. These four numbers match the machine. And in the application code, you say, let me get results from this tensor: where is the score, the face score, whatever. You take that as your end result. You can make a decision; you can say, if the score is above some number, change my light color. This is exactly what you do, in terms of using your own model with the vision kit. All right. So with that, let's talk about extending your project.
We talked about software; how about hardware? On the AIY board, there is an area with additional pins. With the pins you can control outputs: turn on lights, turn on fans, make something that makes noise, whatever you can do. With these things you can expand your project, and with the controller on board, you can do those things. So hardware-wise, we can do those things. We went through these: how to assemble, set up, and do

your own things. Most importantly, I want to say that with AIY, the power really is being connected to many of the services Google provides, like Google Assistant and TensorFlow; you leverage the power of the services behind you. AIY is part of the global open source community. We publish on GitHub, work with Raspberry Pi and others; immerse yourself in online communities around the world. This is part of the fun of being makers. I want to take this opportunity to tell everybody in the audience, those of you from the U.S., you can join a contest going on right now, organized by Hackster with the administration of China, the Young U.S.-China Makers Contest. The winners get an all-expenses-paid trip to China. Those of you in the room who are American makers, please join this contest. More details are on the Hackster website. All right. The key takeaways from our talk today, with what Sebastian and Dushyantsinh mentioned in their parts of the talk and mine: resource-wise, it is this address. The kits can be purchased at Target stores in the U.S. And also, next weekend in the Bay Area, there is a Maker Faire. Those of you in the area, join the Maker Faire; you can get the kits there. Learning to use Raspberry Pi and Python programming is part of what you need to get yourself involved to expand your applications. And use the TensorFlow learning models to bring the power of AI into the application. That is the power of AIY. Finally, to summarize, I will say a call to action, things you can do, that everyone can do. Get a kit and start building, having fun, of course, and learning many Google services. At Google I/O, we have sessions on Google Cloud, Google Assistant, and TensorFlow; learn that knowledge and those skills. Get different sensors and controls, and hook them up to the ports on the devices.
You can build a lot of applications, leveraging the AI capabilities. Lastly, join online communities. We will develop a global exchange, and you can put your device on that. So with that, along with Sebastian and Dushyantsinh, we want to thank you for coming and joining our session today. Start your AIY journey today. Thank you very much. (Applause) (Session concluded) >> Thank you for joining this session. Brand ambassadors will assist with directing you through the designated exits. We'll be making room for those who registered for the next session. If you registered for the next session in this room, we ask that you exit and return to the lines outside. Thank you. Android Jetpack: What's new in Android Support Library

>> Good afternoon, everyone

Thank you for joining us at Android I/O

You are looking at What's New in the Android Support Library. If you were looking for sandwiches, you didn't go far enough. And everyone is still here. I'm Alan Viverette. >> AURIMAS LIUTIKAS: I'm Aurimas. >> KATHY KAM: And I'm Kathy Kam. >> ALAN VIVERETTE: We'll talk about what you can look forward to in the future. On this beautiful day, we will talk about spring cleaning, and about the technical debt building up in the Support Library. We're on Twitter, Stack Overflow, and Reddit, maybe on Reddit a little too much. We noticed basic aspects of the Support Library that were a little bit messy. So we reached

out, spoke with developers, and tried to find ways to improve the basics of the Support Library and lay a strong foundation for future work. We got a lot of good feedback. Some high level, some very specific; we know about the issues with showing and hiding the IME. We drilled down to the basics. We got feedback on the maven packaging, the artifact package names. The Java packages have become confusing as the Support Library has aged. In general, we built up a lot of technical debt. So like last year, we will talk about what is old in the Android Support Library first. We started in 2011 with support-v4. We provided backward support to SDK 4, the first version. We grew. We had watch components, car components, testing, for SDK 13, 4, 11. We have a lot more than just backwards compatibility now. But we still have the artifact names. So we have support-v4; everyone is familiar with that. V13? Who knows what is in support-v13? Literally less than 10 people. Aurimas knows. Support-v13 at this moment contains nothing, because the minimum SDK for everything is 14. It redirects to v4, and that redirects to other components. These are umbrella artifacts on maven. Why do we have all the weird versioning names in the maven artifacts and package names? They're kind of hard to change, but they are confusing. It is not a great place to start if you are new to Android development. We also have a lot of versions of all of the libraries. Recommendation is something that was added in 2014 and hasn't changed. We have 30 versions of basically the exact same library. What does the versioning scheme mean here? Well, 24 means it was released when SDK 24 came out. We had an alpha one corresponding with the first public release, alpha 2, beta 1, DP3. It doesn't make sense for these to correspond to just the I/O releases. Wouldn't it be great to have alpha and beta releases for each, and wouldn't it be great to not have to do that for every library, even if it didn't change?
You may have seen a 24.0.0 that didn't have beta testing. Maybe you found bugs in it that should have been caught in alpha. We have been better with testing; wouldn't it be great if the dot-zero releases weren't still alpha quality? Today we will focus on fixing how we structure our libraries, how we handle changes to them, and how we ship them to developers. We use this to form the foundation for Jetpack, and we're calling it the Android extension libraries, or AndroidX for short. Welcome to What's New in AndroidX. We'll talk about foundational changes, new features, and what to expect from us in the future. First, I would like to talk about the relationship between Jetpack, which everyone may remember from the keynote, and AndroidX. So Jetpack is a set of guidance, recommended libraries, and tools, and it is going to teach you how to create good apps. This may include libraries that are in AndroidX. This may eventually include libraries not in AndroidX. As guidance changes and evolves, you may see some things in AndroidX deprecated and no longer part of the Jetpack recommendations. Also, it has this cute logo. AndroidX, on the other hand, is the libraries themselves. These are guarantees about versioning, API, and dependency structure; we do not have a cute logo. Let's dive into details. What is going to be changing? We'll have logical, smaller, more scoped artifacts. If you are looking for view pager, it will be in the view pager artifact rather than support-v4. You may remember the split last year, where we split into core UI and a number of other artifacts. We have done that again this year; we have smaller artifacts. If you need view pager, just pull in view pager, not a bunch of other widgets you may not need. This is nonbreaking, like the split we did last year.

If you pull in support-v4, you get the core and view libraries, and with UI you get everything else. But you have the option of pulling in exactly what you need. Here's a split of the libraries; this is just a sample. You see that support-compat is broken down into sections. This is a pure Java library; it is a JAR instead of an AAR. No resources, and you can use it with host tests, because it doesn't have any dependencies on the Android package. Core is the backwards compatibility that you are used to from support-v4. You will see less of the compat moniker in the future, as AndroidX is becoming the primary development surface for a lot of framework APIs. So here you can see, if you need swipe refresh layout, you pull in exactly that; you don't get anything you didn't need. We also moved to versioning that makes more sense. Instead of monolithic releases tied to Google I/O, we will reset from 28 to 1.0, and the major version number now actually means something. Previously, we would break binary compatibility on any minor version bump. If you are using a library that depends on a specific version of the Support Library, this can be really problematic. You may not find out until run time that some method signature that a library depends on has changed. We move to strict semantic versioning, which means you can expect the version number to indicate binary compatibility. Anything with a 1.4 dependency on a library, for example, would be compatible with 1.5, up to 2.0. Instead of a monolithic release, if we have a bug fix for RecyclerView, you only pull in one new artifact, and if you don't need it, you don't have to pull it in. It will be low effort on the part of the developer. All right.
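The strict-semver rule described above (same major version, equal or higher minor) can be expressed as a tiny check. This is an illustrative sketch of the rule, not a Google tool; the function name and version-string format are assumptions:

```python
def binary_compatible(required, available):
    """Strict semantic versioning: 'available' can substitute for
    'required' when the majors match and available's minor is >= required's.
    E.g. a 1.4 dependency is satisfied by 1.5 but not by 2.0 or 1.3.
    """
    req_major, req_minor = (int(x) for x in required.split(".")[:2])
    av_major, av_minor = (int(x) for x in available.split(".")[:2])
    return av_major == req_major and av_minor >= req_minor
```

This is the guarantee a build tool can rely on when it resolves two libraries that declare different AndroidX versions.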
We want to make it easy to know what is inside each artifact. As I mentioned, the maven artifacts are finer scoped; they correspond to features rather than broad swaths of, for example, all of support-v4. We have a consistent scheme of androidx.feature, with package and class named according to layer and functionality. The maven naming scheme reflects this: the group ID corresponds to the Java package. If there is a subfeature, for example recycler view selection, there is recyclerview, colon, recyclerview, dash, selection. We removed all the v7 and v4 suffixes that referred to explicit backwards compatibility or explicit SDK versions, and we make heavy use of the @RequiresApi annotation. There may be a method that returns a new object; you may see something that returns a compat object and something that returns the actual object, and you can call the latter if you are on a newer platform. All right. So let's dive into an example of that. Here's an example of some libraries that you may already be using, unifying on the androidx top-level package. Everything that was in support is in the Android extension library. android.arch.persistence.room is just androidx.room. When you are looking for Room, you can find it quickly. Digging down on support-compat and card view specifically: BuildCompat moved from an explicit v4 support package to androidx.core.os.BuildCompat. In the future you will see less of the compat suffix on classes. CardView v7 is now cardview.widget, and you will see other supporting classes in cardview.util, et cetera. So hopefully this isn't too shocking. I think this is a very long-awaited refactoring we have been wanting to do for a long time. You might wonder, how do we get there? I will hand over to Aurimas; he'll walk you through what it looks like to migrate your application. >> AURIMAS LIUTIKAS: Hello there. Thanks, Alan.
I will walk you through the migration story and how to get to AndroidX library usage. First things first: if you are using Android Studio, we will provide an automated tool for migrating over. This tool is available starting with 3.2, canary 14, which shipped

yesterday. This automated tool will be in the existing refactor menu that you probably, hopefully, love. What we added is a new option called Refactor to AndroidX. This single click will go and identify all the usages of the old Android Support Library classes and pull them up in the review pane, where you can see what has changed, what we're about to migrate for you. After review, you click Do Refactor and we will refactor. What this will handle: it will handle your source code, including the classes, and it will handle simple build scripts. If you have something more complex, we will publish maps of the old artifacts to the new artifacts, and you can do the migration in a more manual way. Migration will handle resources such as layout files. And last but not least, we will handle migration of binary dependencies, AARs and JARs. Many of you use third-party libraries that depend on support, Glide and many other libraries. To help you with that, we wrote a tool called jetifier. This tool performs binary translation using ASM: it jumps into the JAR and rewrites the uses of the old Support Library to become the new Support Library. This handles code inside of the JAR, and it handles XML and ProGuard files. We will publish a standalone JAR to run manually, if you would like to translate checked-in versions of the prebuilts that you depend on. Now let's jump to the laptop, where I will give you a demo of how this tool works. Hopefully the demo works beautifully. Here we go, jump to the laptop. Right. Here you're looking at the Topeka Android app. It is available on the Google Samples GitHub page. Nothing super amazing; it is using standard components and showing examples of how they work. I am running this app. It works; you can click on things. Now, what we will do is jump into the refactor menu and hit Refactor to AndroidX. What it will do is jump in and find uses of the old support classes and all the other places we migrate.
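At the source level, the rewrite that jetifier performs on bytecode amounts to mapping old package prefixes to new ones. A minimal Python sketch of that idea (the map below is a small illustrative subset; only the NotificationCompat and Room mappings are real, and the function name is an assumption, not part of the jetifier tool):

```python
# Hypothetical subset of the published old-to-new package map.
PACKAGE_MAP = {
    "android.support.v4.app": "androidx.core.app",
    "android.arch.persistence.room": "androidx.room",
}

def jetify_source(source):
    """Rewrite old support-library package references to their androidx
    equivalents, the textual analogue of what jetifier does to bytecode.
    Longer prefixes are applied first so nested packages map correctly.
    """
    ordered = sorted(PACKAGE_MAP.items(), key=lambda kv: len(kv[0]), reverse=True)
    for old, new in ordered:
        source = source.replace(old, new)
    return source
```

The real tool also rewrites XML resources and ProGuard rules with the same mapping.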
After it is done searching, it will present all the things that it suggests for you to migrate. In this case, I am looking at the specific sign-in fragment class, and you can see uses of the Support Library. What will happen when I do the refactor? It will rewrite all of these, including build script files, everything. Of course, Gradle will need to sync again because we have new dependencies and the classpaths will be reloaded. Twiddle your thumbs, wait for Studio to do its thing. Now we have the new stuff. When I hit build and install, hopefully in a few seconds (at home you would grab a coffee; here we can't do that on the stage), it will install it on the emulator. I will stall a bit by saying other words. Ta-da! Now you see this is the same app, using a brand-new AndroidX library. And the migration was fairly painless. Let's jump back to the slides. (Applause) All right, some of you are not using Android Studio. For those, we will provide a giant file with the mapping from the old classes to the new classes. That, with the jetifier tool, should let you hook up your build system and IDE to do the migration manually if you are not using Studio. In summary, we're providing the tools in Android Studio 3.2, canary 14, and jetifier is already in Google maven. However, this is coming in really hot. Even the demo that I was using is actually not using canary 14; it is using canary 15, shipping next week, because we found bugs when trying to do the demo. So please wait until canary 15 to start using this. But when you do, in canary 15, please take a look at it and try to migrate your projects. If you find issues, file bugs. We want to make this as easy as possible; we want you to migrate and start using all the new stuff. However, you know, migration takes time. We will still ship Android Support Library 28.0 alongside AndroidX. However, note, this is the last

feature release. This is kind of a little bit of a timeline for you to move forward. If you want to know more about how this works behind the scenes inside of Android Studio, there will be a talk about the Android build system at 6:30 in this room. Hopefully you can take a look at that. So it is not all about refactoring; we added new features. I will walk you through some of these. The first feature I want to talk about is recycler view selection. This is a library that will allow you to handle item selection more easily. It will help you handle motion and touch events and convert them into selection in the recycler view. This is a flexible library which allows for custom layout managers and custom actions. Let's jump through and see how you use it. As you can imagine, you add a new dependency to the build.gradle file. The important thing is that we are using the AndroidX artifact; this is the same stuff that Alan was talking about. For the setup, what you need to do is create a new layout and a new adapter. For the adapter, the important thing is to use stable IDs. We are using a stock grid layout manager. We set both of these on the recycler view. No selection code yet; we're doing the basic recycler view setup. The adapter, as I said, is nothing super exciting. The important thing is the stable IDs. This allows a consistent mapping from the ID to the item. Next, when we jump back to the activity, we set up the selection library's key provider. This, in conjunction with the stable IDs, will allow for a quick mapping between the IDs and the items whose selection will be handled by the selection library. And now what we need to set up is the selection tracker, which is the actual machinery behind the scenes. We pass in the recycler view, the key provider (both of which we created), and my details lookup. This my details lookup is a simple class; you override one method. Inside that one, you return the item details.
That returns the position and the selection key for the item under a given motion event. Finally, RecyclerView has no default selection mechanism, so you have to handle it in onBind. There you might want to change the background of the view, for example by setting the activated state. To get the activated state working, you give the view a background that is a selectable drawable with an activated state, which indicates to the user that the item has been selected. So those are the basics. This library has a lot more: you can set up band selection, you can add custom selection areas, you can have items that are not square, you can have circular handling, stuff like that. There is a lot you can do with this library. The important part of this slide is that that is my dog, Jack. All right. Another thing we added to RecyclerView is ListAdapter. This helps you work with RecyclerViews whose content changes over time. All you need to do is submit the new list; we run the DiffUtil tooling in the background and run the animations based on how the list changed. I will walk you through how this works. You have the diff callback, which has to implement two methods. In the first, you check whether the items are the same by comparing item IDs. In the second, you check that the content is the same, doing a deeper comparison — essentially equals in Java. From that, if there is a change, we know how to animate your item from one state to the next. Then in the adapter, you call getItem and do your regular binding, and that is all you need to do to get the animations working. And then, highly complex code in the activity: call submitList with the list. That is it, you are done. Note, this works really well with LiveData and RxJava observables. And if you need a slightly more advanced adapter, there is a base adapter underneath ListAdapter you can dig into.
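The two-method callback and the "highly complex" activity code can be sketched like this; `Item` and the view-holder details are placeholders, not names from the talk.

```kotlin
import androidx.recyclerview.widget.DiffUtil
import androidx.recyclerview.widget.ListAdapter

data class Item(val id: Long, val text: String)

object ItemDiff : DiffUtil.ItemCallback<Item>() {
    // First method: are these the same item? Compare the item IDs.
    override fun areItemsTheSame(oldItem: Item, newItem: Item) =
        oldItem.id == newItem.id

    // Second method: is the content the same? A deeper comparison --
    // essentially equals() in Java.
    override fun areContentsTheSame(oldItem: Item, newItem: Item) =
        oldItem == newItem
}

class ItemAdapter : ListAdapter<Item, MyViewHolder>(ItemDiff) {
    override fun onBindViewHolder(holder: MyViewHolder, position: Int) {
        holder.bind(getItem(position))  // regular binding via getItem
    }
    override fun onCreateViewHolder(parent: android.view.ViewGroup, viewType: Int) =
        MyViewHolder.create(parent)
}

// In the activity: submit the list; diffing and animations happen for you.
// adapter.submitList(newItems)
```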
If you want to know more about ListAdapter and similar utilities, see the managing-lists RecyclerView talk on Thursday at 2:30. Another thing we added is the AndroidX webkit library. It lets you use the APIs we have added to WebView on older versions, in a backwards-compatible way. This library works on older API levels as well as newer ones.
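The check-then-use pattern this library encourages can be sketched as follows, using safe browsing as the feature. These are the androidx.webkit class names (`WebViewFeature`, `WebViewCompat`); treat the exact constants as approximate to the early releases.

```kotlin
import android.content.Context
import android.util.Log
import androidx.webkit.WebViewCompat
import androidx.webkit.WebViewFeature

fun enableSafeBrowsing(context: Context) {
    // Check whether this device's WebView supports the feature...
    if (WebViewFeature.isFeatureSupported(WebViewFeature.START_SAFE_BROWSING)) {
        // ...and only then call through the compat layer.
        WebViewCompat.startSafeBrowsing(context) { success ->
            Log.d("SafeBrowsing", "initialized: $success")
        }
    }
    // On devices whose WebView lacks the feature, you simply skip it.
}
```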

Take the example of safe browsing, which we added in API 27. It prevents loading of malicious URLs in the WebView. Previously you could only use it on API 27 and newer; now you can use it on older devices too. So, similarly, we add a Gradle dependency to get it working — again, using the AndroidX artifacts; hopefully you've got this by now. Then we check if the feature is available, and if it is, we start safe browsing. As simple as that, you get the safe browsing experience. Similarly, many other APIs we have added will become available to your application in a backwards-compatible way. Hopefully you will check this out. Another library — one we renamed — is Custom Tabs. It works with browsers that implement it: Chrome, Firefox, Samsung, all of them use this. If you use it, it will continue to work. The cool thing is we added a feature inside the library called browser actions, which lets you hook into the context menu of the browser. For example, now your Reddit app can finally open links in an incognito tab, which can be handy. It works in Chrome 66 and will work in other browsers when they adopt it. Using it is fairly simple. You set up pending intents for your browser action items. This is optional — if you don't need extra items in the dialog, you can skip this part. Another option to set up is browser action tracking, which lets you see what the user ended up selecting inside the dialog. And finally, you just fire up the browser actions dialog, and you end up with something like this. You can add additional actions of your own, or hook into the browser, whereas previously you weren't able to do this via simple intents because the browsers didn't expose the functionality. Next is HeifWriter. We introduced HEIF support in Android P, and we are launching this library, which allows writing HEIF images to a file.
On its own that is not super useful yet, but we're working on a backport to let you use it on older versions. Again, usage is simple. Fire up the builder to create a new HeifWriter, set options like image size and quality, hit build, and once you have it, write into it: you put in bitmaps and write them out to disk. You call stop; the timeout can be 0 if you want an indefinite wait. The important part here is you want to do this work off the UI thread, because you are doing disk I/O. To tell you more about other features in AndroidX, I invite Kathy up. Thanks. (Applause) >> KATHY KAM: Thanks, Aurimas. The next feature I want to talk about that we added in AndroidX is Slices. Slices is a feature that allows you to display content outside of your app. The goal here is to have one reusable API that the system and other apps can use to request content from your app. Today, we have already integrated with search, and we are looking at integration with even the home screen in the future. This content is templated and interactive. It is templated so that when you have live content, you can display it in a rich and flexible layout. It is interactive because we allow you to add existing controls like sliders, toggles, and scroll views, and you can have live data or deep links into your app. You can choose to integrate your slices with search, so a user can see your app content by searching for the app name or even general terms that you register. This is a win-win for users and apps, because users get rich live data immediately, and your app can reach millions of users. Because it

is implemented in AndroidX, it is usable immediately, back to API 19. Let's look at how you can use it. As you would expect, we first have to import the libraries — three of them, all from AndroidX. The first is slice-builders, which includes methods to build content in a templated format. The next is slice-view, which contains methods to present the content. The last one is slice-core, which contains methods for permissions. To build a slice: define the slice, implement the slice, and handle the slice action. Let's look at how we do that. The first thing is to let the platform or other apps know you have slices to provide. You do that by implementing a slice provider; you register your SliceProvider in the AndroidManifest.xml file. Next, extend from SliceProvider and implement your provider — you can have multiple slices per app. This is where the business logic happens. When the platform or another app wants one of your slices, you get a call on onBindSlice. There, you get the URI of the slice being requested, and you have to return your slice immediately, so any content that needs to be loaded should be kicked off asynchronously. You return the result in buildSlice. So let's take a deeper look. We construct the slice here, and we are able to construct it with several builder classes, including RowBuilder, GridRowBuilder, and ListBuilder.
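The provider plumbing described so far might look roughly like this. This is a sketch against the early androidx.slice alpha, whose builder signatures changed between releases; `WeatherSliceProvider`, the `/weather` path, and the title strings are all placeholders.

```kotlin
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder

class WeatherSliceProvider : SliceProvider() {
    override fun onCreateSliceProvider(): Boolean = true

    // Called when the platform or another app requests a slice by URI.
    // Return quickly; kick off any slow content loading asynchronously.
    override fun onBindSlice(sliceUri: Uri): Slice? =
        when (sliceUri.path) {
            "/weather" -> buildWeatherSlice(sliceUri)
            else -> null
        }

    private fun buildWeatherSlice(uri: Uri): Slice =
        ListBuilder(context!!, uri)
            .setHeader { header -> header.setTitle("Today's weather") }
            .build()
}
```

The provider also needs a `<provider>` entry in AndroidManifest.xml so the platform can discover it.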
Take a look. Here we use a ListBuilder for a simple header; you can see it adds the header. To build on top of it, we use the GridRowBuilder. First we fetch the latest weather information, then we can loop through it, adding a cell to the GridRowBuilder for each item, and attach it by calling addGridRow at the bottom, joining the rest of the slice to the header. What you get is based on screen real estate: if it is small, it will show a shortcut slice. For the shortcut slice it picks up an image from the slice — because you have a weather image it will pick that up; if you didn't, it would show the app icon. If there is room for only a small slice, just the header is shown. Finally, with enough space, it shows the full slice. To learn more about slices, there was a talk this morning that you can review. I have only covered the very basics; you can learn more about templates, permissions, and integrating with search in that other talk. You can also meet the team at the office hours tent tomorrow morning at 10:30. The next topic I want to talk about is Material Components. We launched Material Components for Android in Support Library 28.0.0-alpha1 in March, and launched it for AndroidX yesterday. As you know, Material Theming is designed for great user experiences, and we have made a lot of improvements. One of the first things we did as part of the AndroidX refactoring is that instead of living in android.support.design, it has moved to com.google.android.material. We have done a lot of extensive usability studies on how to make the widgets more helpful, updated the styling so you can better express your brand, and included new UI components. Let's take a look. Here's an overview of the theming capability: on the right, a very brand-agnostic baseline; on the left, the Google-branded version. All the components pull from the theme, giving you super easy app-wide theming. Let's walk through some of the code.
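As a sketch of the kind of theme file this walkthrough builds up: the parent theme and attribute names follow the Material Components conventions, while the color values and style names here are made up for illustration.

```xml
<!-- res/values/styles.xml: baseline Material theme with brand overrides.
     Color values and custom style names are placeholders. -->
<style name="AppTheme" parent="Theme.MaterialComponents.Light">
    <!-- Brand colors that all Material widgets pull from -->
    <item name="colorPrimary">#6200EE</item>
    <item name="colorSecondary">#03DAC5</item>
    <!-- A text style attribute that widgets (and your custom views) can pick up -->
    <item name="textAppearanceHeadline6">@style/TextAppearance.MyApp.Headline6</item>
</style>
```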
As you would expect, we have to import the library — note that it comes from Google Material. To use the baseline theme, you set the theme to MaterialComponents.Light; this is the brand-agnostic theme to start with. Then we provide a bunch of attributes you can override, which all the widgets in the app will pick up. Here, we define the primary color and text in the theme XML. On top of that, you define attributes for different text styles — some come out of the box, but you can also define your own. Even custom widgets, if they use these attributes, can pick up the theme. Let's look at the components we updated. First up is the text field. We

improved the sizing and the touch target, making it easier for input and more usable and accessible, and added more states: focus, error, and a text counter. We have thought all of it through from the start to make your life easier. Next up is the button. You can use Button the way you use it today: by setting the MaterialComponents.Light theme, it will be inflated as a MaterialButton, which understands theming and picks up the attributes you set previously in the theme file. You can use custom attributes with these as well. We also provide two updated bars for you. The first is the bottom app bar, which puts actions into your app. We did research and saw that phones are getting bigger, so we want to let you position your actions anywhere you want on the app bar. Here is a FAB that is centered and can be animated to be right-aligned. The other bottom bar we have updated is the bottom navigation bar. Just to clarify: the bottom app bar is for actions, and the bottom navigation bar is for moving between different sections of the app. We don't recommend mixing the metaphors; we provide both so you can choose what fits your app. Pulling all of this together is MaterialCardView, a wrapper on the existing CardView in the Support Library and AndroidX. It simplifies how cards are built, with less elevation and shadow, and pulls from the theme and its colors. Here you can see all the elements coming together, with the text button and everything else inside the MaterialCardView. With this component, you do have to use MaterialCardView explicitly; we are looking to see where we can integrate it automatically as well. You can learn more about this on Stage 8 at 4:30, on how to incorporate what's new in Material Design into your code base. And AndroidX is only one part of Jetpack. Jetpack is a set of components, tools, and guidance to help you build great Android apps quickly and easily. We're at the end of this talk, but there are four more talks where you can learn more about Jetpack.
With that, thank you. Alan, Aurimas, and I will hang out at the Android tent over there. Hope to see and talk to all of you. Thank you. (Applause) >> Thank you for joining this session. Brand ambassadors will assist with directing you through the designated exits. We'll be making room for those who registered for the next session. If you registered for the next session in this room, we ask that you please clear the room and return via the registration line outside. Thank you. (Session concluded) ML Kit: machine learning SDK for mobile developers

>> At this time, please find your seat. Our session will begin soon.

>> BRAHIM ELBOUCHIKHI: Good afternoon, everyone. My name is Brahim Elbouchikhi. We, as a team, are very excited to tell you about ML Kit. You have heard about it from Dave at the main keynote and probably looked at our documentation. In this session, we'll tell you more, including some of the behind-the-scenes stuff we have been working on. So let me get started. I think it is important to look back a couple of years at machine learning and what has been happening — the context. I tried to quantify this the best way I could. One way is to look at Google Trends, the most reliable source of data. You can see there is a 50X increase in interest in deep learning. That is not a new technology; it has been around since the '70s. What has changed is that we can now deliver on the promise of deep learning: we have enough compute, memory, and power on devices to actually run the models that have been developed for some time. I want to give you, as a product manager, a simplified view. Essentially, deep learning uses layers and tries to mimic how the brain functions, activating different neurons based on the specific input, so it can ultimately arrive at the answer of whether this is a dog, a cat, or a hot dog or not. In this case, as you can see in the image, the output for this particular input is dog. What we had before this class of algorithms was rule-based engines: you had to configure the rules — if this and this and this, that's a dog — which does not scale. Because of that, deep learning has allowed us to get into so many more use cases and solve so many more problems than we could before purely with rules engines. That's cool. In particular, over the past seven years or so, our ability — and the machine's ability — to perceive the world around it has gotten good. In 2011, there was a 26% error rate in identifying that animal as a cheetah. On the right, as of now, the error rate is less than 3%, which is better than what a human can do. That is pretty awesome. The fact that an algorithm, a machine, can now perceive the world in that way opens up lots of new use cases. But of course, as a team, our mission is about bringing machine learning to mobile devices and mobile apps. So when we started researching this product, we went out there and talked to many developers, both internally at Google and externally, to try to understand: what are you doing with machine learning on devices, and how does it work today? I'm going to tell you a bit about that. One of the first things we talked about is Google Translate. Of course, you hear a lot about Google Translate because it is a delightful experience and incredibly useful.
But what I like most about Google Translate is that it strings together multiple types of deep learning models and technologies to deliver this experience. It does on-screen character recognition to extract the text it is looking at, it does the actual translation itself, and ultimately it can do text-to-speech to speak the results back to the user. We think that when you can string these things together — when you can use machine learning in multiple ways within a single experience, where relevant — really cool stuff happens. That is one example. The other one is an app called Yousician. It lets you play an analog instrument; it listens while you play and tries to interpret how well you are doing. It listens to the notes you are playing and how well you are playing them, timestamping them, doing echo cancellation and noise cancellation, and finally personalizing the learning experience. All of this is done with an on-device machine learning model. This team built their own C++ runtime to do the inference on the device as efficiently as possible. This predated TensorFlow Lite and other things we have now that could have helped with that process. Among the other folks we talked to is Evernote. Evernote launched a feature called Evernote Collect. The insight behind it is that we collect so much of our information in a visual manner: we take screenshots of things we care about, pictures of receipts, of whiteboards after a meeting, then ask someone to transcribe them. Evernote tries to avoid that: it extracts the text, tags it, and makes it more useful. That is super cool. The overall theme we heard through many conversations was that it is doable — on-device machine learning is doable — but it is really hard. It is hard for three specific reasons.
The first is acquiring sufficient data, in both the quantity and the quality you need. Think about it: say you are training an OCR model. You can label the data for your own language — you can build a training set and say, I'm going to label what this data says myself, because I understand that language. But

with a global audience, when you have users all over the world, how do you create an OCR model that actually works for all those languages? That's really hard. Even harder: if you are a music-learning app, you need to hire world-class musicians to record the perfect notes so you can train against them. That is expensive. The other aspect is developing models that are optimized for mobile inference. This has many dimensions: battery life, compute, and the size of the model. What I have learned is that this is a really hard challenge, and experimentation is essential to machine learning — you can't do one without the other. Of course, this is the beginning of a long road and journey, but we think there is some exciting stuff for you today. I want to first talk about our machine learning stack. At the very bottom is the Android Neural Networks API; on iOS it is Metal. The Neural Networks API launched with Android 8.1, and hardware vendors build drivers for it. I'm excited to show you results: with the P20 series device, we're seeing a 10X improvement in inference latency with Inception V3. What is cool — if you don't know Inception V3, it is a large model. It wasn't built for mobile devices at all; it was built for server-side inference. The fact that we can run that kind of model at 10X performance, efficiently, on a mobile device that is not connected to a power plant is actually super exciting. It means we have more headroom to do more with machine learning on a device. Of course, there is another side of this, where we have models that are built for mobile from the ground up and built to be highly efficient. That work continues.
When you pair these up together, we think there will be a lot of cool stuff happening. We're continuing to invest in the Android Neural Networks API, and relying on Metal on the iOS side. The next layer is TensorFlow Lite. It was announced last year and shipped around November. It is a machine learning set of tools and libraries that works on both mobile devices and embedded devices, built from the ground up, as the name says, to be lightweight. Now, I'm not going to steal any of the team's thunder — they have a session fully dedicated to TensorFlow Lite tomorrow, so I highly recommend going to see it if you are at all interested in on-device machine learning, which I assume you are if you are here. Then we get to the application layer. This is where we looked around and said: there isn't really an easy way to access machine learning technologies at that layer. You either had to interface directly with the runtime and build your own models, or you really had to build your own stack. That is where ML Kit comes into the picture. ML Kit is in beta as of yesterday, so you can all go use it today. It is essentially Google's machine learning SDK. Our aim is to bring Google's 15-plus years in machine learning — all the technology we have developed — to mobile developers through this SDK. So let me tell you more about it. First, let me show you the stack again, with ML Kit on top. This is our on-device machine learning stack. So the first thing that is really important is that ML Kit is on both iOS and Android. This was really important to us, because when we talk to developers, they don't think about Android machine learning and iOS machine learning — they think about machine learning. They want to deploy similar models to users on both platforms; there is not a fork there. So it was important that we have a consistent SDK for both, and in fact, every one of our features

is available on both Android and iOS. We offer two rough buckets of features. One is what we call the base APIs. These are backed by Google models; as far as you're concerned as a developer, there is no machine learning involved. The other is a set of features that help you use your own custom-trained models — I will tell you more about that in a bit. ML Kit offers both on-device and Cloud-based APIs. Again, this is important. On-device APIs give you real-time and offline capabilities, but they have limited accuracy in comparison to the Cloud. On the other hand, on-device APIs are free of charge. We also wanted to give you a consistent interface for the Cloud APIs, because in many cases you do need that level of precision and that level of scope. We will talk about the distinctions between the two in a little bit. And finally, ML Kit is deeply integrated with Firebase. This is another important point for us. We aim to make machine learning nonexceptional. We don't want it to be special; we want it to be yet another tool — just like you use Analytics or Crashlytics or Performance Monitoring or Cloud Storage, like you use any part of Firebase, we want machine learning to be right there. What this also means is that it works well with the other features in Firebase; we'll tell you more detail about that in a few minutes. So that's the high level about ML Kit. What do the base APIs support today? First, text recognition — available both in the Cloud and on device. The second is image labeling, then bar code scanning, face detection, and landmark recognition. The four APIs on the left are available on device, meaning you can use them for free, in real time, and offline. We also have two super-cool features coming up soon. One is a high-density face contour feature — over 100 points, in real time.
And the other is a smart reply API. This is the kind of thing we like to talk about at Google — how well we work together. In Android P, there is a feature to insert response suggestions directly within the notification shade, and ML Kit's smart reply API is something that helps populate those chips; it is the same on iOS. This is already in Wear OS and Android Messages, so that is cool. Sometimes you simply need to build a custom model. If you are trying to detect a particular type of flower, it is hard to use a generic model — you can build one, but it will be super large if it has to detect every flower, every dog, every species of everything. So sometimes you need custom models, and we wanted to help with that as well. The first feature we have here is dynamic model downloads. What this means is you can upload your model to the Firebase console and have it served to your users dynamically. You don't have to bundle the model into the APK. This has a bunch of benefits. First, it reduces the APK size: you don't have to put that 5 or 10 megabyte model into the APK, where you would take a hit when the user is trying to install the app. The other important insight for us was that this decouples the ML release process from the traditional app release process. We learned that these are typically slightly different teams — your machine learning team is probably a different set of people than the ones building your core software experience. This gives you the flexibility to deploy each at different times. Now, a really cool benefit of this is that you can do A/B testing on different models with literally a single line of code. This is the coolest part for me. If you were to do this today, before ML Kit launched, you would have to bundle two models into your app, you would be stuck with the same two models for the duration of that app version's life cycle, and you would have to upload all the metrics

back and do all of that work. This makes it trivial. And given how important it is to experiment in machine learning, this is, we think, a real game-changer for the ability to use machine learning models. Finally, we talked about the optimization challenge of building models that are made for mobile. We're excited that we'll have a feature coming soon that allows you to convert and compress full TensorFlow models into lightweight TensorFlow Lite models. We will talk about the magic — we also call it technology — behind the compression flow. So that is ML Kit: Google's machine learning SDK, available on Android and iOS. I want to take a moment, as always, to thank our partners. We've worked with every one of these partners, and many more, to launch ML Kit. They worked through so many bugs and so many challenges and gave us so much feedback; the product wouldn't be where it is today without their help. I want to thank them a lot. In particular, I want to highlight a couple of things. We worked with PicsArt, and they deployed a custom model for their magic effects. What is cool is that they use ML Kit on both Android and iOS. We also worked with Intuit. If you know U.S. tax dates, tax day is around April, so they were really pressed for time to get their feature out. We worked with them to integrate ML Kit in record time, which was super awesome as well. All right. Before I hand it over to Sachin to tell you more about ML Kit, I wanted to make a commitment to you. We're going to go out there and knock on Google's research teams' doors, every one of them, and ask them to bring their technologies to you as part of ML Kit. We will focus on vision, speech, and text models, and we will also continue to make using custom models as easy as possible. So that's it. I will invite Sachin to come up here and tell you more about how ML Kit works. >> SACHIN KOTWANI: Thanks, Brahim. Hello everyone, my name is Sachin Kotwani.
I work on Firebase. I was practicing this at home with my three-year-old. Every time we finished she would say "again" — I'm not sure if she was telling me to practice more or if she enjoyed the content. We'll find out. When we set out to build ML Kit, we had two main objectives. The first was to build something powerful and useful; Brahim talked about that. The second was to make it fun and easy to use, and I will tell you about that. If you use Firebase, you are familiar with Storage, Remote Config, Crashlytics, Analytics, A/B testing, and more. Now there is a new addition to the family. Starting this week with our launch, if you head to the Firebase console, you will see ML Kit. Clicking on it takes you to the main screen, where you are introduced to the base APIs. They are mostly vision-focused for now; we intend to add to them in the future. Let's look at one specific use case. Say I'm building an app, and it needs to determine the content of an image: what the theme is, what things are in it. How would you use the image labeling API? As you can see, there are two icons here, indicating the API is available to run on device and in the Cloud. On device is free, it is low latency, and no network is required because everything runs on the phone. It supports roughly 400-plus labels. If you need something more powerful, something that gives you higher-accuracy results, you would use the Cloud-based API. That is free for the first 1,000 API calls per month and paid after that, but it supports over 10,000 labels. Let's look at an example. If you feed this image to the on-device API, you get labels like fun, infrastructure, neon, person, sky. If you feed it to the Cloud one, you get Ferris wheel, amusement park, night — you can see it is more accurate. Right? Okay. Remember, I told you it is not just fun, it is easy to use — you have to hold me to it. Say I want to implement this API on iOS: I would just include these three libraries in my Podfile. Similarly, on Android, I would put the three libraries in the build.gradle file. Next, if I am doing on-device image labeling on iOS, I would instantiate the detector, have it detect in the image, and handle the extracted entities. On Android, the pattern is very similar: you instantiate the detector and have the detector detect in an image. Pay attention to the highlighted boxes in gray over there — this is on device. If I want to do the same thing but call the Cloud API instead, not much changes: it is just a few class names. The pattern is the same — instantiate the detector, detect in an image, handle the extracted entities. Demo time. I was warned not to do a demo, but my wife says I don't listen, so here's me not listening. Let's see if this works. I will show you the image labeling API. This is typically used for things like tagging photos, if you want to know the content of a picture — a still picture, usually. I thought it would be cool to show a live demo with a live stream. It says toy, car, vehicle, tire, bumper — it is picking out all the pieces there. Okay. Oh, it says crowd, too, and event. I didn't get to test this, because when I was practicing there was no crowd, just empty chairs. Okay. Face detection — let's switch to this. Okay. So there is a box around my face, as you can see. There is the left eye and right eye; the numbers next to them are how open they are, so you can tell that I am awake. Happiness is detected from the smile, so look at how that changes. And this works with multiple people, actually. So again, you shouldn't trust me, you should ask me to prove it to you, so I need a few volunteers here. Okay. Multiple faces detected. Eyes for everyone. Smiles for everyone. Huh? Pretty cool? Right. (Applause) Okay, I have a couple more things. Lose It, as mentioned, is one of our partners.
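The instantiate-detect-handle pattern Sachin describes might look like this on Android. This is a sketch using the class names from the 2018-era ML Kit beta (`FirebaseVision`, `FirebaseVisionImage`), so treat the exact accessors as approximate.

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

fun labelImage(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    // On-device detector: free, offline, roughly 400+ labels.
    val detector = FirebaseVision.getInstance().visionLabelDetector
    detector.detectInImage(image)
        .addOnSuccessListener { labels ->
            labels.forEach { Log.d("MLKit", "${it.label}: ${it.confidence}") }
        }
        .addOnFailureListener { e -> Log.e("MLKit", "labeling failed", e) }
}

// Switching to the Cloud API is mostly a class-name change -- the same
// instantiate/detect/handle pattern against the cloud label detector.
```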
They worked on this really cool feature. I will go in here and say I am logging what I had for breakfast. Normally, you can select foods that are already in the application, or you can enter one manually. Say you want to enter a new food. Apparently this is not considered food, so I don't think it is in the application. I will try it. So, like I said, you can enter it manually — ooh, it detected it faster than I expected. Let me try that one more time. There you go: nutrition label found, and here's all the information — the calories, fats, saturated fats. All right, you want one more? Okay. This is stuff that is not available yet, but I think it is pretty cool. This is our face contours demo. It detects over 100 points and processes them at 60 frames per second. You can see my lips, my eyes, the entire face contour. This will be coming soon; there is a sign-up link if you are interested. We look forward to having it in your hands so you can play with it. All right. (Applause) Thank you. That was pretty cool. Hopefully you find the base APIs useful. But there are use cases you might have that are very specific to your application. What if you wanted to detect different types of flowers or, like Yousician, inspect notes? You might want your own custom model, and ML Kit helps with that as well. First, ML Kit provides a layer to interact with the TensorFlow Lite model: you feed it inputs and get outputs. Second, you upload the TensorFlow Lite model to the Firebase console. As Brahim alluded to earlier, you can bundle your

model with your application if you choose. If it is big and you want to reduce the install size, you can instead leave it in the Cloud and download it dynamically, so the initial install size is smaller. The third benefit: because it lives in the Cloud, you can dynamically switch the model. You don't have to ship a new APK or bundle to the App Store or Play Store. Here is a quick snippet on how to load that model, how you refer to it. Let's say I called my model V1: I would put in this snippet and it would retrieve it from the Cloud. Let's take a step back. When I started, remember, I mentioned that there are a lot of Firebase products that are very useful, and my favorite one is Remote Config. It allows you to dynamically switch values inside of your app. It is typically used for switching the color or background, and you can also use it to switch call-to-action strings; it is really useful for that sort of thing. It turns out it is also useful for ML Kit. I went to the Firebase console and created my model, then created three different target populations: one for people who speak English, another for people who speak Spanish, and a default value. What I am trying to do is target a different model to the different populations. Once you do that, instead of hard-coding a model name, like here, you just change that static string for a call to Remote Config, and every device, depending on the population it belongs to, will get the respective model. This is just a very simple example. You can also think of using A/B testing with Analytics: you can test out models, pick the one that performs best, and choose that. Experimentation, as Brahim said earlier, is important in machine learning. All right. Before I wrap up and hand it over to Wei, I want to talk about the models. You need a TensorFlow Lite model to run on a device. We have a feature for that. It is coming soon.
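The population-based model switching just described can be sketched conceptually. This is not the Firebase Remote Config SDK; the function, population keys, and model names are hypothetical, just to illustrate the lookup-with-default pattern replacing a hard-coded string:

```python
# Conceptual sketch of Remote Config-style model selection.
# The population keys and model names here are made up.
DEFAULT_MODEL = "my_model_v1"
POPULATION_MODELS = {
    "en": "my_model_en_v1",  # English-speaking population
    "es": "my_model_es_v1",  # Spanish-speaking population
}

def model_for_user(language_code: str) -> str:
    """Pick the hosted model name for a user's population,
    falling back to the default value."""
    return POPULATION_MODELS.get(language_code, DEFAULT_MODEL)

print(model_for_user("es"))  # my_model_es_v1
print(model_for_user("fr"))  # falls back to my_model_v1
```

Any device outside the targeted populations silently gets the default, which is what makes this safe to roll out incrementally.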
You upload the TensorFlow model with your data, and once it is done processing, you will get a bunch of TensorFlow Lite models to choose from. As you can see, these are compressed. They have tradeoffs: different accuracy, different inference latency, and different sizes. Depending on what you are most sensitive to, you can pick for your needs. This flow is only for image classification models for now, but we look forward to adding more in the future. You pick the model that works best for you, you publish it, and it is available like any other custom model that you would upload on your own. I know I make this seem super easy, with a beautiful UI and three steps, but this is actually really hard to do. It is an active area of research. It is almost like magic, and to tell you more about that magic, I would like to introduce our resident wizard. Wei, please come up on stage. >> WEI CHAI: Thank you, Sachin. Hi, my name is Wei Chai. My team mixes machine learning experts and mobile developers. It is a lot of fun to be part of it and build something we all believe can be useful, for example, model compression. Now, I would like to go deeper into the technology behind the magic. First of all, let me explain why we want to support model compression. Between running machine learning on the Cloud versus on mobile, one big difference is that the mobile environment has very limited computational resources. This makes model size and inference speed critical. For today's hardware limits, most mobile applications require very small models, ideally less than a couple of megabytes. On the other hand, if we look at the model architectures, to attain higher accuracy machine learning models tend to go deeper and larger, sometimes hundreds of megabytes for certain applications. After talking to a lot of mobile developers, we realize

that how to make machine learning models small and efficient enough to fit on mobile phones is one of the big pain points. With ML Kit, we would like to address this issue by providing model compression tooling and support. A model compression service or tool takes a large model as input and automatically generates models that are smaller in size, more memory efficient, more power efficient, and faster in inference speed, with minimal loss in accuracy. As Sachin just mentioned, this is still an active machine learning research area. Our compression service is based on Learn2Compress technology developed by Google Research, and it combines various state-of-the-art model compression techniques. For example, one method called pruning reduces the model size by removing the least-contributing weights and operations in the model. We found that for certain on-device convolutional models, pruning can further reduce the model size by up to 2X without too much drop in accuracy. Another method, quantization, reduces the number of bits used for model weights and activations. For example, using eight-bit fixed point for model weights and activations instead of float can make model inference run much faster, use lower power, and reduce the model size by 4X. Specifically, with TensorFlow Lite, switching from MobileNet to quantized MobileNet can speed up inference by 2X or more on Pixel phones. The third method is to train a student model with knowledge distilled from a large model,
a teacher model. The student does not only learn from the ground-truth labels, but also from the teacher. Typically, the student models are very small in size, with many fewer weights in the model, and use more efficient operations for the benefit of inference speed. For example, for image classification, the student models can be chosen from MobileNet, SqueezeNet, or any other state-of-the-art model architectures compact enough for mobile applications. We can further extend this distillation idea to simultaneously train the teacher model and multiple student models with different sizes in a single shot. One thing to mention is that very often, for all of these techniques, we need a fine-tuning step for the best accuracy. So in this case, we do not only need the original model for the compression process, but also your training data. For ML Kit, we'll provide a Cloud service for model compression. For now, we only support image classification use cases, but we will soon expand to more. One reason that we support model compression as a Cloud service is that, as I just mentioned, model compression is still an active research area, with new technologies and new model architectures, specifically for mobile applications, invented very fast. For example, going from MobileNet V1 to V2 took less than one year. Our compression service will automatically incorporate the latest advances in technology for you. Another reason is that the compression process typically takes quite some computational resources; it can take hours on GPUs. We will run our Cloud service on Google Cloud to use the computation
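The pruning and quantization ideas above can be illustrated with a toy sketch. This is not Learn2Compress; the magnitude threshold and the 8-bit mapping are simplified assumptions, just to show why pruning creates zeros and why 8-bit storage is 4X smaller per weight than 32-bit float:

```python
# Toy illustration of two compression ideas: magnitude pruning
# and 8-bit quantization. Not the real Learn2Compress pipeline.

def prune(weights, keep_fraction):
    """Zero out the smallest-magnitude weights, keeping the top fraction."""
    k = max(1, int(len(weights) * keep_fraction))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def quantize(w, scale=127):
    """Map a float in [-1, 1] to an 8-bit integer: one byte per weight
    instead of the four bytes a float32 would need."""
    return max(-128, min(127, round(w * scale)))

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.003]
pruned = prune(weights, keep_fraction=0.5)   # half the weights become zero
quantized = [quantize(w) for w in pruned]
print(pruned)
print(quantized)
```

A real service would fine-tune after each step to recover accuracy, which is exactly why the training data is needed alongside the original model.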

power there. What we need from developers includes a pretrained TensorFlow model, in SavedModel or checkpoint format, and your training data in TensorFlow Example format, for the fine-tuning step I just mentioned. What we generate will be a set of models with different size and accuracy tradeoffs for you to choose from. Since ML Kit runs on top of TensorFlow Lite, all the generated models will already be in TensorFlow Lite format for you to download or serve through our model hosting service. With the ML Kit model compression service, we're aiming to compress a model up to 100 times smaller, depending on your use case and original model. To give you a real developer use case as an example: Fishbrain is a fishing app. They already have their own model to identify fish species, and the model is currently running on the Cloud. With our model compression service, the original model provided by the developer, at 80 megabytes and 92% accuracy, can be compressed to much smaller models with the different sizes and accuracies shown here. As you can see, in this particular case, the accuracies of the generated models were even higher than the original model, which is not always the case, but is possible, and it is great. To summarize: with ML Kit, we would like to make machine learning accessible to all mobile developers. To achieve that, we would like to help with every step in the machine learning workflow, not only how to use a model, but how to build and optimize your own model. Now, I would like to conclude the talk with a summary of what we'll provide. We're launching in beta the base APIs for both iOS and Android, including text recognition, image labeling, barcode scanning, face detection, and landmark recognition. We're also supporting custom models with TensorFlow Lite model serving. Please check out these features at the Firebase website. Meanwhile, we'll have a set of new features coming out soon, including the high-density face contour API, the smart reply API, and the model compression and conversion
service. We'll soon start to whitelist developers to try them out. If you are interested, please use this link here to sign up. We're super excited about ML Kit and how it can potentially help developers build cool machine learning features. We look forward to your feedback, and we're committed to making it great. Thanks for coming. If you have questions, we'll be available right after this talk at the Sandbox Q&A area, and there are relevant sessions and talks for you to check out. Finally, please leave your feedback about this session for us to improve in the future. Thank you. (Applause) >> Thank you for joining this session. Brand ambassadors will assist with directing you through the designated exits. We'll be making room for those who registered for the next session. If you have registered for the next session in this room, we ask that you please clear the room and return via the registration line outside. Thank you. (Session concluded)

Use Lighthouse and Chrome UX Report to optimize web app performance

>> At this time, please find your seat. Our session will

begin soon. >> VINAMRATA SINGAL: Good afternoon, everyone. Hello! My name is Vinamrata Singal, and I work on Lighthouse and other initiatives at Google. >> RICK VISCOMI: My name is Rick Viscomi, and I work on transparency tools and the Chrome User Experience Report. >> VINAMRATA SINGAL: Rick and I are here to tell you how to use Lighthouse and the Chrome UX Report to optimize your web application. I wanted to tell you a little about myself before we get started, so you know who you are talking to. Fun fact about me: when I was growing up, I lived in four different countries, India, Saudi Arabia, New Zealand, and the United States. What was interesting, as I was growing up across those countries, is that having access to information

was really challenging, either because it was restricted for me and my family, or because no one around us had that information. That really changed when I started learning how to use the web. It made it so much easier to have access to that information. That is why I care about making sure the web is available for everyone and is a great experience for all. But even as I was learning how to use the web, there were pain points with that experience. This is a picture of me as a kid in Saudi Arabia, on DSL Internet, if you remember those days, learning how to use the Internet. It was really, really, really slow. What is interesting is that the slowness and performance pain hasn't gone away for a lot of users. At Google, we have seen it has real implications for businesses as well. We have been collecting a bunch of data to show the impact performance can have on businesses. You might have seen the study we did with DoubleClick: sites that load in less than five seconds receive a bounce rate that is 53% lower compared to sites that load in more than five seconds. Of shoppers that have trouble with site performance, 79% don't come back. Additionally, for every second of delay in page load time, there is a 7% drop in conversion rate. I want to emphasize that again: for every extra second your site is slower, you can lose 7% of your users. This is all to say that if you are a business that is online, you should care about performance. You might say to me: Vinamrata, I get it, but what are the guidance, tools, and metrics?
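One simple way to read the 7%-per-second figure above is as a compounding loss, which makes the cost of each additional second concrete. A quick sketch (treating the drop as compounding is my own simplifying assumption, not the study's exact methodology):

```python
def conversions_retained(extra_seconds, drop_per_second=0.07):
    """Fraction of conversions kept after each extra second of
    load time, treating the 7% drop as compounding."""
    return (1 - drop_per_second) ** extra_seconds

print(round(conversions_retained(1), 3))  # 0.93
print(round(conversions_retained(3), 3))  # roughly 0.80
```

Three extra seconds already costs about a fifth of conversions under this reading, which is why shaving even fractions of a second matters.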
For that, I say you have come to the right place. We hope to answer these questions and more. To give an overview: we will start off with an introduction to performance metrics and walk you through the overall tooling and guidance landscape, then Rick will take over and tell you about the Chrome User Experience Report, and I will wrap us up with Lighthouse. Let's get started with performance metrics. When we talk about performance metrics, what we're talking about is collecting data, and there are two sources you want to collect from. The first is lab data, known as metrics from the lab. This comes from a controlled environment, and it is useful for getting granular information about your site. The second type is field data: metrics from the field, also called RUM, or real user monitoring. This is how real users experience the performance of your site, and it is helpful for understanding the ground truth of your site's performance. You might look at this image and think: do I use one or the other? In reality, you use them together. You use field data to understand your users and the baseline performance of the site. Where are users coming from? What devices are they on? What networks are they on? That helps you calibrate the lab environment. Once you understand the baseline, and whatever business or key performance indicators you are trying to optimize for, then you can set goals around the targets for whatever you want the performance to be. After you understand that, you want to start looking at the lab data, calibrated from what you saw in the field, to drill down and understand which parts of the performance you need to improve, as well as to implement the optimizations that make the site faster.
Once you make improvements in the lab, you go back to the field to make sure you see the improvements in the wild, and you monitor the performance of the site for regressions. Now that we have spent time talking about where the performance data comes from, you might be asking: what data do I collect? When we think about the performance of a site, we quantify how a real user experiences the performance of the site. There are two buckets of user experience. The first bucket is what we call visual metrics. This is measuring how fast stuff is getting painted on the screen. And there is the second bucket, interactivity metrics, which essentially measure how quickly your site becomes usable for users. Under these buckets, we have specific metrics, like First Contentful Paint, Speed Index, and Time to Interactive. To really understand what the metrics mean, I think it is important to visualize them on a timeline.

We are trying to see how real users experience this. On the left, you have First Contentful Paint. This marks when the first piece of content appears on the screen: text, an image, an SVG. This is pivotal in the user experience; this is where the user thinks, okay, the site is working, doing something. At the very end is Time to Interactive. For the user, that means everything is loaded, they can interact with any part of the page, and it is responsive, it works. And everywhere in the middle, you have what we call Speed Index, which essentially measures how quickly the rest of the page loads. Speed Index rewards pages that load a lot of stuff earlier, because that is when the page feels very fast to the user. We have actually seen a lot of developers have success with this toolkit of metrics, and I want to share an anecdote. A lot of you have heard of Pinterest. We helped them upgrade their mobile web experience to a progressive web app, and they wanted to improve the overall interactivity of the experience. They improved the Time to Interactive from 26 seconds to 5.6 seconds. As a result, they saw a whole set of improvements across their business metrics: an overall 40% increase in the number of users who spend more than five minutes on the site, a 44% increase in the amount of user-generated ad revenue, a 50% increase in the ad click-through rate, and a 60% increase in core engagement. All in all, this is awesome work. But what is interesting is that I think we can still do better to help developers understand the interactivity experience.
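Speed Index, described above, can be approximated numerically as the area above the visual-completeness curve: the longer the page sits visually incomplete, the higher (worse) the score, so pages that paint most content early are rewarded. A toy sketch, assuming completeness is sampled at fixed intervals (an approximation of the idea, not WebPageTest's exact algorithm):

```python
def speed_index(completeness_samples, interval_ms):
    """Approximate Speed Index: sum the visually incomplete fraction
    over time. Lower is better."""
    return sum((1.0 - c) * interval_ms for c in completeness_samples)

# Two pages that both finish at 1000 ms, sampled every 100 ms:
early = [0.6, 0.8, 0.9, 0.95, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]  # paints early
late = [0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.2, 0.5, 0.9, 1.0]    # paints late
print(speed_index(early, 100))  # small score: feels fast
print(speed_index(late, 100))   # large score: feels slow
```

Both pages reach 100% completeness at the same moment, yet the early-painting one scores far better, which is exactly the behavior the talk describes.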
Specifically, I think the place to make a lot of impact is that right now, Time to Interactive is only a lab metric. I think we can do a better job of helping developers understand interactivity in the wild, and how users first experience interactivity, especially when they come to your app for the first time. To make this more concrete, I want to tell you a story. Imagine for a second you are not a developer, you are a user. Pretend you just heard about this new site, say it is Vinamrata's cool store, and you go to it on your phone. It is loading, you are interacting with it, you click on a button, and nothing happens. You keep clicking on the button, and nothing is happening. Raise your hand if you have ever had that happen to you. Yes, like everyone. Keep your hands up. Keep your hand up if you felt frustrated by that experience. It is, again, like everyone, for those on the live stream. It is a frustrating experience. Putting our user hat down and putting our developer hat back on, we can probably understand why something like this would happen. One explanation is that the browser is done rendering all the HTML and CSS, but because of the JavaScript, the browser is taking time to parse and execute the script. When the new interaction comes through, the browser doesn't have time to respond to it because of the JavaScript it is processing. This is the user experience we want to minimize with a metric called First Input Delay, which we are introducing today. First Input Delay measures the latency of the user's first interaction with the page. That means the time from when the user interacts with your page to when the code that you wrote to respond to that interaction actually runs. And we think really focusing on the first user experience is critical.
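The First Input Delay definition above can be sketched as a tiny simulation: if the input arrives while the main thread is busy executing a task, the handler cannot run until that task finishes. This is a toy model of the concept, not the actual polyfill:

```python
def first_input_delay(input_time_ms, busy_intervals_ms):
    """Toy First Input Delay: the wait between the user's input and
    the moment the main thread is free to run the handler."""
    for start, end in busy_intervals_ms:
        if start <= input_time_ms < end:
            return end - input_time_ms
    return 0  # main thread was idle: the handler runs immediately

# JavaScript keeps the main thread busy from 200 ms to 900 ms;
# the user taps at 300 ms, so the handler is delayed by 600 ms.
print(first_input_delay(300, [(200, 900)]))  # 600
```

The same tap on an idle main thread would report zero delay, which is why breaking up long JavaScript tasks directly improves this metric.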
As you saw a couple of slides ago, a lot of users leave if the first experience is terrible. If you want to optimize business metrics like bounce rate and conversion rate, you want the first experience to be solid. We're still working on the definition of the metric and the messaging, and we encourage you to give us feedback. There is a link to docs and a JavaScript polyfill you can use and put on your website. Check it out, and please give us feedback on the polyfill repo. Now that we have spent a bunch of time talking about metrics, let's transition to talking about the overall tooling and guidance. This is an overview of all the tools published by Google about site performance. One of the most common questions we get is: Google, you put out all these tools, how do I navigate this? How do I use them? What tool should I be using? Great question. Ironically, we built a tool to help you understand this. Not ironically, it is awesome. This is the speed tools overview. It helps you with everything from, I want to build a business case for

performance so my company invests in it, to, oh, I want to get nitty-gritty into performance, and all the use cases in between, helping you understand what tool to use. I encourage you to check it out and give us feedback about things you think are missing. We're looking forward to hearing more from you. Now that we have covered a little bit about tooling and metrics, we thought it would be helpful to do a deep dive into two tools in particular. I will hand it off to Rick now, who will tell you about the Chrome User Experience Report. >> RICK VISCOMI: Thank you. When we talk about metrics from the field, most people think of traditional real user measurement, or RUM, where you measure things like performance, page load, page views, and conversions. This is great for understanding the user experience on your own site, but how do we understand the user experience on the web as a whole? How do we know if it is getting faster or more usable? How do we know if your users' experience is typical? For that, we need a different data set, one that represents the web at large. The Chrome UX Report, also known as CrUX, helps us understand the web. It is data from real Chrome users. Let's see how it works. When you have a data set that covers the web at large, it enables analyses that were never before possible. You can compare real user performance across particular sites and see how you stack up against competitors. In this example, we can see that competitor B is slightly faster than my website. It is one thing to know how fast my site is, but it is another to see how much faster or slower it is compared to the competition. Additionally, the Chrome UX Report is updated monthly, so you can track your performance benchmarks over time. As I optimize the performance of my site, I can see the gains in each iteration of the report. Alternatively, I can see that my competitor is investing in performance and I need to catch up. So what exactly does the Chrome UX Report measure?
It includes the visual metrics Vinamrata mentioned earlier, First Paint and First Contentful Paint, as well as DOMContentLoaded and onload. My favorite is First Contentful Paint; it reflects the real user experience of feeling like the page is loading. It also includes useful dimensions for slicing the data, like the form factor of the user's device (desktop, phone, tablet) and the effective connection type (2G, 3G, 4G). This is the effective speed, not the advertised network: a user on slow Wi-Fi would be considered 2G. Another dimension that early users requested is the ability to slice by country. With it, you can see how the user experience varies by geography. It can help you understand how much slower the user experience is in places farther away from your web servers. There are two ways the data is aggregated: by URL and by origin. How does the origin differ from the URL? An origin starts with the protocol, either secure or not secure; then the subdomain, for example www, mail, or no subdomain at all; and of course the domain, like google.com or example.com, which includes suffixes like .com. What it doesn't include is the path after the domain: /index, /products/123. All of those are rolled up at the origin level. The data set has grown 400X since it was first announced at the Chrome Dev Summit. Your website may or may not be included, depending on its popularity. Okay, you are probably wondering how to start using it. The raw data is available on Google BigQuery, and you can write queries for information about specific origins or the web as a whole. This example query measures how often First Contentful Paint is less than one second, broken down for two hypothetical competitors, developers.google.com and chrome.google.com. The results are informative, but hard to understand just by looking at them. What we're trying to understand is: which origin is faster? How

are they trending? To help answer that, let's visualize the data. We can integrate with two tools, Google Sheets and Data Studio. You will notice on the BigQuery results there is a button to export to Sheets. That will create a new sheet and populate it with a table of data. From there it is easy to create a simple chart to visualize the performance data. It is clear from this chart that developers.google.com's density of fast First Contentful Paint experiences fluctuates. Why? Not necessarily because the site is faster or slower; it could be the user population. The share of users on faster or slower connections could be changing. You can also integrate with Google Data Studio and build interactive dashboards fed by data from BigQuery. This shows one way to monitor the distribution of performance metrics over time. What if you don't want to do the work to get this data, you just want high-level results now? There are a few Google tools built on the Chrome UX Report to give you insights quickly. The first is PageSpeed Insights, which now includes this performance data alongside its performance recommendations. The insights are built on the URL-aggregated data, which is more than you can get on BigQuery. Here is the URL's First Contentful Paint and DOMContentLoaded data, with an indicator of how fast or slow it is in relation to the rest of the web. It turns out the I/O web page is considered slow.
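The origin-versus-URL aggregation Rick described a moment ago can be sketched with Python's standard library; a toy illustration of the definition, not how CrUX itself processes data:

```python
from urllib.parse import urlsplit

def origin(url: str) -> str:
    """Reduce a URL to its origin: protocol plus (sub)domain,
    dropping the path, query string, and fragment."""
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}"

# All of these pages roll up to the same origin-level entry:
print(origin("https://example.com/index"))             # https://example.com
print(origin("https://example.com/products/123?x=1"))  # https://example.com
```

A different subdomain or protocol produces a different origin, so https://www.example.com and https://example.com are aggregated separately.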
You can get a sense for how skewed user experiences really are. In this case, we see most First Contentful Paint experiences are fast, but 82% of DOMContentLoaded experiences are considered slow. Another Google tool built on the Chrome UX Report is the Speed Scorecard. This tool allows you to enter multiple websites and stack-rank their performance, and it enables you to target specific geographic regions to see how performance varies. It has something I love: a way to correlate performance metrics with business metrics like revenue. It estimates how much more revenue could be earned if nothing else changed but site performance. You use a slider to adjust the performance and see the predicted effect on revenue. This is another great motivator to invest in performance, not just for developers, but especially for business leaders who think in dollars. Last, I wanted to share a case study of how the Chrome UX Report is used outside of Google. mPulse is a real user measurement product from Akamai, and Akamai is using the Chrome UX Report to show companies why they should care about real user measurement.
For example, some companies don't have any RUM tools and may be relying on lab testing, like WebPageTest, which only gives them a narrow view of their performance. Akamai can show these companies their real user performance in the Chrome UX Report, which may be vastly different from their performance in the lab. Just imagine the surprise of learning that users experience your site many times slower than your lab tests indicate. Akamai is also looking to integrate this data with the mPulse product. Recall that with metrics from the web as a whole, it is possible to see the performance of competitors. With mPulse, customers could see real user and competitor performance alongside their own. Site owners can understand how they stack up in specific markets. The charts shown here indicate how different sites perform, especially on mobile. This is exactly the kind of thing that makes the Chrome UX Report so powerful: understanding how your site's performance fits into the larger picture of performance on the web as a whole. And this is just a small taste of the power of the Chrome UX Report. The April data set was released this week, and we're looking to add more metrics, like First Input Delay. We're also working to get this data integrated into more third-party tools. I will hand it back to Vinamrata to tell you about Lighthouse. >> VINAMRATA SINGAL: Thank you, Rick. You guys can clap. (Applause)

Awesome. More audience participation here: can you raise your hand if you have ever used Lighthouse before? Oh, awesome, yay! For those who haven't, Lighthouse is a developer tool you can run today on your website. It gives you personalized advice about what you are doing well and what you can do better across five different categories: performance, progressive web apps, accessibility, best practices, and search engine optimization, or SEO. You can run it in four different ways: as a standalone Chrome extension; in the Audits panel of Chrome Developer Tools; as an npm module, with a command to install it, if you choose; and on WebPageTest. Now, I want to take a step back and tell you a little bit about the journey we have taken on the product side, to give you some context on where we're going. This is a screenshot of Paul Irish, a tech lead, announcing Lighthouse at the Chrome Dev Summit. We understood there was a lot of information out there for developers on things to do to create good web experiences, and we wanted Lighthouse to be the one-stop shop where you get all that advice in one place. We have been hard at work making this tool as comprehensive as possible. We have launched over 100 audits in the report, including a whole new section focused on search engine optimization this year. We have also seen a lot of adoption in the community. In 2018 alone, we have seen over 500,000 users running over two million audits, which is great. We hear stories from developers about the success they're having with Lighthouse, and I want to share a favorite. This is Milan, a fashion e-commerce retailer based out of China. They used Lighthouse to make performance improvements and build a progressive web app. They improved their Lighthouse PWA score by 57 points, to 100.
They improved their Lighthouse performance score by 35 points, to a 65. As a result of just making those two improvements, they saw a 10% increase in their mobile conversion rate as well as a 65% decrease in their ad bounce rate. Pretty awesome stuff. We also know, on the Lighthouse team, that there is a lot to do to make the product even better for y'all to use. I will summarize this as the top user requests. The first request we get, especially from people who are looking at the Lighthouse report for the first time, is: hey, can you make this report easier to understand? Because it is kind of cluttered, kind of confusing. The second request is: I love Lighthouse, but the performance metrics and scores change from run to run, even if I am running it the same way. Can you make Lighthouse more stable? And the third is: I love Lighthouse, it is awesome, but I don't want to run it on one site, I want to run it on hundreds of thousands of sites. Can I conduct a bulk analysis using Lighthouse? Today I will walk you through how we're addressing all three of those with stuff we're working on. To start with the first one, making the report easier to understand: we take this concern seriously, and we did an entire usability study in the earlier part of this year. Taking the results from that study, we came up with the new UI, which you can see today. You can also see it in the mobile web Sandbox, if you try using Lighthouse there. My personal favorite bit, which you can't see from the screenshot: when you hover over the metrics, a tooltip comes up, so if you don't know what First CPU Idle is, you can get a better understanding of what it means. This launches with Lighthouse 3.0, which we are planning on releasing this week. You can get 3.0 today through npm, with lighthouse@next. It will be the default way to run Lighthouse in the Chrome extension, which we are releasing later this week, and it is coming to DevTools in Chrome 68.
If you want more information about everything else coming in 3.0, check out the doc over there. I will run through a few more updates, but that has all the information. Let's move on to the second concern folks have with Lighthouse, which is around the performance score. These are performance scores that I got for a website that I will not name, from consecutive runs, one after the other. This is a problem we have known about for a very long time, and we have had a concerted effort since last year to work on it. It is called Project Lantern.

It has the best logo out of all the logos created. To understand what it does, I want to tell you how Lighthouse computes metrics today. When Lighthouse is computing the performance metrics of your site, it is throttling your page, trying to emulate it as if it were being used on a mobile device on a 3G network, and computing the metrics from that run. Lantern changes this: based on the inputs you give it about what network and device the user is on, it tries to estimate the performance metrics themselves. Using this approach, we have seen a 50% reduction in the amount of variability in the Lighthouse performance score, while maintaining accuracy comparable to the DevTools throttling. It also reduces the time to warm up and run Lighthouse by 50%, which has been annoying for some people. You might ask: how does Lantern do this? I will give you a broad, high-level walkthrough of how it does what it does. The first thing Lantern does is get a static list of the assets that are necessary in order to render your page. In this extremely contrived example, it is index.html along with some blocking JavaScript. It knows their sizes, and it takes into account the RTT, the time to transport one packet, in this case 150 milliseconds. This number is estimated based on the two variables on the right: the network throughput and the CPU throttling. Using the RTT and knowing the size of the assets that must be transferred in order to render the site, Lantern can predict the performance metrics. You can see that the things that have to occur in order to open the network connection are taken into account. That gives you a high-level idea of how Lantern works. It is the default for Lighthouse starting with 3.0. If you want to get started understanding Lantern and using it, get 3.0 the way I showed you earlier. Now I want to move on to talking about scale, specifically scaling Lighthouse. Today, I'm happy to announce that we've been working on an API, the Lighthouse API: an API to get Lighthouse results for any URL. I am excited.
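The estimation idea behind Lantern can be caricatured in a few lines: given the render-blocking assets, a modeled RTT, and a modeled throughput, predict when rendering could start. The formula below (one round trip plus transfer time per asset) is a deliberately naive assumption for illustration; Lighthouse's real model simulates a dependency graph of requests and CPU tasks:

```python
def estimate_render_time_ms(asset_sizes_kb, rtt_ms=150, throughput_kbps=1600):
    """Naive Lantern-style estimate: each render-blocking asset pays one
    round trip plus its transfer time at the modeled throughput."""
    total_ms = 0.0
    for size_kb in asset_sizes_kb:
        transfer_ms = size_kb * 8 / throughput_kbps * 1000  # KB -> kilobits
        total_ms += rtt_ms + transfer_ms
    return total_ms

# index.html (20 KB) plus one blocking script (100 KB):
print(estimate_render_time_ms([20, 100]))  # 900.0
```

Because the estimate is computed from fixed inputs rather than measured from a throttled run, repeated runs give the same answer, which is the intuition behind Lantern's reduced variability.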
I think it unlocks use cases people have been talking to us about. First, it allows you to build dashboards to track your home page, landing pages, and any site pages you might have. Second, you can build tools on top of Lighthouse and integrate Lighthouse into whatever continuous integration test suite you might have. Finally, you can conduct large-scale data analysis using Lighthouse itself.

This is an overview of what the API currently looks like. For those of you who use Lighthouse on the command line, it is pretty similar to what the command line would give you, except we made it more ergonomic. The current status of the API is that it is in private beta; we are working with a couple of partners to iterate on the definition. If you are interested in or intrigued by the API, I will have more information later in the talk on how to get access. We're planning to launch it as a free public Google service very soon.

I want to take a little bit of time on why I am personally excited about the API. As I mentioned earlier, we have all these tools and we're trying to align our tooling story, but one of the problems has been that we don't have a common infrastructure that all the tools can rely upon. I think the Lighthouse API gets us one step closer, because it creates a pipeline that a bunch of other tools can use in order to align all the tools together. I'm sure you will see more tooling alignment initiatives in the future. So now you might be wondering, as an external developer, what can the Lighthouse API do for me?
I want to tell you what two partners did with the Lighthouse API. The first is Mobify, a platform for building customer-first shopping experiences through apps and mobile pages. Their customers include brands like Lancôme and CarParts.com. As part of the platform, Mobify provides an analytics dashboard for business leaders and retailers to monitor performance and customer engagement. To make monitoring customer performance easier, they integrated the Lighthouse API into that dashboard; this is what it looks like. It helps business leaders relate the reported metrics to the things they actually care about.

This is a live example of how one of Mobify's clients, a leading British retailer, actually uses the data provided by the API to correlate Lighthouse scores with bounce rate, a metric they care about. I'm excited for this use case because I think it helps business leaders understand the return on investment they're getting for putting work into performance, and makes it that much more obvious.

The second partner I want to talk about is Duda. There are currently over 12 million sites built using Duda, and Duda wants to make sure that whatever sites are being built, they're fast, especially as they add new functionality to the platform. They're actually using Lighthouse as they add progressive web app functionality to the platform. With the API this is easy, because they integrate Lighthouse into their continuous integration pipeline. Duda uses Jenkins for their continuous integration tests, and they added Lighthouse as a component. They're able to set thresholds for the developer; if a build doesn't meet the threshold, the build fails, and in that case they dig into the report to understand which audits are passing and which are failing.

Again, this unlocks a use case we heard about from a lot of developers: once you have done all the work to make your site fast, how do you keep it that way? I think the API makes that easy, because you can integrate it into whatever continuous integration system you have. Duda plans, once the API is launched, to release their plugin for Jenkins so it is that much easier to get started with Lighthouse.

I want to thank Duda and Mobify for all the work to pull the demos together. They're awesome, and I'm even more excited to see what you do with the API. If you are intrigued by the demos and you want early access to the API, sign up on the interest form over here. A member of the team will be in touch with you very, very shortly. I'll wait until the phones go down. Oh, okay. Still going. Awesome. Okay. So I'm going to wrap up the talk.
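As a sketch of the kind of CI gate just described: in Lighthouse 3.0 JSON reports, category scores appear under `categories.<id>.score` as values from 0.0 to 1.0, so a build step can extract the performance score and compare it against a budget. A real pipeline would use a JSON parser and a report file; the regex, function name, and threshold here are only for illustration.

```kotlin
// Hypothetical CI gate: fail the build when the Lighthouse performance
// score drops below a threshold. Dependency-free regex extraction is used
// here purely to keep the sketch self-contained.
fun passesPerfBudget(reportJson: String, threshold: Double): Boolean {
    val match = Regex("\"performance\"[\\s\\S]*?\"score\"\\s*:\\s*([0-9.]+)")
        .find(reportJson) ?: error("no performance score found in report")
    return match.groupValues[1].toDouble() >= threshold
}
```

A CI step would run Lighthouse with JSON output, read the report, and fail the job when `passesPerfBudget` returns false.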
Rick and I talked about a lot of stuff, so I want to wrap up with some key takeaways, of which there is only one. The main takeaway I hope you get is that there are two kinds of performance data we care about: lab data and field data. Lab data is data you collect in a controlled, lab-like environment; it is helpful for drilling down into specific performance information and getting granular and nitty-gritty. Field data is useful if you want to understand the source of truth about the performance of your site: it is data you collect to understand how real users are experiencing that performance. Lighthouse is a tool that can give you lab data, and the Chrome User Experience Report Rick talked about can give you field data.

A talk by a product manager wouldn't be complete without action items, so I will list four. Take out your phones, take pictures; I will go through this quickly. The first action item is about First Input Delay, a metric for measuring interactivity in the wild. If you are interested in how real users are dealing with the interactivity of your website, check out First Input Delay; here is a link to the doc and the polyfill. Give us feedback. The second is the speed tools overview, a doc to help you understand which tool to use when you think about the performance of your site. Third, the Chrome User Experience Report Rick talked about: if you want to baseline your performance statistics, do competitive analyses, and all the really cool use cases, check it out. We are always making it better. Finally, Lighthouse: you can run it in the browser if you use Chrome, and here's the early interest form for the API.

And that's it. Thank you so much, everyone, for coming. I know it is like 5:00 p.m. on a Wednesday.
So I really appreciate all your time. We are going to stick around for a little bit if you have additional questions, and you can find us at Sandbox G; we'll be there most of tomorrow. Come say hi. I'm Vinamrata, have a great rest of your I/O. Thank you so much. >> RICK VISCOMI: Thank you. (Applause) >> Thank you for joining this session. Brand ambassadors will

assist with directing you through the designated exits. We'll make room for those who registered for the next session. If you registered for the next session in this room, we ask that you please clear the room and return via the registration line outside. Thank you. (Session concluded)

Android Jetpack: easy background processing with WorkManager >> At this time, please find your seat. Our session will

begin soon

>> SUMIR KATARIA: Hi, everyone.

My name is Sumir Kataria and I work on the Android team. I work on Architecture Components, and I want to talk about WorkManager and background processing in general on Android.

Let's talk about background processing in 2018. What are we trying to do these days? This morning, I was trying to send a picture of my lovely wife and beautiful son to the rest of my family. That is one example. We're also sending logs, processing data, syncing data. All of that is work being done in the background. On Android, there are a lot of different ways to do this work. Here are a lot of them: you can do things on threads, with executors, using JobScheduler, AsyncTasks, et cetera. What

should you use, and when should you use it? There have also been a lot of Android battery optimizations over the last couple of years. We introduced doze mode in Marshmallow, we have had app standby buckets, and in Oreo we restricted background services. All of these have to be taken care of by you as a developer. Finally, we always have to worry about backwards compatibility: if you want to reach 90% of Android devices, you need to go back at least to KitKat. Given all of this, what tools do you use, and when do you use them?

The trick is that you have to look at the type of background work you are doing. I like to split this up along two axes. The vertical axis is the timing of the work: does the work need to be done right when specified, or can it wait a little bit, so that if your device enters doze mode, you can still do it after that? The horizontal axis is how important the work is: does it only need to be done while your app is in the foreground, or does it absolutely need to be done at some point?

For example, if you're taking a bitmap and you decide you want to extract a color from it and update your UI with it, that is foreground-only work. Once the user hits home or back, the work is irrelevant. If you are sending logs, you always want that to happen; that is an example of guaranteed execution. For things that are best effort, you really want to use things like thread pools, RxJava, or coroutines. For things that require exact timing and guaranteed execution, you want a foreground service. An example would be the user hitting a button to process a transaction: you need to update the UI and the state of the app based on it, and that needs a foreground service, which cannot be killed by the system while it is happening.

The fourth category is very interesting: you want guaranteed execution, but you're okay if it happens later.
Doze mode can kick in. There is a variety of ways to solve this. On newer API levels you will use JobScheduler. If you want to go farther back, you can use Firebase JobDispatcher, or you will probably use AlarmManager and broadcast receivers. If you want to target all of those API levels, you will use a mix of those things, and that is a lot of APIs and a lot of work. WorkManager falls here: it is for guaranteed execution that is deferrable.

So, WorkManager. Let's talk about its features. I mentioned guaranteed execution; it is also constraint-aware. If I want to upload that photo I talked about, I want to do it only when I have a network; that is the constraint. It is also respectful of the system background restrictions: if your app is in doze mode, it won't wake up the phone just to do this work. It is backwards compatible, with or without Google Play services. The API is queryable: if you have enqueued some work, you can check its state. Is it running? Has it succeeded or failed? These are things you can find out with WorkManager. It is also chainable: you can create graphs of work, for example work A depending on work B and work C, which in turn depend on work D. And it is opportunistic: it will execute the work in your process as soon as the constraints are met, without needing JobScheduler to intervene and wake you up. It doesn't wait for JobScheduler to batch your work if your process is already up and running.

Let's talk about the basics and walk through the code. I just described the example: I want to upload that photo. How would I do that using WorkManager? Let's look at the core classes. There is the Worker class; this is the class that does the work, and where you will write most of your business logic. And there is the WorkRequest class, which comes in two flavors: OneTimeWorkRequest for things that need to be done once, and PeriodicWorkRequest for recurring work. Both take a Worker. I will show you a Worker class now. It overrides the doWork method.
This is the method that will run in the background. We take care of running it on a background thread; you don't need to do that yourself. You simply do your work; in this case the photo is uploaded synchronously. Let's say we succeeded.
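Sketched in code, using the alpha-era API names from the talk (which may have changed in later releases), the worker and its one-time request look roughly like this; `uploadPhotoSynchronously()` is a hypothetical helper:

```kotlin
class UploadPhotoWorker : Worker() {
    override fun doWork(): Result {
        // WorkManager already invokes doWork on a background thread,
        // so the upload can be performed synchronously here.
        uploadPhotoSynchronously()  // hypothetical helper
        return Result.SUCCESS
    }
}

// Enqueue it once via a one-time work request
val request = OneTimeWorkRequestBuilder<UploadPhotoWorker>().build()
WorkManager.getInstance().enqueue(request)
```

Treat this as a sketch of the shape of the API rather than copy-paste code for any particular WorkManager release.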

The worker's result has three values. Success and failure are fairly obvious, and retry says: I encountered a transient error, maybe the device lost network connectivity, so retry me in a little bit. Now that I have this worker, I can create a one-time work request using the upload photo worker, and then I can enqueue it with WorkManager.getInstance().enqueue(). Soon after it is enqueued, it runs and uploads your photo.

But what if you lose connectivity before this runs, or what if you never had connectivity? You want to use constraints in this case. As an example of the constraint you want here, you make a Constraints.Builder and you say: set the required network type to connected. You build it, and you also set the constraints on the request that you are building. By simply doing this and enqueuing, you make sure the work runs only when your network is connected.

Now let's say we want to observe the work we have enqueued. I want to show a spinner while this work is executing, and hide the spinner when it is done. How would I do that? As before, I enqueue the request. Then I can call getStatusById on WorkManager using the request's ID; each request has an ID. This gives me a LiveData of the WorkStatus. If you know Architecture Components, LiveData is a lifecycle-aware observable. Now you can hook into that observable and say: when the work is finished, hide the progress bar.

So what is the WorkStatus object you are observing through the LiveData? It has an ID, the same as the request's, and it has a state, the current state of execution. There are six values here: enqueued, running, succeeded, failed, blocked, and cancelled. We'll talk about the last two later.

Let's move a step up in concept and talk about chaining work. I promised that you can actually make directed graphs of work. How would you do that? Let's say this is the problem: I'm uploading a video.
It is a huge video, so I want to compress it and then upload it; these are time-intensive things. Say I have two workers, a compress worker and an upload worker, both defined to do the things I said. You can make work requests from them and say: WorkManager, begin with the compress work, then the upload work, and enqueue it. This ensures the compress work executes first, and only once it is successful does the upload work go.

That was a very fluent way of writing it; what happens behind the scenes is that beginWith returns a WorkContinuation, and WorkContinuation has a method called then, which also returns a WorkContinuation, a different one. That is what creates the fluent API. You can hold on to WorkContinuations and pass them around if you want.

Now let's say I'm uploading multiple photos; no one takes only one photo of their child. How would I upload all of these in parallel? Say I've got work requests for all of them. I can literally pass all of them to enqueue, and they are all eligible for running in parallel. They may not actually run in parallel, depending on the executor being used and so on, but they could be.

Let's choose a more complex example. You want to filter your photos, applying a grayscale filter or some other filter, then compress them, then upload them. How do you do this? WorkManager makes it simple. First you say: do all the filter work in parallel. After those have all completed successfully, do the compression work. After that has completed successfully, do the upload work. And don't forget to enqueue at the end.

We have talked about all of that, but there is a key concept I want to cover that is very much related to chaining: inputs and outputs. Let's talk about this problem I have here. It is a map reduce, and the best way to explain a map reduce is to give an example.
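Backing up for a moment, the three chaining patterns just described can be sketched like this. The request variables are hypothetical, and the method shapes follow the alpha-era API covered in the talk:

```kotlin
// Sequential chain: compress must succeed before upload runs
WorkManager.getInstance()
    .beginWith(compressWork)   // returns a WorkContinuation
    .then(uploadWork)          // returns another WorkContinuation
    .enqueue()

// Parallel: all requests become eligible to run at once
WorkManager.getInstance().enqueue(uploadWork1, uploadWork2, uploadWork3)

// Fan-in graph: filters in parallel, then compress, then upload
WorkManager.getInstance()
    .beginWith(filterWork1, filterWork2, filterWork3)
    .then(compressWork)
    .then(uploadWork)
    .enqueue()
```

In the fan-in case, the compress step only becomes eligible once every filter request has succeeded, which is exactly the behavior described above.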
I love reading. I have loved reading Sherlock Holmes novels since I was a kid. So: what are the top 10 words used in those books? How

would I figure that out? I would go through each book, count the occurrences of each word, combine the data, and sort it to find the top 10. This is a distributable problem that we can solve with a map reduce.

Let's talk about Data. Data is a simple class that is a key-value map under the hood. The keys are strings, and the values are primitives, strings, and the array versions of each. It is like a Bundle or a Parcelable, but it is its own thing, and it is limited to 10 kilobytes in size; we'll go more into that later. How do we create a Data? In Kotlin, you can make a map; here the value is the novel I will look at. I convert that map to a work Data. Once I create the work request builder, I can set the input data on it: the Data from that map. Inside the worker, I can retrieve this through the input data and get the string for the file name. Now that I have the file name, I can count all the word occurrences in that file; that is a method I have written somewhere else. Then I return my success.

But you don't want to do just that; you actually want to have outputs, right? You have done all this work; there should be an output for it. Say the method we have returns a map of words to their occurrence counts. We can convert that map to a work Data and call a method named setOutputData that sets this data. So: get input data, set output data. The key observation here is that a worker's output becomes the input for its children. The find top 10 words worker, which runs next, gets its input data from the previous worker. In this way you can pass the data all the way through, find the top 10 words, and return them. The data flow for one book becomes: count all the word occurrences in that book, then pass them to the find top 10 words worker; its input data will be whatever I passed through, and it does the sorting or whatever it needs to do.
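The input and output plumbing just described might look roughly like this; the file name and the `countWordOccurrences` helper are placeholders, and the names follow the alpha-era API and Kotlin extensions mentioned in the talk:

```kotlin
// Input: hand the worker the file to process
val countRequest = OneTimeWorkRequestBuilder<CountWordsWorker>()
    .setInputData(mapOf("file_name" to "a_study_in_scarlet.txt").toWorkData())
    .build()

class CountWordsWorker : Worker() {
    override fun doWork(): Result {
        val fileName = inputData.getString("file_name", null)
            ?: return Result.FAILURE
        // countWordOccurrences is a hypothetical helper returning word counts
        val counts: Map<String, Int> = countWordOccurrences(fileName)
        // Output: a worker's output becomes the input of its children
        outputData = counts.toWorkData()
        return Result.SUCCESS
    }
}
```

The key line is the last assignment: whatever this worker sets as output is what the next worker in the chain receives as input.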
But here's a really tricky thing. What happens when you have multiple books? What is the input for the find top 10 words worker? You are passing in multiple pieces of data, but so far we have only seen one input Data. What happens to the rest of them? How do they combine? For that, you want to look at input mergers. An InputMerger is a class that combines data from multiple sources into one Data object. We provide two implementations out of the box, OverwritingInputMerger, which is the default, and ArrayCreatingInputMerger. You can also create your own.

Let's talk about these two. First, OverwritingInputMerger. We have two Data objects here, each with their own keys and values. What does OverwritingInputMerger do? It takes the first piece of data and puts everything into a new Data object, an exact copy. Then it takes the second piece of data and copies it over, overwriting anything with the same key. In this case, the name Alice becomes Bob, and the age of 30 becomes "3 days". Note that the value changed type: the number became a string here. The scores key was new, so it just got added. And note that if we did this in reverse order, you would have Alice instead of Bob in the final output. So this is a little tricky; you want to make sure OverwritingInputMerger is the right tool for the job, but it is very simple.

What about ArrayCreatingInputMerger? This is the one that takes care of the collision case. Let's go through it key by key. The name becomes an array of Alice and Bob. Color becomes a singleton array of blue, because it is only defined in one of them. For scores, notice that there is one integer and one array of integers; these combine, with order not specified, and all the values come through. What happens for age? There is an integer and there is a string. This is an exception: we expect values for the same key to have the same basic value type. So let's go back to that example I was telling you about, Sherlock Holmes.
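The ArrayCreatingInputMerger behavior just walked through can be modeled in a few lines of plain Kotlin. This is only an illustration of the semantics, not WorkManager's implementation, and it skips the type check that makes mixed value types throw:

```kotlin
// Model of ArrayCreatingInputMerger semantics: values sharing a key are
// collected into an array; arrays in the input are flattened in.
fun arrayCreatingMerge(inputs: List<Map<String, Any>>): Map<String, List<Any>> {
    val merged = linkedMapOf<String, MutableList<Any>>()
    for (input in inputs) {
        for ((key, value) in input) {
            val bucket = merged.getOrPut(key) { mutableListOf() }
            if (value is List<*>) bucket.addAll(value.filterNotNull())
            else bucket.add(value)
        }
    }
    return merged
}
```

In the real API, you opt into this behavior with `setInputMerger(ArrayCreatingInputMerger::class.java)` on the request builder, as the next part of the talk shows.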

Implicitly, there is an input merger before this stage that combines all of that data. Which input merger do we want to use? We don't want to throw away the calculations we have done, so we want ArrayCreatingInputMerger, to preserve all of the data and pass it through. How do we do that? We call setInputMerger on the request builder for the find top 10 words worker, giving it ArrayCreatingInputMerger. Then we say: begin with the count words workers, then the find top 10 words worker, and enqueue. For example, if the first book had 10 instances of the word Sherlock, five of Watson, and 30 of elementary, and the other books had their own counts, you will get arrays of counts for each word. You sum them up, sort them, and find the top 10; that is your output. You can actually observe the output in the WorkStatus through the LiveData and get that output data, which is super useful because you can put it in the UI.

How do you cancel work? Say I decided to upload a picture and then realize: wait, this is not the picture I meant to send. How do I cancel that upload? Very simple: you say cancelWorkById. But do note that cancellation is best effort; the work may have already finished. These are all asynchronous things that may be happening in the background, so before you have the chance to cancel, the work may already be running or finished.

Okay, let's talk about tags. Tags solve this problem: IDs are auto-generated and not human-readable. They're under the hood, and they're not useful for debugging. What work is running? I don't know, some big number; I don't know what that is. Tags solve this issue. Tags are readable ways to identify your work: strings specified by you. Each request can have zero or more tags, and you can query or cancel work by tag.

Let's look at an example. I used to work on the Google+ team here, and the Google+ app supports multiple logins. Each user could do several kinds of background work.
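As a sketch of the tag-based querying and cancellation just introduced (the tag strings come from the multi-user example, the worker name is hypothetical, and the method names follow the alpha-era API):

```kotlin
val request = OneTimeWorkRequestBuilder<GetFavoritesWorker>()
    .addTag("user1")
    .addTag("get_favorites")
    .build()

// A tag can match many requests, so this is a LiveData<List<WorkStatus>>
val statuses = WorkManager.getInstance().getStatusesByTag("user1")

// Best-effort cancellation of one whole kind of work
WorkManager.getInstance().cancelAllWorkByTag("get_favorites")
```

Tagging each request with both the user and the operation is what makes the per-user and per-operation queries below possible.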
You can get favorites, preferences, and so on. So if you have three users logged in on your phone, each doing two kinds of work, you have six things happening. How do you identify what you are looking at, at any given time? You can use tags. For example, in this work request builder, you can add tags to say: this is user one, and this is the get favorites operation. Now you can actually identify that work. If you want to look at its status, you can say: give me the work for user one. This returns a list of work statuses as a LiveData, because each tag can correspond to more than one work request. Similarly, you can also cancel all work by tag; cancellation is best effort, again. That cancels all of one particular kind of work.

Tags are also useful for a couple of other reasons. Tags namespace your types of work, as I said: you can have tags for the kinds of operations you're doing, get favorites, get preferences, et cetera. But they also namespace libraries and modules. If you are a library or module owner, you should tag your work so you can get at it later. Say you have a library and the app moves to a new version of it; you can cancel all the work enqueued under your tag. Always use tags when writing a library. WorkStatus also exposes tags, so if you are looking at a WorkStatus you can get the tags for that work and see what you called it when you enqueued it.

One more thing I want to talk about is unique work. Unique work solves a few different problems. One problem apps have is syncing. Say you want to sync every 12 to 24 hours, and also sync when the language changes, because maybe you have a version of your data in a different language and you want to sync at that point, too. You are doing all the

syncing, but you want only one sync active at a time. You don't want four running. Which one wins? You don't know; you just want one. Unique work solves this. It is a chain of work that can be given a unique name, and you can enqueue, query, and cancel using that name. There can only be one chain of work with that name.

Take a look at the sync example. I say beginUniqueWork with my name, "sync" in this case. The next argument is the existing work policy: if there is already work with the name "sync", what should I do with it? In this case I say keep: keep the existing work and ignore what I am enqueuing now. The next argument is the actual work request, in this case the sync request, and then you enqueue it. If there is work named "sync" already in flight, it keeps that; if there isn't, it enqueues this and executes it. This is how you do your syncs.

Here at Google, we love chat apps. Maybe you are updating your chat status. You say: I am bored. Ten seconds later: I'm watching TV. Then: I'm bored again, I'm going to sleep. Now suppose you are on a bad network connection; you have bad Wi-Fi, and maybe the first update hasn't gone through. The second one should win over the first, and the third should win over that. You want to make sure the last one wins. How do you solve this? This is a simple function, and it is the last line you care about. It is beginUniqueWork, the name is update status, and you choose the replace option. Replace cancels and deletes any in-flight chain with that name, so the last one does win. If you have two update status calls, the last one wins.

Finally, I love music, and I love the Foo Fighters. I was building a playlist with all their songs, and there are a lot of songs, 150 or 200. I was adding a song, shuffling two songs around, moving something up from the bottom of the list, deleting a song because I had it somewhere else. These are all things I want to do using WorkManager, but how would I do that?
These all have to execute in order. Since the order is important, we provide the append existing work policy, which says: do this work at the end of the chain of update playlist operations. Everything already in the chain must execute successfully before the appended work executes, so you can keep adding operations to the end. As a summary, there are three existing work policies: keep, replace, and append.

A few notes about periodic work. It works similarly to everything you have seen so far; just a couple of notes. The minimum period length is the same as JobScheduler's, 15 minutes, and it is still subject to doze mode and OS background restrictions like the other work we talked about. Periodic work cannot have delays; that makes good API sense, and it is much more reasonable to think of it in those terms.

All right. We have talked a lot about code; let's talk about how it all works under the hood. You have a work request, you enqueue it, and we store it in our database. What happens after that? Well, if the work is eligible for execution, we send it to the executor right away. You can specify this executor, but we provide a default one. But let's say your process gets killed. What happens then? How is the work woken up, and how does it run again? On newer API levels, we send it to JobScheduler; it wakes up the process, and the work goes to that same executor. On older devices, if you include the optional Firebase JobDispatcher dependency and the device has Google Play services, we send it to Firebase JobDispatcher; same thing, it invokes IPC and runs the work on the executor. If you don't have that, or it is not a Google Play services device, we have a custom AlarmManager and broadcast receiver implementation; same thing again, it uses IPC, wakes up the app when the time is right, and runs the work.

A couple of implementation details. JobScheduler and Firebase JobDispatcher, through Google Play services,

they provide a central load-balancing manager for execution. If every app on the device is trying to run jobs, they will load balance them so you are not overloading the device and burning up the battery. The AlarmManager implementation cannot do that, because it only has visibility into your own app. Also, constraints like content URI triggers are only available at the API levels they were introduced at; those methods are marked with the appropriate API level. And we handle wake locks for you; this is true for the AlarmManager implementation. Don't take wake locks in your workers. You don't need to do that, we take care of it for you.

Finally, let's talk about testing. You want to test this app, so we provide a testing library. It has a synchronous executor. You use WorkManager as normal to enqueue your requests, and we provide a class called TestDriver that executes enqueued work that has constraints: we can pretend the constraints are met. Support for periodic work and initial delays is coming; we don't have it yet. If you want to look at the code: you initialize the test WorkManager and get the TestDriver, then create and enqueue your work as you normally would, with a constraint in this case. You can tell the TestDriver: hey, all constraints are met for this work. Your work then executes synchronously, and you can verify the state of your app and make sure everything is right.

I want to talk about best practices before I end here. It is very important to know when to use WorkManager. WorkManager is for tasks that can survive process death; it can wake up your app and your app's process to do the work. It is okay to use it to upload media to a server, and also okay to use it to parse data and store it in your database. It is not okay for the example I gave earlier, extracting the palette color from an image and updating a view with it; that is foreground-only work.
That work doesn't need to survive process death, so it doesn't need WorkManager. It is also not okay to process payment transactions in it if they care about timing right then: if the user clicks buy and you want to update the state of the app, that needs something else. That last case needs a foreground service; the other foreground-only cases can use thread pools or RxJava.

Also, WorkManager is not your data store. Data instances are limited to 10 kilobytes; Data is meant for light, intermediate transfer of information, simple values to update your UI. You can use files or a key-value database if you want. If you need a full data store, I recommend using Room; the Room team would be happy I'm saying this. It is an awesome database.

Finally, be opportunistic with your work. Take the filter, compress, upload example again. The reason these are not one big job is that they all have different constraints, so they can execute at different times. Say I'm getting on an airplane, I start uploading a bunch of images, and this chain of work is running. I go into airplane mode, and maybe I don't have network for the next 12 hours because I'm flying across the world. The other work can still execute, and it should; if you architect your work like this, it can. This also, by the way, makes your code more testable: you can write a test for filtering that isn't conflated with compression and upload.

All right, a few next steps for you. If you want to reach us and talk about WorkManager, we are in the Android Sandbox, just behind us, I think, over here. There is more information about WorkManager on the official developer website. These are all the Gradle dependencies: the first one is required; the second one is for Firebase JobDispatcher, include that if you use it; there is a testing library; and of course Kotlin extensions as well. WorkManager is part of the Architecture Components in Android Jetpack, and we have a bunch of talks here. Tomorrow, navigation controller at 8:30 a.m.
I hope you make it there. Thanks for being part of this talk; we look forward to hearing back from you soon. Thank you. (Applause)

>> Thank you for joining this session. Brand ambassadors will assist in directing you through the designated exits. We are making room for those registered for the next session. If you are registered for the next session, we ask that you clear the room and return via the registration line outside. Thank you. (Session concluded)

What’s new with the Android build system >> At this time, please find your seat. Our session will

begin soon

>> XAVIER DUCROHET: Good afternoon, my

name is Xavier Ducrohet. >> JEROME DOCHEZ: My name is Jerome Dochez. We'll talk about the improvements in the Android build system since last year. Version 3.0 of the Android Gradle plugin shipped in the fall. The whole concept is that all of the variants of a module can depend on the matching variants of other modules: debug on debug, release on release. It works well when the variants match on each side. But over the last six to nine months we have seen a lot of developers run into an enormous error message like this and not know what to do with it. We want to explain what is going on and help you solve such a problem. The first thing to look at: this

line tells you which project is consuming the library: it is required by project :app, and it gives the name of the configuration, the runtimeClasspath. The error is that Gradle is not able to find a matching configuration. The way it really works, the library project is publishing a bunch of artifacts, one per variant, and each has attributes associated with it. The consumer makes a request with similar attributes, Gradle tries to do a mapping, and it needs exactly one match. Here it can't find a match. If you go through the configurations listed, you can look at all the attributes and basically, by process of elimination, find out what it needs. What we see is that two of them are API configurations, used to compile against, and the other two are runtime. The request asks for the Java runtime; some configurations carry the Java API attribute, so we can ignore them. If an attribute has the same value on both sides, we can ignore it too: one is used to match different types of plugins, application versus library versus feature, so ignore it. The variant attribute is another one you can ignore here. We finally find the attribute which is incompatible: we ask for staging, and Gradle only finds other values. The name of the type of the attribute here is the key part; it is basically self-explanatory, this is the build type attribute. We are requesting staging, and Gradle is not finding it. It means the library does not define the staging build type. You can define it there, or use the DSL to tell Gradle what to fall back to: you say, if staging is not found, use debug. You have the same exact principle if you have a missing flavor dimension value. There, the error message is different because there are multiple flavors, and this is the flavor dimension, but it is the same problem: not finding trial when looking for it, and the fix is the same. We do that on the consuming side, the app side, not in the library.
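The two fallbacks just described can be sketched with the `matchingFallbacks` and `missingDimensionStrategy` DSL; the build type, dimension, and flavor names below are the ones from the example in the talk and stand in for your own:

```groovy
android {
    buildTypes {
        staging {
            // If a dependency does not define a 'staging' build type,
            // match it against these build types instead, in order.
            matchingFallbacks = ['debug', 'release']
        }
    }

    defaultConfig {
        // If a library defines a flavor dimension this module does not
        // have, request these flavor values for it, in order. This can
        // also go on a specific flavor or build type instead.
        missingDimensionStrategy 'version', 'trial', 'paid'
    }
}
```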
If it is a library that depends on another library, you can have on the consuming side another type of problem: it says it cannot choose between the following configurations. Here Gradle did find compatible matches, just more than one. It will only show the configurations that are ambiguous and hide the others, so that is easier to read. If you look at it, there are attributes that are compatible on both sides; you can eliminate them, and you are down to two. It says it found an attribute on the library side that is not requested by the consumer. What happens is the library has a flavor dimension that is not defined in the app, or the consuming module, and Gradle doesn't know how to choose. You can add the flavor dimension in the app or consuming module, or go and say missingDimensionStrategy, with the name of the flavor dimension and the name of the value. All it is doing, really, is adding the attribute to the request. You can add this either to the default config, which applies to all the variants, to a specific flavor, or to a particular build type. >> JEROME DOCHEZ: Thank you. Let's talk about some of the notable changes we have introduced in the compilation pipeline. The first thing we changed lately is AAPT2. This is a new incremental resource compiler. It divides resource processing into two phases: a compile phase, where it changes resources into a binary format, and a final linking phase, where it assigns the final IDs. If you want to use the latest platform features, use AAPT2. In theory, it is compatible with AAPT1, but it is stricter, so you might run into issues moving from AAPT1 to AAPT2. We have given you a

flag so you can disable it if you have issues. You should be aware the issues may not be in AAPT2 itself; they may be in your project. So you should file bugs if you find issues with AAPT2, but do your homework first and look into whether the problem is in your project: maybe you have invalid resources or broken references, and AAPT2 will catch those. We are going to phase out AAPT1 soon, so it is important that you switch, and file bugs if you still find some. The next thing is desugar. This is the tool that rewrites the Java 8 syntactic sugar for older devices. We have two different versions. The first version was an external process: it used a lot of resources and was slow. The new version is integrated into the dexing pipeline. It is better, it is faster. Again, we are going to phase out the old version; in theory, you should all be switching to the integrated version and shouldn't see any difference. If you do, please file a bug. It is probably our fault this time, probably not your fault. We will phase out the old external-process version soon. Talking about dex: D8 was introduced earlier and is the default from 3.1. The team is very responsive at fixing issues. Again, we are phasing out the old DX soon; we would like you to make sure it can build your applications, and file bugs. We also introduced R8, which is the new shrinker, but it is a different thing than it used to be: R8 takes over the entire pipeline from class files to dex files. It does the desugaring, minification and dexing. If you use it, see if it works well, and file bugs; we will stop supporting the old shrinker, not soon, but midterm, I would say. There are flags you can use to enable or disable R8 and D8. Use them if you run into trouble, but also file a bug, okay? Let's talk about performance. We know well it is a subject you are extremely interested in; so are we. One of the problems we have is that we have a lot of trouble reproducing issues that may happen in the wild.
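The opt-in and opt-out flags just mentioned live in gradle.properties; the property names below are the ones used by the Android Gradle plugin 3.x previews and may change or disappear in later versions:

```properties
# Fall back to AAPT1 if AAPT2 surfaces issues in your project
android.enableAapt2=false

# Disable the D8 dexer and use the legacy DX instead
android.enableD8=false

# Opt in to the R8 shrinker (replaces the old shrink + dex pipeline)
android.enableR8=true
```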
We don't have that many applications internally to use. However, here is an example using the Nest mobile app. You can see that a Nest build improved about 13% from 3.0 to 3.1, and again in 3.2. This is keeping the same features: not changing the application, just the plugin. If you change the Java setup, things vary, obviously. Now, if you look a little deeper at performance, things are more complicated. In particular, when you look at incremental builds, you can see that javac for the full build was 10 to 15% of the build, but in an incremental build, it jumps to a really bad 38% of the overall build time. Really, we can see that in the incremental scenario, Java compilation is the dominating factor. It is not incremental because people are using annotation processors, and so far they are not incremental-capable. So we started a joint effort with Gradle; it is called incap, for incremental annotation processing. It is ready and shipped in Gradle 4.7. Now we are trying to make all the annotation processors incap-capable. If you own annotation processors, make them incap-capable; it is not complicated. For the simple processors, the ones that generate something from one

source file without needing to see the entire world, it is a matter of adding a few manifest entries. That is simple. Eventually, when all of the annotation processors are incap-capable, we will be able to switch the compilation to be incremental, and that will have a huge impact on the performance of incremental builds. (Applause) >> Thank you. Let's talk about Jetpack. You have heard about it a lot. How does it work? Let's take the first example: you move to a pure AndroidX project. It has the old support libraries, and what you do is use the refactoring tool in Studio to move to the new artifacts. One issue is that we sometimes add dependencies to the project for you: if you use legacy data binding, we will automatically add the multidex library. If you move to the pure AndroidX world, where you now suddenly have the AndroidX artifacts instead of the old support ones, we need to know that we must use the AndroidX multidex library. We need an indication of which one you want us to inject, so you need to tell the plugin, with a flag, which version you want us to use. So if you want to move to the AndroidX world, use this flag. Then there is the hybrid world, which is what will most likely happen to most people: yes, you moved your application and all your projects to the AndroidX artifacts, but you use an external library that you have no control over, it uses the old libraries, and you have no choice or control over its sources. What can you do? We provide a build-time translation facility that will basically change the dependency automatically from the old support library to the new one, and will change the library itself by rewriting all the references to the new ones. This is not like we will publish the rewritten library, but locally you can have the illusion of a pure world where everybody is depending on the AndroidX libraries. There is a flag for it: the jetifier. >> XAVIER DUCROHET: Let's talk about the app bundle.
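The two flags just mentioned also live in gradle.properties; these are the property names used by the Android Gradle plugin 3.2 previews:

```properties
# Use the AndroidX artifacts (e.g. androidx multidex) when the
# plugin injects dependencies on your behalf
android.useAndroidX=true

# Rewrite third-party libraries that still reference the old
# support libraries to their AndroidX equivalents at build time
android.enableJetifier=true
```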
We'll tell you some Gradle things about it. The first thing: on the application plugin, not the library or feature plugin, but the application plugin, on top of the tasks you see here, we now have bundle tasks. You can quickly, without changes, go and build a bundle. Remember, the bundle is the format you upload to the Play Store to get APKs that are split by configuration and smaller to download. Locally, you can also build an APK the way the Play Store does it from the bundle. If you want to verify something, saying, I have a device, and I don't want to build an APK directly, I want to build an APK the way the Play Store would from the bundle, you can do that. That path is more involved than directly building the APK: you first build the bundle, then run the bundle tool on it, and then, based on the device configuration, it can extract the APKs. In general, we don't necessarily want you to drive that manually using the Gradle tasks; however, a lot of Studio and Android Gradle plugin features use that flow when you need it. For example, for instant apps, if you have features, it will build the specific APKs. If you create a dynamic feature, which we will talk about later, it will go through the bundle to make sure we can install on any device. If you have a dynamic feature and you install on a device with an API level less than 21, the features need to be fused together to install. We don't want to be different from the Play Store; we want to be the same as the Play Store. If you run tests from the command line right now, they will go through the regular APK. We will work later to add the ability to run the tests against the bundle, to better simulate exactly the type of APKs that are created by the Play Store.
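As a rough sketch, driving that flow manually with the standalone bundletool looks like this; the bundle output path is an assumption and depends on your project layout:

```shell
# Build the app bundle with the Gradle task added by the application plugin
./gradlew :app:bundleDebug

# Generate a set of APKs matching the connected device's configuration
bundletool build-apks \
  --bundle=app/build/outputs/bundle/debug/app.aab \
  --output=app.apks \
  --connected-device

# Install the matching APKs on that device
bundletool install-apks --apks=app.apks
```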

Deploying from Studio, you have a choice: build the APK directly, or build the APK from the bundle. The run configuration inside Studio has a check box for it. We recommend not checking it when you do your regular development: as we saw earlier, building an APK from the bundle is much more complicated than building it directly. When you deploy from Studio, we look at the device you want to target and we inject that information into Gradle, and if the device is API 21 or above, we will build a targeted APK, which is faster. We can't take that shortcut yet when we go through a bundle. So have two run configurations, one with and one without the option. Use the direct APK one as often as you can; use the other one only when you have to, when you want to test the APKs the way the Play Store delivers them. If you push to a device that is pre-21, we are going to use that option anyway, always going through the bundle, as soon as you have dynamic features. You may be doing manual APK splits today, with the splits DSL, to control the APKs that are generated. If you are doing that now and you want to switch to the bundle, you can erase all of that DSL. It is unnecessary. There is nothing to do: you call the bundle task, get your bundle, you are done. On the serving side, we will split by density and ABI by default. We have a DSL to allow you to disable it if you are running into problems. We expect there would be very few cases where you need to do that, but you have the ability; if you really have to, also file a bug and talk to the Play Store team to see if you can sort it out with them. I mentioned dynamic features earlier. What are dynamic features? They are small APKs installed on demand. If a feature is not needed by 90% of your users, there is no need to install it for them; you have it on demand. The way you organize the code is in Gradle.
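As a sketch of that Gradle organization, with hypothetical module names: the base app module lists its dynamic features, and each feature module applies the dynamic-feature plugin and depends on the base:

```groovy
// app/build.gradle -- the base module
apply plugin: 'com.android.application'

android {
    defaultConfig {
        // Needed if minSdkVersion < 21: on pre-L devices the feature
        // dex files are fused into one APK, which can push it past the
        // dex limit even if no single module needs multidex on its own.
        multiDexEnabled true
    }
    // The base must list all of its dynamic features by Gradle path
    dynamicFeatures = [':feature1', ':feature2']
}

// feature1/build.gradle -- an on-demand module
apply plugin: 'com.android.dynamic-feature'

dependencies {
    // Features sit on top of the base and depend on it
    implementation project(':app')
}
```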
You have a plugin called dynamic-feature. The layout is different compared to a library: normally, the application depends on the library and refers to the code of the library. Here, the features are actually sitting on top of the base and depend on it. We can see here the two dynamic feature modules apply the dynamic-feature plugin and have a dependency on the project app; that is the base one. But the base also has to refer to the dynamic features: it needs to list them. I will show you; it is a bit unusual. You list the features as the Gradle paths of all the dynamic feature modules. If you have used instant apps since last year, this probably looks similar. There, it is the feature plugin, used for both the features and the base feature, except the base has a small flag, baseFeature equal true. The base feature also has to refer to all of the features, using a slightly different DSL, which we will probably migrate to the dynamic feature version soon, but for now you do it using the feature configuration. You probably heard that in the future you will be able to instant-enable an app that ships as a bundle. This is why: the feature plugin and the dynamic-feature plugin are similar, sharing probably over 95% of the code. So if you are architecting your app, switching would be the same architecture with slightly different plugins; if you are doing dynamic features now, allowing that to be an instant app would be similar in the future. I mentioned the bidirectional dependency: the base app needs to know about the features, and the features depend on the base. If you are building the bundle, there is a bundle task in each module, but the one that matters belongs to the base app module; that is the one you use. When you build the bundle, we get the features from the base module, so we need all the features listed as dependencies of the base module. If you don't add a feature to the base, it won't

show up in the bundle at all. >> XAVIER DUCROHET: So when developing dynamic features, it is important to understand the differences in how things build. There are values in the variant which must not change between the base module and the features: the application ID, version code, version name. All of those should be set in the base module, and we publish them back to the features so they can be used in their tasks, for instance when generating the manifest. It is important to understand that you set them in the base; you do not repeat them in the features. We will automatically transfer them. Okay. Let's talk about multidex. Say you are in a situation where you are not using features but doing the bundle thing. It is the same behavior as before: instead of shipping an APK, you ship a bundle, which is uploaded, and dynamic delivery will not do much. It will take the dex files and store them on the device before execution. So, same behavior. If you have features, there is not a lot of difference: basically, each feature is compiled and dexed and can itself become multiple dex files. Everything is packaged inside the bundle, and each feature is delivered to the phone as-is; not a big difference. By default we enable multidexing. Things get more complicated if your minSdk is less than 21. You don't need legacy multidex if you are deploying on a device that is 21 or above, but if you are targeting and deploying on, say, API 15, we need to do something specific. What happens is that dynamic delivery will fuse all of the dex files into a single APK that will be delivered. This is important to realize: even if none of the modules, the base module or the feature modules, uses multidexing itself, because we combine those into a single APK, the result may actually go over the dex limit and may require multidexing. So you need to manually enable multidex in the base application. What that will do is basically create the legacy multidex support information that we ship along with the dex files in the bundle. Dynamic delivery will take the bundle and, if you are trying to deploy on an API 15 device, use that information when fusing, so the extra dex files load correctly. All right? Shrinking and obfuscation. Configuring it is the same as before: you set it on the base module, like on an application; you say minifyEnabled, not a lot of changes. But how we implement it is really, really different, so let's go through the motions here. First of all, this is only supported with R8, the shrinker I talked about earlier. What we do is take all the keep rules, which can be defined close to the classes they apply to, in the base and in the features, and we merge all of them into a single set. With the class files, we feed this into R8. R8 creates a combined dex output, one big dex result with everything. We then feed that into a dex splitter, which splits the dex files for the features and for the base. It is a different mechanism than before. Now we have dex files that are shrunk and obfuscated, and there are two scenarios possible: you are using the bundle, or the APKs. If you are using the bundle, it is simple: they are automatically packaged into the bundle, as we saw earlier. If you are building feature APKs, the dex files are published back to each feature and then packaged into its APK. All right, it is a slightly different mechanism than we have seen in the past, with a lot of going back and forth; this is why we need the bidirectional dependency between features and app. >> XAVIER DUCROHET: We also wanted to talk about things that are not in 3.2. We have been working on them for a while and we expect they will ship soon; we want to give you a heads up and get feedback as soon as possible. This is how we process resources today.

If you have a library 2, in the current flow, we process its resources and just generate an R.java, and nothing else. When this is consumed by another library, library 1, which may be referencing resources from library 2, then to validate all the resource references, and also to implement the overlay mechanism, we merge the two resource folders into a single one, process it, and generate another R class. Actually we generate two: one for the library itself, which contains all the resources from library 1 and library 2, and another one, which is just a new version of the R class of library 2, in case you have tests that need the final-ID version of it. Then that is consumed by an application, and we do the same thing again: we re-merge the app, library 1 and library 2, process all of that, and generate the final resource binary format as well as all the final R classes that are needed by the code of the app, library 1 and library 2. As you can see, this is not exactly efficient. This is probably one of the last things that is very inefficient in the way we build things. The resources of library 2 are merged and processed three times, and we generate a lot of classes: in fact, we have seen projects with a few hundred modules generating several thousands of R classes, which is too many. We need to fix this. Here is the new flow we are working on and what it looks like. When we compile library 2, we compile its resources into the binary format. We can't make this work without AAPT2; AAPT2 is the future. We create the R class directly as bytecode rather than source code, so we don't have to compile it. When we consume it in library 1, we only process the resources of that module, and then we use the output of the previous module, the same way we basically do with javac. It is mostly there to validate the resource references against packages that only contain the resources. There is no longer a merge.
And the R class of that library only contains the resources of that library; it does not contain the resource IDs of the libraries below it. We don't merge those anymore. When the app depends on library 1 and 2, it is almost the same, except it needs to link everything into the final binary and create the final R class. You can see there are a lot fewer classes, and it is more efficient. Say you edit a string in library 2: there is no reason to compile library 1 again. The list of IDs that library 1 compiles against is exactly the same one, no changes, so we can use the same kind of compilation avoidance that we use for Java, and we don't need to recompile library 1 or the app. All we have to do is the final link, and that's it. We can do that also because the R class for the app does not use final IDs any more. Let's look at some of the impact of this change. First, I said that the R class of each module only contains the resources of that module; it doesn't contain all of them. If you want to reference a resource from another module, you use the R class of that module instead of the R class of your own. You are likely to import more than one R class. We are looking at ways to make that nicer: there is no reason for the R class to be named R, it could be anything else, and each library could define its own name to reduce the amount of conflicts. The R fields are not final, even in the app, which means you can't use them in Java switch cases any more. I also mentioned we are not doing resource merging any more. If you have a module that redefines a resource present in a library, that won't work any more. There is a good reason for that, but we definitely want to hear from you if you are relying on it in a strange way. If what you are doing is, in the debug build of the app I want to override an icon coming from my library, you can do that inside the library itself: have

a debug version of the icon there. The app will then contain the debug one, which was not the case before 3.0. That solves that problem. If you have another use case, talk to us; we definitely want to hear about it. So, I mentioned namespaces, and I said we don't have a resource merger any more. Say the application depends on two third-party libraries, and both have a resource with the same name but a different value: a string, a layout, or something like that. In the past, the resource merger would pick one: the first one defined has a higher priority, is merged on top of the other one, and replaces it, which is probably not what you want. So here is what happens if you want to package both. Both are compiled in their own namespace. We package that, and the R classes from those libraries will have the same field name, with different values; interestingly, they will point to two different resources, which is much better. When you reference a resource from a library, you have to use the namespace of the library; using the R class of that library is how you qualify it. You don't have a single R class with all the IDs. In XML it is different: you have to declare an XML namespace that points at the library and use it directly in the resources, which is very similar to when you write android colon something, where android is the namespace. For attributes you do the same thing: you now have to say what the namespace is. So you declare the namespace of the library, and then you can use its attributes and resources through it. We are hoping to bring this in a 3.3 canary. We will enable it with a flag, and we will include refactoring support in Studio, so all of those references will be changed for you.
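As a rough sketch of what a namespaced reference might look like in XML, assuming a library whose package is com.example.library1 (the package name is made up, and the exact syntax may change before the feature ships):

```xml
<!-- Declare a prefix bound to the library's package... -->
<LinearLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:lib1="http://schemas.android.com/apk/res/com.example.library1"
    android:layout_width="match_parent"
    android:layout_height="wrap_content">

    <!-- ...then qualify resources from that library with the prefix -->
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@lib1:string/title" />
</LinearLayout>
```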
We want a lot of feedback; it will not be turned on by default in 3.3 or a dot release. There will be a version where this changes, but we want a lot of feedback first. >> JEROME DOCHEZ: Thank you. The last thing we want to talk about is the public API. We are painfully aware that the public API we provide is not very good. The plugin DSL is not specific to the module type: settings that make no sense in some module types are still there. There is no access to the merged manifest, class files, dex files, you name it; you can't do it in a nice way. The variant model doesn't expose information coming from the DSL properties; you only have access to part of the truth. There are too many things that require internal APIs and hacking tasks, and the separation between public and internal API is not well defined: you don't know when you switch from one to the other. It is a mess. To give you an example, this is how the configuration works today. We have the DSL parsing first, then we create the variants, then we create all the tasks for the variants, and eventually we have the variant API. This is where your scripts are running. The problem is right there in front of you: it is happening after the tasks have been created. So there is no way to change anything that would influence task creation. Say you want to switch from legacy multidex to regular multidex: you can't do that, because we have already decided. You want to change the DSL: you can't do that either, because we created the tasks, and chances are the inputs have already been finalized. It is happening too late, and eventually hacks creep into a system running later. There is no way to do a lot of things here. So what people do is go find the tasks and try to figure out what they generate. It is a mess. What we would like to do is have a small Gradle API artifact that is the only one to depend on. If you

depend on internal ones today, you should talk to us: tell us what your needs are, why you are doing this. Don't think that we're not listening. We are interested in figuring out why you need to customize things, and in providing you a good solution. One of the things we are looking at, for instance, is to give you access to all of the intermediate artifacts. All of the intermediate artifacts really have a type: classes, merged manifest, dex files, resources. All of those should be accessible and capable of being replaced. You should be able to say: give me the merged manifest, I change it a little bit, and I give you back a new version, and that is the one the build should use to continue. They should be replaceable, appendable; maybe you want to add class files directly. Eventually, all of this should be done through well-defined extension points. We have started working on it, and we would like to have feedback. The proposed phases would be: first, DSL parsing, then we invoke your custom code, where you have the ability to change the DSL programmatically before we create variants. Then we lock the DSL; you can't change it any more. We create the variants from that information, and from the variants we call your code through the variant API. That is the ability for you to change variant-related information. Now, be aware that this is happening before task creation: there are no tasks created yet. This is really the ability to say, this variant was using regular multidex, now I want it to use legacy multidex. Once these callbacks have run, we lock up the variant objects. So again, if you use this later on, after evaluation, it is too late; it will throw an exception. Then we create the tasks, and we lock them up immediately.
We will not allow you to modify the tasks any more. You have to go through the buildable artifact API to inject, replace or append, not through the tasks, because we replace them, we merge them, we delete them, we split them to make features or make things more efficient. Eventually, the build runs. Compatibility will be a painful subject. We will try to keep the DSL mostly unchanged, and we will make an effort to stay compatible; unfortunately, we think the variant API will change heavily, since we realize it was not so great. Hopefully you will be sympathetic to the idea that you will have to change the scripts using the variant API. So we want to hear from you; it is important that we get feedback from the community. I want to thank you for coming tonight. I know it is a late session; it was very nice to see so many people here tonight. Thank you. >> XAVIER DUCROHET: Thank you. (Applause) >> Thank you for joining this session. Brand ambassadors will assist you with — (Session concluded)
