
[MUSIC PLAYING] CHET HAASE: Good morning, and welcome to Android for Java Developers. This talk is a wrap around a series of very long articles that I hope everyone will read eventually, called Developing for Android. So if you go to medium.com and search for that phrase, or if you just go to the internet and search for that phrase, you'll wind up on the series of 10 articles. These came out of an incredibly long doc that we wrote internally at Google, when we were faced with a bunch of app developers we would meet with on a regular basis. And someone said, so every time we talk to the framework team, and we say we're doing this thing, the framework team says, no, no, no, don't do it that way, do it this other way instead. And then they'd go away and they'd do that. And they'd come back with another question. And we would say, no, no, no, you shouldn't do that. What you should actually do is this other thing instead. And they said, OK, so where do we go for the information that sort of collects all these tips, and tricks, and techniques, so that we can stop getting slapped by the platform team for the way that we're approaching this stuff? So that was the genesis of the article. We decided to actually take a step back and say, OK, what are all the things that we think are obvious, but typical Java developers do not?
And the problem is that a lot of developers, most developers, writing Java code at Google come from the server world, or even the desktop world, sort of the traditional Java platform. And then they say, OK, well, this is the same language, so I'll carry over all my patterns and practices from that other world into this little, tiny mobile device with a constrained CPU, GPU, limited memory, limited bandwidth, a very, very different programming environment. Really, the only thing that is the same is the programming language itself. So the way that you use that programming language is very important, in terms of the performance and the experience that you're going to get in the end result of your application. So we wrote this huge article. We put it out internally. And then, for the short-attention-span public, we broke it into 10 still very long articles on Medium. So please check those out. In the meantime, I wanted to walk through some of those today. If I tried to go through everything, this would be about a three-hour talk. We tried this once. We gave a talk at a user group in the Bay Area, and we put together a version of this talk. And we made it about a quarter of the way through in an hour and 20 minutes. So yeah, actually, three hours is an underestimate as well, because each of the topics, they're little, tiny bullets, but they're kind of deep crevices that you can go on at length about. So I thought, instead today, instead of trying and failing to cover everything that those articles cover, I wanted to talk about the memory situation, because a lot of the performance concerns and a lot of the problems that people hit are in the specific area of memory. That's really where a lot of the performance problems and bottlenecks come from, partly because the runtime and the garbage collector are vastly different than what you might expect. So I wanted to start out first by talking a little bit about how garbage collection and allocation actually work on Android. And
then we'll talk about some of the tips and tricks about memory in particular. And then maybe we'll talk about some other stuff as well. We'll see how far we get. So first of all, I wanted to mention that one of the important things that we see, in terms of device usefulness or application niceness, is what we call the Tragedy of the Commons. And this is an effect where every application will work in its best interest. It will be necessarily greedy. Because obviously, if someone has installed my video-playing application, it's really important to them for the user to actually watch the videos in my application, so I'm going to sync all the time. And I'm going to make sure they get the most up-to-date data. And I'm going to allocate tons and tons of memory so that I have all the stuff cached and it's just ready to go all the time. The fact is, the user has many, many applications installed. Yours is one of them. And if it's too greedy, yours will not be one of them for long, right? If every application acts in this way, and they do tend to by default, then it makes the overall device experience suck, right? If every application is allocating too much memory, if it takes up too much heap, if every process is huge, then the task manager will continually be killing all of the background applications in order to make the foreground application have enough memory to run itself. And then, when your rather greedy and bloated application goes into the background, and the task manager says, well, I'm running out of memory, I'd better go look for some large memory users, it's going to find yours, and it's going to kill it. And the next time the user goes back to it, it's going to have to relaunch and do a cold start, because it was killed, right?
Wouldn't it be nicer if all the applications were more slim and trim, so that whenever the user switches between them, they can do so easily, because they just have to page them in, instead of actually reloading and relaunching everything from storage? So there are two effects that we see. There's the tragedy of the commons, where everything is greedy, and then there's the "every device a village" effect.

So not only are all the applications greedy, and they are suffering for it, but then the overall device is suffering for it. Now, the user experience from all these greedy applications on the device is that they're constantly relaunching all of the activities there, or some service running in the background is syncing, causing performance problems and the CPU to be doing things for absolutely no reason whatsoever. I'm not using that service right now. I haven't used that e-reader for a month. Why is it actually syncing in the background when I'm trying to use another application in the foreground? So these two effects are very interrelated and contribute to the overall user experience of Android. So the goal of this article, and of these tips and practices in general, is to help application developers write better applications, so that we can make the overall platform better. So let's go talk about memory. There are three dynamics to be aware of. The first is that memory is limited, and it is far more limited than you might realize. Living here with our less constrained financial resources and more available technology, we tend to think of one gig as being the low end. You know, there are devices with two gigs or more readily available, so surely, this is what people have in the real world. In the real world, low-end devices are, A, still being used, because they were sold several years ago, and people don't just ditch devices for new ones, unless they're like the people in this room. Maybe they don't actually have enough money to go buy a new phone whenever they're excited by it, so they may keep that device around a lot longer than you, the developer, want them to. So they're running with a low memory device, because they bought it a while ago. But the more insidious dynamic is that there are still devices being sold with low memory. In markets where money or technology is not as readily available, they may be selling these phones to people that are really excited about getting this new device that
only has 512 on it, right? Not much memory. It's a new phone. What are they doing coming out with that amount of memory? Well, this is the configuration that made sense in that market, and there are a lot of these phones still being sold out there. So even though you are running a device that you consider high end, and you think the memory problems have gone away, they actually haven't. And they're going to be here for a while because of those two dynamics. So it's very important for you, the developer, to realize there are these low memory situations and to make sure your application behaves reasonably well when one of those situations occurs for the user. The other issue, as I alluded to before, is that memory is shared. Everybody is swimming in the same pool. I don't really like that metaphor. So everybody has to behave, if you want the overall experience to be good. If your application is greedy, and every other application is greedy, the entire experience of the device will be horrible, because everybody is constantly going to be shoving everybody out of CPU cycles, as well as memory. And then, finally, memory equals performance. And this is the one I want to spend a little bit of quality time on today and explain what we mean by that. So first of all, let's talk about memory. Let's talk about the garbage collector and how it actually works, and why we see some of the performance issues that we do. So there are three things that cause memory to be expensive, in terms of performance, on Android. One is the allocation, the process of actually creating memory for the new objects that are being allocated. The next is concurrent collection, or just collection in general, when we actually need to clear things out so that we can make room for other allocations that need to happen. And then the third tends to be the most painful one, which is collection-for-allocation. If anybody's taken a look at the log and looked at the GC information in there, this is commonly referred to as
GC_FOR_ALLOC. It's a situation where you go to allocate something, and there is no free space in the heap, and then the garbage collector needs to run synchronously, right then, to free up enough memory to store your object there. So let's go into some of the details there. We can see the four phases of normal allocations and collections. Here, you have this new object on the left, next to the squiggles, and we need to find space for that. Oh, fortunately, there's a space right there. We'll pop it in at the top. This is an unusual situation where it was really easy to do that. So once that's there, then we enumerate all the objects to figure out what's still referred to in the heap. You can see the red one doesn't have a reference there. We mark all of those, to make sure that we know what things are still referred to and what things can actually be collected. And then we collect. And you see the red object goes away there, because there was no reference to it, so we can collect that now and free up space in the heap. So on Dalvik, there are actually two pause times that occur in the normal process of simply allocating and collecting. There's a pause time to enumerate. So we basically stop the world; the GC thread runs alone in the process. Everything else is paused, while it figures out what the things are in the heap during the enumeration phase.

And then there's the mark phase at the end of that. So it'll mark all the things, to figure out what doesn't have a reference there. And then it needs to run one smaller mark phase at the end of that, in case there were allocations while that concurrent marking was going on, and there's another pause there. So there's a large pause, potentially, in the enumeration phase, and then there's a smaller pause at the end of the mark phase. And then it can collect everything, and it does that concurrently. In ART, it's a little bit different. We eliminated the first pause, so there's no synchronous pause during the enumeration phase; it can do that concurrently. There is still a small pause at the end of the mark phase. It's a little bit smaller, because there's been a lot of optimization work going into the collector in ART. So it's better in ART, but we still have a pause there. And I should point out, too, that even when there's not a pause here, there's still stuff happening, right? So all of this is causing the CPU to do things in the background. I just turned on the audio on my screen, so look out. Even if it can happen concurrently, it still means it's happening, right?
You're still taking up cycles to do these things. And in general, it's good to not spend the cycles, if you don't have to. OK, so now, let's talk about the one that's more problematic. This is the GC_FOR_ALLOC situation, where a new object comes in, and we pop that onto the heap. And then another one comes in, and we walk down the heap and we look for space in there, and there is no space. And then we have to actually run a GC synchronously. We stop the world. So we've got a huge pause in the middle, while we actually go and free up the stuff that we need to in order to fit this into the heap. And then we can put it in there and go on about our business. And the GC_FOR_ALLOC tends to be painful, both in terms of time (on Dalvik, this can take 10 to 20 milliseconds, which is easily more than a frame, so you're going to potentially skip an animation frame in the middle and cause a hiccup), and then they're... I totally lost my train of thought. Yeah. Just gone. It turns out you should sleep at night. So it also means that none of your other stuff is running, right?
So it's just going to pause in the middle. It's going to do all this work, and nothing else can happen at the same time. So in general, that's a good thing to try to avoid. ART makes this a little bit better. No, a lot better, because it has a separate heap for large objects. One of the causes of having to do too many collections was that all of the objects were stored together. So you'd have these little, tiny temporary objects, a new object, or float, or integer, or whatever, and then you'd have this bitmap taking up a massive amount of space in the middle. And it would cause the heap to get really large, the amount of space that would have to be walked to be really big, and the fragmentation issues to be much greater in the heap. Now, ART takes all the large objects, all the bitmaps, and they live in a separate piece of memory over there. So all the big ones go out there; all the little, more temporary objects go in the main heap. And it means that there are far fewer pauses. It means also that the pauses are much smaller. So whereas the previous Dalvik pauses may have been on the order of 10 to 20 milliseconds, now we see pauses of 3 to 5 milliseconds, which is much better. It's way under the frame boundary limit, which is great. But it's still significant enough that it can push you over the frame boundary limit anyway, so it's still a good thing to avoid. Meanwhile, while all this churning is going on, of actually allocating objects and needing to collect them to free up the space for them, you're also growing your heap. The more allocations you're asking for, the more you're causing the heap to grow over time. So if you just allocate more and more objects, eventually the heap is going to say, well, I'm out of room. But you're not up to your process limit yet, so I'll grow the heap there. And so it will go through, A, the work to do that, and B, it'll take up more memory on the device to allocate that larger heap. And a larger heap means, now, your task, your process, is
taking up more memory. There's less available for the rest of the system. And you're also causing your app to be killed, because you're going to be taking up more space. When the task manager is looking for background apps to kill, it's going to look for large ones, because that's a lot of memory that it could take back. An important point to note about this is that, under Dalvik, there was no compaction. This causes big problems with fragmentation in the heap, where you would allocate these temporary things over time, and then you would remove the things that weren't there anymore. But the things that were permanent, or long-lived, would still be in the heap somewhere, and we could never actually get back that memory. There is a certain amount of trimming that happens under Dalvik where, if we get rid of enough of the objects that occupy an entire page, eventually, we can hand that page back. But in general, you're stuck in this situation where you basically grow without bound.

The heap gets larger, and we can never really get back all of that space, even if you only grew it to a large size very temporarily. Under ART, this improves, because we do actually compact the heap eventually. When your app goes into the background, when the runtime senses that it's an idle time in which it can do this operation, then it'll take a look at the heap and realize, well, there are a lot of objects that went away in the meantime; we can compact the heap. So it gets better under ART, but it's still a problem, especially if you're the foreground application; we're not going to compact the heap at that time. All right. So there are a few points to come out of this. When you have fewer allocations, you get a smaller heap. You also get faster allocation times, because there's simply less work to do to figure out where the free space is. You get faster collection times, because again, smaller heap, fewer things to keep track of. You get fewer pauses, because there's less to do over time. And there's less CPU usage, because you're not causing the CPU to actually continually do this mark, and sweep, and collect in the background. And then, overall, you get less jank, especially for the GC_FOR_ALLOC. If you're constantly allocating things, and then you run into a situation where there's not enough free space for a new object, then you're going to cause a GC_FOR_ALLOC, which, in general, will cause jank. It'll cause you to miss a frame, particularly on Dalvik, but also on ART, because you're basically causing a whole lot of work to happen at a super inconvenient time. And then, all of this, in general, I would posit, leads to happier users and world peace. I leave the last item as an exercise for the reader. OK, so let's talk about some of the tips and tricks about using memory more effectively. Romain Guy and I gave a talk years ago at Devoxx (it's up on parleys.com) that goes over a lot of details here, like some of the sizes and quantities involved. So I would encourage you
to check out that video. It has a lot of the details behind some of these items. First of all, avoid allocations when possible. One of the things that we've seen (and there's actually a lint check for this now) is: don't allocate in the middle of your inner loop. In particular, if you're in onDraw, and you realize, well, I need to draw to this canvas, and I need a Paint object, let's get a Paint object. So we'll create a new Paint object, and we'll do this. Incredibly common. So there's a lint check for that specific pattern, just because so many people are doing this. It seems dumb. You're like, how big can it be? There's only five letters in the word "paint," right? It turns out it's a problem, for two reasons. One, which is not as obvious, is that the Java-level object that we're allocating, the Paint object, is kind of the tip of the iceberg. We're also allocating stuff at the native level, which then needs to be finalized and collected later, which is kind of an arduous process to go through as well. I'll talk about finalizers later. So there's a lot of stuff that's happening underneath Paint that you're causing to happen, just by allocating a temporary Paint object. The other one is the churn that I was talking about. If you are in onDraw, and your onDraw is being called on every frame of an animation, and on every frame you're allocating a Paint object, well, at some point, that means that your heap will fill up. Maybe not in that animation, but maybe in a future animation, at some point, you're going to go to allocate that Paint object, and it won't find enough space. And then, it's going to have to make the space. So you're in the middle of an animation, you're in your draw loop, you're on the UI thread, and GC says, you know what? I'm going to have to collect right now. Would you please hang on for a few milliseconds?
And that's what causes jank for the user. So don't do that, when you don't need to. One of the strategies that we use internally in the framework, that I would encourage people to look into in specific inner loop situations like this, is caching objects. If you ever look at the source code of the framework, like view.java, or any of the core classes there, we will keep around either instance fields, or in some limited cases, static objects, that only get allocated lazily. So when we see, the first time, that, OK, we're going to need a Paint object, or a Rect object, or a Point object, for this particular call, if it's null we'll go ahead and allocate it then. And then, thereafter, we'll just use that shared object in a lot of cases. So in the onDraw situation, chances are you don't need that Paint object for anything else in that class. So you could have an instance variable (or a static, if you want to manage it that way) that only gets used when you're actually in that onDraw method. So it seems a little silly; you might as well have a local variable. But because of the memory concerns, you really want a cached object instead. So allocate it lazily, and then use it whenever you need to in that specific method. You have to be careful with this. Obviously, if you're actually accessing that shared variable, either instance or static, from multiple places, that can get a little tricky.
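The lazily-allocated cached object described above can be sketched in plain Java. The StringBuilder here is just a stand-in so the sketch runs anywhere; in real Android code the cached field would typically be a Paint or Rect, and the hot method would be onDraw.

```java
// Sketch of the lazily-allocated cached-object pattern: allocate the
// temporary object once, on first use, then reuse it on every frame
// instead of re-allocating it in the hot path.
class FrameRenderer {
    // Allocated lazily, reused thereafter. In framework code this would
    // be something like: private Paint mTempPaint;
    private StringBuilder tempBuilder;

    String drawFrame(int frame) {
        if (tempBuilder == null) {
            tempBuilder = new StringBuilder(); // first call only
        }
        tempBuilder.setLength(0);              // reset instead of new
        tempBuilder.append("frame ").append(frame);
        return tempBuilder.toString();
    }
}
```

As the talk notes, this is only safe while the cached object is used from one place (here, a single method on a single thread).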

So it's not a blanket pattern. Like all of these tips, these are not hard and fast rules. But in general, it's an approach that can avoid the kind of insidiously expensive allocations that we see in inner loops. Pools: object pools are something that traditional Java developers, certainly server developers, walked away from years ago. You know, let the memory manager manage your objects for you; you don't need to. But because of the allocation concerns that we have on Android, sometimes it's a good idea to actually do this. If these are expensive objects to allocate, maybe it's better to actually have a small pool of these things and keep them around, instead of having to reallocate one every time you need it. This also can be tricky. If they're being accessed from different places in the code, then there's a bit of management overhead to go with this. This is not as easy a thing as the cached approach I was talking about. That's easy; there's just a single field to manage. Object pools, there's a bit more to it. There are things like LRU caches that you can use to make this easier. But figure out the right trade-off for your code. Arrays: ArrayList is pretty good. I tend to use it a lot. It's one of the nicer and more streamlined collections for storing stuff. You keep adding to it, and it'll reallocate when necessary. But if you just have a statically determined size collection that you need, an array itself tends to be more optimal than ArrayList, right?
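A minimal object pool along the lines just described might look like the sketch below. PointPool and its nested Point are hypothetical names for illustration; the support library's Pools.SimplePool works along similar lines.

```java
import java.util.ArrayDeque;

// Hypothetical bounded object pool: reuse instances instead of
// re-allocating them every time one is needed.
class PointPool {
    static final class Point { int x, y; }

    private final ArrayDeque<Point> pool = new ArrayDeque<>();
    private final int maxSize;

    PointPool(int maxSize) { this.maxSize = maxSize; }

    Point acquire() {
        Point p = pool.poll();
        return p != null ? p : new Point(); // reuse if available, else allocate
    }

    void release(Point p) {
        if (pool.size() < maxSize) pool.push(p); // keep a bounded number around
    }
}
```

As noted above, the management overhead is real: every caller must remember to release, and nothing here is thread-safe.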
It doesn't need to allocate things in there; it's just got the array itself, and then the objects that you put in it. So consider using arrays. They're just a bit more streamlined and optimal, and don't do as much churn for reallocation as collections would do automatically. Speaking of collections, I would encourage you to check out the Android collections. The traditional Java programming language collections are all very powerful and useful, and they're probably still the right thing to use for large collections. Like HashMap: awesome, if you have a really large amount of data that you need to store. But check out ArrayMap instead, if you actually just have a smaller collection. It avoids a lot of the boxing, as well as the allocations, that are inherent in HashMap itself. So there's a bunch of collections in Android. There's ArrayMap. In the support library, there's SimpleArrayMap. Then there's SparseArray. There's actually a bunch of sparse things, LongSparseArray, SparseIntArray, SparseLongArray, lots of different combinations. But they basically use primitives as keys, instead of the auto-boxed Java language versions of those: not capital-L Long, but instead a primitive long. Wouldn't that be nice? Methods that mutate: this is another pattern that I think traditional Java programmers walked away from years ago, maybe holding their nose. So let's say you want to pass in an x,y, and you want to get a Point returned, because you need that Point data structure to pass into some other method, right? So you have this utility function. I'm going to pass in x,y; I'm going to get a Point back. Really stupid example. You could create your Point on your own. But to illustrate the point... [LAUGHTER] Ow. So traditionally, you would pass in x, y, and it would allocate a Point, and it would pass it back to you. Not a big deal. It's just a temporary object, right?
Again, if you're in your inner loop, allocating is, in general, bad. Wouldn't it be nice if you didn't have to reallocate that thing all the time? This is just a temporary object. So instead, what you can do is keep a cached object in the caller, and then call a mutating version of that method instead. So instead of passing in x,y, maybe you pass in x,y and a Point data structure, and then it fills in the data structure. And that gives you the option to allocate it on the fly, if you really want to, or to keep it and reuse a cached object instead. So if you look through the Android source code, especially in the framework, you'll see a lot of instances of this internally, where we'll pass in a Rect that gets filled in during layout or whatever. And this is specifically to avoid the temporary allocations that are necessary to pass back richer data than simply a single primitive return value. And speaking of primitives: we have primitive types, and we really like them in Android. In general, if you use primitive types, you're going to avoid all the boxing that's inherent in using the object equivalents. So if you write a method that takes a capital-F Float, and you're running an animation or doing a calculation that has a primitive float, when you call that method, it's automatically going to box. It's going to create a capital-F Float out of it. There's slightly more overhead in getting the value of that; I'm not as concerned about that. I am concerned about creating garbage, creating these small, temporary objects, when you really didn't need to.
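The mutating variant described above might be sketched like this. Geometry, fillMidpoint, and the tiny nested Point class are hypothetical stand-ins for the framework pattern of filling in a passed-in Rect or Point.

```java
// Two versions of the same utility: one allocates a result object on
// every call; the mutating one fills in a caller-supplied object, so the
// caller can reuse a single cached instance in an inner loop.
class Geometry {
    static final class Point { int x, y; }

    // Allocating version: creates garbage on every call.
    static Point midpoint(int x1, int y1, int x2, int y2) {
        Point p = new Point();
        fillMidpoint(x1, y1, x2, y2, p);
        return p;
    }

    // Mutating version: no allocation; the caller owns the Point.
    static void fillMidpoint(int x1, int y1, int x2, int y2, Point out) {
        out.x = (x1 + x2) / 2;
        out.y = (y1 + y2) / 2;
    }
}
```

The caller keeps one cached Point and passes it to fillMidpoint on every frame, instead of receiving a fresh temporary each time.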

So as much as you can, stick to the primitives. The collection classes, obviously, use the object types instead, so you can't really get around it there. But for all of your internal methods that aren't using collections, there's no reason to not use the primitives instead. They just tend to be a lot more streamlined and avoid the memory situations that we're talking about. There are a lot of language things that you can trip over, without really realizing why. Iterators are one of my favorite examples. So I love the for-each syntax. I don't know why it's called the for-each syntax, since there's no "each" in the language. Somehow, we adopted that way of referring to it. And then you go looking for the primitive, and it's not there. They should just maybe call it a for. So it's very convenient. It came out in JDK 1.5. You say, for object O in this thing, and then it iterates through that collection. Very convenient. What's going on under the hood, though: this is syntactic sugar. It is creating an iterator for you, right?
So it's allocating that object, and then it's doing the normal iteration, using that iterator object. The iterator approach was kind of ugly and obtuse; I don't think people enjoyed using that. The for-each works around that. It makes a nice streamlined thing, but it's still doing the same thing under the hood. In particular, if your collection is empty, it's going to create an iterator anyway. It does not know that it's empty, until it creates the iterator and then tries to get the first item. And then it says, oh, you're empty. So one of the optimizations that we did along the way to creating the new animation system in Honeycomb was to eliminate all the allocations that were going on in every frame. So we have this new animation system. We use this tool internally, called Allocation Tracker. I would encourage everybody to use this, to make sure that you're actually using memory correctly. So I would start the animation. I would start allocation tracking. I would collect the allocations. And then the animation would finish, right? I don't care if we're allocating at the beginning and the end. To some extent, that's unavoidable, right?
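The iterator point above can be seen in a small sketch: the for-each form compiles down to iterator()/hasNext()/next(), so it allocates an Iterator even when the list turns out to be empty, while an indexed loop over an ArrayList allocates nothing. Class and method names here are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

class ListWalk {
    // Sugar form: under the hood this calls list.iterator(), allocating
    // an Iterator object on every invocation (even for an empty list),
    // and unboxes each Integer as it goes.
    static int sumForEach(List<Integer> list) {
        int sum = 0;
        for (int value : list) {
            sum += value;
        }
        return sum;
    }

    // Old-style indexed loop: no Iterator is ever allocated.
    static int sumIndexed(ArrayList<Integer> list) {
        int sum = 0;
        for (int i = 0, n = list.size(); i < n; i++) {
            sum += list.get(i);
        }
        return sum;
    }
}
```

Both return the same result; the difference is only in what gets allocated per call, which is exactly what matters in a per-frame loop.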
This is Java. It's a garbage-collected language. You're going to allocate objects. What you don't want to do is allocate during the animation, like during the actual frames. And then I saw that, on every single frame, we were allocating objects for listeners. What was going on was we had this ViewTreeObserver, where you can listen to various events going on: layouts, pre-draw, draw, the things that are happening in the view hierarchy. And on every frame, we would say, for each ViewTreeObserver listener in this collection, and then we would iterate through, and we would do something. In general, nobody actually added a listener. It's not typical to have a listener there. But we would create an iterator on every single frame anyway, because that's what for-each did for us. So there's one specific case where it doesn't do that, which is, if it's a primitive array, it actually does the right thing and will not create an iterator for you, so yay. But otherwise, it's kind of a good thing to maybe avoid, if you're not really sure whether it's going to create things. You can certainly go back to the old approach of doing an actual for loop instead, and just get the items in the array. And then it doesn't advance to the first one, if it didn't need to. Or if you use it, just be aware of when it's actually going to cause an allocation when you don't want it to. Wait. Let's go back. Yeah, this one. So let's talk about enums. No more. I'm really tired of this topic at this conference. Go to the article and read. There are nuances to using them. Please understand the overhead inherent in enums and make the right decision for your code. Moving on. Finalizers. [LAUGHTER] So one of the things that's not obvious about finalizers is that nuances in the language spec mean that, to finalize any object, we actually need to do a GC twice. So whenever you have a finalizer declared on your class, you're basically forcing a future GC twice. And you kind of want to avoid every single one, so why are you forcing two on
the system? There are particular situations that really require finalizers. We do use them internally, specifically when we have native objects that need to be cleared. We need to know when that thing went away, so that, on the native side, we can go ahead and free the native memory associated with it. So that's a valid use case for it. But I would say, try to limit the number of valid use cases, and definitely eliminate finalizers when you don't need them in other cases. They may be convenient, but there are better ways, certainly from the memory standpoint, to do what you want to do. Static leaks: this is one of my favorites. I may or may not have caused leaks in activities in some of my code. I should do a deep dive into the way that HashMap or WeakHashMap works. So there was a situation where I needed to store information about listeners associated with views. And I said, well, I know that WeakHashMap uses a weak reference. So when that view goes away, which

is what I was using for my key, then I know that everything will be collected. Turns out that's not true. The way that WeakHashMap works, it actually had a hard reference to the key; it had a weak reference to the value, which kind of turns my head in knots. But the end result was that the activity would undergo a configuration change. So the phone would rotate, and it would tear down the activity, and it would come up with a new activity. And all that old stuff went away, except that I had a static WeakHashMap which had a reference to the view, which implicitly has a reference to its activity. Bad thing, in general. So beware of static leaks, in general. The real problem here (besides my ineptitude and misunderstanding of how a WeakHashMap worked at the time) was that the lifetime of the process is different than the lifetime of your activity. This is something I've hit again and again in Android. I tend to think, when a window comes up, when basically that application object that I'm working with comes up (the activity in my mind is sort of synonymous with the application; that's the first problem), surely, that's where all my static objects live. They don't. They live in the process itself. And the process is long-lived. So when you undergo a configuration change, we rip down the activity, and we pop up another one, and it's in the same process. So if you have a static object there, whether it's a WeakHashMap or something else that's holding on to something it shouldn't in the old activity, it will continue to hold onto it. So static can be the right thing to use in some situations, but it's really dangerous; just know that it's going to live a lot longer than the activity that you think you're storing things associated with. Static initialization is a good thing to avoid, in general, especially for expensive allocations or expensive operations. The problem that we see is that, when a class gets loaded and then does a bunch of static work, it's going to do
all that stuff right now. This causes a problem, for instance, when you launch your activity, right? So we're going to go. We're going to launch. We're going to try to launch as fast as possible. And then it loads this class, which does a whole lot of work that it really didn't need to. Why not do some of that stuff lazily? If you didn't actually need to initialize the database, or whatever, at that point, why don't you wait until a better time, instead of forcing it to happen immediately? OK, so third-party code. We've seen this one a lot, where, again, something the traditional Java developers will do is bring over their libraries and their approaches from the old world. Oh, I really like this dependency injection library. So a common one that we've seen is Guice. Really powerful. Very flexible. People love to use it. And then they'll start using that. It was not written for mobile, right? It does a whole lot of reflection. And I haven't really talked about reflection yet. Big secret: reflection sucks, OK? It has a lot of overhead associated with it– a lot of allocations, as well as just performance overhead. So in general, we tend to avoid it. Everybody kind of knows that, right? You don't really do reflection unless you need to. But a lot of these libraries that you're dragging in are doing it on your behalf. So the general tip here is to not use a library, or third-party code in general, unless you know that it was actually written for mobile. Because if it wasn't written for mobile, it's probably using a lot of the patterns that we're telling you not to use in your code. So why are you using them indirectly in someone else's code?
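The "do it lazily instead of in a static initializer" advice above can be sketched with the standard lazy-holder idiom. The names here (`openDatabase`, the `db-handle` string) are made up for illustration; the point is that the expensive work runs on first use, not at class-load time during launch.

```java
// Sketch of deferring expensive static work. The nested Holder class is not
// initialized when Lazy is loaded; it's initialized only when get() first
// touches it, so launch doesn't pay for work it doesn't need yet.
class Lazy {
    static int initCount = 0; // visible only so the example can be checked

    // Eager version to avoid: runs as soon as the class is loaded.
    // static final String DB = openDatabase();

    private static class Holder {
        static final String DB = openDatabase();
    }

    static String openDatabase() { // stand-in for an expensive operation
        initCount++;
        return "db-handle";
    }

    static String get() {
        return Holder.DB; // triggers Holder's initialization, exactly once
    }
}
```

The JVM guarantees the holder's static initializer runs at most once and thread-safely, which is why this idiom needs no locking.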
For dependency injection, there have been a couple of libraries written since that are more tuned to Android. There was the Dagger library. And more recently, there's the Dagger 2 library. I would suggest you check those out, if you really want that functionality. And in general, just look for libraries that– I wish we had a logo saying, "Mobile-Friendly." You could look for that logo. It doesn't exist, but look for the logo. Otherwise, just make sure that you actually know what that library is doing. The other problem that we've seen with third-party code is, if you're using a really large library, chances are there's a dependency graph in there where you're dragging in a lot of stuff that you don't necessarily need. Like, if you are using library foo because you really like that collection class for managing this particular thing, and then all of a sudden it added 20,000 methods to your method count and a whole lot of APK size, just so that you could use that one collection– probably not what you want in your application. So just be as concerned about your library code as you are about your own code. So there are mechanisms that Android provides to help you with memory concerns. One of them is trimming memory. So the system will tell you when it's getting low on memory. And you should really respond to that, because it's not just telling you, like, oh, by the way, I'd really love some memory. It's not a casual conversation. It's saying, I need memory now. Could you please free some? Because otherwise, bad stuff is going to happen. So when it goes out, it reaches out to the activities running on the system, the processes, and says, we're running low on memory. Can you do something about this?

And there are various levels of it, so you can set your warning and panic thresholds appropriately. But if you're keeping cached thumbnails around just in case the user wanted to do a fling– but right now you're running in the background, and you don't need those anymore– maybe it'd be a good time to jettison those. Because if you can make yourself smaller, then maybe the system can get back the memory that it needs, so that it doesn't have to go killing activities, like yours. So pay attention to the trim callbacks, and do something about them. isLowRamDevice() is a method on, I think, ActivityManager that tells you whether the system– at the moment, it means the device has 512 MB of memory in it. So if you really need more memory to have the best user experience, but you also want to work adequately on 512-meg devices, then you might call this method and adjust the way that your application behaves accordingly. Avoid large heaps. There is a way to ask the system for more memory. And sometimes this is necessary. You know, you're a video-playing application where the videos simply won't fit into the standard heap, or you're doing image manipulation with massive images– whatever it is, there are some corner-case situations for which this was introduced. But it also tends to be a back door for lazy developers who are like, well, I just– but I want more memory. It's easier. Yes, it is easier. And it makes for a horrible experience, because the more you allocate for your process, the less everybody else gets. And then it goes back to the original point. So don't use it unless you actually really need it. Please? Don't keep your services running. They can continue to run in the background. But if they exist just for a particular reason, then finish that purpose, and then get out of them, right?
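The trim-callback idea above can be sketched in plain Java. On a device you would override `ComponentCallbacks2.onTrimMemory(int level)` in your Activity or Application; the two level constants below mirror the framework's values (`TRIM_MEMORY_UI_HIDDEN` = 20, `TRIM_MEMORY_COMPLETE` = 80) but are redefined here so the sketch stands alone, and the byte-array "thumbnails" are a stand-in for real bitmaps.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of reacting to trim-memory callbacks: the more urgent the level,
// the more the app gives back. Constants mirror ComponentCallbacks2.
class ThumbnailCache {
    static final int TRIM_MEMORY_UI_HIDDEN = 20; // our UI is no longer visible
    static final int TRIM_MEMORY_COMPLETE = 80;  // we're next in line to be killed

    private final Map<String, byte[]> thumbnails = new HashMap<>();

    void put(String key, byte[] bitmapBytes) { thumbnails.put(key, bitmapBytes); }

    int size() { return thumbnails.size(); }

    void onTrimMemory(int level) {
        if (level >= TRIM_MEMORY_UI_HIDDEN) {
            // Nobody is looking at us; cached fling thumbnails can go.
            thumbnails.clear();
        }
        // Below UI_HIDDEN we're still visible, so keep the cache.
    }
}
```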
Otherwise, they're just sitting there doing stuff. If nothing else, they're taking up memory in the background, meaning that there's less available for everybody else on the system. And then, finally, optimize for code size. This comes in in a lot of ways. It makes your APK download better, certainly, but it also decreases the amount of stuff that you're loading into memory on the system. So just be smart about how much memory you're taking up. So I want to go over some of the tips that we have in user interface. This is sort of a grab bag of things. Don't overdraw. There's a tool on the device called– Profile Overdraw? I don't know what it is. Colt? AUDIENCE: [INAUDIBLE] CHET HAASE: Profile– what? AUDIENCE: GPU overdraw. CHET HAASE: Profile GPU Overdraw, so you can see what the overdraw is on the device. It paints in a lovely palette of pastel colors. It will indicate to you how many times each of the pixels is being drawn on the screen. The problem is– so Android uses a common rendering technique called the painter's algorithm– I guess because painters used a lot of rectangles or something– where, basically, we will paint all the stuff that you tell us to in order from back to front, because that's going to result in the correct display for the user. So you've got a window background? Great. We'll paint the window background. You have a container covering the window? Great. We'll paint the container with its opaque background. Oh, you have another container covering that? That's great. We'll paint that one too. You have a list view, which has a background? We're going to paint the background. Oh, all of your items have backgrounds– and all of a sudden, when you finally get to, like, the text in a list view, you've painted each of the pixels in those characters five or six times. That's something that the GPU doesn't really like to do, right?
That was a lot of wasted effort in there. So what you really need to do is actually figure out what opaque objects are completely covering what other opaque objects, and maybe eliminate some of that overhead there. So you've got the window background? Great. Use it. Set it to the background color that you want, and then don't have an opaque background on the containers that are sitting on top of it. So take a look at your nested hierarchy there. See what the organization of containers is, as well as the opacity of the backgrounds that they're using. And then do the right thing there, to make sure that we're drawing as few times as possible on every pixel. So use the tool. See what your overdraw is like. Red is bad. I'll give you a little tip: red is bad. And then do something about it. Avoiding null window backgrounds– so one of the tricks to avoid overdraw ends up in some artifacts that you should be aware of. So people will eliminate the window background, because they're like, great, then I don't have the overdraw of painting the window and then also painting the first container on top of it. It'll just paint the container. That's true. On the other hand, sometimes then you have an artifact where all we have to paint is the window itself. So, like, we're animating in the keyboard– the IME is animating in– and the Window Manager is going to handle painting the window. Or when we're launching the window itself, the activity is not running yet, we're going to be animating in the starting window, and it's empty– there's null. The Window Manager has nothing to paint. You're going to end up with an artifact on the screen. Either it's going to draw black, or on some GPUs, it may draw garbage on the screen, because there's undefined contents in that buffer.

So the window background is there for a reason. It's there to tell the Window Manager what to paint on the screen when it has no other information about the activity. So keep the window background. But to avoid the overdraw issues, see how you can use that window background to do the right thing, instead of then having an opaque background on the container that overlays the window. Also, avoiding disabling the starting window– this is another situation that results in some artifacts, where people will disable the starting window because they didn't want that blank window up before their activity launched. But again, the Window Manager doesn't know what to paint if we don't have a starting window. I would say, instead, actually use the starting window more effectively. You can brand your application with this. We've seen particularly ineffective approaches where someone wants a splash screen before they get into their activity. Maybe that game took a couple of seconds to launch or whatever, so they're like, OK, well, have a splash screen experience here. But then they get this weird experience where the Window Manager doesn't know what to paint, so it doesn't do anything for a while, because there's no starting window. And then a splash screen starts after a second or so with this completely different experience. And then the game starts with a completely different look. Pretty awful. Or in some situations it gets even worse, where they kept the starting window because they didn't actually understand what it was. They have a starting window, and then they have a splash screen that's completely separate, and then they have the game screen. So then you have three completely different experiences over time, which is nice if you want different experiences, but kind of sucks. What they should actually do instead is remove their splash screen, take their logo– take their branding situation– and use that as the background on the starting window instead. Then they get the benefit of having the
starting window, so that the system knows what to do before the activity is actually up and running, and then they also get to brand that as well and have the splash screen experience before their application starts. There are some tips about avoiding UI stalls. So the UI thread really likes to run and keep running; otherwise, the user's going to sit there looking at a frame while it's actually busy doing something that it shouldn't be. So inflation tends to be expensive, so try not to inflate when you don't have to, or try to minimize the amount of inflation happening. If you have a really complex view hierarchy, maybe you didn't need all of that all the time. Maybe you could actually use view stubs in there and inflate other stuff on the fly as necessary, instead of having, like, I don't know, a Play Store-like hierarchy that gets inflated in its entirety whenever you launch your activity. That would be nice. Now, handling events. When you get an event, it's nice to do less expensive operations. It's nice, when someone clicks a button, if you don't actually make a network call, in general, or go to the database. You kind of want to do that stuff asynchronously, off the UI thread, because those events are being processed in the same thread that's handling your animation events, your input events, as well as your rendering and layout. All of that stuff has to happen on the UI thread. So anything that you're doing that's not visual, that's not UI-related, should really happen elsewhere. Even if it will end up in data that does populate the UI, which a lot of this does– like, they click on the button, and that means some transaction where we need to re-populate the data that the user is looking at. That's great, but you don't have to do it synchronously, right?
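The "get the slow work off the UI thread" pattern can be sketched in plain Java. On Android the result would be posted back to the UI thread with `runOnUiThread()` or a `Handler` (or done with an AsyncTask or Loader, as the talk mentions); here a `CompletableFuture` callback stands in for that step, and `fetchItems()` is a made-up stand-in for a database or network call.

```java
import java.util.concurrent.CompletableFuture;

// Sketch: a button click kicks off slow work on a background pool and
// returns immediately, so the UI thread stays free to animate and render.
class ClickHandler {
    // Simulated slow, non-UI work; never call this directly from a click listener.
    static String fetchItems() {
        try { Thread.sleep(50); } catch (InterruptedException ignored) { }
        return "items";
    }

    static CompletableFuture<String> onButtonClick() {
        return CompletableFuture
                .supplyAsync(ClickHandler::fetchItems) // off the UI thread
                // thenApply stands in for "post back and populate the UI"
                .thenApply(data -> "showing " + data);
    }
}
```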
So you could spawn an asynchronous task– AsyncTask, or a loader, or whatever– to go get that data. And then, when it's back, you can populate the UI. In the meantime, the user was actually able to interact with your application, and it didn't seem so janky. Measuring and layout is quite expensive. It's good to avoid it, particularly during animations. So if you wanted to, let's say, animate an object to move from one location to another, you could actually animate the layout params, right? You could change the layout params that were causing that thing to be positioned in the window. You could. And that's kind of the physically correct thing to do. Well, changing the layout params forces a re-layout. And then it'll figure out where it's supposed to be. And then it'll draw it at the correct place. And in the meantime, it's going to run a lot slower than you wanted it to. You're going to miss frames in there, depending on the complexity of your hierarchy. It's a lot better to actually animate with post-layout values, like translationX and translationY. Don't change the layout params, which forces a layout. Instead, animate something that makes it visually correct, and then fix up the layout at the end. Or a typical technique that we use in animations is: figure out where it's going to be at the end of the animation. So it'll run layout, and it'll figure out where it needs to be. And then you set an OnPreDrawListener on it. And then, in your OnPreDrawListener, you say, OK, well, I know I want to animate to this other spot down there. So I'm going to run an animation– basically rewind to where it was before, and then run forward to the new layout location. So basically, running translationY from negative 100 to zero. This is essentially the approach that we used in the transitions package. We put an OnPreDrawListener,

we figure out where it was, we figure out where it's going, and then we set up the animation to rewind and then play forward. Drawing– in general, that's related to the allocation concerns and the amount of operations you're actually doing in onDraw, and then the animation concerns in general. Just be aware that, when you're in the middle of an animation, every expensive operation you're doing, or every memory allocation, could be contributing to missing a frame. It may not seem like that big a deal. Like, 30 frames a second, 60 frames a second– it's still moving on the screen. The real problem comes in when it's inconsistent. So if you're typically able to get 60 frames a second, but eventually a GC kicks in because you were allocating a bunch of stuff in onDraw or whatever, then there will be a skipped frame in the middle. So yeah, 30 frames a second is reasonably smooth, if it's consistent. But going from 60 down to 30 and then back to 60 causes a hiccup that's very noticeable to the user. In the middle, it's going to pause just slightly and then skip forward longer than it would have if it had a smooth frame rate instead. Avoid complex view hierarchies– I alluded to this before. Like, don't have more views than you need. Don't have deeper nested layouts than you need. I pull up some applications in Hierarchy Viewer– who's used Hierarchy Viewer?
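The "rewind and play forward" trick described above boils down to one piece of arithmetic: after layout has already moved the view to its new position, you make it appear back at the old spot by setting `translationY` to `oldTop - newTop`, then animate that to zero. The on-device wiring with `ViewTreeObserver.OnPreDrawListener` is shown only in a comment, since it needs a real view hierarchy to run.

```java
// Sketch of the rewind-and-play-forward technique from the transitions package.
class RewindAnimation {
    // Translation that makes an already-laid-out view appear where it used to be.
    static int startTranslationY(int oldTop, int newTop) {
        return oldTop - newTop;
    }

    /* On-device version (not runnable here), roughly:
       view.getViewTreeObserver().addOnPreDrawListener(() -> {
           // runs after layout, before the first frame is drawn
           view.setTranslationY(startTranslationY(oldTop, view.getTop()));
           view.animate().translationY(0); // play forward to the new location
           return true; // let the frame draw
       });
    */
}
```

For a view that layout moved down by 100 pixels, this gives the "translationY from negative 100 to zero" run the talk describes.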
OK, good. Getting there. We're getting there. We'd like 100% someday. It's a really good way to get a mental model of what your application looks like on the inside– what the model and the container hierarchy are like. I've seen some applications that have this long tail of containers, where they've got a relative layout, and then there's a linear layout inside of that, and then there's a frame layout. Each one of these things had a purpose. Somebody had a reason for that. Like, I want to have this background here. And then there's this other layout that's tuned to have the right fringe effect on– I don't know what their reasoning was. I'm sure there was a good one, but not good enough. What you want to do is figure out how to have the single container that's needed, instead of the long nested chain that's simply going to cause more overhead for inflation, for layout traversal, for rendering– all of this stuff. Every layer in a hierarchy is just causing more work for the framework every time we need to redraw you. Also, relative layout is probably the most flexible layout. It allows you to do the association with sibling views, and the stuff on the side, and I-want-to-align-this-next-to-that. So it's the most flexible, which is why, unfortunately, it is the layout that we use when you create a new project in Android Studio. This is not something I'm real happy about right now. We would like to change this eventually. The problem is that relative layout causes us to measure twice, right?
So if you are associating views with other views, that means we're going to ask all the views how big they want to be and where they want to be. Well, we're going to ask how big they want to be, because we need to figure out where to put them. So we're going to ask all of them, and we're going to take all this information and crunch on it a little bit. And we'll say, OK, we know how big everyone wants to be. Now that we have more information about all the relative locations and sizes of things, we're going to ask you one more time. So it's going to measure twice before it actually lays out. So if you have a relative layout at the top, basically, you're measuring every view in the hierarchy twice. Or even worse, what we've seen is nested relative layouts. And then you basically double it for every layer in the hierarchy. So with a relative layout sitting beneath another relative layout, you're measuring all the children of that nested one four times. Probably a bad idea. So if you don't need a relative layout– we understand that, in some situations, you need it, usually not at the top level of your hierarchy. It's usually needed at a container level, where you actually need the association of the siblings or whatever. So go ahead and use it when you need it, but be aware of the overhead of it. And try not to put it at a really high level. And certainly try not to nest it. Yes? AUDIENCE: [INAUDIBLE]? CHET HAASE: Is it better to have a relative layout with a lot of views, or a nested linear layout? It probably depends on– so the answer is, always, it depends. It depends on how high up in the hierarchy it is. So if it's sitting at the top, then it's going to cause all of that overhead to everything sitting underneath it. If it's at the bottom of your hierarchy, it's not going to cause that much, right?
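The double-measure arithmetic above is worth writing down: if each relative-layout-style parent measures everything below it twice, then with k such layouts nested above a view, that view gets measured 2^k times– twice under one, four times under two (the case called out in the talk), and so on.

```java
// Back-of-the-envelope model of the nested-RelativeLayout measure blowup.
class MeasureCount {
    static int measuresPerLeaf(int nestedRelativeLayouts) {
        int passes = 1;
        for (int i = 0; i < nestedRelativeLayouts; i++) {
            passes *= 2; // each level doubles the measure passes below it
        }
        return passes;
    }
}
```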
You're going to be measuring all the views, but you wanted to do that anyway. Like, you wanted those sibling associations. That's probably fine. A nested linear layout also has its own overhead associated with it. So it doesn't measure twice, but you just added a bunch of layers in there. It's also worth considering custom layouts at some point, too. If you find yourself tying yourself in knots and adding more and more nested linear layouts to get the particular effect you wanted, with all the right padding and associations between all the different subviews and sub-containment hierarchies, at some point it's much more optimal to simply create a custom layout. So you subclass ViewGroup, or you subclass some layout that does most of what you want. And then you do your own measure and layout.

And that'll probably save you time in the long run. Launch fast– go ahead and try to get the UI up as quickly as possible. This pertains to some of the stuff I was talking about before, like not doing too much in static initializers, like not inflating all of the views that you might possibly need in the future. Instead, just get up and face the user with something quickly. Otherwise, they click the button, they see the starting window, and then four seconds later, they see your application. Horrible experience, right? Wouldn't it be better to be faced with a simple UI that could then populate itself later as necessary? Defer the extra work. If you didn't need those fields initialized, maybe you could actually initialize them lazily instead. And also, measure cold starts. So when your application starts, it's important to understand the different dynamics of what state it was starting from. When it's started for the first time after a reboot, that's what we refer to as a cold start. This means all the work that we had to do to actually read in the APK, to load all the classes, to initialize all the stuff, and then to do the first layout and rendering of that thing. There's the window animation, but let the Window Manager deal with that. It's all the stuff that was happening inside of your application, simply to get to the first frame that is displayed. So it's important to understand how much time that takes and to measure it appropriately. So if you launch your application, and then you hit the Home button, and then you launch it again, your application, depending on the amount of memory available on the system, was probably still resident in memory. So all we really needed to do was display it again. We re-rendered it. Period. Right?
We didn't reload it. In a lot of cases, we didn't need to do another layout. We're basically just showing the same thing that we had before. So you're like, this is great– I can start in 50 milliseconds, I'm super fast. And then the next time you reboot, it takes four seconds. So what you want to do to really get a better measurement is actually kill the task, right? So go into Recents and swipe it out of the way. And that'll get you most of the way toward the situation of a cold start from reboot. So get it out of memory. Make sure that we're actually dragging in all of the stuff again, to really understand how much time your application is taking to launch. I want to talk about some of the tools that are important to use. Hopefully, everybody uses most of these. Systrace– I talked about it a little bit yesterday; Colt talked about it as well. Super powerful tool. Super confusing. There's so much information in there. There are so many options. You look at it, and you're like, I see a lot of green, I see a lot of red, I don't know what to do about it. We added the tips– the little circular bubbles in the middle. I would encourage you to get the latest Systrace and play with that. Click on the tips and see what it's trying to tell you. In general, the problems that we've seen, that we can do some amount of analysis on in the tool now for you, tend to be common issues– like, you're in the middle of an animation and you ran layout, or you're not reusing the view when getView() is called on your list view. So some simple things that we noticed over, and over, and over again that we've now fed into the tool, so you can get them for free. And then, once you start using Systrace more, you start to understand, OK, well, these are the VSYNC pulses. This is the amount of CPU usage that was going on at the time. My thread is sleeping because it's tied to this other event in SurfaceFlinger, which was processing the GL. There are a lot of associations that you can get over time and practice with using the tool. But it's
really the only tool that we have that gives you the big picture of what was actually going on in the device that was causing the jank that you see. And you can see the jank in the output there. You can see, I've got regular pulses– I'm doing my performTraversals, which is the rendering loop in the UI thread, on every single frame– and then I skip three frames. Why? That's what you need to figure out, so that you can fix the jank in your application. Allocation Tracker– super useful for all of the memory stuff that I was talking about before. So obviously, we're using a VM here, a runtime that's going to be allocating objects. You can't avoid allocations. What you should try to do is avoid allocating during times when you know it could cause jank for the user. So run that animation, and then see what's being allocated during the animation. And make sure that all of those objects are not actually coming from your code. Ideally, there would be no allocations during the animation. But if you can't fix that, you can at least fix the ones that are coming from your code that don't actually need to happen during the animation. Traceview– there are two versions of it. There's a sampling, as well as a non-sampling– instrumented? What's the other way they refer to it? What do you– sampling means, OK, it's going to look occasionally and see where it's at. This has very low overhead, which means you're going to get reasonable times back for how long these various things are taking. But it's not going to give you the full call stack for where the code was at any point in time. So you want to use the instrumented version instead, if you're trying to understand the code flow and how you actually got there at that particular time. However, that has a fair amount of overhead associated with it, just in each of the method calls, so make sure that you're not optimizing the wrong thing. Don't look at the raw, absolute times

you're getting out of Traceview, if you're using the instrumented version, because the times that it's reporting for method calls are really out of whack with reality. I have optimized stuff before and saved zero time in the end result, just because it wasn't really giving me the right information. So it's useful for understanding the flow and for relative times, but don't take the numbers too seriously if you're using the instrumented version. Hierarchy Viewer– we talked about that. MAT– I would also call out the new memory analysis tools in Android Studio. There are a couple of new memory tools. One of them is Memory Monitor, which just shows you the use of memory over time. The other one actually analyzes leaks and dependencies in the graph. So check that one out. It should be a lot easier to use and more tuned for Android dynamics than MAT, which is an Eclipse tool. So basically, to use MAT, you take a heap dump, and then you go into MAT and you basically see what objects are still alive that you didn't expect. This is where you find out things like activity leaks. But as I said, there's possibly an easier tool to use in Android Studio for that. There's also an external tool put out by the folks at Square, called LeakCanary, that I would suggest you check out as well. Memory Monitor– I just mentioned that, for Android Studio. And then there are on-device tools. So those were tools that you run on the host, on your desktop machine. But there are also device tools where you can see, in real time, some of the information that you need to tune performance. There's StrictMode. You can enable that, and it'll do a red flash whenever your code is doing something that it shouldn't on the UI thread, like making a network access, or disk access, or whatever. GPU profiling– there's the overdraw setting that we talked about. There's also the raw performance one. It'll put colored bars on the screen to show you how much time you're taking in each of the various phases of rendering. So it'll
show you whether there's a spike at a particular time, because you're getting inconsistent results from this thing, or whether you're just consistently taking too much time creating all the rendering objects, or whatever it is. Duration scale– this is useful if you want to slow down your animations so you can see what's actually going on on the screen. I actually find Screen Record to be a much more useful tool for debugging animations, because I really want to see it run in real time; it's just that my eyes don't work that fast. So I'll do a Screen Record, and then upload the MP4 from the device. And then I can frame-step it in some animation tool, or just a movie player. And then I can see what happened on every single frame, to try to track down the artifacts or the problems there. Then Hardware Layer Updates– this is another visual tool on the device that shows you when you're updating information that is currently cached in a layer, which is generally a no-no. I would say that there's time for Q&A, except for the fact that the timer is running out right now. So thanks for coming. [APPLAUSE] [MUSIC PLAYING]
