
Just to quickly poll the audience: how many of you consider yourselves to be developers? Excellent, good, good. How many of you are familiar with DASH? How many of you have worked with DASH? Okay, that helps.

A little about me: I'm Jeff Tapper, senior consultant with Digital Primates. I've been building Internet applications for about twenty years now — I started back in 1994 — and for the past eight years I've been focused largely on building video applications, so lots of experience there. I've had a chance to write about a dozen books over the years on various internet technologies. If you have any questions, certainly ask them throughout as you have them; I'll also leave some time for questions at the end. And if you have questions that are outside of what we're talking about here but you just want to pick my brain, I'm around all day today and all day tomorrow — come find me, happy to talk.

So today the things we're going to talk about are: video on the internet today; the HTTP streaming options available to us; and what we can do without a plug-in these days. A big thing we'll talk about is DASH and H.264 and how we make them work in a browser, and I'm going to dive into one particular DASH implementation I've been working on for the past year and a half or so, a JavaScript version of a DASH player. But most of what I'll talk about, until we get to those specifics, is fairly generic and will apply to whatever technology you're using for DASH players. How many of you are web developers? Mobile developers? Set-top-box developers? Other kinds of developers, other than what I mentioned? Okay, that'll work.

All right. So as you guys probably know, if you're doing any sort of streaming video online, we've got two primary options available to us. We can do video as a progressive download, and the idea there, of course, is simply that you have one video file sitting on a server, and it comes down over time and plays back as it's coming down to the end user. The real benefit of progressive download is that it's going to work fairly ubiquitously: now that H.264 is available in almost all of the browsers, progressive download is just going to work almost everywhere. The downsides of progressive download, of course, are many. There's no real way to do adaptive bitrate — you can't effectively stop at one version and switch up to a higher quality or a lower quality; you're playing just this one file.

The other side of it, of course, are the streaming options, and within streaming, once again, there's a separation. We can stream with the real-time protocols — you're probably familiar with RTMP, RTSP, RTP; these are the various real-time protocols — and the way these work is that there is actually a connection, a socket, from each individual connected client back to the server. Again, some real nice benefits: the real-time protocols are extremely low latency. Since we have a protocol that is dedicated specifically to pushing data from one server to one client, it works very quickly and very well. The downside, of course, is the required infrastructure if you're doing any sort of large scale. One of our clients is Major League Baseball, and they routinely have hundreds of thousands of simultaneous connected viewers; it would require a tremendous infrastructure to have a hundred thousand, or several hundred thousand, open sockets to individual clients.

So the preferred choice these days is HTTP streaming, and there are a number of different options in that world. HLS is far and away the most popular — that's Apple's HTTP Live Streaming format.
How many of you are familiar with HLS? HDS is Adobe's standard — that's HTTP Dynamic Streaming. Microsoft's standard is Smooth Streaming. So there are a lot of different options and proprietary standards out there. How many of you were in the keynote this morning? About half of you. The guy from Twitch TV was talking about how they've standardized on HLS across their stack, so HLS is definitely a very popular choice these days.

Part of the challenge that we have today is that while most folks agree that for large-scale streaming HTTP is the right choice, we've got a lot of variation in which devices are going to support which formats, and there is no one standard that is supported ubiquitously. This results in most companies that are trying to do this, and trying to get their content everywhere, having several different versions of that content so they can stream to all the different platforms they need to reach, which is obviously a challenge.

So as I mentioned earlier, progressive download is the only real option that is ubiquitously supported — everyone supports progressive download. There are some variations in which codecs the different browsers support. I know Cisco earlier this year announced that they were going to pay for the licensing for all the browsers to use H.264, which was a huge, huge leap forward; before that we had lots of browsers that would only support WebM and VP8 and other options like that. And there are now new standards coming out — there's VP9, there's H.265, and all sorts of other things of that sort. There's just a wide variety of support. If you're just looking at desktop browsers it's simple enough, but as you start to look across all the mobile devices, all the connected TVs, all the set-top boxes, it gets really vast very quickly.

In terms of streaming to a browser, though, even once we get past all of the various other issues, the only browser that natively supports streaming is Safari on iOS and Mac OS. But there are now options for us to do streaming directly to a browser using media source extensions. Are any of you familiar with media source extensions? Very few of you. Media source extensions are an extension to the W3C specification. You guys are familiar with the video tag in HTML5? The way the video tag works out of the box is that you simply give it the URL of a file; on iOS, and in Safari on Mac OS, you can give it the URL of a manifest, of the M3U8, and it will play back that stream. And again, that's how we do progressive download in the browsers these days. What media source extensions do is give us an API to that video tag that allows us to hand it a little bit of video data at a time. Effectively, we can write a whole application around that that allows us to do streaming on the fly, the same way you would implement HTTP streaming on any other platform: you figure out where the segments are, where the data lives, you download the bits, and you hand them over to the browser. You're then able to start controlling things programmatically — is this the right bitrate, should I switch up, should I switch down — and it gives us all this control that we otherwise lack in HTML5 video.

The media source extensions currently are only available in Chrome and Internet Explorer 11; however, if you follow the public forums at all from Mozilla, you can see that they are actively working on implementing their support for media source extensions, and Opera has made public comments about planning support for media source extensions. So while right now it's just the two browsers, which cover about half of the users in the world, it's coming to more, and that's very encouraging. As I say, media source extensions allow for just bits of the data to come in at a time and be handed over; they allow us to do real streaming. It's currently a candidate recommendation to the HTML working group — I think the last update was a few months ago; I actually haven't checked this week to see if it's moved to the next stage towards approval — but the hope is that it will get to full approval within a year.
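To make that concrete, here's a minimal sketch of the media source extensions flow. This is my own illustration rather than dash.js code, and fetchSegment() is a hypothetical helper standing in for whatever download logic a real player would have:

```javascript
// Minimal media source extensions sketch (illustrative, not dash.js code).
// Assumes a fragmented MP4 stream and a hypothetical fetchSegment(n)
// helper that resolves with an ArrayBuffer of segment data.
var video = document.querySelector("video");
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", function () {
  var buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
  var segmentNumber = 0;

  // Hand the video tag a little bit of data at a time: append the next
  // segment each time the buffer finishes consuming the previous one.
  buffer.addEventListener("updateend", appendNext);
  appendNext();

  function appendNext() {
    fetchSegment(segmentNumber++).then(function (bytes) {
      buffer.appendBuffer(bytes);
      // A real player would stop at the last segment and call
      // mediaSource.endOfStream() instead of fetching forever.
    });
  }
});
```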
MPEG-DASH — most of you said you were familiar with MPEG-DASH. DASH is Dynamic Adaptive Streaming over HTTP. The real key differentiator of DASH, as opposed to any of the other HTTP streaming formats, is that it's an open standard. HLS, HDS, Smooth Streaming — they're each owned by one company, and some of them have become de facto standards on their own. But the more I work with broadcasters — people who are used to open standards like AM and FM and UHF and VHF, standards that everybody agrees on — the more I see that the broadcasting world loves standards. It makes sense: they don't have to jump around saying, "Well, we switched vendors again, so I've got to retool everything I've ever done." That's the nice thing about the DASH world: it's an open standard.

There are some other nice things about DASH, in that it was designed by Adobe and Microsoft and Apple, among other companies — folks who had already written standards similar to these before. But it was designed several years later, after they had learned some lessons; they'd figured out, "Oh, you know what, we don't handle this so well; next time we should try something more like this." So DASH was designed with a lot of these best practices in mind — for instance, the idea of more easily handling multiple audio streams. While HLS and HDS and Smooth Streaming can be forced to work with demuxed audio and video, for the most part they like to work with the audio and video together in the same file. And certainly, you can imagine that most of the time it makes sense to have them in the same file.

But when you're switching between several different audio tracks — say we have a baseball game that is being broadcast in four different languages: that's the same video feed with four different audio feeds. If we're duplicating the content, why do we want to duplicate the video, which isn't going to change across those four? Why not have one video feed and four different audio feeds, and you choose which one makes the most sense? Demuxing audio and video makes a lot of sense, and DASH was designed with that in mind from the beginning. Smooth Streaming and HLS can now handle demuxed audio and video, but they don't do it all that elegantly, because it was an afterthought — it was added on.

Another interesting thing about DASH: most of the other standards I've talked about were built with H.264 in mind — it was the de facto standard at the time, and so they're built around it. DASH was built to be codec agnostic. DASH doesn't care what the content is. Really, all DASH is is a way of segmenting your MP4 files and describing where to find those segments. So those MP4 files — whether you've encoded them with H.264, H.265, VP6, VP8, VP9 — it doesn't matter; DASH doesn't care. In fact, DASH has the ability, in a single manifest, to describe the same content with several different encodings, and the player can then figure out, "Oh, I know how to play this one; I'm going to grab that part, and that's how I'm going to handle it." So as I say, it's designed with a lot of the best practices, a lot of the lessons learned from the other streaming standards already out there.

At Digital Primates we've been working with video players for many years, and we've built several different DASH players over the years. We started in Flash — we used to do a lot of Flash work. I meet Flash developers a whole lot less than in years past when I've asked similar questions; it's the nature of the world, I suppose. We're doing a lot less Flash ourselves these days — we still do some, but not as much as we used to. But we've built out DASH players for Flash, for Android, and for HTML, as well as some for set-top boxes.

And we've built dash.js. It's an open source project under the BSD-3 license, and it's available on GitHub. Anyone that wants to grab it and start playing with it, by all means do. Anyone that wants to start contributing, please do — we've got several active contributors and we're actively seeking more. dash.js was initially built as the reference player for the DASH Industry Forum, and the DASH Industry Forum is where a lot of the major players who are working with DASH today get together to discuss ideas and figure out how to promote interoperability. One of the things I found when I started getting involved was that while there were all sorts of discussions among encoders and DRM vendors and CDNs and all sorts of other places, there weren't a lot of folks who were interested in players — and players, of course, are where I'm most interested. So I got involved, and we architected the initial version of dash.js as the reference player for the DASH Industry Forum.

So, the way we play a DASH stream — and this is going to be the same regardless of what technology you're using — is that you start with a manifest file. Let me just give you a quick example here... that's not the manifest file, sorry... here's a manifest file; it's way too small. What this does is describe the content, and we can use it to figure out what the segments are and where to find the content we want to play. I'll go into more detail on manifest files in a few minutes.
The whole idea of the manifest is that it's an XML file that describes what bitrates are available to us and where to find the content. You'll notice right off the bat that we have a separation between our video and our audio. Down here, in this case, we have a single audio representation: as we're switching bitrates, it's the video that we have different qualities of, and we've got one audio quality that's used the same across all of them. For some of our clients that are more concerned with the audio side, we might have exactly the opposite — a single video and several audio. In fact, this particular dash.js player was the basis for what the BBC recently did: a week-long — I'm sorry, two-week-long — audio-only broadcast using dash.js; I think that was late March or early April. In their case they had no video representations at all; it was purely audio representations.

So to get started, we simply download the manifest and we parse the manifest — it's XML, so we've got to figure out what's in there.
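A stripped-down sketch of the kind of manifest I'm describing — hand-written for illustration with made-up values, not the actual file from the demo:

```xml
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     mediaPresentationDuration="PT10M">
  <Period>
    <!-- Several video qualities to switch between... -->
    <AdaptationSet mimeType="video/mp4">
      <Representation id="video-low"  bandwidth="500000"  width="640"  height="360"/>
      <Representation id="video-mid"  bandwidth="1500000" width="1280" height="720"/>
      <Representation id="video-high" bandwidth="4000000" width="1920" height="1080"/>
    </AdaptationSet>
    <!-- ...and a single audio representation used across all of them -->
    <AdaptationSet mimeType="audio/mp4">
      <Representation id="audio" bandwidth="128000"/>
    </AdaptationSet>
  </Period>
</MPD>
```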

All along the while, any time we're downloading a file, we're trying to make decisions about the proper bitrate for this client, and these are complicated decisions — there are a lot of different things that can go into them. Especially as you're first starting, you don't have a lot of information yet. But what we do know is that you've downloaded a file, we know how big that file is, and we know how long it took to download it, so we can make a first guess about what your bandwidth might be (there's a sketch of that arithmetic below). So we try to make a guess at the initial bandwidth for the client. We'll then initialize the player — hand over the initialization segments to the player so that it knows how to play that bitrate — and that initialization happens every time you switch bitrates, mostly because you can have different profiles of your codec for the different bitrates; that's part of how you achieve your different bitrates and your different optimizations. Then we start downloading segments, and as the segments get downloaded, we hand them over to the player.

Yes? So the question is: with audio and video demuxed, having them separated, do you need to do two different initializations? You won't reinitialize for every segment; you'll reinitialize only when you're making a change of bitrate. Each segment gets downloaded separately — audio and video — yes, that's absolutely correct. And there was a question about whether there's increased latency if you're demuxing. The reality is, yes, there is some additional latency. Downloading two files has more inherent latency, because there's all of the opening and closing of sockets that happens with each download — although keep-alive tends to keep those sockets open, which mitigates most of that for you. The other mitigating factor is that the files tend to be smaller — the audio files are relatively small, unless you're handling, you know, 5.1-channel surround sound, which we've seen; they tend to be relatively small.

So we download the segments, and we hand the segments over to the player. In the case of the one I'll show you, media source extensions are how we hand it to the player, but it's the same idea everywhere: there's an API in Android for handing the data over, an API for Flash, an API for the connected TVs, and so on and so forth. And each time we download these files, and as we're playing back these files, we're collecting a series of metrics, and we use those metrics to make decisions about whether this is the proper bitrate for this end user. Then there are all sorts of other efficiencies we can throw in there. If we're already at the top bitrate — if we know there's no higher bitrate to go to — and we have extra bandwidth available to us, we can actually start to increase our buffer: we download more content so there's less chance they'll have to switch down if something happens, if there's a hiccup in their connection. We try to get as much content at that highest bitrate while we can. Again, there's all sorts of logic that can go into this.

So with DASH, fundamentally there are three different types of files that you'll deal with. There's the manifest — that's the XML file I showed you. There's the initialization file, which is actually just a segmented MP4 — it's part of the MP4 box structure — that's used to tell the player what the content it's going to start playing is.
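On that first bandwidth guess: the arithmetic is roughly this — a sketch of the idea, not dash.js's actual logic:

```javascript
// Rough first-pass bandwidth estimate (illustrative only).
// bytes: size of the file we just downloaded
// startMs/endMs: timestamps taken around the download
function estimateBandwidth(bytes, startMs, endMs) {
  var seconds = (endMs - startMs) / 1000;
  return (bytes * 8) / seconds; // bits per second
}

// Pick the highest bitrate that fits under the estimate, given
// representations sorted from lowest to highest bandwidth.
function pickInitialQuality(representations, estimatedBps) {
  var index = 0;
  representations.forEach(function (rep, i) {
    if (rep.bandwidth <= estimatedBps) { index = i; }
  });
  return index;
}
```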
The internals of the player vary quite a bit between Android and Flash and HTML, but fundamentally they need to know what kind of content they're going to play so they can play it effectively. And then there are the individual segment files — this is the actual content. Again, when you're demuxed there's a separation between the audio and the video; they don't have to be demuxed — they can be muxed together, in which case they're all in one file — and your segments will contain zero to many video tracks and zero to many audio tracks.

So, our manifest. There are a couple of different ways we'll dig through and see how these manifests work, but fundamentally, like any XML document, there's a root node, and inside the root node you have a series of child tags. Our initial set of child tags are the periods, and each period describes a discrete — I don't use the word "segment" here; it means too many different things already — a discrete section of the video content. The primary use case I've found for periods has been advertising, in that you can describe: I'm going to play the content for 15 minutes, then switch over to a two-minute ad break, then play the content for 14 more minutes, then switch over to a three-minute ad break. Each of those sections is a period.
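So a manifest built around that example might be sketched like this (hand-written; the child elements of each period are elided):

```xml
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static">
  <Period id="content-1"  duration="PT15M"/> <!-- 15 minutes of content -->
  <Period id="ad-break-1" duration="PT2M"/>  <!-- two-minute ad break -->
  <Period id="content-2"  duration="PT14M"/> <!-- 14 more minutes of content -->
  <Period id="ad-break-2" duration="PT3M"/>  <!-- three-minute ad break -->
</MPD>
```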

Each period can have a duration assigned to it, and so we can switch between these periods as we need to — switching over to advertising, or other potential use cases as well; advertising is the most common one. Each period contains adaptation sets for the video and audio, and the idea of an adaptation set is that it's functionally equivalent content at different bitrates. What do I mean by functionally equivalent? Imagine any piece of content you're playing back: you might have HD and SD versions — versions in a four-by-three format and versions in a sixteen-by-nine format — and as you're switching bitrates, you really don't want your player bouncing back and forth between four-by-three and sixteen-by-nine; you want to stay within one or the other. To switch between the two would not be functionally equivalent. So you might have an adaptation set that describes the four-by-three content and a separate adaptation set that describes the sixteen-by-nine content, and any time logic needs to switch bitrates, it knows to only switch within the same adaptation set. Some of our audio clients get really picky about switching between stereo and multi-channel surround: it can be very jarring to have seven-channel surround and suddenly five of the seven channels drop out, leaving you with just left and right. So in their content they describe their audio in different adaptation sets: here's our seven-channel surround content, here's our five-channel surround content, here's our stereo content, here's our mono content. Again, what that tells us as developers is that when we're switching, we only switch within the same adaptation set — within the same functionally equivalent content. Make sense?

All right, so as we go to describe the representations — and this is where things can get a little complicated — yes? [Question.] Ultimately, that's going to be a business rule. It may be based on, say, whether you're playing on a device that is more square versus an elongated rectangle. My clients generally tell me what they want to do with the content for different styles of devices, and sometimes they give the client — the end user — a choice: do you want to watch in widescreen, or in this other mode, or whatever else they've built into the individual client-side players. Some of it we're able to do by detection — we can figure out whether they have a surround sound decoder and things of that sort — but usually not. You could write the player logic in such a way that it tries to favor the top one and, if it can't play that, fails over to the next one and the next one, but for the most part we tend to have our clients tell us: on these devices we want you to use this content, on that device this other content. Sure — so there's a question about whether it's practical to have different manifests which have effectively different profiles, different sets of adaptation sets, that are only available to particular users. In a lot of cases we'll do exactly that, especially if there's a separation between premium content and non-premium content. Some of our clients have subscribers that are only allowed to get the SD content and not the HD content, so we don't even want to send that end user a manifest that describes what HD content might be available. There are other things that we won't know until playback starts, like what audio setup it's connected to:
is it mono, stereo, five-channel, seven-channel, or something else? So in some cases, yes, it's entirely practical; in other cases less so. There was a question as well — and no, it's not the order of the adaptation sets; there are actually attributes on the node that say, "this is the primary one; if all else fails, this is the one I want you to play." Other questions on this point? All right.
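To picture the surround-versus-stereo case in manifest terms, a rough sketch (made-up IDs and bandwidths; the channel values follow the common convention of 6 for 5.1 and 2 for stereo):

```xml
<!-- Functionally equivalent content stays inside one AdaptationSet;
     switching logic never crosses between them. -->
<AdaptationSet mimeType="audio/mp4" lang="en"> <!-- 5.1 surround -->
  <AudioChannelConfiguration
      schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011"
      value="6"/>
  <Representation id="audio-surround" bandwidth="384000"/>
</AdaptationSet>
<AdaptationSet mimeType="audio/mp4" lang="en"> <!-- stereo -->
  <AudioChannelConfiguration
      schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011"
      value="2"/>
  <Representation id="audio-stereo" bandwidth="128000"/>
</AdaptationSet>
```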

So as we go to describe the representations, again we've got several different choices, and — for better or worse, one of the pain points or great designs of DASH, depending on how you look at it — there are a lot of different choices available to the people creating and segmenting the content. We have three different ways of describing the various representations available to us.

It can be done with a segment base, which effectively assumes that for each bitrate there is a single file sitting on the server, and the server chops it up as it's requested and hands you individual byte-range requests. You'll say, "I need bytes one through 2001," and the server will know how to hand that off to you. That's the simplest from the describing-the-content point of view; it requires additional logic at the server to handle byte ranges — not terribly complex logic — but it does not require any sort of pre-segmentation.

The other choices are either a segment list or a segment template. A segment list simply describes for you: here is the list of segments, and here's how you play them. The segment template uses wildcards for the player to fill in. The main difference between the segment base, where you're doing the segmentation on the fly, and the other ones, where the content is pre-segmented, is really a trade-off between processing power and storage. Some companies, based on their infrastructure, decide it's cheaper to store the file once, chop it up on the fly, and spend more processing power; other companies would rather save their processing power and use more storage. I've seen valid use cases for both — it took me a while to understand why, given how cheap storage is, people would want to do it on the fly instead, but I understand now.

So a segment list looks like this: literally, within the representation, we describe a segment list where each segment has a duration of 10,000 at a timescale where 1,000 units equal one second — so ten-second segments. We then specify the URL for the initialization, and then each of the individual segments you're going to play. Very simple — literally it's a list, which we treat as an array, and we just go from next to next to next. The reason we want to know things like the duration and the timescale is for when the user seeks: if they seek to 15 minutes in, I need to know which one to get, and these attributes — timescale and duration — give me the additional information I need to pull that off.

A segment template with a fixed duration is my favorite use case, because it's simple. What it tells us is that within the representation there's a template, and all of the segments will be the same length. You'll see here we have a duration for each segment — where's my cursor... here — a duration for each segment and a timescale, so we know in this case each segment is 13.8 seconds long, and we're able, very quickly, just with wildcards, to increment a number: first I'm going to play one, then two, then three. If I want to seek ahead, I can very quickly do the math and figure out what number the next segment is. These are great because they're easy, but oftentimes in the real world it doesn't quite work that well — oftentimes the segments aren't all exactly the same length.
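Roughly, the addressing forms look like this — hand-written sketches with made-up URLs and values; the last, variable-duration form is the one described next:

```xml
<!-- 1. SegmentBase: one file per bitrate, served via byte-range requests -->
<Representation id="720p" bandwidth="1500000">
  <BaseURL>video-720p.mp4</BaseURL>
  <SegmentBase indexRange="0-834"/>
</Representation>

<!-- 2. SegmentList: ten-second segments spelled out one by one -->
<Representation id="720p" bandwidth="1500000">
  <SegmentList timescale="1000" duration="10000">
    <Initialization sourceURL="720p/init.mp4"/>
    <SegmentURL media="720p/segment-1.m4s"/>
    <SegmentURL media="720p/segment-2.m4s"/>
    <SegmentURL media="720p/segment-3.m4s"/>
  </SegmentList>
</Representation>

<!-- 3. SegmentTemplate, fixed duration: just increment $Number$ -->
<SegmentTemplate timescale="1000" duration="13800" startNumber="1"
                 initialization="$Bandwidth$/init.mp4"
                 media="$Bandwidth$/segment-$Number$.m4s"/>

<!-- 4. SegmentTemplate with a SegmentTimeline (variable durations);
     the r attribute is a repeat count, described next -->
<SegmentTemplate timescale="1000" media="$Bandwidth$/segment-$Time$.m4s">
  <SegmentTimeline>
    <S t="0" d="12000"/> <!-- first segment: 12 seconds -->
    <S d="10000" r="2"/> <!-- then a 10-second duration, repeated -->
    <S d="8500"/>        <!-- last segment didn't break up evenly -->
  </SegmentTimeline>
</SegmentTemplate>
```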
So the other choice becomes a segment template with variable durations, and we do that with a segment timeline. We're able to say: here's our template, here's what your pattern is going to be — we'll substitute in the bandwidth, we'll substitute in a time variable — and then we're able to specify, for the segments, that the first segment is a particular duration, and for the others we can use an "r" attribute to say we're going to repeat this particular duration so many times; in this case it will repeat twice. Oftentimes you might have just two of these entries: my first hundred and fifty segments are this duration, and the last segment is a different duration, because it didn't break up evenly — the math came out slightly differently. Does that make sense? These are all just different ways of describing how to find the content; ultimately, the idea of the manifest is to tell us what's available and where to find it.

So, the dash.js player. As I mentioned, it's an open source BSD-3 project, using media source extensions and encrypted media extensions. Is anyone familiar with encrypted media extensions? Encrypted media extensions are the way we can do DRM in HTML today. They're currently available in the same browsers as the media source extensions — that would be Chrome and IE 11. I know Mozilla is working on it; I haven't heard from Opera specifically on EME — no word from Opera on EME yet. So, our DASH player that we'll dive into here is written in JavaScript, and it works in Chrome and IE 11 currently. Some recent uses of our dash.js player:
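As an aside, the unprefixed encrypted media extensions API has roughly this shape — a minimal Clear Key sketch of my own, with license acquisition stubbed out (early browser builds used prefixed variants of these calls):

```javascript
// Minimal EME sketch (illustrative; a real player exchanges messages
// with an actual license server).
var video = document.querySelector("video");

navigator.requestMediaKeySystemAccess("org.w3.clearkey", [{
  initDataTypes: ["cenc"],
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }]
}]).then(function (keySystemAccess) {
  return keySystemAccess.createMediaKeys();
}).then(function (mediaKeys) {
  return video.setMediaKeys(mediaKeys);
});

// When encrypted media is encountered, open a session and request a key.
video.addEventListener("encrypted", function (event) {
  var session = video.mediaKeys.createSession();
  session.addEventListener("message", function (messageEvent) {
    // Send messageEvent.message to the license server, then pass the
    // response back via session.update().
  });
  session.generateRequest(event.initDataType, event.initData);
});
```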

I mentioned the BBC live broadcast that happened in late March. Is anyone familiar with Wowza's media servers? They ship a test player with their server for their DASH content — that is dash.js, literally just out of the box; I don't think they customized it in the least. It's our player exactly as it is. EZDRM — anyone familiar with these guys? They provide DRM hosting services, and again, the test player they use is the dash.js player. And lots of other folks. The interesting part of an open-source project is that we don't know who's using it — it's free for anyone to grab — and most of the users I've found, I've found by doing Google searches on strings that I know are in those JavaScript files, to figure out who has them on their server. That's the best clue I have for knowing who's using the player.

Oh, sorry — let me quickly show you the player running here. I'm going to grab this particular manifest file I was showing you earlier, copy it, paste it into our player, and we'll load the content up. Give it a second to initialize, and you start to see the individual segments being downloaded and played back. Again, this is more of a lab-quality version of the player right now, so we've got lots of debug tools: you can manually switch the bitrates if you want, or let it switch automatically for you; you can find additional information as you're playing back about the buffer and other debug elements, and dig into all sorts of pieces in here — how much content is in the buffer, and so on. For a production player you wouldn't want all of this, but for a lab it's useful. So let's take a look at how this is all built.

[Question.] Absolutely true — live streaming is a particular challenge, and anyone who does any video streaming knows that VOD and live, while they look the same to the end user, are very different use cases. There are a lot of different considerations in how to handle live, and there's still, even within the DASH Industry Forum, a lot of discussion about the proper way of describing and handling live streams. As I mentioned, DASH allows for lots of different ways of describing elements, and ultimately, for live, the biggest challenge is calculating the live edge: what is the most recent segment that I know exists? (There's a sketch of the basic arithmetic below.) There are all sorts of challenges around that. If you're like the BBC, customizing a player for one particular set of encodings that they control and own, it becomes much easier — there's no variability; they don't have to handle all fifteen possible ways that the live edge could be calculated, or the differences in how time is known by different devices. But given that each device may or may not have the same concept of what "now" means — my laptop and this laptop over here could be off by a couple of seconds, a couple of minutes; there's no requirement in the real world that every device knows exactly the same time — that creates challenges. And it depends on the nature of the difference: if the server is ahead of the clients, there are very few problems, because the content already exists; but if you've got a client that is even 10 seconds ahead of the server — ahead of the encoder — then suddenly you've got a client looking for content that hasn't been created yet, because its concept of "now" is different. So there are lots of challenges there. The latest version — 1.1.2? Our latest version... well, 1.1, okay — version 1.1 of the dash.js player actually has much better live support.
Previous versions' live support was weaker, and it continues to get better. What's that? Well, it wasn't fully disabled; it just only worked in very explicit use cases.
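To make the live-edge arithmetic from a moment ago concrete: for a fixed-duration segment template, the core of it is roughly this — a sketch of the idea, not the dash.js implementation, and real players also have to deal with availability windows and the clock-skew problem just described:

```javascript
// Rough live-edge calculation for a fixed-duration segment template
// (illustrative only). availabilityStartTime comes from the manifest,
// in seconds since the epoch; segmentDuration is in seconds.
function getLiveEdgeSegmentNumber(availabilityStartTime, segmentDuration, startNumber) {
  var nowSeconds = Date.now() / 1000; // trusts the client clock -- the hard part!
  var elapsed = nowSeconds - availabilityStartTime;
  // Step back one segment so we never ask for one the encoder
  // hasn't finished producing yet.
  return startNumber + Math.floor(elapsed / segmentDuration) - 1;
}
```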

I know a company out in San Diego that had been doing a bunch of live streaming with dash.js specifically, and it worked — it worked in their case because they met the exact criteria that version 1.0 of the player was built around. Ultimately, if the client wants to be at the live edge — you're watching a football game and you want to see the same thing you would see on TV, or as close to it as possible — the client needs to know what segment to ask for. Part of the difference between HLS and DASH is that HLS requires you to re-download the manifest frequently; DASH does not have that requirement. In fact, there's an attribute inside the manifest that tells you how often you should update the manifest, and there are folks doing live streaming who have that attribute set to never update the manifest — they require the client to figure out what "now" means using the segment template. Exactly. So there's a lot of variation, a lot of different ways these things can be handled; that's part of the challenge we're dealing with.

So, the dash.js player: in order to actually make use of it, there are a few core JavaScript libraries involved. We do a lot of asynchronous handling, and we use Q as a framework for that. We do dependency injection to make the player more flexible and customizable, and we're using Dijon as our dependency injection framework — are you familiar with dependency injection? A handful of you, okay. The unit tests are all done in Jasmine; that's not required by the core player, but for testing as you're working with it, it is. The website around the dash.js player uses several other libraries as well — you don't need them in order to use dash.js, but if you view the source of our player page, you'll see those other libraries included purely for the convenience of building that web page. The core things needed to use dash.js are just Q, for asynchronous handling, and Dijon, for dependency injection.

Structurally, the dash.js player's classes are divided into two separate packages, and here's a little background on that: even though this was built initially as the reference player for the DASH Industry Forum, I expected from the beginning that we would want to make use of it for other streaming technologies at some point in the future — I didn't want to hard-code it to only ever work for DASH. The reality is that media source extensions don't really care how the content is segmented or described; they just care that they get handed content they know how to play. So we divided the code into two packages: the core streaming package, which has the baseline elements that apply regardless of the format of the content, and the DASH-specific package, which has the subclasses specific to playing DASH.

Within the streaming package we have things like MediaPlayer.js. This is the core thing you instantiate to make use of the player: you instantiate it and hand in a reference to the HTML video element and the URL you're going to play. This is the class the developer interacts with primarily; if you just want to take dash.js and start using it today, this is the piece you care most about. Then there's Context.js — I mentioned dependency injection; the contexts are how we specify which classes to include, which dependencies we want for our player.
If we take a quick peek at what some of these are: the core context simply maps particular classes to particular class names, so when we say we want a buffer controller, by default it uses this particular buffer controller. The interesting thing is that this allows people in the real world to wholesale replace our classes and drop their own in. One of the places we see this most frequently is in the adaptive bitrate logic. Have any of you written your own adaptive bitrate logic? Then you know it's very complex, and it's very specific to individual use cases. If you're doing live sporting events, the primary goal may be ultra-low latency: we want to get the user as close as possible to the live edge, and we're willing to suffer in quality to get them there.
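In context terms, that wholesale replacement looks something like this — a sketch of the dependency-injection idea with illustrative names, not the exact dash.js mapping keys or Dijon API:

```javascript
// Sketch of the DI context idea (names are illustrative). The context
// maps names to classes; a custom context remaps one name to swap in
// different behavior wholesale.
function CustomContext(system) {
  // Defaults, as the core context would map them...
  system.mapClass("bufferController", BufferController);
  system.mapClass("manifestLoader", ManifestLoader);
  // ...except the ABR logic, swapped for our own low-latency rules.
  system.mapClass("abrController", LowLatencySportsAbrController);
}
```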

For others — I've done some art installations where we're streaming, and what's most important to them is that everybody sees it at the highest possible quality they can withstand. Even if it takes longer to start up, to build more of a buffer, they don't care; they'd rather you wait an extra 10 seconds to see the content, but when you see it, you see it as it was intended. And there are millions of options in between. So our adaptive bitrate logic can be wholesale ripped out and replaced by somebody else's, based on their use cases. Does that make sense?

So, Stream.js — one of the core classes. It's ultimately responsible for loading the manifest, and refreshing the manifest if that becomes required. It creates the actual buffers, and it creates the buffer manager that will manage those buffers. And it's the one that's actually listening to the video element for events coming back, so it can tell us things: we're at the end of the content, the user clicked play, the user clicked pause — the various events that come from the video element itself are heard through Stream.js. In live broadcasts, Stream.js is the one that specifically has to figure out what "live" means. Debug.js is a convenience class that gives us additional debugging logic. The BufferController is a core class responsible for getting the segments — getting the content — and handing them over to the buffer. The BufferController knows how to check with the adaptive bitrate rules and all the logic that goes with them; there's information in the manifest that tells us the minimum amount of time that should be in the buffer, and the BufferController knows about that and is able to manage and watch it for us. The ManifestLoader and the FragmentLoader are responsible for, as you can guess from their names, the manifest and the individual segments. Our ABR controller — again, this is the one that's most often replaced, at least in my experience — is the one that knows how to figure out what bitrates should be played. It has within it a series of rules — an array of rules — that it runs through, and based on those rules it comes out with an outcome: switch up to X, switch down to Y, or keep playing the same quality.

Let's look at two of those specific rules. The easiest one to understand is the download ratio rule: how big is the file, and how long did it take to download? It's a rough metric — not the only thing you'd ever want to base a decision on — but it gives you an initial baseline for your estimated bandwidth. The insufficient buffer rule is another really cool rule: if you find that it's taking longer to download each segment than that segment takes to play back, you've got a problem — you're going to run your buffer dry — so this rule can suggest switching qualities based on what's happening within the buffer (there's a sketch of the idea below). Then, within the DASH package — yes? Can you give me more details on what specifically... the discontinuity? Sure. So the question is: as we're switching bitrates, does the browser need to be notified? Ultimately, yes. As you're switching bitrates, the browser — the HTML video element — needs to be reinitialized. We need to hand it the initialization segment for the new bitrate before we start playing back that content, so before the switch happens we'll download that as well, and we'll hand that off so it knows how to handle the switch.
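A sketch of that insufficient buffer idea — my own illustration, not the actual dash.js rule:

```javascript
// If segments take longer to download than they take to play back, the
// buffer will eventually run dry, so suggest switching down a quality.
function insufficientBufferRule(metrics, currentQuality) {
  var downloadSeconds = metrics.lastSegmentDownloadSeconds;
  var playbackSeconds = metrics.lastSegmentDurationSeconds;

  if (downloadSeconds > playbackSeconds) {
    // Downloading slower than real time: switch down if we can.
    return Math.max(currentQuality - 1, 0);
  }
  return currentQuality; // keep playing the same quality
}
```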
So, within the DASH package: we have another context class, specific to anything DASH, which tells us exactly how we're going to parse the DASH manifest, how our index handler is going to know which DASH files to get, and then any specific extensions — lots of different companies have done things slightly differently within their manifests, so we've got some manifest extensions for individual clients. Our DASH parser, as you can imagine, is responsible for parsing that DASH manifest. One of the things we do inside here: given that the manifest is XML, but within the JavaScript world JSON is far more efficient, we switch it all over to JSON, and we're able to work much more quickly — our initial versions, before this JSON switch, were a lot slower in manipulating and parsing the manifest. The DASH parser also knows how to handle the various inheritance within the DASH manifest: lots of different attributes can be set at different levels in the hierarchy, so it knows how to figure out, for example, if you set the base URL at the top-level node, how that echoes down through the others, and other things of that sort.
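The inheritance piece — a base URL set at the top echoing down — can be pictured like this (hand-written sketch with a made-up CDN URL):

```xml
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011">
  <BaseURL>http://cdn.example.com/show/</BaseURL>
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <!-- Inherits the MPD-level BaseURL, so relative segment URLs
           resolve against http://cdn.example.com/show/ -->
      <Representation id="720p" bandwidth="1500000">
        <BaseURL>720p/</BaseURL> <!-- refines it one level further -->
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>
```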

Our DASH handler is ultimately responsible for figuring out which fragment you want next. In terms of the API — the way you work with this — the two core methods that get called most frequently are getSegmentRequestForTime, which is what you use when seeking ("I'm going 15 minutes in, and I'm playing at quality 3 within the array, so tell me the right file to ask for, for that quality at that time"), and the one called most often, getNextSegment: for whatever quality I give you, give me the next available segment of it.

How am I doing on time? Just about out of time — right, okay. So anyway, the fundamental flow of how you work with this, to start using the DASH player in the real world: you create a context and a MediaPlayer instance — it looks like this — initialize the media player, give it the manifest URL, attach the video element, and either call play, or, if autoplay is set to true, it plays automatically. All the other things happen in the background — the slides will be available through the conference website, so you can get the details of everything that happens in the background — but really, as a developer, these three steps are all you need to know to use this.
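From memory, that flow in the 1.x-era player looks roughly like this — check the GitHub repo for the authoritative, current API:

```javascript
// Basic dash.js usage sketch (1.x-era API, from memory; the manifest
// URL is hypothetical).
var url = "http://example.com/content/manifest.mpd";
var context = new Dash.di.DashContext();  // create a context
var player = new MediaPlayer(context);    // and a MediaPlayer instance

player.startup();                                   // initialize the media player
player.attachView(document.querySelector("video")); // attach the video element
player.attachSource(url);                           // give it the manifest URL
player.setAutoPlay(true);                           // or call player.play() yourself
```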

If you want more information on any of this, there's another DASH session — 104 — today at 2:45. The DASH Industry Forum website is here, our reference player is there, and the source code for the reference player is here on GitHub. If you want to understand more about the HTML extensions — the media source extensions and encrypted media extensions — their URLs are here. And there's my Twitter info, if you want to reach out on Twitter.