This presentation is called "Faster Than C? Parsing binary data in JavaScript."

A little bit about myself: my name is Felix, and I actually have a connection with Atlanta. I was here in 2004-2005 as a foreign exchange student at Grady High School, and I just had the greatest time here. I met a lot of good friends, and the host family I was staying with is really awesome, so every time I can come back, I do. I've been back in Atlanta pretty much once a year since then. I like Atlanta and the people here.

What do I do? I do a bunch of things. I do NodeCopter. Has anybody heard about NodeCopter before? Not most people, OK, so I'll not talk too much about this, but for those who don't know what it is: basically, last year in Berlin we did a pre-event to another conference called JSConf EU, and we decided that instead of doing another lame presentation like this one, where I'm just going to do a lot of talking, we wanted to have people program flying robots with JavaScript. We were able to get sponsorship, buy a lot of robots, and get a lot of people into this old abandoned pool building, and within a day they were doing computer vision stuff, image detection, all kinds of interesting things, having never done it before, because we figured out how to make it really simple and easy with JavaScript. So it kind of took off, and there were more events. We were actually trying to put something together called the Summer of Drones, where we would buy a whole bunch of drones with sponsor money and have events all over the place with the help of local organizers, and we were actually going to put on an event here in Atlanta. But we screwed it up: we took too long to prepare, and the sponsorship didn't come in in time to place the order. It happens. So instead of flying robots, today we're going to talk about something else, but hopefully it's going to have some practical value.

This talk is about performance. It focuses on a little story I had with JavaScript performance, but I think the lessons from it are applicable in general. A few other things: I run a company called Transloadit. We do file uploads and video encoding as a service, so if you need to handle file uploads, we do that stuff. I'm a Node.js core contributor; I was one of the first contributors. I did a lot of patches initially, because when we started using Node it was not really working for us and we just had to contribute quite a bit, so I know Node core pretty well. I also maintain a few modules that are popular in the community. This presentation is largely about the work I did for the node-mysql module, but I've also authored formidable, which does file uploads, and a whole bunch of really small modules.

Oh, and I need to say one thing, which is a little blasphemous: I've kind of converted away from JavaScript recently, and I'm doing a lot of Go these days. I'm not going to talk about Go, but if you want the Go sales pitch after this presentation, hit me up at the bar. Anyway, I'm going to say stupid stuff, I always say stupid stuff, so if you want to correct me, best come to me after; but if you just want to tweet "this guy is stupid," that's my Twitter handle for doing that.

So let's get this thing started. This talk is about performance, and it kind of compares the performance of JavaScript to the performance of C, which is like comparing good versus evil; we can argue which one is good and which is evil. But we're also going to talk about using the good part against evil, and it turns out we may also need the bad part to fight evil in this case. I'm just being facetious about C being evil; it's a fun framing, though.

First of all: am I really going to claim that JavaScript these days is faster than C? I'm not going to do that. This is title bait, I'm sorry. What I will claim, however, is that JavaScript these days can be as fast as binding to C for getting certain performance-critical work done. By that I mean: if you have the choice of using a module that talks to a network service like a database, one with a binary protocol, and you need to decode that binary data into JavaScript objects, then I believe these days we can either take the C library and bind it into Node.js, or write the whole network protocol and all the decoding in JavaScript, and the performance will actually be comparable. That's possible thanks to JavaScript being very fast. However, it still involves a lot of work, and that's what I'll talk about a little bit.

So there's a story to this, and the story is set in early 2010. The Node.js community had a problem, and the problem was that MySQL was a very popular database, still is, but there was no MySQL module for Node.js. There were actually NoSQL modules for Node.js, but no MySQL. A lot of people were worried about this, including Ryan Dahl, the creator of Node.js: oh my god, what are we going to do about this? In other languages like Ruby or Python, people were just binding to libmysql, and that was fine, because those languages are blocking: if you call a function from libmysql, you expect your program to hang until the database server replies.
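To make that difference concrete, here is a little sketch. This is hypothetical code, not any real driver's API: in a blocking binding, query() would not return until the server replied, while in the Node style it returns immediately and the reply arrives in a callback later.

```javascript
// Hypothetical sketch, not a real driver API: query() returns right
// away and delivers the reply via a callback, which is the style Node
// wants. setImmediate() stands in for the network round-trip.
function query(sql, callback) {
  setImmediate(function () {
    callback(null, [{ id: 1, title: 'hello' }]);
  });
}

var order = [];
query('SELECT id, title FROM posts', function (err, rows) {
  order.push('reply: ' + rows.length + ' row(s)');
  console.log(order.join(' -> '));
});
order.push('query sent');
// prints: query sent -> reply: 1 row(s)
```

With a blocking binding, "query sent" could only be logged after the server replied; here the program keeps running while the query is in flight.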

However, in Node you don't want to do that. In Node you want to do asynchronous I/O, and that's not really possible here, because libmysql is essentially blocking and single-threaded. So it was a problem for a long time. Spoiler ahead: we figured it out eventually. In Node core we basically created a thread pool where blocking work, like calls into libmysql, would be parked in a thread, and Node would just use multiple threads to do this stuff. So that eventually got solved, but back in 2010 we were worried about how this would work.

And then comes this guy. I can't pronounce his Japanese name, but his Twitter handle is masuidrive. In a super polite Japanese way, he sent a message to the Node mailing list contributing a patch to Node core, and then, in a little paragraph at the bottom, he said: oh, by the way, I don't know if anybody will be interested in this, but I started writing this module that talks to MySQL, and I basically just wrote it in JavaScript. And everybody was like: what? I was also looking for something that talks to MySQL, so I checked it out, and I realized the guy had done it in pure JavaScript. He had not used C or C++ to do a binding; he had just implemented the MySQL protocol. That was crazy. Ryan and I, and really nobody else in the Node community, thought that was a project you could pull off, because it's kind of a complex protocol. It gets even crazier: in Node we have a thing called buffers, which are essentially an array of bytes holding binary data, like a slice of C memory, and he did that work before we even had buffers in Node. His parser was actually using JavaScript strings to decode the binary MySQL protocol. Making that work was crazy; this guy was good.

Anyway, how many of you have used Node.js before? OK, that's enough people. So a little bit of Node.js trivia to make this relevant: buffers in Node actually used to be called blobs, and I really regret that they are not called blobs anymore, because the word "buffer" is also used for buffering data, and it's not always clear whether you're talking about buffering binary data or buffering strings into an array. So I think "blob" was the much better word. They were actually called blobs for three minutes and 15 seconds, and then Ryan changed his mind and did a revert commit to change the name to buffers. So I just like to remember the blobs. They had a short life, but it was good.

Anyway, the big revelation from masuidrive's work on the MySQL driver was that MySQL can be done without libmysql: we can actually pull this off and write a JavaScript library that decodes the binary protocol. Unfortunately, he didn't really continue with this work. I think he got backed up with other stuff and he just left his library sitting there. In the meantime, we got buffers into Node.js, so we had a much better way to address binary data, and his module kind of needed a complete rewrite to make sense anymore. And I was like: well, he's kind of proven that it's possible, so why don't I look into this? Because I still had this problem and I needed something. Well, this is embarrassing, but my solution up until that point was to actually have a PHP process running in the background that I did IPC with, using PHP's MySQL binding, and I'd send JSON data back and forth. It was terrible; I don't recommend this. But I was desperate.

So anyway, I went ahead and decided: OK, I'll do this, I'll create this from scratch, and since he's not updating his, I'll just take over the name, because the name "mysql" is good and makes sense. He was cool with it. So I also implemented the entire protocol in JavaScript, and it turned out OK, I think. Most people in the Node community use it; it's the most used MySQL module. And we're going to talk about the performance benefits and disadvantages of doing it this way. That being said, if I could go back in time and wait for somebody to do a binding to libmysql, I would do that. This was a lot of work; it was just me being interested in the problem, really.

So anyway, I'm publishing this thing on GitHub, and the universe... well, no good deed goes unpunished, and I believe the guy who actually came up with this concept was Sir Isaac Newton. He was a scientist, and he called it the third law of motion. What the third law of motion says is: when a first body exerts a force F1 on a second body, the second body simultaneously exerts a force F2 = -F1 on the first body. That means F1 and F2 are equal in magnitude and opposite in direction. So here's the thing about Newton: Newton was amazing, he did a lot of interesting work, but he was also very secretive and very competitive. He never gave Leibniz any credit for coming up with calculus: that's mine, I did that. And he did not really share his work, and I think the reason for that is that he was born in a time when there was no GitHub. With GitHub, he would have put his stuff out there, and we could see when the first commit was made, right? It would be verified with a SHA-1 hash; it would be great.

So let's imagine GitHub had already been around for Isaac. I think he would have come up with this (this is me being very silly): the third law of GitHub, which might be phrased like this: when a first person pushes a library L1 into a remote repository, a second person simultaneously starts working on a second library L2, which will be equally awesome, but in a different way. That's why we love GitHub, that's what makes GitHub great: somebody does something crazy, and immediately somebody comes along like "I can one-up you." And that's what happened to me. I did all this work on the node-mysql module, and then this guy comes along, and he basically created a binding to libmysql. Once we had figured out the thread pool stuff, he was like: this makes much more sense than doing all that work again. So he did a binding and he released it. And, well, we developers are always competitive about things and performance, so he did some benchmarks, and he was like: mine is so much faster than yours. And I was like: let me see that. So I ran the benchmarks on my own.

The details are not really relevant, but if you're interested: the benchmark I was mostly obsessing about was the use case of a lot of data from a lot of rows coming in from MySQL, since a typical use case would be importing, say, 100,000 rows from an existing database, converting them, and pushing them back into the database, a kind of migration thing. So in this case: 100 megabytes of data containing 100,000 rows. The rows have five columns: id as an auto-incrementing integer, title as a varchar, text as a text, and created and updated as datetime timestamps. Conceptually speaking, the work the MySQL driver has to do in this use case is to create 100,000 row objects, and each of these row objects is going to have properties for each field, so you end up with 500,000 keys and 500,000 values. That's the kind of work we're looking at.

So he was claiming his was faster. I did my own benchmark, because you can't trust other people to do it, and this was the result, my library on the left. So, yes, his library was "slightly" faster, shall we say: easily by a factor of two or three. And when I saw that, I was like: well, of course, libmysql is written in C, and my stupid little thing is written in JavaScript, and C is faster than JavaScript, right? And I got really desperate, and I was about to give up. But then I was like: wait, wasn't this V8 thing supposed to be super fast and turn my JavaScript into assembly code? And wasn't there a thing called Crankshaft that would analyze my JavaScript code at runtime and generate even better assembly code by figuring out the types? And wasn't Node going to solve all my performance problems and cure world hunger and whatever? Was I living a lie? I felt unhappy about this.

So then I did a lot of stuff, and my conclusion is: well, I was actually kind of living a lie. V8 and Node are tools. They may be fast tools by default, and they may have interesting performance characteristics, but it turns out performance is not a tool. I don't know what performance is exactly, but it's not a tool, and from my experience it's a lot more about hard work and data analysis. So I actually sat down and was like: let me see if I can do this faster. I basically rewrote the library, and I had this great idea for how to make it faster. Well, it turns out the rewrite was 1,000 times slower, because my idea was fancy and I didn't verify it early on. I was like: well, that was not a good idea; doing everything from scratch and only benchmarking at the end is not really good. So I redid it again, and the end result was something kind of fast, but it took a lot of hard work and a lot of data analysis to actually get there. In fact, I poured maybe 150 hours into this thing. But anyway, the end result was that I got a library that performed better, and this is the result: my first version is on the far left, in the middle is the C binding, and on the right side there is my mysql version 2, which, as you can see, is easily as fast, maybe a little bit faster, the bar is a little higher, but I don't know if that's significant. But we're competitive, right? So I was really happy with myself, and at this point I was like: man, this is so cool, I have to submit this to a conference like JSConf EU and talk about it.

Well, remember that third law of GitHub thing? It hit me again, in between submitting the presentation and getting accepted. After I got accepted, this guy comes along, Brian White, and he does another binding to a C library: a binding to MariaSQL. MariaDB, if you don't know it, is a fork of MySQL, actually by Monty, the original guy who started MySQL, and they also forked the client-side code, and they actually made the MySQL client non-blocking.

So it's also doing asynchronous I/O. And Brian is a very smart guy, so he made a better C binding as well. You combine these two things, and you get performance. So again, he released performance benchmarks, like: mine is faster than yours. And I was like: man, let me test this. I had this benchmark going already, so I plugged it in, and this was the new result. Ah. I don't need to say a lot about this; I was devastated. So what would you guys have done? Would you have given up at this point? Raise your hand if you would. All right. I was about to, but then I was like: I'm not going to let that stand. So I again set out to create another parser, and I'll talk about what I actually did in just a little bit, but I basically did everything from scratch again, put even more work into it, worked on it really hard, and this is the end result. And by "end result" I mean I actually just did this for fun; I never integrated the faster parser into the current library. I just wanted to see if it was doable, and then I didn't finish it. Anyway, the thing is, I wrote this new parser, and it's twice as fast as the previous one I had in mysql version 2, which puts it about on par with the performance of the MariaSQL thing; they should be in the same ballpark. I'll talk about how that happened in a second.

But before that, what's going to happen? Is that third law of GitHub thing going to hit me again and again and again? Actually, I think this time around I have good reason to believe, from the data that I analyzed, that this is approaching the end game. By "end game" I mean that the last bottleneck becomes V8, the JavaScript engine itself, creating the JavaScript objects. It doesn't matter whether my decoding happens in JavaScript or in C: at some point both of our libraries have to ask V8 to create the JavaScript objects, with all the properties, for all the keys and all the values. And it doesn't matter whether I do that from JavaScript or call the C++ interface to V8: that work has to be done, the garbage collector needs to track it and do the garbage collection, and by the time the decoding itself becomes ridiculously fast, this is the limiting factor. And I believe I reached that point, from the data I have.

Also, just for fun: the MySQL server is horribly saturated at this point. When you have a single client that can pull six thousand megabits of data out of a MySQL server on a single CPU, you're probably going to saturate whatever MySQL machine you have. Chances are you may still have a 1 gigabit network interface, in which case you've already exceeded the network interface, but even if you have more, as soon as the disk gets involved, you're not going to get this kind of data flow coming from a MySQL machine. So I think we're at the point where it's really pointless to optimize further.

So anyway, let's turn this talk into something useful, away from the story: how can you actually write fast JavaScript? Some of this advice, I hope, applies to other languages and technologies as well. The first thing I want to talk about is what doesn't work. I'll put it in context, but basically, for what I was doing here, to get to the last levels of performance, profiling stopped working for me. Profiling is neat when you've written a whole bunch of code and you kind of want to know what's slow. But profiling just tells you which functions are slow; it doesn't tell you how to fix them. It gives you no clue about what you need to change. Now, sometimes it's an easy fix: you're using an inefficient algorithm and doing unnecessary work, and then you stop doing stupid stuff and you get fast. But when you're already pushing the limit, and you have your data structures well thought out and all that stuff, profiling doesn't really get you further.

What else doesn't work? Taking performance advice from strangers. And I am a stranger, so you should keep that in mind. Taking performance advice from strangers is good for ideas and inspiration, but it's useless when you apply it cargo-cult style, and a lot of people do this. They take advice, maybe some stuff the V8 people, who really know what they're talking about, will tell you in terms of "do this and your JavaScript will be fast." People then take that knowledge and apply it in all kinds of situations where they believe they need fast code. However, they never verify their assumptions. And by that I don't just mean verifying that the optimization makes most things faster in general; they don't verify that it makes things faster for their use case. It really matters whether you have a loop doing 1 million iterations per second or 50 million iterations per second. In a loop you run 1 million times per second, you can do the slowest thing ever and it doesn't make a difference; at 50 million, it may make a difference. So it really depends on how hot your loop is, and to judge which kind of advice applies, you have to test it in the actual context of what you're doing. Applying all these micro-optimizations blindly will get you nowhere, but they're good for inspiration. I'll give you a stupid example: a lot of people send me patches every once in a while where I have a for loop that looks like this, and they'll be like:

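Something like this; my reconstruction of the two slide snippets, the plain loop and the "optimized" version people send:

```javascript
// The plain loop, as it appears in my code:
var items = [1, 2, 3, 4];
var sum = 0;
for (var i = 0; i < items.length; i++) {
  sum += items[i];
}

// The "faster" version from the pull request: the length lookup is
// cached, so items.length is not read on every iteration.
var sum2 = 0;
for (var j = 0, len = items.length; j < len; j++) {
  sum2 += items[j];
}
console.log(sum, sum2); // 10 10
```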
Don't you know that you can make for loops much faster? Can't you just cache the length lookup, so you don't need to access the array's length on every iteration? It will make things so much faster! And I actually receive pull requests where people do that in contexts where it doesn't matter. Well, there are a bunch of problems with it. First of all: sure, maybe there's a situation where this makes my code faster, but unless you can show me a benchmark where it's relevant, why should I do it? It makes my code weird. The second reason is that these days the JavaScript engines actually optimize this away; I think this trick no longer applies. It used to be an optimization that was useful in some situations, but yeah. That's pretty much the example I have, and there are a lot of other micro tricks like it. A lot of people are obsessed with this jsPerf tool, where you can do all these micro benchmarks, and they'll paste links on Twitter and everybody starts doing it. That's not good unless you test it in context.

So what does actually work? You're not going to like the answer, but it is the answer: benchmark driven development. Benchmark driven development is basically similar to test driven development, and I should say: just like test driven development has the most value when you care about correctness, and there are cases where you don't, you just want to get the prototype out the door, benchmark driven development is only interesting when performance is an explicit design goal. And by that I don't mean it's the tenth thing on your list of requirements; it's the first thing. That usually shouldn't be the case. It's an exotic use case, like when you just want to play with some people on GitHub, as I was doing.

The whole idea behind benchmark driven development is this: if you start benchmarking early on, and you do it all the time, continuously, you're going to write code that is much faster than if you just write a lot of code and then try to benchmark it and make it fast after the fact. I've tried both paths several times, and I've found that if I care about performance continuously as I work on a project, I get much better results.

So how do you do this, ideally? You start with a function, and the function is the work you want to benchmark, and you put a bunch of code in there. That code should ideally be slow enough that it takes at least a few milliseconds to run, maybe 100 milliseconds. If whatever you're testing is not slow enough by itself yet, just wrap it in a for loop that runs enough iterations. Once you do that, you can use a very simple benchmarking tool. Here's my tool of choice; I've since added a bit more sophistication, but this is enough to get you started. You do an endless loop: you take a timestamp when you start, then you call your benchmark function, which is going to take a few milliseconds to run, then you compute the duration and print it out. Easy as that.

How do you continue? The next step is to implement a tiny part of your function, basically the least amount of useful work that gets you closer to solving the problem. In my case that was parsing the header of a MySQL packet, which is the first four bytes that come in; the whole packet may be hundreds of bytes, but I just cared about those first four bytes, and I ran the benchmark against the code that parses them. Then you look at the impact of adding this functionality: how much slower did it get versus doing essentially no work? And then you start tweaking the code. This is actually the part where you can remember all the stuff people have told you could make your code faster. You try these things out, but you try them in context: you know this is actually the thing you're testing. You tweak the code, and you do that often enough until you find a good improvement.
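Here is a minimal sketch of that harness. As the unit of work I'm using a toy version of that first step: a MySQL packet starts with a 3-byte little-endian payload length followed by a 1-byte sequence number. The function names and iteration counts are made up for illustration.

```javascript
// A MySQL packet header: 3-byte little-endian payload length,
// then a 1-byte sequence number.
var packet = Buffer.from([0x2c, 0x00, 0x00, 0x01]); // length 44, seq 1

function parseHeader(buf) {
  return {
    length: buf[0] | (buf[1] << 8) | (buf[2] << 16),
    number: buf[3],
  };
}

// Repeat the work enough times that one benchmark() call takes a few
// milliseconds, so the timing is meaningful.
function benchmark() {
  var header;
  for (var i = 0; i < 1e6; i++) header = parseHeader(packet);
  return header;
}

// The real loop is `while (true)`; three iterations show the shape:
// timestamp, run the work, print the duration.
for (var run = 0; run < 3; run++) {
  var start = Date.now();
  benchmark();
  console.log(Date.now() - start);
}
```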
You will pretty much always find one. And what you're going to learn from this is that a lot of your assumptions don't hold within your context. I'll give you some examples, and you need to promise me to forget them afterwards. In my case, I was under the impression that as soon as I wrapped something in a try-catch block, it would get slower, because people had shown this was the case in some JavaScript engines for a long time. I had that in the back of my mind, and I was trying to avoid try-catch. Well, it turned out that for my use case it didn't make a difference: wrapping the function I was benchmarking in a try-catch didn't make a significant difference, and that made it easier to get the code written.

Another idea I picked up from somewhere: I looked at what other people were doing for parsing network protocols in C, and I saw a lot of these big state machines using a big switch statement, and I thought: well, if they're doing it, they must really know what they're doing. For example, Ryan Dahl did that for the HTTP parser that powers Node; they must know what they're doing, so I just copied that approach. Turns out V8 did not optimize that use case (I think there may be a patch by now that does). The other reason I was attracted to that big switch statement in my design was that I thought function calls would cost me: I thought that if I did fewer function calls, my code would get faster. Wrong. Function calls are so fast these days that it's a benefit to have them in there to structure your code better. It makes for a much better code base, and it also makes the profiler more useful, because the more fine-grained your functions, the more it can actually tell you; even though it may not tell you what to change, you get a more local idea of where the problem is.

And probably the biggest revelation I had: buffering is actually OK.

In the Node community there's a lot of idealism around this idea of not buffering data, of streaming everything: when you parse something, you just parse enough data that you can emit an event, and then the next guy handles it. So in my initial design I was actually trying to parse as little data as possible in the MySQL parser, emit events, and kind of put them together at a higher level. Turns out that's a stupid idea: you need to do more bookkeeping that way. Buffering up a whole bunch of data and then going through it in one pass is much faster.

I also came across some creative ideas. The one optimization I actually ended up using was loop unrolling, using eval. Eval is basically a way to evaluate any JavaScript code at runtime, but it has a good twin called new Function, and you can actually make it work securely if you put your mind to it. I wouldn't recommend it for most code, but here's the idea. I had this part in my code called parseRow, and in parseRow I get a few arguments, my columns and the parser, and then for each row I iterate across all the columns, attach a property to the row, and read the value for that row from the MySQL data I have already buffered. So I'm saying row[columns[i].name] = parser.readColumnValue(). This turned out to be a significant bottleneck in what I was doing, and I was playing around, back and forth: how can I make this faster? Then I hand-unrolled this loop, just to see what impact it would make, and I wrote it out by hand. Turns out JavaScript engines can handle that much more easily, because through static analysis they get a better idea of what the shape of the object will look like before they have to hit it at runtime. So V8 was able to run this much better. So how can I unroll the loop at runtime? Basically, with a little hack like this: at runtime I generate a function where, for each column, I generate a line of code to parse that column.
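Here is a sketch of that hack; compileRowParser() and readColumnValue() are hypothetical names standing in for my real parser internals:

```javascript
// Instead of the generic loop
//   for (var i = 0; i < columns.length; i++) {
//     row[columns[i].name] = parser.readColumnValue();
//   }
// generate an unrolled function once per result set. JSON.stringify()
// guards the property names; don't feed this untrusted input.
function compileRowParser(columns) {
  var src = 'var row = {};\n';
  for (var i = 0; i < columns.length; i++) {
    src += 'row[' + JSON.stringify(columns[i].name) +
           '] = parser.readColumnValue();\n';
  }
  src += 'return row;';
  return new Function('parser', src);
}

// A toy "parser" that just hands out values in order:
var values = [1, 'hello'];
var cursor = 0;
var parser = { readColumnValue: function () { return values[cursor++]; } };

var parseRow = compileRowParser([{ name: 'id' }, { name: 'title' }]);
console.log(parseRow(parser)); // { id: 1, title: 'hello' }
```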
With that, I end up with basically the exact same unrolled code I was just showing. Never do that for anything normal, but this is the hard way; if you want to be competitive with C from JavaScript, you may have to go this far. There's no other way. Actually, I think this particular use case is something the V8 engine could optimize eventually; they just have a long list of stuff to work on.

So anyway, these were some very specific examples. Before I go into the other stuff: please do not remember any of this. Do your own benchmarks, do the stuff that matters in your context, because it turns out it's really different for different use cases; it depends on how hot your loops are and what your data structures are.

The next thing I want to talk about is data analysis, because I'm a little frustrated with the way some tools currently do this. Here's how you do a really good benchmark: write a script just like the while loop I was showing earlier with the console.log, produce as many data points as you can for each benchmark iteration, and just output them to standard out, tab-separated, or use commas, whatever; basically, do it the Unix way of outputting data on a line-by-line basis. And what else do you output, besides the run time? Well, anything you can get your hands on, especially metrics from the VM: what's your memory usage, what's your garbage collector doing, all these things. Also store which Node.js version you were using, what stuff was involved, so that when somebody looks at the data later on, they can not only see the results but also have more context on what was happening. And here's why I advocate this: you should never mix the data collection from the benchmark and the analysis into one step. A lot of benchmarking and performance tools that I've seen utterly fail at this. They basically collect the data
and then, without any step in between, without showing you the raw data, they just show you the result. jsPerf, for example, does exactly that, and many benchmarking tools do. And there's a huge problem with this: if you don't have the raw data, you're only going to look at whatever visualization they provide from it, and it turns out you actually need to look at the data in many ways to really understand what's going on. I have an example of this in a second.

As for recommended tools: as I said, output the stuff to standard out, and then you can use Unix utilities, for example tee, to pipe the data into a file and see it on your screen at the same time. I'm a huge fan of the R programming language. Who has used R before? You guys should all check it out. It's a terrible language, but it's the most powerful visualization thing we have; it's a statistical analysis tool, and especially when you use the ggplot2 library you get very nice graphs out of it. It's really good, and it can parse these kinds of separated-value files really easily. And then you can use stuff like Makefiles, ImageMagick, and sed (I used sed to do some annotations) for the tooling in between, to automatically generate images from the plots, all these tools.

So let's look at the why: why do you need to separate the data generation, the benchmark, from the analysis? Let's go back to this. What we have right here is the benchmark.

iteration, I compared the fastest parser I was able to write with a previous version. Well, I was looking at this data and thought, okay, this is interesting: A is faster than B, great, we must be great. But then I got interested in what the distribution of these values actually looks like, so I did a jitter plot, and this is what it looked like: basically, on the left side I have my old, slower library, and on the right side the faster library. But what are these gaps here? Why are there some data points where the parser in some iterations is really slow and in some it's really fast? There's a huge gap in here, about ten percent or something, and I was like, wow, that's kind of awkward. So I actually started plotting this data over time, or rather over the iterations of the benchmark, and this is really interesting: not only do both of these show these huge gaps in the distribution, the distribution changes over time. Initially, both parsers start out fast, and then after the same amount of time, the same iterations, the same amount of data parsed, they both drop in performance. And they're completely different code bases; I mean, I wrote both of them, so there's a chance I made the same mistake, but man, this is a pretty weird pattern to see in performance. So I was like, huh, what's that? Good thing I actually used the separation between data logging and data analysis, because now I could drill down further. I also had some virtual machine metrics for this, so I was able to look at the heap total usage: how much memory does V8 get from the operating system to place JavaScript objects in? This is not necessarily the amount of memory you really need; it's just what V8 reserves for placing objects in. And what I see there is that for both libraries, at the same point in the iterations where the performance drops, right here, the memory usage goes up by
like 20 megabytes, up to 35 megabytes of heap total. Interesting. So I looked further, at the actual heap used; heap used is the amount of actual living JavaScript objects that are in the heap at any given time, and as you can see, this goes up and down for both libraries, it's the garbage collector doing its work. But at that exact same spot the VM basically takes more heap total, and then ends up, for each iteration, running the heap total to the maximum, doing a garbage collection, and running back down. So what do we have here? Well, it turns out this is a V8 bug: the garbage collector is actually not doing the best thing, because initially it was doing that work much more efficiently, and then at some point it just starts doing it much more inefficiently, even though I don't leak memory, I'm not allocating more stuff; it's the same runtime profile. So the good news is, by doing the data analysis separate from the data logging, I was able to actually get good data on this and submit it to the V8 guys, so the bug is kind of fixed now. And by kind of, I'll show you what I got when I did this benchmark again today. This is basically the profile that we had on node version 0.8.22: there's a number of iterations, a hundred iterations or so, and then there's this ten percent performance drop, and it's the same for both libraries. And here's what it looks like on the newest node version. Interesting, to say the least: two different code bases that previously exhibited the same garbage collector bug now have different ones. For the faster one, well, first of all, I can't really fault the V8 people; the thing just went from, what's the top here, like 7,000 megabits per second in the top profile to up to like nine and a half thousand megabits per second. That's a nice improvement just from upgrading node.js, so I'm not really complaining. But they still have the problem where for some periods of time the performance drops, and then it gets fast again for a short amount of time, then performance
drops again, goes faster, drops, whatever. Interesting. For the older, not-so-fast library, the pattern is not that clear; it's not going up and down that much, it's basically dropping again at this point and then kind of jittering around, but it doesn't go back to the full value again. So what can we learn from this? First of all, virtual machines are magic. There is no way in hell you're actually going to get a good mental understanding of how these modern virtual machines work. Your best chance is to do a lot of benchmarking, collect a lot of data, and look at what's really happening. You may still not understand what is happening, but at least you have something where you're like, okay, at this point it's the virtual machine's fault; I'm done with my work, now I submit this to the virtual machine people. So, the summary, too long didn't read: if you want to make something really fast, write a benchmark, then write and change a little bit of code, collect as much data as you can from this iteration of the benchmark, find some problems with it, come up with new ideas, change the code again, then go back to step two. And that's pretty much it, thank you. Are there any questions? Okay, no? Just me? Okay. So this kind of stuff, I mean, performance tuning in general, will make

you a little loopy; you have to have a very specific reason to do it, so it seemed like yours was just competition? Yes, I mean, there's this imaginary use case where somebody wants to migrate a lot of data, where this optimization would be a benefit. I did not actually have that problem, I wish, but no, not really. Okay, so can you talk about the point where, as you mentioned a couple of times, you just scrapped it and rewrote it? Normally you'd, you know, benchmark a little bit and try to figure out where the slow spot was; why did you scrap it? The reason I did it was, I was basically looking at something that was not fast enough, and I tried to write a benchmark for it, and then I asked the profiler, hey, what's slow? The profiler tells me, this function. I go into that function and start changing a little bit of stuff around, and I make incremental improvements; I get like ten percent, another five percent, but I couldn't really close the gap to two times. And it actually turned out I wasn't doing anything particularly stupid, except for that one case where I did the evil thing, and well, the evil thing was stupid but it made it faster anyway. The first time I went through this process, I didn't do anything stupid in the first version of my parser; I just basically made a lot of small mistakes, and it turned out, for me, in my testing, to be more work and more frustrating to reverse the sum of all my small mistakes than to redo it from scratch, not making those mistakes to begin with and testing kind of every line of code as I went along. It was easier for me this way, and I got a result either way; it's practical, I'm not saying it's a theoretically sound approach. Where do we buy your shirt? This shirt is actually one substack, James Halliday, made, and he put it on Teespring, and I think he sold only a few of them, so you need to ping him on Twitter and be like, sup substack, you need to make more of these.
Can you live code something? No, I'm kidding. No, sadly. I could live code something, I don't know what; I could show you how I'd start writing something like this, but without practice it will be terrible, unless you guys want to be bored a little bit, then I'll do it. I was just wondering, by virtual machine, are you talking about the JavaScript runtime, or like an OS? No, the JavaScript virtual machine, V8. I was also wondering, how are you able to find things like the heap usage? Node has an API called process.memoryUsage, I believe it's a function call, and that gave me the heap total and the heap used. It also gave me the RSS, which is the amount of memory the operating system gives the process. I didn't show it, but basically all that stuff came from there, and I logged all of it away for analysis. Does reading those stats alter the actual runtime performance, if you're constantly asking for them? That's a good question. I did not correct for that, did not control for it. I suspect it shouldn't change the behavior, but I should do a benchmark where I leave out the process information, log only the runtime per iteration, and see if it produces a different graph. I have to do that; it's a good idea. What are some use cases of when to use node? Oh god, now I have to rant a little bit. Don't use node.js. Ninety-nine percent of the people who use node.js buy it because it promises easy performance gains, and it's all JavaScript, and all that. But realistically speaking, if you have a problem that is solvable using a relational database and Rails or a similar framework with thousands of man-years of experience in it, you're going to have a much easier time and develop software faster. When should you use node? You should use node when you do exotic stuff: WebSockets, when you just have a lot of WebSocket clients, or when you have some exotic use case.
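The process.memoryUsage() call he mentions is a real node API; a minimal sketch of reading the metrics used in the talk:

```javascript
// process.memoryUsage() returns the metrics discussed in the talk:
// - rss:       resident set size, the memory the OS has given the process
// - heapTotal: memory V8 has reserved from the OS for JavaScript objects
// - heapUsed:  memory occupied by live JavaScript objects right now
const mem = process.memoryUsage();

console.log('rss       %d bytes', mem.rss);
console.log('heapTotal %d bytes', mem.heapTotal);
console.log('heapUsed  %d bytes', mem.heapUsed);
```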
you may want to have code sharing between your back-end and your front end, so both are in JavaScript and you can pass validation code around. But be careful with this: I've heard a lot of people use that as a justification for using node who didn't end up actually doing it in practice, and then all they got in exchange was callbacks. I mean, unless you really have a concurrency problem and you want to use JavaScript for it because you're unwilling to learn something else, I actually really recommend Go these days for similar problems. Or I don't know, use node because it's going to take off, it's going to have a hype, you're going to get a job doing node soon; I think it's going to go somewhere, but I don't think it's really appropriate for most of the problems people throw at it, actually, and I'll get a lot of flak for this, but I'll say it. Can you elaborate on what you mean by separating data

analysis from visualization? Oh, okay. So, separating data analysis from visualization: by that I basically just mean log away the raw data from your benchmark, like which iteration of the benchmark it was, how long this iteration took in milliseconds, what the memory usage was at this point. Log all of this data away into a file for each loop that you do in your benchmark, keep the data in that file, and then have a separate step that actually feeds the data into a visualization tool like R and visualizes it. That's how the graphs I showed were generated. But yeah, don't use a tool that does both at the same time and doesn't give you the raw data, because you're limited by the visualizations that tool offers, and you're going to miss stuff. There was a question about whether it was a good idea for you to buffer, rather than processing the stream as it comes in. Okay, so buffering versus non-buffering: there's one use case where you don't want to buffer up an entire data packet that you receive from MySQL before analyzing it, and that is the use case where somebody has the smart idea of storing files in a relational database, because people have done that. In that case you may get one row, and that row is, I don't know, 200 megabytes of data; in that case my parser design kind of fails you, because you're buffering that whole file into memory, and you probably don't want to do that, especially if you get a lot of rows like that. But that, I think, is an extreme edge case of people doing stupid stuff with a tool they shouldn't be using for it, so optimizing my parser design for that didn't make sense, because most of the rows I get from MySQL are like 10 kilobytes a row, tops, and buffering 10 kilobytes of memory per row doesn't seem that bad. And I do streaming on the row level, so you can get an event for each row; I'm not buffering up all the rows.
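Buffering a whole packet before parsing is what makes the simple sequential style he goes on to describe possible. A hypothetical sketch (the field layout, an int16 followed by a float32, is invented for illustration and is not the actual MySQL protocol):

```javascript
// Hypothetical sketch of a buffered packet parser: once the whole packet is
// in memory, fields can be read sequentially with no callbacks in between.
// The field layout here is made up for illustration.
class PacketParser {
  constructor(buffer) {
    this.buffer = buffer;
    this.offset = 0;
  }
  readInt16() {
    const value = this.buffer.readInt16LE(this.offset);
    this.offset += 2;
    return value;
  }
  readFloat32() {
    const value = this.buffer.readFloatLE(this.offset);
    this.offset += 4;
    return value;
  }
}

// Usage: parse the first two bytes as an integer, the next four as a float.
const packet = Buffer.alloc(6);
packet.writeInt16LE(42, 0);
packet.writeFloatLE(3.5, 2);

const parser = new PacketParser(packet);
const id = parser.readInt16();      // 42
const score = parser.readFloat32(); // 3.5
```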
But basically, being able to buffer on the packet level allowed me to write my parser in a much easier way, because I can literally have one function where I'm just saying: parser, parse the first two bytes and turn them into an integer; parse the next four and turn them into a float. I can just write this sequentially without having callbacks in between all of these; this means I need to do less bookkeeping, like callbacks, and can write cleaner code, and it gave me more performance and a better code base. So in that particular use case, buffering was a totally good solution, but if you have another use case where you're really concerned about how much data is going to be in memory, do streaming; it depends on the use case. So in hindsight, would you use the C library instead of doing all this today? Actually, if I were starting a new project with node and I had to talk to a MySQL database, I would use my module right now, not necessarily because it's a good idea to do this in JavaScript, but because my module has been battle tested: there have been hundreds of people using it in production successfully, submitting lots of bug reports, and I have about five or six people on GitHub helping me maintain it, whereas the C-binding libraries didn't gain much traction, not because they're bad, they just came later. There's another reason: getting any C addons to compile on Windows is a huge pain in the ass, so by using something that doesn't require a C compilation step, people know the portability is better, and they may choose it in certain team settings. So there are some benefits, but if I could go back in time, I would have tried the C solution first, yeah. Is there something that has the features of jsPerf that you can just apply to any situation? So you're asking if there's something with the features of jsPerf that you can just apply to a problem you have?
Well, it depends. If all you want to do is a micro-benchmark to really find out which micro-optimization makes sense, use something like jsPerf. But before you go and apply that micro-optimization in your actual problem domain, you're probably going to have to set up a benchmark there as well. It also depends: if it's a node project, jsPerf is not going to give you the node API, so you can't use it unless it's a purely front-end thing where you can conceivably put the whole thing into jsPerf. And you can still have some problems with it: it's not going to give you the data analysis, it's not going to show you the distribution of points. But I mean, as I said, I spent a hundred fifty hours on this project; that's not very reasonable for most real-world use cases. So if you just need to get an idea of whether something is fast or slow real quickly, do it, just throw it into jsPerf if that solves the problem; use appropriate tools at the appropriate time. Sorry if I didn't make that clear. I'm just

curious it’s a lot of it baby we’re saying that you go for the appropriate things what are you / 15 oh also super niche there’s not a lot of problem i would use go for but like in the problem domains where I’ve seen note being strong which means websockets high concurrency stuff I think go has a lot of benefits basically you get static typing but with basic type interference so it’s not as annoying to use this Java go places a lot into what java could have been and what c++ should have been the goal kind of tries to retake all that problem it’s it’s also good when you need a static binary so it produces one static binary so to you and it cross compilation is great so for example if you want to write a little tool that you can deploy to servers across platforms and that just runs they’re really good high performance wise go becomes very interesting these days because it tackles a multi-core problem that note doesn’t really address so and go you can actually say I want to distribute this problem on 8 course and it’s not going to use all of them where nodes the only way to really tackle this is by starting a denote processes and these are having no state in between them or sharing the state you can kind of load balance / file descriptors as far as incoming connections are concerned but you’re not going to get shared memory and sometimes you want it and go goes off basically it’s a call back he’ll buy a allowing you to do blocking I occult and whenever you do a blocking I oh call it kinda yields back to the scheduler so this operating system threat gets freed and it can use another they called go routine screen sweat and run that go routine on the on the same operating systems threat and so you can literally have 100,000 threats running but you’re not going to use operating system sweats you’re going to use green sweats very low resource usage and in terms of shared memory access you get channels which allow you to pass data structures between these like kind of 
message passing it’s a lot like curling except you don’t get to create the exactly create error handling your line has but you get a lot of Tydeus from earling and the language that is surprisingly easy to learn easy to use and very consistent very modern and it’s stable it’s go at version one that one now go to still at version 0 to 10 they’ve kind of built note in terms of building stable platform and actually see a lot of production for go so check it out if you have an exciting use case if you’re just going to build some things that takes database data in turns it into HTML use a tool that’s really specialized for that I guess in your opinion like what do you protect about a future for both like no dodgy ass NP 8 article become obsolete anytime soon or you could you know know everybody death note like it goes into enterprises all these companies buying it up because they have all these JavaScript people Travis I mean we all know JavaScript right we had to learn it at some point so there’s a huge amount of skilled workforce that can kind of use node and I mean companies are short-sighted they’re just like what’s the easiest tool we can reach for that’s going to cause the least inconvenience to us and I think node will go very fast I mean there’s also a new skis we’re good is still strong smaller projects anything below I don’t know there’s a cut-off point between a thousand and ten thousand lines of code we’re seeing somewhere in between there at the point where using javascript becomes inappropriate because it’s just an ester managed big JavaScript projects kind of tend to become very difficult and where I think go kind of overtakes the benefits but if you just need something very fast quick to write noticed all awesome for that just shuffled data from A to B simple task doesn’t if the scale infinitely do that a note you can write that in few hours so but I don’t think it’s going to be absolute it’s going places but I think so we’ll go go sees a lot of top team 
hosts eh she’s be only focused on my cheek how does on the nose side of things other ways to support other databases and how make sures conversion to me your bicycle library for no Togo note other other databases I think there’s a client support for all databases right now I haven’t used most libraries except for them singer used some couch a mongoose tough because I had to say work I think there’s enough community momentum into maintaining cease library so I think any database that has with reasonable adoptions these days you’re going to get a good and connect or no chairs it works but it’s open source sometimes you have to fix stuff anyway there’s no more questions let’s drink beer all of this is like my total personal opinion so you should question all the fit and try your own conclusions thank you guys you
