Contract Work

Saturday, July 12, 2014

Interesting Reads 6/27 - 7/10

Seems as though either lots of people are on vacation and therefore there is less to read OR I am just super crazy busy and have missed some great posts. Well, I've only got a few posts for today but they're good ones, so check them out.

Things I learned as a Software Engineer

An older but great post about making fat models skinnier

What makes a good startup engineer (as opposed to a regular engineer)

Tuesday, July 1, 2014

Asking the questions

When someone is asked to describe me, the word ‘shy’ does not come to mind. I am not a quiet person and I don’t fear asking questions. In fact, I know that pushing myself to never be embarrassed and to ask any question that came to mind as I was learning to code is absolutely what helped me progress to where I am today. However, recently, I’ve found myself holding back and keeping a bit quieter. Why? Because I think that I should know better.

When I first started on this journey of learning to code, I knew nothing and I mean, nothing. I had never opened the terminal before, had never used the command line, really didn’t know all that much beyond basic information about computers in general. Because I had no knowledge, I decided I could ask anything. Nothing was off limits and nothing was a stupid question because everything was new to me. In my first full time contract position, I asked lots of questions. I was able to do this without fear and without concern because I told everyone that it was my first full time position and I still had a lot to learn.

I wrote a post a while ago about how the things you worry about change as you progress from super new to new, and then from new to first full time position, so I think this is a thing that happens when transitioning from a first position to a second one.

Recently, I’ve been afraid to ask questions. I know that’s silly and completely unwarranted. Everyone knows that for your entire life as a programmer, asking questions is the best way to learn! I hope to never stop asking questions! So, what’s been stopping me? Well, part of me feels like I should know better. I’ve been learning for a while now… I’m not completely brand new to programming anymore, but what does that mean? How much should I know? Is it okay to ask to be reminded of things? Or to need a refresher? Are other people judging me when I ask a question? Especially if the question seems super simple?

Logically, I know asking questions is great. The more questions I ask, the better I’ll get. And the better I get, the more confident I’ll become (or at least that’s what I think). But even though I know all of this and it makes sense in my mind, a small part of me is still worried about what other people think of me, about how much or little I progress on a regular basis, and about what I know or don’t know. I just keep reminding myself that asking questions is good! And that asking makes me better! And finally, that if I don’t ask… even the simple questions that I might be embarrassed about having to ask, then I’ll never know the answer and that’s even worse.

Saturday, June 21, 2014

Interesting Reads 6/14 - 6/20

Short on time this weekend, but lots of good reads both very technical and not. A few great reads for newer developers.


I recently got an Up and I love it. I also love when companies do interesting things with their data like Jawbone did in this post:

Mise en Place and organizing your day:

OoooOOOooo, Thoughtbot's playbook. A really interesting read for anyone working for a tech company or interested in working for one. I'm amazed at their level of documentation here:

Swift and Ruby:

This is a great, interesting resource… and I've actually been asked a bunch of these questions in interviews:

Wednesday, June 18, 2014

Landing the Gig

I recently started as an Engineer at General. The past few weeks have been really great! After posting that I was starting there, I got a request via twitter to do a blog post on interviewing, so here it is and hopefully, it’s helpful.

General is actually my second paid programming position. The first was as a developer on contract for Foodstand, a new startup powered by Purpose. My experience looking for my first position versus my second one was incredibly different, so I’ll go into both a bit.

The first job ever.

I’m not going to lie, this job hunt was tough! Yes, there are a lot of places hiring developers out there; however, not as many are open to hiring junior developers, especially when that junior developer has no previous paid programming employment and no CS degree. I completed an average of five interviews per company during my first search. Most of these interviews were short, consisting of just meeting someone and having a conversation for about 30 minutes. Generally, there were two longer parts: the code part (which for me happened to be primarily a take-home code exercise) and the more technical interview when we reviewed the code.

Of these five interviews, three were somewhat technically focused and two were just schmoozing and answering the same questions you would in any other interview. To get into a few more specifics, the first interview was generally just a phone screen. It had lots of getting-to-know-you questions (like “tell me a little bit about yourself”) and then some broader technical questions, like “what does MRI stand for?” and “what is the difference between include and extend?” From what I understood, the expectation was not that I would know all the answers but that I would know some of them and exude a sense of broad general knowledge. For these, I brushed up on my ruby, went through Ruby Monk, took notes, and actually made flashcards to help me remember all the different pieces.

The second technical part of the interview was usually something having to do with code. Yes, I was asked to fizzbuzz with paper and pencil, but I also received a few take-home assignments. Build a bot was one. Another gave me a set of failing tests to make pass. These generally didn’t have timeframes associated with them, and employers were open to me asking questions as I completed the assignment.

The third technical part was usually reviewing this code work and/or asking some more in depth questions. They usually started with “so, walk me through how you attempted to solve this problem” and then conversation branched into different subjects from there. I, personally, didn’t have any interviews that involved pairing which I was relieved about at the time (but changed my opinion on that for my second search).

So, now here are some general tips and thoughts.

First, for my first job hunt, I did not apply to a single job where I didn't get a soft introduction, meaning that at no point did I blindly send a resume to a company. This is where being involved in the community is really important. At RubyConf, I met a ton of people in person. My primary job hunt strategy was posting on twitter, where incredible people made amazing introductions for me. It is rare, I think, for a company to hire a junior developer from a blind resume drop, so these intros were really priceless.

Second, I was very careful about interviewing them as much as they were interviewing me. While I wasn’t picky about what industry I was looking to be in, I was picky about the type of environment and culture I was looking for. Furthermore, as a woman who is new to development but has major career accomplishments from before my career transition, it was important that I not sell myself short and that I find a company that recognized both my future potential and my current worth as an employee. I asked potential employers a lot of questions about support provided to junior developers, what sorts of structures and systems they have in place to foster learning, and other questions I felt were important as I made this transition.

Third, you should hear about positions and decisions quickly. Yes, there were a lot of interviews, but generally, there was less than a three-day turnaround for scheduling the next interview. There is such a demand for developers these days that companies know that even if I’m not the right fit now, I might be in the future. Therefore, I found that companies are pretty transparent and quick to let you know if you’re moving forward or not… which is a welcome change from my last industry, where you’d interview and then wait a month to hear if you were being offered a position or not.

Fourth, think about what you want out of this first position. I know a lot of newer developers who start as QA engineers, project managers, sales engineers, and other positions that involve code but aren’t specifically developer or engineer roles. When I interviewed for these positions, I was very careful to ask about the path to transitioning into a full-time developer role because I didn’t want to get stuck, and I didn’t want my professional development to be siloed away from the other developers. I was also pretty adamant about getting a developer role as opposed to one of the roles mentioned above, but either path is fine, as long as you have asked the right questions, are clear about what you want your end goal to be, and are comfortable with being in that role.

The final piece, which I think I realized more in hindsight than when I was looking for a position, is that if I walked out of an interview feeling stupid, or awful about my abilities, then it was not the right place for me. Mostly, I felt interviews went well and it was great to meet lots of different developers and talk to them about what they were working on, but sometimes, I left an interview feeling completely awful, like an idiot, and like I would never ever ever find a job. The places that made me feel that way probably would have also made me feel that way on the job, when I made a mistake or didn't know how to do something. An important thought to keep in mind when you’re slogging your way through.

The position I ended up with was great for me at the time. It was a full time contract position in a subject matter I was very interested in. The dev culture was just starting to develop, so I was able to have an impact on what that looked like. The team seemed really great, and while I was a little concerned about being the only full-time developer on the project, there was outside support, and structures in place to provide more support once I started. I was there for four months, with my contract being extended a few times. In the end, however, there wasn’t enough senior-level support, so I felt my learning wasn’t moving as quickly as I knew it could.

Onto the second hunt.

This job hunt was completely different. After I got my first position, I continued to stay involved in the community. Others around me knew how much I was learning, and I had shown that I could succeed as a developer, so when I started looking again, I didn't have to look very far. I actually didn’t even launch a full job search. I spoke to two local companies I was interested in, and General. The other primary difference is that this time I had one or two interviews with each company, as opposed to five.

At all three companies, I had great initial conversations. For the local ones, the interviews were relaxed because they had seen me progress over the last few months. At one, I had already worked with most of the developers in informal capacities, so it was nice to know a majority of the team. At the second company, I requested to go into the office to pair, to get a better sense of what learning from the developers there would be like. At the third company, I had a soft introduction but didn’t know the folks there as well. I had a great initial conversation and then was set up to pair with two of the senior developers. These sessions were great. At one, we went through an old piece of code and discussed how I might refactor it. At the second, we looked at a piece of code and discussed how I’d get a sense of what it was doing, and then we worked on a test and a method. And that was that… all the places I spoke with and all the interviewing I did.

The second time around, interviews were also so much easier because I had real experiences to pull from. The one thing that was a bit more tricky was being able to show code. The first time I looked for a position, I had just been building sites or projects and had everything on github. The second time around, I had progressed significantly in my coding abilities but EVERYTHING I did was private and I couldn’t share any of that code so it was challenging to prove how much I had grown. A nice work-around that someone suggested to me was to focus on blog posts instead. I tried to make time to blog about challenges or things I learned at work so, while I wasn’t able to show code, I was able to show a blog post and talk a bit about the related challenges.

In the end, I received offers from all three companies. It was a really really tough choice but General seemed like a really great fit. I thoroughly enjoyed their interview process (and, I mean, who enjoys an interview process?!), enjoyed the pairing I did, and they’ve got great structures and ideas in place for developers to keep learning and challenging themselves.

Monday, June 16, 2014

One Giant Post about Ruby Nation

This year again, I attended Ruby Nation, and this year was great because I actually understood what was going on!! Admittedly, I took better notes on the second day than on the first because I was speaking on the first day, but most of the sessions I went to were really great. Also, I apologize in advance because this is just one giant post, as opposed to being broken up by session, which I usually do after conferences.

The first session was Sarah Allen talking about ruby and government. She was recently a Presidential Innovation Fellow and spoke a bit about the interesting things she was doing and the really interesting digital data issues that the Smithsonian system has. Sarah brought up some really interesting thoughts about Rails in general. One of the most interesting (controversial?) things she talked about was making Rails more accessible. She talked about other platforms like Drupal or Wordpress and the ease of adding certain functionality or widgets there. These are things that Rails doesn’t have. It isn’t super accessible because you don't have the same sort of ability to add widgets or different features. Her point was that making it easier to add widgets would make us more effective programmers, because we wouldn’t have to re-work things or add simple, commonly-requested functionality on a regular basis.

Her talk closed with three interesting points to think about. First, framework leads to language. Second, what if we could add UI features at runtime, and what would that look like? And third, non-developers are co-creators, so how do we give them a more active role in Rails?

The second talk was by Eileen Uchitelle and was about CRUD and ActiveRecord. I thought this talk was fantastic, although admittedly I got a little lost in the middle. The premise was that when we assume ActiveRecord is magic and does all these magical things, we end up not paying enough attention to our code, to the actual SQL queries, and to how they’re functioning. Eileen then went through an example using each of the CRUD methods (create, read, update, delete) and ways the queries could be better, faster, and more effective. For example, for create she spoke about batch insert, and for read she talked about using pluck. Her slides are super clear and definitely worth a read.

The third session was Terence Lee speaking about working on Ruby itself. He spoke a lot about how to commit and encouraged everyone to get involved with fixing ruby, committing, and getting involved with the core team. I learned that .to_f is “to float”: Float objects “represent inexact real numbers.” He talked about the future of ruby being focused on trust, transparency, and onboarding.
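That “inexact real numbers” point is easy to see for yourself in irb:

```ruby
# .to_f converts a value to a Float. Floats are stored in binary,
# so some simple-looking decimals are only approximations.
1.to_f            # => 1.0
0.1 + 0.2         # => 0.30000000000000004
0.1 + 0.2 == 0.3  # => false
```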

The fourth talk was given by Davy Stevenson on Ruby as science, art and craft. I thought this talk was really great and looked at code from different perspectives and points of view. She started with science, looking at algorithms and talking about what complexity looks like. Davy went through a variety of big O notation types like linear, quadratic, exponential, and factorial. She described an algorithm as a set of instructions that will solve a particular problem… a great definition for someone who isn’t a math person. She then looked at the convex hull problem and demonstrated different approaches through different algorithms that connect the points and show this convex hull in different ways. I won’t go into them here, but if you want to google them a bit further, she discussed brute force, gift wrapping, and monotone chain algorithms.
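For the curious, the building block shared by the hull algorithms she mentioned is a simple cross-product orientation test; this little function is my own illustration, not from her slides:

```ruby
# Cross-product orientation test, the core of gift wrapping and
# monotone chain: given points o, a, b as [x, y] pairs, returns
# > 0 for a counter-clockwise turn, < 0 for clockwise, 0 if collinear.
def cross(o, a, b)
  (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
end

cross([0, 0], [1, 0], [1, 1])  # => 1, a counter-clockwise turn
cross([0, 0], [1, 1], [2, 2])  # => 0, collinear
```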

Then Davy talked about art. The art part was lots of images of different interpretations of art and thinking about them. The best part of that segment for me was a quote she mentioned from @sferik, where he said that programming is about building powerful things in simple pieces.

Finally, Davy discussed craft taking the idea of apprenticeship and artisan abilities and connecting it to code. She said we write code to enrich our own lives and the lives of the people around us and then showed us examples of beautiful code. Definitely take a look at the slides to see what people consider as beautiful code.

After this were lightning talks!! There were a few… one on muscle-driven development, which talked about health and treadmill desks. One on debugging, another on language and how the language we’re using shapes how we look at solutions. A fourth on Ruby For Good, which is going to be awesome and everyone should check out. The fifth was on the Angelo gem, and the last talk was about how you shouldn’t do ops as a dev.

From here, we went into the tracks. Again, I won’t do today’s talks justice because I wasn’t focusing as much as I could/should have been, but hopefully I can provide some notes. The first session I went to was JohnPaul Ashenfelter’s machine learning talk. He did this at RailsConf as well (a longer version). It was really interesting. Basically, he spoke about different gems and things developers could use to learn more about their users. First, he talked about the sex machine gem, which isn’t perfect but can help determine the gender assignment of your user base. Second, he discussed freegeoip for location awareness. After that, he got into clustering a bit and looking at clusters in order to help determine patterns. He walked through the ai4r gem, which puts people into clusters, then calculates the centroid (the center of the random clusters), and finally loops through to see if users are closer to the centroid of their own cluster or another cluster. This enables you to see what cluster people are in, which is super helpful if your clusters show, for example, what tier users are paying for in your system. Alternatives to this can be hierarchical clusterers or divisive hierarchical clusterers. Finally, he talked about a gem called linalg and how you can use it for collaboration purposes (I think… my notes get a little sketchy here).
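To illustrate the centroid-and-reassign loop he described, here's a toy version in plain Ruby (my own sketch, not the ai4r API):

```ruby
# Toy k-means step: compute each cluster's centroid, then check which
# centroid a user is closest to. Points are [x, y] pairs.
def centroid(points)
  n = points.length.to_f
  [points.sum { |p| p[0] } / n, points.sum { |p| p[1] } / n]
end

def distance(a, b)
  Math.sqrt((a[0] - b[0])**2 + (a[1] - b[1])**2)
end

# Returns the index of the centroid nearest to the user.
def nearest_cluster(user, centroids)
  centroids.each_with_index.min_by { |c, _i| distance(user, c) }.last
end

centroids = [centroid([[0, 0], [2, 0]]), centroid([[10, 10], [12, 10]])]
nearest_cluster([1, 1], centroids)  # => 0, the first cluster
```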

After that, I went to an ActiveRecord workshop with Dave Bock. This workshop is fantastic, but very hands on so not many notes. Basically, Dave has a handful of scenarios that each table has to discuss and talk about how they’d architect them. Really helpful for continuing to learn about ActiveRecord, associations, and modeling.

The next talk I went to was about tests and having a messy test suite. Presented by Chris Sexton and Aaron Kromer, this talk was great. They went through testing best practices and showed examples of clean, clear test structures. Some highlights: you shouldn’t deeply nest state in your tests, and you can fix this by creating contexts so that state stays top-level and you can test everything. Tests should help organize and expose state. Class declarations should tell you what behaviors to test. Finally, the two most important pieces of advice were to prioritize what’s most important to test, and then to iterate towards that goal, making your tests better as you go.

The last talk of the day was about sidekiq. I don’t have a ton of notes from this one either, but, presented by Mike Subelsky, it was a great walkthrough of sidekiq and what you can tell from the dashboard when running it.

Day 2
I was able to pay more attention on day 2.

Day 2 started with Jim Gay talking about east-oriented code. This is a pretty cool concept that operates under the principle of tell, don’t ask. The core of it is that queries travel west and commands travel east. The talk touched on the Law of Demeter and how east-oriented code differs from it. There are four things to keep in mind with east-oriented code. First, always return self; in this way the method is prevented from being used as a query and can only be told, which leads to polymorphism, duck typing, and encapsulation. Second, factories are exempt from this. Third, always follow rule 1. And finally, sometimes break rule 3. After rule four, he showed a really interesting Rails erb template that described this and walked through it. I left this talk sort of understanding everything, but I’d definitely love to take a look at the code examples and slides to comprehend it all a little further.

The second talk I went to was on the two programmers in one. I loved this talk by Jano Gonzalez. He talked about how each of us as developers is a little hacker and a little thinker. The hacker wants to get things done, and quickly, which can sometimes lead to a maintenance nightmare. The thinker thinks about maintainability, abstractions, and all the different layers, but can sometimes drift away from the problem they’re trying to solve and end up in analysis paralysis. He then told his story about going from hacker to thinker and arriving at a good balance between both sides when using ruby. Jano spoke about how the hacker is useful for exploring new territory, while the thinker is good for defining components and acceptance criteria. Developers need to deliver value but also diminish technical debt, and it’s all about the balance between the two. My favorite part of the talk was when he spoke about the three stages of learning from martial arts: Shu, Ha, and Ri. Shu is when you’re new and you follow all the rules. Ha is when you move on and start adding your own knowledge and breaking some of the rules. And Ri is when there are no rules to follow and no rules to break; you just do it.

After that was a talk on Google Glass by Lance. This was a fun talk and one I just sat back and enjoyed, since I’m not really developing with Google Glass but was just curious. Lance talked about apps as service layers. He went on to discuss some of the Glass specs, things that are great about Glass, and things that still need to be fixed. He then talked about the Mirror API, which I still have to check out, and how you don’t need to know Android in order to program for Glass. There’s a concept of static cards which add functionality, and a new(?) tool called WearScript which is a bit like PhoneGap for Glass.

Interesting talk and really fun to just see some of the programming side of glass.

Sam spoke about the anatomy of a mocked call, which basically involved a deep dive into what a mocked call means technically in rspec, using its internal mocking library. He started by talking about testing and why you test/TDD. His main points were that TDD’ing helps programmers create a mental model. It helps them think through what the code does, and then, because things change as you program, it helps keep that mental model intact and keeps you on track with the problem you’re trying to solve. In short, tests force behavior.

The first part of this deep dive was discussing stubbing. Stubbing is basically faking a response to a method. Mocking enables you to verify the collaborations between objects by testing the methods that get called. It creates a mocked expectation. So, in the example:
it "does something" do
  allow(foo).to receive(:bar)
  expect( eq(nil)
end

The allow call actually returns an AllowanceTarget.

AllowanceTarget is a subclass of TargetBase and calls delegate_to on TargetBase; with setup_allowance, TargetBase defines to on AllowanceTarget, enabling .to to exist as a matcher.

Then there is receive, which is also often used as a matcher. Once receive is set up as a matcher, similar to how to is set up as a matcher above, receive#setup_allowance creates a mock proxy. A mock proxy is an object that manages the metadata of mocks and stubs on the object in the lifecycle of a test. Calling add_stub on this proxy sets up a method double, which saves the original implementation. Then calling the stubbed method sends a message to the method double, which sends a message to the proxy, which invokes the stub, which returns a value (gosh, I hope I got that right!).

When you run a test, rspec also runs setup and teardown before and after the test, and at the end of the test, the teardown resets everything. This reminds me a little bit of how testing works in Ember with QUnit.

Okay, now onto mocks. For explaining mocking, Sam worked through the same example but using expect(foo) instead of allow(foo). In this case, expect goes through a similar process as the stub did, except this time with a mock. Here, once you have the proxy, the proxy’s callbacks check to make sure the arguments are valid, and the proxy raises if a mock wasn’t called.

Finally, I can’t remember at what point Sam mentioned this, but during his talk (or when someone asked a question about using spies instead of mocks), he mentioned spies, so now I know what spies are! With a spy, whenever you do a stub, it records the information so you can set an expectation at the end of your test instead of at the beginning. Basically, a spy collects all sorts of information for you, and when the object returns, you can ask it questions about its experience. The way I envision this is similar to what the Dorothy devices were trying to do in Twister. In Twister, the tornado chasers weren’t able to learn more about tornadoes because they couldn’t see or understand what was happening inside the tornado. They came up with a way to release thousands of little sensors into the tornado to collect all the information and transmit it back so it could be recorded and studied. From my understanding so far, this is similar to what spies do.

Evan is great. He’s got so much experience and his talk was on remote pair programming. Now, I work remotely, so I’m familiar with a lot of the tools, but his talk was less focused on tools like madeye, nitrous, and screen hero and more focused on command line tools. He spoke a lot about vim and emacs (which made me really want to learn vim again… one day I’ll get around to that). He also spoke about tmux, ssh’ing into machines, and a lot of really interesting concepts. He spoke about how his remote pairing stack has changed over the years and different combinations of things he’s tried which allows people to pick any of these tools and configure it to their remote pairing liking. Overall, an interesting talk and some exposure to remote tools I hadn’t thought about in the past.

Mark spoke about wyriki, which is a project Jim Weirich was working on when he passed away. I’m gonna be honest and say that Mark lost me pretty early in his explanation, but this is what I got: Wyriki is a different way to architect Rails applications. It creates new structures of runners in between controllers and models, which allow someone to isolate the business logic from everything else. The rest of my notes on this are sparse, and it’s pretty obvious I got lost in there, but I think the core takeaway was thinking about different ways to structure apps: not fat model, skinny controller, and not everything in moderation, but how to really separate different types of business logic in your apps.

Next up was Justin’s talk on breaking up with your test suite. I took frantic notes on this one and listened intently, but even so I’m sure I missed stuff. Also, these slides were great and super understandable, so check them out. First he talked about why we should test. There were 8 or 9 ultimate reasons why code should be tested, which fall into two essential categories: you can gain confidence from the tests, and you can gain understanding of the code through the tests. BUT he started by arguing that we should question before we test. When thinking about tests, we should think about the purpose, rules, and structure, and we should expect that within our test suite these three items will be immediately clear to anyone looking at it. If we cram lots of different goals and motivations into each test, the test becomes unclear. The rules become debatable, the purpose becomes hazy, and the structure ends up being ad hoc instead of uniform. Every test suite should promote one type of confidence and one type of understanding. So, here are some different suites; for each one, Justin went through the user, what the understanding and confidence were, tips, and warning signs.

First up was the safety suite. The safety suite is for the browser. It checks: does the app work, and is our product simple? If the tests don’t fit inside a 30-minute run, or if you can’t write a new test within 30 minutes, then your product probably isn’t simple. These tests shouldn’t see the internal APIs. They should bind to what is visible to the user, and they should enforce a fixed time-budget. The warnings here are that failures from refactors are often false negatives, and that human intuition overvalues realistic tests. With this type of test suite, numerous releases and branches can get expensive, and there is the idea of the superlinear slowdown, meaning the bigger the system, the slower the tests.

Next come consumption tests. You should verify behavior and demonstrate usage with these. The user is the repo’s customer, and these tests verify what you, as the programmer, are directly responsible for. They ask: is this code usable? If it’s hard to test, it’s probably hard to use. For these, module boundaries should be meaningful beyond testing (i.e., these things talk to each other, so they should be tested together). They should fake all the dependencies, exercise the public (not private) APIs, and be organized by the consumer’s goals and outcomes. Warnings-wise, these tests need to be fast, and this is the only part of the suite that tells you “you just broke something.”

The next test suite is contract tests. Contract tests are used by us; they represent our interests that live in someone else’s repo. They lead to faster feedback, making sure that something in the system that someone else is working on doesn’t break something you’re working on. They should be written for first-party dependencies and follow the same rules as consumption tests. They are NOT a replacement for actually going and talking to your fellow dev about what they’re working on.

Next is TDD. So far, none of these test suites have talked at all about the design of the code. The main value in TDD’ing is to discover tiny, consistent bits that help with big projects. The user is someone concerned with implementation details and the inputs and outcomes. These tests are a sounding board; they give you confidence in building small things and teach you what roles your code is playing. Here, he showed a cool chart with interesting structural points about putting queries on the left, logic in the middle, and commands on the right… which reminded me a bit of east-oriented code structures. He also stated that commands and queries should have very little logic. The warning here is that discovery tests yield small, disposable units, so be okay with throwing stuff away.

Next are adapter tests (who knew there could be so many different kinds of test suites!). These tests exercise the adapter API under just-realistic-enough circumstances. They warn us of outages or API changes, and they reduce the cost of swapping dependencies later. For these, don’t test them first, and trust the framework. They are similar to contract tests, but contract tests improve communication between colleagues, whereas these improve the feedback between your code and the third-party code. Last warnings: these tests can be slow and outside of your control, and they can be tricky if you’re using some sort of CI.

Phew! That was that talk.

One note: my confusion still lies a bit in integrating all of these different types of tests. Should an app’s test suite have all of these things? What does it look like structurally? Are they in the same files or different files? But I think the idea of thinking about all of these different suites and the goals of tests and testing is pretty cool.

Production code analysis by Dan was a great talk. Dan talked about code cleanup and how to look at a monorail (single large application) and refactor/delete effectively. You want to look at what code is actually being run, because lots of these monorails have dead code. Some overarching tips are to celebrate cleanup commits as a team and to start by finding large unused code sections by finding unused actions. He talked about a handful of tools to help find this dead code. First, you can use New Relic, which can show you, for any given endpoint, how often it is being hit; then you can do a route check to see which routes are completely unused. You can also use tools like Graphite, statsD, and redis to find unused actions. For example, statsD is pretty easy to implement, has lots of info from Etsy’s blog, and can be used to see both timing and endpoint information. You can look at background jobs in redis to see which jobs aren’t being triggered at all and which events are related to those jobs.
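The “find unused actions” idea boils down to counting hits per action and looking for zeros. A plain-Ruby sketch (the action names are invented; in production statsD does the counting):

```ruby
# Every action we think the app has...
ACTIONS = %w[users#index users#show posts#index posts#legacy_export].freeze

# ...and a counter standing in for statsD
hits = Hash.new(0)

def track(hits, action)
  # the statsD equivalent would be statsd.increment("actions.#{action}")
  hits[action] += 1
end

# simulate a day of production traffic
track(hits, "users#index")
track(hits, "users#show")
track(hits, "posts#index")

# actions that never fired are dead-code candidates
unused = ACTIONS.reject { |action| hits[action] > 0 }
unused # => ["posts#legacy_export"]
```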

For mailers, you can see which are the most and least popular by hooking statsD into ActionMailer. Finally, if you find actions that aren’t being triggered, you can often delete the related views that aren’t being rendered anymore.

Another good place to look is translations. Which translations are in memory but aren’t being used anymore? Use the humperdink gem to track these. Finally, you may have two methods doing the same thing in your code. Learn which is best by wrapping the methods in a split: tracking them with statsD will tell you which method is faster or more effective and allow you to make data-driven decisions.
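You can get a feel for the split idea locally with Ruby’s Benchmark module; in production the split wrapper would report to statsD instead. The two summing methods here are invented stand-ins for the duplicated methods:

```ruby
require "benchmark"

# Two invented methods that produce the same result in different ways
def sum_with_loop(numbers)
  total = 0
  numbers.each { |n| total += n }
  total
end

def sum_with_inject(numbers)
  numbers.inject(0, :+)
end

numbers = (1..10_000).to_a

# The split: both run, and results must agree before speed matters
raise "methods disagree!" if sum_with_loop(numbers) != sum_with_inject(numbers)

loop_time   = Benchmark.realtime { 500.times { sum_with_loop(numbers) } }
inject_time = Benchmark.realtime { 500.times { sum_with_inject(numbers) } }
puts format("loop: %.4fs  inject: %.4fs", loop_time, inject_time)
```

Once the numbers are in, you keep the winner and delete the loser, which is the whole point of the cleanup.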

Lastly, Dan talked about logs. Logs are great for cleaning up code. They should be searchable and in one place; try to standardize the log format, and multiple apps that communicate should log to the same system. Once you’ve got good logs, you can run log queries or check endpoints to see what’s arriving at them. For this, check out the imprint gem.

Russ Olsen spoke about going to the moon! Wait for this talk to come out on video. I had to leave about 15 minutes early which was a bummer but he’s such a good storyteller, I was on the edge of my seat the whole time.

Saturday, June 14, 2014

Interesting Reads 6/7/14 - 6/13/2014

Busy busy busy this week, but I was still able to read a few interesting things. Even though I'm not using Ember on a daily basis anymore, it's still an interesting framework to me, so there's an Ember post, a little bit of debugging, and then two good soft-skills posts. Finally, I haven't had time to finish watching all the RailsConf talks I want to, and I'm not sure when that will happen, so I'll post the talks that are still on my list below.


Learn to code, learn to think:

Feeling like a fraud? Everyone does!

Debugging in Rails. A great collection of tools to use and posts on how to use them.

Terminology and what it means in Rails vs. Ember. This was a great post, but I think the most confusing piece for me when learning was the use of the term "routes" and what it means in Rails vs. Ember, so I'm a little disappointed that they didn't include it.

RailsConf talks I haven't quite gotten to:

The Rails of Javascript won't be a framework

All the small things

Aaron Patterson's closing keynote

Lightning fast deployment of your rails-backed Javascript app

DHH's Keynote

Wednesday, June 4, 2014

Haml anyone?

I announced to the social-media-verse earlier last week that I was starting as an engineer at General! I’m really excited to be a part of their team and it looks like I’m going to be working on some really interesting projects and products in online education, among other things.

One of the first things I did this week was learn some Haml. Haml is a markup language that allows someone to “cleanly and simply describe the HTML of any web document”. Basically, Haml is cool because it lets you do the same thing you’d do with HTML or html.erb files, but with much less code.

For example, you don’t have to close tags! So instead of writing

<h1>This is great</h1>

all you do is

%h1 This is great

I was really curious how this affects being able to set classes and ids for the SCSS I’ve gotta put in. Well, reading through that was when I decided to write this post, because it’s so clean and simple. You use the same symbols you would in the CSS doc, but in the Haml doc. For example, %div#awesome is equal to

<div id="awesome">

For classes, it’s similar: %div.more.amazing.stuff is like typing

<div class="more amazing stuff">

The only annoying part (so far) is that you have to be careful about your spacing; Haml will error if you don't have the right number of spaces as you nest.
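Since nesting comes up the moment you write real markup, here’s a tiny sketch (the class and id names are made up) of how indentation replaces closing tags:

```haml
%section.analytics
  %h1#title Weekly Numbers
  %ul.stats
    %li New users: 42
    %li Posts: 107
```

This compiles to a section containing the h1 and the list, all closed for you; shift a line’s indentation by a space and Haml raises an error instead of guessing what you meant.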

There’s lots more and the docs are surprisingly good so check out the resources here:

And here:

Saturday, May 31, 2014

Interesting Reads 5/24 - 5/30

There's a whole boatload of good stuff this week! First, there are links to three more talks from RailsConf, then another talk about leveling up, then a few interesting blog posts. Finally, a good explanation of dotfiles.


Sean Marcia is awesome and this is his great talk about Bees and saving the world:

Harking back to my management days a bit, this is a good talk about decision making and team working in tech:

Okay, I think I've only made it through 5 databases so far but interesting overviews:

Another talk, but not from RailsConf, about taking your engineering role to the next level:

Information anxiety and how to parse through what you need/want to learn:

Great post on mentorship:

Interesting post on coding principles every engineer should know. Do you agree? Which would you add?:

Lastly, this week I had to set up my first new computer for development. A friend passed this along to me. It's a great step-through of dot files which, I think, seem super intimidating, but aren't IRL:

Sunday, May 25, 2014

Interesting Reads 5/17 - 5/23

Lots of really great links this week. There are a few more RailsConf talks since I had a chance to watch a few more of those this week. Some great posts on terminal and keyboard shortcuts and tricks, and some fun team-related posts.


Amazing tips and tricks in this post for keyboard shortcuts.

AND amazing tips for terminal in this one.

Leading a team is really hard. Jessie does a great job talking about some of the ways to actually be a GOOD manager.

Some great tools and thoughts for thinking from railsconf.

Love this!

A really interesting talk about imposter syndrome

I've always looked at the stages of group development in the context of leadership groups and immersion experiences but this is an interesting post on the high performing team dynamic in programming

Does the size of a team affect the quality? This post talks about some of the data behind it.

Saturday, May 17, 2014

Interesting Reads 5/10 - 5/16

I have 8 RailsConf talks bookmarked to watch but I know I won't be able to watch all of them before posting, so I guess I'll have to include them in next week's roundup. That being said, below are a good handful of talks that I've already had a chance to watch and they are awesome so enjoy!

Coraline's great talk on apprenticeship:

10 years keynote:

Reading Code good (but also is more about becoming better programmers specifically through this method of reading code):

In more personal news, I'm starting a new job next week!! I'm very excited to start at a new place but worried I'll be the weakest person on the team (a very common fear, I've been told). While I don't necessarily feel like a phony, I just feel new and still pretty inexperienced. This is an interesting read about how a lot of others feel similarly:

I'm trying to start getting into hardware hacking a bit more and arduinos fascinate me, but I've had a hard time figuring out where to start. Ruby Rogues did a great podcast on hardware hacking! This was one of the super helpful resources:

Monday, May 12, 2014

Setting up Aliases

For months and months I’ve heard people talk about setting up aliases and profiles and all these custom configs for their computers. I’ve done some basic googling but have stayed away from setting things up because I hadn’t found any simple, easy posts and I had this vision that this whole “set up” thing was super complicated. Guess what? It’s actually ridiculously simple. This is what I get for being too embarrassed to ask about something for an extended period of time.

What are aliases? Aliases are basically shortcuts for things you type all the time. The same way that hotkeys work to open up programs and execute certain actions you find yourself constantly using, aliases do that for your terminal.

The best place to start is to think about what you type all the time. For me, I type “git status” over a dozen times a day, so that was aliased to gs. Chris suggested that I set up an alias to open my bash_profile file so that I can quickly add more aliases as I discover what I type the most. As I mentioned, doing this is super easy.

1. Open up your bash_profile file
2. Set up your aliases by typing alias gs="git status"

As you see, the left of the = is what you want the shortcut to be, and the right of it is what the shortcut stands for. Once you've got all your aliases in, quit terminal and reopen it to start using them.
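For reference, a bash_profile alias section might look like this. The gs alias is the one from this post; the rest are just common examples, not prescriptions:

```shell
# ~/.bash_profile — aliases for things I type constantly
alias gs="git status"
alias gl="git log --oneline"
alias be="bundle exec"

# open this file quickly so new aliases are easy to add
alias ba="open ~/.bash_profile"
```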

Besides aliases, I also learned from JP about using Homebrew more effectively. Homebrew is a package manager; with it you can add different packages, including git bash completion, which allows you to tab-complete things like git branch names and more.

The final piece of this setup is naming your computer. Apparently this is a thing. I’ve got my Disney princess naming convention for my servers so now I need one for my computer(s). I’ve decided on 80’s cartoons starting with the very-applicable “penny”. I look forward to naming future computers Brain, Jem, and, I’m sure, a number of CareBears.

Here's a good article for additional reading on setting up git aliases:

Saturday, May 10, 2014

Interesting Reads 5/3 - 5/9

Here are some really great reads from this past week. Lots of good fodder for self-reflection and thinking critically about programming and life in general.


The hardest parts of software development

An interesting post about being more interesting… not sure I agree with the entire post, but it's interesting to think about.

Love Love Love this!!:

The importance of looking up and recognizing what's around you.

Saturday, May 3, 2014

Interesting Reads 4/26 - 5/2

LOTS of really good reads this week on a whole variety of topics. There are some great Ember tips and tricks to start with, then a few good iOS pearls. After that, some excellent articles and slides on Rails and from RailsConf. Finally, mixed in are some good pieces on imposter syndrome, onboarding, and coding in general. This week is definitely filled with a bunch of articles and posts and slides that are worth spending a few moments on.


Some great ember tips… still digging into most of these

Great iOS tools

Rails tricks and tips from railsconf

Interesting read on great qualities of software with more in-depth reading options as well

Fascinating read on refactoring in a super disciplined way

Awesome ember modal action

Just worth a read... in a weird way, it kinda reminds me why I love programming:

I really love the "starter kit" idea: Growing up in Young Judaea, when you got a leadership position on the board, you were passed a box. This box contained the info you needed to be good at your position but it also included fun items, toys, games, OLD YJ stuff, advice, etc. One of the best parts of having a leadership position was getting this box and then adding to it when you passed it down to the next person. You can't do this the same way in a work position, but I think the starter kit is close.

You are NOT an impostor slides

An honest, amazing piece about impostor syndrome

Writing fewer bugs:

Tuesday, April 29, 2014

Building Analytics

A key part of any build is figuring out what colleagues want to track and how to effectively manage that data. In this case, there was a lot of stuff to track. I recently finished building a hefty analytics component for the app and wanted to share my approach and some things that drove me crazy.

First, I had to really think through which analytics were important. The initial ask was for about 30 different types of analytics, but when adding in timeframes and additional category components, we were talking about close to 400 queries… not super useful for an early launch. Additionally, through my experience with Neighborsations, I could look at these requests and know which ones were most important for raising funds, for identifying user paths, and which ones wouldn’t really be useful until we had a larger critical mass using the application. I also made sure to ask my team what the most important success factors were to them (ie- if this number isn’t what we want it to be, then the business is not successful and we need to make some hard choices, quickly… that’s actually a vital part of determining core metrics and something I learned from Steve Wendel, who’s awesome at pushing you on those hard questions).

The easiest way to do the queries was writing DB queries, putting them into a model with methods, and then creating a view. The app is an ember-appkit-rails app, which kinda mushes the Rails API and Ember together, but I decided to keep this simple and just run everything outside of the Ember piece, keeping it as typical Rails.

There were also a couple of options for running the numbers… we could have done a rake task or set up a cron job. Right now, the approach is just to have the business folks hit the page whenever they want new numbers (with the understanding that they shouldn’t hit it too often, because it kicks off a bunch of queries that hit the database).

To seed the data, I was originally leaning towards factory girl but decided to start with just creating the data I needed in the tests. What I didn’t realize is that the app automatically pulls in fixture data, so every time I had a to eq, it would fail because the number would be incorrect. That taught me… after spending probably too much time trying to figure out how to keep the tests from pulling in the fixtures, I realized I should just embrace them, learn how to set up the fixtures for my tests correctly, and use them to generate the data I needed.
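For anyone who hasn’t embraced fixtures yet: they’re just YAML files under test/fixtures, one record per top-level key. A minimal made-up example:

```yaml
# test/fixtures/users.yml
alice:
  name: Alice
  email: alice@example.com

bob:
  name: Bob
  email: bob@example.com
```

In a test you’d reference users(:alice), and any counts in your assertions have to account for both records already being loaded.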

Then I created a model with the class AdminAnalytics. Each analytic was a separate method. I think if I wanted to go back and continue to refactor further, I could easily break the model into separate classes. For example, instead of having one overarching AdminAnalytics class, it would probably make more sense to have a Users class, a Posts class, an Ingredients class, etc.

Then it was down to the queries. I didn’t have much SQL experience, and some of these queries were pretty complicated, so it helped me to first write out all of the steps I was looking for before translating them into an actual query. Once I did that, I could take those queries and use ActiveRecord to give me the Rails magic that made them a little easier. For example, when joining tables, you don’t have to name the join table… you can just name the two tables, and ActiveRecord will figure out the relationship between them on its own. Then, because a lot of the queries involved profile type names or time parameters, I refactored by making those things arguments to the methods, which could be passed in from the view.
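To make the shape concrete, here’s a rough plain-Ruby sketch of the pattern. The model, column, and method names are invented; the real version used ActiveRecord queries like the ones in the comments, but a Struct stands in for the User model here so the sketch runs anywhere:

```ruby
# Invented sketch: one class, one method per analytic, with time frames
# passed in as arguments. The ActiveRecord versions would look like:
#   User.where("created_at >= ?", since).count
#   Post.joins(user: :profile).where(profiles: { name: type }).count
User = Struct.new(:created_at)

class AdminAnalytics
  def initialize(users)
    @users = users
  end

  # the ActiveRecord version is just User.count
  def total_users
    @users.size
  end

  # the argument lets the view ask for 7 days, 30 days, etc.
  # without writing a new method for each time frame
  def users_since(time)
    @users.count { |user| user.created_at >= time }
  end
end

now       = Time.now
analytics = AdminAnalytics.new([User.new(now - 3_600), User.new(now - 40 * 86_400)])
analytics.total_users                   # => 2
analytics.users_since(now - 7 * 86_400) # => 1
```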

Then I created a controller, which was literally one line, and a view that just called the instance variable @analytics and the method with whatever arguments I needed. Bam! Analytics.

Now, once this was all done, we actually decided to pull the analytics out of the app. Instead of bundling these things together, we made it a separate app entirely that talks to the primary app to get the numbers needed. When we pulled it out, we used Sequel, which made it easier to pull those queries in (although still more difficult than just doing it right in the app). The nice part about doing it in the app was being able to use ActiveRecord, but that also made the analytics piece dependent on a bunch of different models. For example, to get the total number of users, you just do User.count, or to get the total number of posts, Post.count. You’re already depending on two different models there, User and Post. So, part of pulling this piece into a separate app was so that we were no longer relying on multiple models to get these numbers.

The other thing I really wanted to do was set up a simple dashboard using Dashing. I’d seen people whip these dashboards up in no time flat, so I figured I’d take a stab at it… boy was I mistaken. I think my journey to a dark place started with the decision to use dashing-rails instead of dashing. See, I figured, if I used Dashing, I would have to create the connections to the database myself to get the information for the queries I was running; if I used dashing-rails, the dashboard would be part of the app and that part would be easier. (If you’re thinking about the paragraph above and thinking, wait a second, you pulled the whole thing into its own app anyway in the end… yes. I realize this, and boy is hindsight 20/20.) Dashing-rails involves a bit more setup: you’ve gotta set up concurrency, and you have to use a server that supports lots of threads, like Puma, etc. The issues started there and just didn’t stop. First, I had an issue with Puma, so I upped the number of possible threads and that seemed to fix it. Then there was a database connection issue: the connection would time out almost immediately. We got this working as well, but it still times out; after a certain number of rounds, the thing just kicks the bucket. Then I had an issue where all my dashboard widget boxes would show up, but nothing would show up in them. Sometimes, if I commented things out and then reimplemented them one at a time, they would work again. And there was no rhyme or reason to when the dashboard would start up and kick off versus when it wouldn’t.

So, here’s the really annoying part. I FINALLY got the issues fixed (with the help of lots of pairing with a few different, very patient people) and pushed the dashboard to production, where everything promptly broke and nothing rendered correctly. Then we got it rendering correctly, but the queries still aren’t running correctly. All the while, I’m kicking myself because I was the one who said, “oh, and I’ll build this awesome dashboard to go along with the simple analytics view. It’ll be fun and shouldn’t take too long.” I think the dashboard was mostly a lesson in when to give up. I should have scrapped it after the first week and a half, but I was a little too stubborn and a little too determined to get it done.

So, that’s how I built the analytics. For my final thoughts, I think the moral of this story is to always keep it simple. Start small, get that shipped and continue to add and improve.

Friday, April 25, 2014

Interesting reads from the week 4/19 - 4/25

Just three short interesting reads for this week... I imagine/hope there will be more next week with some good blog posts coming out of Railsconf so enjoy these and look forward to some more next week.

Design for good and interesting thoughts

Interesting read on TDD

    In response to this initial post

Thursday, April 24, 2014

Errors in Ember

Most recently, I had to add a back button to a 404 page. I know, sounds simple, and it should be simple, but it was actually a little more complex than I thought which led me to learning about errors in Ember.

When originally looking in the codebase for where this change would need to happen, I found the Ember template, which simply said {{status}} {{statusText}}, and an Ember routes file that didn’t have anything in it except the standard code. I couldn’t figure out where the info being passed into the 404 was coming from or where it made the most sense to change it.

When it comes down to it, the answer is actually pretty simple. Ember has an automatic error route. In this case, we were looking in the user route, where the model hook sets up a promise; the model hook always returns a promise.

If the promise returned from the model hook resolves, or in other words is filled with user data, then the path completes and you arrive at the correct node in the user path. However, if the promise is rejected, then you get thrown into the error route. Ember always waits for a promise to be fulfilled, and if it isn’t, you are led to a default error state.

The ember guide gives a pretty good explanation and walk through of what happens to get to these error substates here:

Saturday, April 19, 2014

Interesting Reads 4/12 - 4/18

This week, we've got some ember, some soft skills, some git practice, and more.


One of my goals is to understand and get better at git. This looks like a good resource... now I've just gotta make time for it!

To do, three things:

A great article on thinking through giving feedback.

Interesting read, not sure I agree with all the parts… what are your thoughts?

Since Ember is my first JavaScript framework, I'm always looking for good articles that explain the similarities and differences between what's out there. This was a good comparison.

This is the sort of beginner's guide I've been waiting for in Ember! If you're curious about getting started, check out this link to walk through building a simple app.

Video on code architecture and making things better

Friday, April 11, 2014

Interesting Reads 4/5 - 4/11

Some really good reads this week on a variety of different topics.


Read on effective flows

Conference proposals and a good step-through process to create your proposal

Really in depth thoughts on crafting a talk

Improv and coding

Scientists and Code reviews

Write Code Everyday. I talk about this a lot in regards to learning to code. This post breaks things down really well.

Saturday, April 5, 2014

Interesting Reads 3/22-4/4

You'd think with two weeks of reading I'd have more interesting things, but not so much these past two weeks. The reads below are really good, but I've been a little more heads-down recently and haven't had the chance to read as much. Anyway, there are some good, interesting posts below, so check them out and enjoy!

Doing Your Best Work:

The Last Developer was Terrible:

Old but still interesting:

A good first day:

Monday, March 31, 2014

Other Great Things I Learned At EmberConf

As a newbie in development, I like to do a post on random things I learned during the conference. These often end up being CS terms, nerd culture related, or other interesting items that didn’t quite fit/make sense in a session post. Here are those from this conference.

Primitives: basic types in javascript like booleans and numbers.

Orthogonal: not originally a computer term; it means independent, ie- pieces that can change without affecting each other.

Grok: means to really, really understand something. It was actually coined in Robert Heinlein’s Stranger in a Strange Land, a book about an alien and its understanding of humans. Check out the link to the book here.

D3 is a JavaScript drawing library mostly used for graphs and charts

Donut charts are pie charts with a hole in the middle

Truthy/falsy: this concept is more or less important depending on the language. Basically, when stuff isn’t strictly true or false, JavaScript tries to help you out and guess which one it’ll be. A quick google search shows that there are lots of better explanations out there on what it is. This one looks pretty comprehensive and interesting:

CLI is a command line interface and it is how you interact with the app on the command line. (A perfect example of things I’ve been doing but didn’t actually know the name for)

This is a great background blog post that a few sessions referred to:

Finally, for other great notes from the conference, check out these links:

Final Keynote

And here we are… coming to a close at Emberconf. The final keynote was given by Dave Herman about evolution. He started with a cool little video about putting the source code for the internet online. He used that to talk about evolutions versus revolutions. Revolutions are good but only when there is a need for one, otherwise evolution is great. He talked about some of the current Javascript issues of speed, jit compilation, etc. A revolution might be a new byte code language but an evolution is talking about the current and improving it. He spoke about formalizing patterns, closing the gaps to make things better, building a JS compiler, and studying it to optimize the code.

There is a process to evolve JavaScript. He then spoke a bit about ECMAScript and ES6 modules: adding features that are backwards compatible. All this can lead to 1JS, which brings focus, consistency, and adoption. Consistency meaning orthogonal, composable, etc., and adoption meaning that it is easy to adopt and comes with as few evolutionary issues as possible.

He spoke a bit about “use strict”, which fixes some compiler issues (although I’m not completely sure I grasp what “use strict” is used for and where). ES6 modules are strict by default, which means a smoother path. He talked about how features are better than forks and that, to pave better paths for the future, we need features that can be adopted into existing code bases. Based on that idea, modules are a better programming model than modes.

He then went a bit into the Extensible Web Manifesto, the process by which we can all work together to evolve the platform. Basically, good design is motivated by use cases and workflows, and is built from small, orthogonal, composable primitives. We need to think about the end-to-end system and how it all works, and then build it in small pieces. His main point was that developers need to be part of that process, to help iterate, evaluate, and create standards. So, in three steps, the extensible web works like this:
1. Add missing primitives
2. Enable userland polyfills and compilers
3. Work together (browser vendors and developers)

Snappy Means Happy

Matthew Beale gave us a lot to think about and showed me a bunch of tools I had no idea existed in this talk about performance. He started by giving us some good things to think about… mainly, what does fast mean to you? He showed different speeds and what that would mean in terms of actually viewing an app (ie- animations, etc.). You can use that “fastness” scale to then look at your code and see what piece is taking the longest. Is it the network? The javascript? Or the render? Thinking about each of these and looking at times for each part help you narrow down what tools to use and what part needs to be made snappier.

When you’re ready to dive into performance and have pinpointed what part needs working on, then you can move on to the recommended methodology. This is 1) gather the facts and isolate the problem. 2) analyze and theorize about what’s going on. 3) change a single thing… if you change a bunch of stuff and performance is better then you don’t actually know what made the difference. And 4) confirm the theory. The example Matthew used was loading the ember.js website on your phone.

So, first you need to reproduce the mobile latency reliably… in this case you can use slowyapp, Charles, or Network Link Conditioner. You then create a clean browser (this was also a new concept to me), meaning no extensions, a private window, and therefore nothing interfering with the site you’re working on. Then you measure and analyze in the network inspector, where the timings tab gives you a bunch of information.

Part of that information shows us that if you look at the timeline, you can see the load order of things and how long each element is taking. Here, you end up being able to move a script tag which makes the page’s load time much faster.

Next, we looked at “janky” animation. To solve this issue, you first need to understand browsers, then you measure with the timeline tool (another one of the inspector tabs). You can highlight a specific section to get more information about it. You can look at frames, which show how long something takes to get to the screen: green = paint, yellow = JavaScript, clear = upload to GPU/compositing. AND THEN, the render console has a bunch of additional tools you can use to record and generate the data.

If you’re a little lost on where all these tools are and how to access them, the slides show it all pretty clearly.

Also, at this point in my notes I had written “OMG, so many tools!” which I thought was worth sharing here.

In this case, the solution to the issue is a webkit transform that keeps it on the GPU and then uploads the whole thing to the graphics card. By adding a Z translation to the animation, it forces the GPU to say this is 3D and should be put on the graphics card.

Finally, we talked about Ember.js property change notifications. For this methodology, first you need to understand observers, and then look at the profiler. Observers are synchronous and fire when a set occurs (one option is .setProperties). The profiler has processing and memory. You run a profile and get back information in the form of a flame chart, which allows you to see the stack being run. In the list view, you can use the profiler to pinpoint the issue. Here, the refactor is to push items into a buffer instead of directly into an array, which fires one change notification instead of many.

The important thing to remember here is to have a methodology for solving issues, to remember that web performance does not equal Ember performance, and that there are a heck of a lot of awesome tools you can use to help you out.

Sunday, March 30, 2014

Controlling Route Traversals with Flow

This talk was all about routes. Nathan Hammond started by talking about URLs in three different categories… resources, actions, and flows. Resources don’t change the state of an app and are available all the time; we want them in our history stack, and they are plural or singular nouns depending on the controller. Actions are post or put paths like /login or /recover-password. An action receives all the user’s input at once, results in the user being presented with a flash or a new resource, and should always contain a verb (see the examples from a few sentences ago). Finally, there are flows. Flows are a series of actions across routes, and a flow leads to a completed application state.

A state machine allows you to jump into the flow. BUT designing a flow is really hard. You need to make sure you cover every component of the flow. There is a five-step process to do this.
Step 1: inventory your routes in the router
Step 2: list the linear paths (i.e., the login flow traversal path)
Step 3: convert them to node graphs
Step 4: identify the state changes needed to traverse each path
        Then you do a complete enumeration of the state or if you’re passing in a wait
Step 5: identify backwards traversals (i.e., where the back button leads each time you press it)

The CS term for this structure is a directed graph.
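Steps 2 and 3 (listing linear paths, then converting to a node graph) can be sketched as a small directed graph plus a path enumeration. The route names below are invented for illustration:

```javascript
// Hypothetical route graph: each key lists the routes reachable from it.
const graph = {
  index: ["login", "about"],
  login: ["recoverPassword", "dashboard"],
  recoverPassword: ["login"],
  about: [],
  dashboard: []
};

// Enumerate every linear path from `start` to a dead end, without revisiting nodes.
function linearPaths(start, path = [start]) {
  const next = graph[start].filter(n => !path.includes(n));
  if (next.length === 0) return [path];
  return next.flatMap(n => linearPaths(n, [...path, n]));
}

// Prints the three linear paths through this little app.
console.log(linearPaths("index"));
```

Each resulting path is one traversal you would need to identify state changes for in step 4.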

The result is a state machine that describes exactly how a user moves through an app. It is the picture that shows ALL of the steps we just outlined.

Looking at this demo gives us a sense of flows. To “code the flow,” the general strategy is that you start by loading the session state. Then you reset the controller (if needed). After that, you delegate identification of where you’ll go next. Then you traverse the longest route so you know you visited every node. AND you can use replaceWith instead of transitionTo, which Nathan thinks is awesome.

A better strategy is to start with one place that defines the flow and load the flow and flow state (explained as where the user is and what state they’re in). Then you delegate the identification of where to go in the flow, and after that you call back into the flow to progress. You start with a definition file containing an edge list that dictates a from path and a to path with conditions; the conditions are what needs to be checked, like ‘isAuthenticated’. Then you inject the flow logic: beforeModel looks up the current flow and identifies where the user should be, and then you set the action. The action sets state on the flow inside your routes so there’s no processing of that information in the route. For further reading, check out ember-flows, which is almost ready.
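The edge-list idea might look something like this. Every name here (the flow, the routes, the `when` conditions) is hypothetical; the real API lives in ember-flows:

```javascript
// Hypothetical flow definition: each edge has a `from`, a `to`, and a condition
// that must hold on the session state for the transition to apply.
const signupFlow = [
  { from: "signup.start",   to: "signup.profile", when: s => s.hasAccount },
  { from: "signup.start",   to: "signup.account", when: s => !s.hasAccount },
  { from: "signup.profile", to: "signup.done",    when: s => s.profileComplete }
];

// Given the current route and session state, find where the user should go next.
// A route's beforeModel hook could delegate here, then use replaceWith.
function nextStep(flow, current, state) {
  const edge = flow.find(e => e.from === current && e.when(state));
  return edge ? edge.to : current; // stay put if no edge matches
}

console.log(nextStep(signupFlow, "signup.start", { hasAccount: false }));
// → signup.account
```

Because the edges live in one definition file, the routes themselves never process flow logic; they just ask the flow where to go.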


This talk was crazy. It was really academically intense, and while I was able to follow the basic ideas, there was a lot of important information and there were concepts that were run through really quickly. Fortunately, it seemed like most of the conference participants also thought this talk was presented really quickly, so I wasn’t alone on this. Also, every time I went to write a note, I felt like I missed the next two points and came back into the conversation on the third, so hopefully these notes don’t seem too disjointed. I will say that Chris seems like a really smart, nice dude, so it’s definitely worth connecting with him for more details if you want to dive into this more.

There are distributed computing issues, and you’re building a distributed system. Basically, if you’re pushing state to the client, you’re caching validations, which can lead to data consistency issues and more. This is an issue regardless of the database you’re using, and it means you’re building a distributed system.

Some of the main issues are dropped messages, reordered messages, race conditions, partial failures, and custom merges. Then there are TCP incast, TCP slow-start, and Nagle’s algorithm… these are causes of latency spikes… I think.

Then he went a little into distributed systems theory and consistency. Consistency is a contract: if developers follow the contract, then there will be predictability. There are three kinds of consistency. Strict (linearizable) consistency is a total order of all events in the system, as on a single server. Eventual consistency means you eventually see all the events; it is a weaker form of consistency. Finally, causal consistency is basically: I observe an update immediately, but another person may see the change a little later. When dealing with consistency, it comes down to safety vs. liveness.

You need consensus to deal with consistency. For consensus, you’re basically looking at termination, agreement, and validity. Termination means the info will eventually show up, agreement means everyone ends up with the same value, and validity means the value was actually proposed as part of the consensus. It comes down to the generals problems, which are academic thought experiments: the two generals problem and the Byzantine generals problem. There are algorithms used to solve this, including Paxos, Raft, 2PC, and 3PC.

I’m not sure how this led into vector clocks, but then we discussed vector clocks, which allow us to define all possible orderings of events in a system. There are also dotted version vectors, which have to do with events and actors.
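A minimal vector clock sketch (illustrative only; the actor names are made up): each actor keeps a counter, merging takes the element-wise max, and comparing clocks tells you whether one event happened before another or the two were concurrent.

```javascript
// Increment actor `a`'s entry in clock `c` (returns a new clock).
function tick(c, a) { return { ...c, [a]: (c[a] || 0) + 1 }; }

// Merge two clocks: element-wise maximum.
function merge(a, b) {
  const out = { ...a };
  for (const k of Object.keys(b)) out[k] = Math.max(out[k] || 0, b[k]);
  return out;
}

// a "happened before" b if every entry of a is <= b's, and at least one is strictly less.
function happenedBefore(a, b) {
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  let strictly = false;
  for (const k of keys) {
    if ((a[k] || 0) > (b[k] || 0)) return false;
    if ((a[k] || 0) < (b[k] || 0)) strictly = true;
  }
  return strictly;
}

const alice = tick({}, "alice");              // { alice: 1 }
const bob = tick(merge({}, alice), "bob");    // { alice: 1, bob: 1 } — bob saw alice's event
console.log(happenedBefore(alice, bob));      // → true
```

If neither clock happened before the other, the events are concurrent, which is exactly the case where you need a merge strategy.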

We then went into CRDTs. CRDTs are conflict-free replicated data types: data structures that store something. There are two types of CRDTs: state-based and operation-based. State-based CRDTs rely on monotonicity, meaning functions where as inputs increase, outputs increase. We care about associativity (a property of a binary operation; addition is associative), commutativity (also a property of a binary operation; addition is commutative), and idempotence (likewise). All three of these show up in programming.
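One concrete state-based CRDT is the G-Counter (grow-only counter). A sketch shows the three properties in action: the merge (element-wise max) is associative, commutative, and idempotent, so replicas can exchange state in any order, any number of times, and still converge. Replica names below are invented:

```javascript
// G-Counter: each replica increments only its own slot; the value is the sum.
function increment(state, replica) {
  return { ...state, [replica]: (state[replica] || 0) + 1 };
}
function value(state) {
  return Object.values(state).reduce((a, b) => a + b, 0);
}
// Merge is element-wise max — the join of the lattice.
function merge(a, b) {
  const out = { ...a };
  for (const k of Object.keys(b)) out[k] = Math.max(out[k] || 0, b[k]);
  return out;
}

const r1 = increment(increment({}, "r1"), "r1"); // { r1: 2 }
const r2 = increment({}, "r2");                  // { r2: 1 }

const ab = merge(r1, r2); // merge order doesn't matter (commutative)...
const ba = merge(r2, r1);
// ...and merging the same state twice changes nothing (idempotent).
console.log(value(ab), value(ba), value(merge(ab, ab))); // → 3 3 3
```

That merge function is the "join" of the bounded join semilattice mentioned below: the smallest state that dominates both inputs.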

If that wasn’t quite enough for you, we then spoke about bounded join semilattices… which is another math term, but this one has a decent diagram in the slides, so check that out.

So, why is all of this important? Because consensus is hard. We want to avoid coordination so the system can make progress, and we want weaker consistency and higher availability. The conclusion is that you’re building a distributed system, so you need to be thinking about this.

The Unofficial, official Ember Testing Guide

This was one of the sessions I was anxiously awaiting, and it did not disappoint. First, a big round of applause for Eric Berry for giving an excellent talk (his first one!). The slides were excellent and I can’t wait to continue looking into the new ember-qunit.

Testing starts with assertions. Assertions test the state of your code, and QUnit is the default assertion library. In Ember tests, there are always two parts, the setup and the teardown (those are callbacks). There are also assertions that are already provided (see the slide for those). One thing Eric noted was that Mocha and Jasmine are not excluded in Ember… you can use either of those for testing as well, but QUnit is the happy path (a phrase I’ve come to know and love since working with Ember).

Helpers guide our app to the state we want to test in our assertions. Looking at the callbacks, setupForTesting sets up the router, etc., then you call injectTestHelpers, which sets up the helpers, so that in your actual test you write your helpers and then your assertions. Ember.Test runs the helpers and QUnit runs the assertions. There are a few different types of helpers: asynchronous helpers, synchronous helpers, and wait helpers. Asynchronous helpers wait for the preceding helpers to finish before they run; these are visit(), fillIn(), click(), and keyEvent(). Synchronous helpers run instantly; an example is find(). Finally, wait helpers wait for asynchronous helpers to complete before running; an example is andThen().
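The ordering rule (async helpers wait for whatever is ahead of them, and andThen runs after everything before it settles) can be pictured with a toy helper queue. This is a simplified model to show the idea, not Ember's actual implementation, which schedules real asynchronous work:

```javascript
// Toy model of the test-helper queue: calling a helper queues work instead of
// running it immediately, and draining the queue in order mimics "wait for the
// preceding helper to finish".
const queue = [];
const log = [];

function registerToyAsyncHelper(fn) {
  return (...args) => queue.push(() => fn(...args));
}

const visit = registerToyAsyncHelper(url => log.push(`visit ${url}`));
const click = registerToyAsyncHelper(sel => log.push(`click ${sel}`));
const andThen = registerToyAsyncHelper(cb => cb());

// A test body: helpers are queued, not run, at call time...
visit("/posts");
click(".new-post");
andThen(() => log.push("assertions run last"));

// ...then executed strictly in order.
while (queue.length) queue.shift()();
console.log(log); // visit, click, then the andThen callback
```

The payoff is that your test reads top-to-bottom even though the real app work underneath is asynchronous.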

Then there are custom helpers. Use registerHelper() to create a standard (synchronous) helper and registerAsyncHelper() to create a new asynchronous helper. A final reminder at the end of this part: you still have to call injectTestHelpers to make sure your helpers run, regardless of whether they’re custom or not.

So now, the new things that will be in Ember 1.5: there will be new integration helpers. For example, triggerEvent(), which takes three arguments: a selector, an event, and anything additional you need to pass. Others are currentRouteName(), currentPath(), and currentURL().

At this point, there was an awesome example of testing search, so if you’ve got search in your app, check it out.

The next part looked at how to test in isolation. Instead of having to test all the pieces all the time, you want to test specific aspects. This is where ember-qunit was introduced. Ember-qunit is a library that lets you perform unit tests without loading the whole container; it was inspired by RSpec. You start by setting up the globals with emq.globalize(), and then you need to set up the resolver (described as the thing that can find anything… i.e., mom). It also provides module helpers: moduleFor(), moduleForComponent(), and moduleForModel(). For the example moduleFor(“route:index”), it’s basically saying “hey resolver, I need you to pull this (route:index in this case) from the container.” moduleForModel() is specifically for testing Ember Data. There are super descriptive, awesome slides for each of these module helpers.
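The resolver-and-container relationship that moduleFor relies on can be pictured with a toy container. This is purely illustrative; Ember's real container and resolver are internal and far richer:

```javascript
// Toy registry: the resolver "finds anything" by full name.
const registry = {
  "route:index": { name: "IndexRoute" },
  "controller:application": { name: "ApplicationController" }
};

// The container asks the resolver on demand and caches what it hands out.
function makeContainer(resolver) {
  const cache = {};
  return {
    lookup(fullName) {
      if (!(fullName in cache)) cache[fullName] = resolver(fullName);
      return cache[fullName];
    }
  };
}

const container = makeContainer(name => registry[name]);

// moduleFor("route:index") effectively says: "hey resolver, pull this one
// thing from the container" — not the whole app.
console.log(container.lookup("route:index").name); // → IndexRoute
```

That single-lookup behavior is what makes unit tests fast: only the object under test (plus anything it `needs`) gets pulled in.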

The last piece I noted was about controllers. In those tests, you’ll notice a “needs” field. Needs is used to bring in the dependencies that the test needs in order to run… in the case on the slide, the controller for the application.

To get going with ember-qunit, you just need to run bower install ember-qunit. It’s in Ember App Kit and ember-cli already.

Finally, the last big announcement was that the team is redoing the testing guide on the ember.js site.

Happy Testing!!

Ember Components

The next session was actually entitled “Ember Components” (and wow, writing up these notes, I’m really realizing how much components were spoken about, but this talk totally blew my mind). Alex Matchneer started by talking about embracing the controller. The questions he posed to us were: what belongs in the router versus the controller, and what’s up with query params?

For those who don’t know, query params are the hash of info at the end of a URL, and they look like this: /?query=params. So, should query params live in the router or the controller? Putting them in the router makes sense, but it leads to a bunch of issues (outlined in the slides). It was decided that query params would go into the controller, so let’s look at a controller-centric API. Here, sortBy is a property in the controller: queryParams: [‘sortBy’]. Doing this means there’s no need for custom observation, query params are bound to controller properties, and there are additional add-ins that make it easier and nicer.
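The controller-centric idea can be pictured like this: the properties named in queryParams get serialized into the URL's query string and kept in sync. This is a sketch of the concept, not Ember's internals, and the controller shape below is simplified:

```javascript
// Sketch: serialize the controller properties listed in queryParams
// into a query string, the way /?sortBy=date would be produced.
function serializeQueryParams(controller) {
  const pairs = controller.queryParams
    .filter(key => controller[key] !== undefined)
    .map(key => `${key}=${encodeURIComponent(controller[key])}`);
  return pairs.length ? `?${pairs.join("&")}` : "";
}

// Hypothetical controller: sortBy and page are bound to the URL.
const postsController = {
  queryParams: ["sortBy", "page"],
  sortBy: "date",
  page: 2
};

console.log(`/posts${serializeQueryParams(postsController)}`);
// → /posts?sortBy=date&page=2
```

Because the controller owns the properties, changing this.sortBy is all it takes; no custom observers needed to keep the URL current.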

But why is this in the controller and not the router? Well, the controller manages app state and also wraps the model with additional information for the templates. The router is in charge of navigation and is the link between URLs and controllers/templates. The router serializes hierarchy into the path, and the controllers serialize the app data into the query (this is practically verbatim from the slide).

When looking at router paths, transitionTo (the one used most often right now) is great, but only for complex hierarchical things.

So, will a property be remembered or not? In router-driven controllers, a property will live forever, but in an item or other controller, there are shorter lifecycles.

The primitive that is missing is model-dependent state: state, accessible to controllers, that is tied to a specific model. The store/restore controller properties are scoped to the controller’s model. This can be used for query params, caching, IndexedDB, and local storage. For example, a global cache object gets injected into a controller. The controller decides what bucket that state lives in, and inside the cache there is a bucket for each bucket key; you are in control of bucket allocation, i.e., it could be a local storage proxy, could be a POJO (plain old JavaScript object), etc.

Phew. That was a lot to explain. The slides are really excellent for this talk so definitely check them out.

Ember for Children

I thought it was nice that this session was included in the first ever EmberConf. With so much information to cover and so many interesting components of the technology, it was nice that the organizers made sure there was a talk that focused on the community and what we can be doing together for underserved communities in technology. Highlighting an achievement like this really shows that even though we’re cranking out amazing technology, building the community is important and giving back is core to that idea.

DeVaris Brown (who also does Ember Hot Seat) talked about a new initiative he’s launched that takes at-risk youth and teaches them about programming and code. He started by talking about the bootcamp idea (something I have mixed feelings about and have written on before) and wanted to provide a bootcamp-type opportunity to high school students who wouldn’t be able to afford something like that. He walked us through the curriculum he used to teach these students, the time he put in, and where the students are today. He worked with Black Girls Code and other organizations to find students. He also spoke honestly about the challenges he faced, like the reasons students couldn’t come to class or the basic typing skills necessary (for which he recommended…).

DeVaris received a standing ovation and got a lot of questions from people on how they can help and get involved. I think it’s great that the community is so interested in providing these opportunities and focusing on these sorts of initiatives. I hope that they can work together with some of the already established organizations to make more things like this happen instead of trying to reinvent things from scratch.

Ember CLI

Next up was a talk on Ember CLI (CLI stands for command line interface). From what I understand, Ember CLI has had a bit of a sordid past, like Ember Data, but lots of work is being done to make it much better. This was another talk that primarily covered the highlights of what’s coming.

Stef Penner presented a few problems and their solutions. The first problem is coupling. The solution here is inversion of control: you have a container (which abstracts coupling) and the resolver (which finds the code for the container to use). The second problem is globals, which lead to coupling based on load order. The solution here is ES6 modules: rather than you writing glue code, the build should generate that code. This leads to thinking about tooling and shipping to the browser. A third problem mentioned was build stability, and the solution is a build pipeline accomplished via new tools like Broccoli.
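The "generated glue code" idea boils down to a naming convention: a module's file path determines its container name, so nothing ever has to be registered on a global. A rough sketch of the convention (the exact rules in ember-cli's resolver are more nuanced than this):

```javascript
// Hypothetical sketch of the module-to-container naming convention:
// a file like app/routes/index.js is looked up as "route:index",
// so no globals or hand-written glue code are needed.
function moduleNameToFullName(modulePath) {
  // "app/routes/index" -> type "routes", rest ["index"]
  const [, type, ...rest] = modulePath.split("/");
  // singularize the directory name to get the container type
  return `${type.replace(/s$/, "")}:${rest.join("/")}`;
}

console.log(moduleNameToFullName("app/routes/index"));      // → route:index
console.log(moduleNameToFullName("app/controllers/posts")); // → controller:posts
```

Because the mapping is mechanical, the build tool can wire up the whole app from the file tree, which also kills the load-order problem globals create.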

Ember CLI tries to solve these problems by creating a tested cow path that we’re all working on together. Two highlights: in ember-cli, you can subscribe to different release channels. For example, you can subscribe to beta releases and roll back if necessary. Secondly, the CLI adds yeoman-style analytics data as an option. I’m sure there’s much more coming, but those are the highlights.


Day two started with a bang with Ryan Florence talking about {{x-foo}}. Speaking more about components, Ryan explained components as tags with unique style and behavior. They are custom elements with an optional template and an isolated Ember view, and the view is the context.

The most important part of this conference was obviously when Ryan pulled out the drone!!!! What’s a good conference without a drone demonstration?! In this case, Ryan flew the drone to show off Ember components. For example, the x-wing was a component, and you can check out the component code, which outlines all the actions.

Then we moved on to getting components to talk to the outside world. Two ways were discussed: data bindings and actions. Data bindings are attributes that components will two-way bind to; actions are the actions that the component will send out. For data binding, he used the example of ic-tabs. The use case is to persist to query params: basically, the controller sets the context of the template so that you can bind to those properties, and then you can find the query param attribute on the tags. For actions, he discussed ic-menu. These components allow for a solution to the popover edge cases. The component outlines on-select=“remove”, and the same for save and copy, and then those actions are handled in the controller.

Static tabs and dynamic tabs can both happen via parent and child components. Child components can handle their own state.

Sub-components are called for when you have events to handle or need to manage a more specific class or attribute.

One interesting thing Ryan mentioned was that he is biased towards not having a template connected to a component; in his opinion, if you have a template, then you missed an opportunity for a possible abstraction in your component code.


And we’ve finally arrived at the last session of the day. As my brain power waned, I was excited to hear about the new things coming in HTMLbars that will make apps faster and better. Erik Bryn and Kris Selden talked through the exciting things to come.

HTMLbars is a templating library built on top of Handlebars that understands markup. In HTMLbars there is no need for {{bind-attr}}. It builds DOM fragments instead of strings, which means no more script tags! These changes will dramatically improve the performance of large lists because HTMLbars can rapidly clone DOM fragments.

Binding update order: instead of notifying all observers, it’ll update based on the parent. The presenters then showed a flame chart, which shows a stack of currently executing code. They also talked about re-rendering, which is an anti-pattern: instead of a full re-render, the content will update itself. It will have smart caching and cloning of the DOM, which will make re-renders much faster.

Looking at just the DOM, there will be more ability to use willInsertElement because you’ll have the element that you want to insert, and you’ll have more animation ability. For example, you’ll be able to start an animation off-screen (because the element is coming up soon) and move it onto the screen with CSS transitions. There is custom rendering, which interacts with the DOM and will be secure by default.

It’ll also be easier to step through the render path, showing more of the template-to-DOM path, which will clear up some of the “magic” that Ember usually does automatically. I’m not exactly sure what this means IRL, but I’m looking forward to finding out. HTMLbars will also bake in support for server-side rendering, focused on the SEO use case.

Last but not least, they’re predicting a 2-3x performance improvement based on these changes.

There isn’t a lot out there right now about HTMLbars but it looks like it’s coming soon and once it arrives, I’m sure there will be lots written about it.