Quip’s Edmond Lau on how to become a more effective engineer

So many startup employees share the same story: they worked unearthly hours building a feature, only for users to blithely ignore it, or worse still, for it never to ship at all. What went wrong?

If you ask Edmond Lau, an engineer working on user growth and engagement at Quip, those outcomes have little to do with whether or not you worked hard enough.

Following engineering stints at Google and then at smaller startups like Ooyala and Quora, Edmond took a hiatus from the codebase to analyze why hard work didn’t guarantee success. He picked the brains of engineers and leaders at Dropbox, Airbnb, Instagram, Lyft, Square and more about how they solve problems – and what the best engineers behind their solutions all shared. The result is a collection of lessons chronicled in his book, The Effective Engineer.

Edmond joined me on our podcast to share what makes an effective engineer, the layers of product and organizational complexity that often block them, strategies for prioritizing work, and much more. Additionally, he’s curated some free content for Intercom listeners.

If you enjoy the conversation check out more episodes of our podcast. You can subscribe on iTunes or grab the RSS feed. What follows is a lightly edited transcript of the interview, but if you’re short on time, here are five key takeaways:

  • Edmond measures workplace effectiveness through a metric called leverage, which is your rate of impact divided by the time you invest.
  • To fight the hidden costs of code complexity, system complexity, product complexity and organizational complexity, engineers must optimize for simplicity and cut unused features.
  • To stay focused on high-impact tasks every team needs to connect their priorities to two guideposts: their team mission and a top-level metric everyone, regardless of discipline, can work toward.
  • The core values for Edmond’s engineering team at Quip include a culture of experimentation, building intuition by validating hypotheses and always testing the simplest option.
  • Always decompose your hypotheses into smaller ones that you can test incrementally, because if testing a hypothesis takes months, you’re losing out on a lot of potential learning.

Adam Risman: Edmond, thanks for joining us. To get started, could you give us the CliffsNotes of your career and a feel for what you’re doing today at Quip?

Edmond Lau: For the past decade, I’ve essentially been working in Silicon Valley. I joined Google’s search quality team right out of college, worked there for two years, and it’s been a whirlwind of different startups since. I worked at one called Ooyala, which focused on online video, later acquired by Telstra. Then I joined a team of a dozen people at Quora, a question and answer site, and during my three years there we grew the team to about 70. I led the growth team and also built out the onboarding mentoring programs there.

I took roughly a year-long sabbatical after Quora to write my book, The Effective Engineer, and now I’m at Quip. I joined nearly three years ago, when there were also roughly a dozen people, and I focus on how to get users more engaged with Quip, as well as how we as a team empower more engineers and technical leaders at the company to really perform at the highest levels.

What makes for an effective engineer?

Adam: Your research process for The Effective Engineer was really interesting. Like you said, you took a sabbatical from the code base and actually went out and had conversations with engineers and leaders at other startups across Silicon Valley. When did you begin to seriously question this idea that there’s a more effective way for engineers to work? Was there an aha moment?

Edmond: To provide some context, I grew up with a strong hard work ethic. It partly came from the fact that my parents grew up in Communist China and going to college was not an option for them. They immigrated to the US when I was very young, and I grew up with this sense that I have an opportunity here they didn’t have and that I should be working hard to make the most of that opportunity.

Throughout college and the first two startups I was at, this idea of working hard to make the most of the opportunity was a very core idea in my mind. I’d be working 70-80-hour weeks at Ooyala. At Quora, I worked 60-hour weeks. It was reaffirming the story in my head that I had to work hard in order to make the most of the opportunity I had. Then there were a few incidents that made me start to question whether that was the right premise I should be operating from.

We had really talented teams of engineers at both Ooyala and Quora, and there would be projects that we’d spend months working on. There was an analytics module at Ooyala that we spent a few months building for a customer, and then the customer just never used it. There would be a feature that we’d spend months designing and launching at Quora, and it had no impact on metrics. It started to make me wonder, what if we hadn’t worked on that at all? What if we had just twiddled our thumbs for those few months? Would our impact have actually been that much different? If I were honest with myself, the answer was not really, because we worked on the wrong things. The fact that we were very well-intentioned and we were a talented team didn’t really manifest itself in the impact that we were creating. I knew there was a missing variable. There was something else that we should be paying attention to. That’s what got me into this quest to figure out what it is that makes an engineer more effective.

Adam: You have a framework that was a product of that investigation called leverage. How do you define that, and how did you come to the conclusion that it was the most effective metric for what you were seeking to measure?

Leverage is defined as your rate of impact over the time you invest.

Edmond: Leverage is defined as your rate of impact over the time you invest. It’s your return on investment. A lot of us have heard of the 80-20 rule: 80% of your impact is created by 20% of your effort. That 20% comprises the highest-leverage activities. The way we become more effective as engineers is by being very conscious about applying our limited time and our limited energy toward those leverage points.

Say you have a really, really big boulder. It’s really hard to move, but if you have a lever to amplify the force that you’re able to put in, you can move mountains. That’s the mindset effective engineers bring. They look for those leverage points that can really amplify the effort they’re putting in, because that amplification effect is where they can scale their impact beyond the limits of their own time.
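As a rough sketch of how this framing can be applied day to day, leverage is just a ratio you can use to rank your task list. The tasks, impact scores and hours below are entirely hypothetical, not examples from Edmond’s book:

```python
def leverage(impact: float, hours_invested: float) -> float:
    """Return impact per hour invested – the ratio Edmond calls leverage."""
    return impact / hours_invested

# Two candidate tasks, scored on the same (made-up) impact scale.
tasks = {
    "polish a rarely used settings page": leverage(impact=2.0, hours_invested=40),
    "fix the signup flow drop-off": leverage(impact=30.0, hours_invested=10),
}

# Rank tasks by leverage, highest first – the small slice of effort
# that produces most of the impact.
for name, score in sorted(tasks.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f} impact/hour")
```

The hard part in practice is estimating the numerator, but even coarse estimates make the comparison between two tasks far more honest than gut feel.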

What Instagram can teach us about fighting complexity

Adam: You went out and found real examples of companies that were either doing this well or had learned this lesson the hard way. One is Instagram, which at the time it was acquired by Facebook had 40 million users and just five engineers. What were they doing that startups today should take note of?

Edmond: They had this very brilliant insight – a lot of the cost that engineers pay comes from sources of complexity. They were very wary about introducing sources of complexity. When we think about the concept of complexity, there’s complexity on a bunch of different levels. There’s code complexity, which is something a lot of engineers are familiar with. If you have complex code, it’s really hard to ramp up on the code, it’s really hard to debug what’s going on, it’s hard to understand, it’s hard to refactor, and it’s hard to change. On some level, engineers tend to be good at identifying or seeing code complexity, and they have a desire to fix that.

There are other layers of complexity that often get ignored. If you go one layer above code complexity, there’s system complexity. What are all the different types of systems that are in play for your product to work? A couple years ago I was teaching a five-week workshop at Pinterest. One of the engineers shared a story about how in the early days of Pinterest they actually had seven different data storage systems. They were using MySQL, Memcached, Membase, MongoDB, Redis, Cassandra and Elasticsearch. Their backend team only had three engineers, which meant more than two systems per engineer.

What does that mean? It means that any effort to build shared understanding around how these systems work and how they fail gets fragmented. It means the libraries they build around any one system aren’t as strong as they could be. It means you have to understand the failure modes of each and every system you adopt. While they might have adopted these systems because each one pitched a benefit they thought they could achieve theoretically, in practice it just meant they had to operate and maintain each of these systems. Every new engineer had to understand seven different data storage systems in order to be productive.

Eventually, they figured out that the way to really scale their systems is not by introducing new types of systems, but by scaling out the systems they already have: adding more servers of the same type, really understanding those types of services, and developing a much stronger expertise around them. That’s where system complexity comes in.

A lot of the cost that engineers pay comes from sources of complexity.

If you go one layer above that, there’s product complexity. As engineers, oftentimes we’re building a lot of features and we think, “Oh, wouldn’t it be great to basically have this feature out for users?” One thing that we don’t think about is, what is the cost that having this feature imposes upon our development process? Every feature that exists in the product is something else that you have to think about when you’re developing something new – how this new feature interacts with all the old features that exist. Over time, it becomes harder and harder to add each incremental new feature because it may interact with the existing features in any number of ways. There’s a cost to having a very high surface area in the product because it means that you have to do more testing, you need to maintain more systems to keep those features running, and you’re spending a lot of mental energy basically keeping track of this huge feature space.

You can even go one layer above product complexity, which is organizational complexity. When you have a lot of different products, when you have a lot of different systems, you end up needing a lot of different teams. You either fix that by hiring a lot of engineers or you have a lot of one-person teams. You basically make your teams really small so that you can have more of them maintain all the pieces of systems and products that you have.

They both have their downsides. When you have a lot of one-person teams it’s easy for that one person to get demotivated when things don’t work. They don’t really have someone to bounce ideas off of. There’s less shared learning that’s available. When you hire more people to support all these things the friction of communication grows exponentially and it becomes really hard for people to stay on the same page. You spend a lot more effort keeping people up to date. That’s another tax you now have to pay. If you’re not conscious about all these sources of complexity, you end up introducing these taxes on your development process. You introduce these burdens that you need to basically pay whenever you operate your service.

Going back to Instagram, what they did really well was ask, of any design someone came up with, “Is this the simplest thing?” They had a mantra, “Do the simplest thing first.” They’d challenge themselves in design reviews: “Is the solution you’re describing the simplest thing we could build?” They recognized that every piece of complexity they introduced was another potential fire they would have to put out. With a five-person engineering team, if they were fighting fires all the time they wouldn’t be able to actually build and grow their product.

Adam: A lot of our listeners are at early stage companies where the people representing all teams can fit in a single room and have that conversation. At a foundational level, looking ahead to alleviate those taxes, what can they be doing?

Edmond: Step one is being very aggressive and rigorous about what you actually need to introduce in order to build and scale the product. The other step is to develop a habit of reviewing the taxes that exist in the current product. Maybe there are features built a while back that don’t have as much usage as people would like. Could those features be cut? It can be a hard conversation, but not having to maintain a feature, think about it, or deal with the bugs and customer support issues that arise from it – those are very concrete wins you get from cutting it.

In a typical week, how much of your mental head space is spent dealing with and maintaining all of the different systems and features you might have? Be introspective and ask whether it’s actually worth the time you’re putting in to maintain all these different pieces.

Working toward better prioritization

Adam: Prioritization is a massive challenge in every startup. How do you keep your teams focused on high-impact projects, as opposed to either snacking on things that are low effort but very low impact, or juggling so many high-priority tasks at once that none of them ever gets done?

Edmond: It’s a very hard problem, and there are a few different ways that I’ve found effective at approaching it. At a very high level, articulating a mission statement is something that can be very grounding and very clarifying for the team.

On the enterprise engagement team at Quip, which I lead, we went through an exercise where we articulated our mission on the team, which is to systematically accelerate the adoption of Quip and unlock its value for users. That type of clarity helps us decide what we should be focusing on and what things we’re not responsible for. Knowing that means we have this lens to view all the tasks on our task list. Does this particular item contribute to our mission? In what way? Without that mission statement, without that clarity, it’s very hard to juggle two different tasks and know which one is more impactful.

In addition to having that mission, having some top-level metric that you want to optimize for is also incredibly valuable. It allows you to really compare different tasks in terms of the impact and the leverage they might have. We’ve spent a lot of time both at Quora and at Quip just developing that top-level metric. What is the north star that we want to use to evaluate the impact of all the tasks we’re working on?

Adam: That’s across teams, too, right? It’s not just an engineering thing.

Edmond: Introducing that metric as a language you can use to talk with non-engineering teams is actually extremely helpful, because it allows even people in other functions, like customer success, to start to quantify the impact they have in terms of this top-level metric. It also means that engineering can then decide, “If we invest in more tools and make customer success more effective, what is the payoff of that versus other changes we could make in the product?” Having this single language you can use to compare a variety of different aspects of the work you might do as an engineer becomes incredibly empowering.

Quip’s engineering values

Adam: Is there a set of engineering values you use at Quip?

Edmond: I’ll share some of the values on the enterprise engagement team. One is always be experimenting. Because our team is focused so heavily on growth, experiments are where we can really connect the ideas we have with actual impact on users. It’s also the launching of those experiments that finally connects the work we do with meaningful impact on the numbers. This idea of always be experimenting, of building a cadence of experimentation, is incredibly important to our team.

Another is building intuition by validating hypotheses. For every experiment that we do, we take the time to actually formulate a hypothesis around what users are doing or around what we expect to change. The experiment should either confirm it, build confidence around the hypothesis or invalidate it. By applying the scientific method to our experiments, each time we run an experiment we make sure that we’re actually learning something. We make sure that we’ve constructed an argument for why this experiment could have meaningful impact on numbers, and then through the experiment we develop a much stronger understanding of whether that’s true or not. Then we can decide whether we should continue investing in that area in the future.


Shipping small pieces of code to learn and build momentum is a shared engineering value between Intercom and Quip.

Another important value on the team is just do the simple thing, very similar to the value Mike Krieger and Instagram had in the early days. Quip in total only has 20 engineers, so being strategic about which corners we can cut in order to build the minimum viable product is super important. We want to learn as much as we can as quickly as possible, and it’s only by aggressively cutting parts of the product that aren’t necessarily needed that we can optimize for our own learning.

What’s actually minimal and viable

Adam: You mentioned the MVP. How do you define what’s reached that threshold of viability?

Edmond: It’s really the smallest unit where you feel you can learn something from the product you’re building. A little less than a year and a half ago we did a redesign of Quip, and one tool my team used was continuous user testing. We would gather every week and have movie time – only we weren’t actually watching movies; we were watching videos of user tests.

We would make some hypothesis about the redesign, build out a small part of it, and then share a link to the new version of the product on usertesting.com. Within an hour we’d get back videos of users self-narrating their usage of the product. It was incredibly valuable to have that short feedback cycle, as well as detailed qualitative feedback about what made sense and what didn’t, and to see users interact with the product on screen.

At our peak, we were running 12 user tests a week, and it gave us a lot of information that we couldn’t have gotten nearly as quickly with a numerical A/B test. On each iteration, we would try to identify the core thing we wanted to test and design a user test around it. Every time we ran one of these tests, we built more intuition about which things seemed to be working and which didn’t. We were able to iterate really quickly just by doing that.

Adam: Do you see often that the hypothesis you’re looking to test can actually be stripped down a few levels and there’s a smaller test that you should be running to support that? There’s a story in your book regarding Etsy learning this lesson.

Edmond: The Etsy story is fascinating. They at one point wanted to introduce infinite scrolling into their search results page. They spent a few months building it out, and then when they finally decided to test it they found that it actually hurt revenue. It didn’t have the positive impact they thought it would.

They spent a lot of time trying to figure out why that was the case and whether they could’ve figured it out sooner. In their retrospective process, they realized they could’ve decomposed infinite scrolling into smaller, testable hypotheses, which would have helped them spend less time in this area. For instance, one hypothesis behind infinite scrolling is that showing users more search results will lead them to buy more. You could easily test that by increasing the number of results on the search results page. They ran that simple experiment and found that it did not actually increase revenue.

Similarly, the other hypothesis was that infinite scrolling means people can see more results, faster. They got a little creative to understand how latency affected revenue. They did that by actually running an experiment where they introduced latency. Since it was a lot more work to actually make the page faster, they did a comparison asking, “What if we introduced additional latency to the search results page? How would that affect revenue?” In fact, neither of those two experiments gave them confidence that infinite scrolling would have worked.

How can we decompose our hypotheses into smaller ones that we can incrementally test?

Had they tested those two hypotheses up front, they would’ve saved much of the time they invested in building infinite scrolling. That’s a very common lesson I think a lot of engineering teams can learn. How can we decompose our hypotheses into smaller ones that we can incrementally test? Because if testing a hypothesis takes months, you’re losing out on a lot of potential learning. If you can reduce it down to a week or even less, all of that learning will compound and inform the future versions of the test that you’re going to run.
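The two cheap experiments Edmond describes – more results per page, and deliberately added latency – boil down to comparing purchase rates between a control and a variant. Here is a minimal sketch of that comparison using a standard two-proportion z-test; the traffic numbers are invented for illustration, and this is not Etsy’s actual analysis code:

```python
import math

def two_proportion_z(conversions_a: int, n_a: int,
                     conversions_b: int, n_b: int) -> float:
    """Two-proportion z-test: does variant B convert differently from A?"""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothesis 1: "more results per page => more purchases" (made-up numbers).
z_more_results = two_proportion_z(500, 10_000, 510, 10_000)

# Hypothesis 2: "faster results => more purchases", tested by ADDING latency
# to one arm, since slowing a page down is far cheaper than speeding it up.
z_added_latency = two_proportion_z(500, 10_000, 495, 10_000)

# |z| < 1.96 means no significant effect at p < 0.05 – with these numbers,
# neither cheap test supports investing months in infinite scrolling.
for name, z in [("more results", z_more_results),
                ("added latency", z_added_latency)]:
    print(f"{name}: z = {z:.2f}, significant = {abs(z) > 1.96}")
```

Each of these experiments can run in about a week on existing traffic, which is exactly the compounding-learning cycle Edmond is arguing for.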

Lessons from beyond the world of software

Adam: We’ve talked a lot about stories and lessons inside of software, but one thing you’re a big proponent of when it comes to increasing leverage in the workplace is learning outside of the workplace. You study things like productivity, team building, psychology, self-help, etc. What lessons have you learned from these disciplines that can be applied back to engineering work?

Edmond: I try to read, on average, a book a week. One of the biggest lessons I’ve learned in the category of self-improvement and business books is the importance of being mindful of your own energy levels. There are a lot of scientific studies around willpower – your ability to say yes to the things you should be doing and no to the things you shouldn’t – and the amount of willpower you have depletes as the day goes on.

How do we use that learning to our benefit? One key insight is to be mindful of which parts of the day you have the most willpower or energy, and to spend that time on the most creative things you could be working on, or the highest-leverage things you tend to procrastinate on. A lot of times engineers might know something is really important, but it’s also really hard or really tiring, and they just don’t want to do it even though it could be really high impact. If you pay attention to which parts of the day you have more energy, you can make a better call about when to work on those things, and that’s going to be huge in terms of helping you be more effective.

For myself, I tend to be a lot more creative and energetic in the mornings, so I will schedule the things I procrastinate on the most, or the things that require the most creative energy, for the morning. It’s really easy at the end of a work day, when you’re tired, to say no to things that you know might be good to do. For me, scheduling those things up front in the morning has had a huge impact.

Setting a foundation for work culture through onboarding

Adam: You designed onboarding and mentoring programs for engineers at Quora and are doing some of that now at Quip. What type of investment needs to be made there, particularly if you’re a team that’s hiring for potential, to make sure that your engineers can grow into a more high leverage role?

Edmond: There was one summer at Quora where our engineering team doubled in size, which was super scary. We were doing continuous deployment at the time, so any commit would immediately go to production, assuming it passed all the tests. There was a strong fear that with so many new members joining the team, things would just be breaking all the time.

When we were building the onboarding and mentoring programs at Quora, we spent a lot of time thinking about the goals we wanted to achieve with this program. Some of the goals were around wanting new hires to ramp up as quickly as possible. We wanted them to be familiar with our breadth of technologies and code bases. We wanted to socially integrate this new member so they really feel like they’re part of the team. Having those goals allowed us to then figure out the actual details and tools we could introduce to help onboard new people.

Google Codelabs

When we first started, it was relatively simple. We just assigned a mentor to each person and we had these goals in mind. Over time, we added more things that we thought would be helpful. We added onboarding talks for common abstractions and tools that we had. We borrowed a concept from Google called Codelabs. Google had these wonderful documents where people would write out things like, “What’s the reason behind this core abstraction we have at the company? Why was it designed? What problem is it solving? What are the key pieces of code you should look at to really understand what’s going on?” They also provided some exercises to validate your understanding. That’s a practice that we borrowed during my time at Quora. We would write out Codelabs for a lot of the different core abstractions that were unique to Quora. It helped make sure that everyone had a shared language when they were talking about the codebase or when we were discussing designs. That set the tone and the foundation for how they might then continue growing at the company.

Adam: Before we go, where can our listeners find more of your work and advice?

Edmond: I have a blog, and if you’re interested in the book, you can get paperback copies on Amazon, digital copies for Kindle, and EPUB and PDFs on TheEffectiveEngineer.com/book. As a special bonus for Intercom listeners, you can also go to TheEffectiveEngineer.com/intercom and there will be a bunch of goodies like videos and some of the most valuable lessons I’ve learned professionally.

If you’re interested in joining the R&D team to help build Intercom, check out our current openings here.