Because abstractions aren’t free, sometimes we’re better off duplicating code instead of creating them.

If that claim doesn’t make sense to you, read Martin Fowler’s “YAGNI” or Sandi Metz’s “The Wrong Abstraction,” or watch Dan Abramov’s “WET Code” talk or Kent C. Dodds’s “AHA Programming” talk.

Each of these programmers gives advice on when to duplicate code vs. create an abstraction, and that advice broadly falls into two camps: either we’re advised to follow some rule of thumb, or we’re told to ignore rules of thumb, trust our feelings, and introduce abstractions only when it “feels right.” Fowler, Metz, and Abramov are in the first camp. …


Good programmers are good at forecasting. They can often predict roughly how long it’ll take to accomplish a particular programming task. They can also predict when and to what extent a project will see ROI from a particular technical investment.

Unfortunately, this skill isn’t guaranteed to develop as we gain programming experience. In Superforecasting, the authors note that many experienced people are surprisingly bad at making forecasts and that time and experience often don’t make us any better.

They present a framework for improving as a forecaster, and since starting my new job, I’ve been using it. I’ve seen a small but measurable improvement in my forecasting ability, and I’d like to share the specifics of how I’ve used the framework to that effect. I’ll break the explanation into two parts: first, I’ll briefly explain the simple math behind the framework presented in Superforecasting. Then I’ll get into how I’m using a simple Google spreadsheet and some Slack reminders to track my progress within that framework. …
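(The “simple math” I’m referring to is, I assume from the book’s emphasis, the Brier score: the mean squared difference between each forecast probability and the actual outcome, where lower is better. A minimal Kotlin sketch — the function name and sample forecasts are mine, not from the book:)

```kotlin
// Brier score: average of (forecastProbability - outcome)^2 over all forecasts,
// where outcome is 1.0 if the event happened and 0.0 if it didn't.
// A perfect forecaster scores 0.0; always guessing 0.5 scores 0.25.
fun brierScore(forecasts: List<Pair<Double, Boolean>>): Double =
    forecasts.map { (probability, occurred) ->
        val outcome = if (occurred) 1.0 else 0.0
        (probability - outcome) * (probability - outcome)
    }.average()

fun main() {
    // Forecast 0.8 for an event that happened, 0.3 for one that didn't:
    // ((0.8 - 1.0)^2 + (0.3 - 0.0)^2) / 2 = 0.065
    println(brierScore(listOf(0.8 to true, 0.3 to false)))
}
```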


When programming, always follow the camping rule: always leave the code base healthier than when you found it.

- Martin Fowler, Refactoring

The Boy Scouts of America have a simple rule that we can apply to our profession. Leave the campground cleaner than you found it.

- Robert Martin, Clean Code

Many of us share the attitude expressed in the Fowler and Martin quotes above. That attitude presumes that the code we’re working on now will change again soon, and that we’ll reap the benefits of a refactor when it does.

Here’s another common attitude: we don’t get enough time to refactor.

These attitudes are related: insofar as we hope to make future work easier, the above camping rule can lead to sub-optimal decisions about what code gets refactored. …


In my experience, most applications are a mess…Changes are commonly made under urgent time pressure, which drives applications towards disorder…Velocity gradually slows, and everyone comes to hate the application, their job, and their life.

- Sandi Metz, “The Half-Life of Code”

Why

Many of us work in codebases that aren’t easy to work with, codebases we want to make better. However, the way we typically choose which parts of the codebase to improve is sub-optimal. The two dominant methods I’ve seen are:

  1. Fix code in areas of the codebase we happen to be currently working in. (“I’m here. …


I wrapped up my job search recently, and I’m happy to say that I’ll be joining a YC-backed startup called “Heap.” I thought I’d share a little bit about my job search in case the information may benefit other job-seeking devs. I’ll go over the pipeline of places I applied to and the result of each application. I’ll also talk about things like salary, resume formatting, coding challenge prep, and interviewing. I don’t intend any of this to be advice. …


Some thoughts on the cost of automated testing

Testing seems to be like going to gym. Everyone feels like “yeah. I should be testing. I should be going to the gym everyday.”

- Kaushik Gopal, Fragmented, “Episode 13,” 12:01

Remember those gimmicky fitness products that made you think you could “get fit” without actually going to the gym/dieting/etc.? Because I live in Orlando and have seen the Carousel of Progress at the Magic Kingdom a bunch of times, the first example of this kind of gimmicky product that comes to mind is a thing called an “exercise belt.” It’s the thing on the right:

I also remember a product that came out in the ’90s that would stimulate your muscles with electricity so that you could just watch TV while you got buff. I guess these are still…


An open learning exercise

Introduction

Gradient descent is an algorithm that’s used to solve supervised learning and deep learning problems. Here I’m going to try to give you an idea of why the algorithm works and how you’d implement it in Kotlin. I’ll also show the algorithm working with a simple Kaggle dataset involving video game sales and ratings.

Everything I cover here is covered in Andrew Ng’s excellent Coursera machine learning course with the exception of the Kotlin implementation of gradient descent. If you really want a clear and definitive introduction to gradient descent, I recommend that course over this article. …
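(The article’s Kotlin implementation isn’t shown in this excerpt. As a rough idea of what batch gradient descent looks like for one-variable linear regression, here’s a minimal sketch — the function signature, learning rate, iteration count, and toy data are mine, not the article’s:)

```kotlin
// Batch gradient descent fitting y = theta0 + theta1 * x by minimizing
// mean squared error. Each iteration moves both parameters a small step
// (scaled by alpha) against the averaged gradient of the cost function.
fun gradientDescent(
    xs: DoubleArray,
    ys: DoubleArray,
    alpha: Double = 0.01,
    iterations: Int = 10_000
): Pair<Double, Double> {
    var theta0 = 0.0 // intercept
    var theta1 = 0.0 // slope
    val m = xs.size
    repeat(iterations) {
        var grad0 = 0.0
        var grad1 = 0.0
        for (i in 0 until m) {
            val error = theta0 + theta1 * xs[i] - ys[i]
            grad0 += error
            grad1 += error * xs[i]
        }
        // Update both parameters simultaneously using the averaged gradients.
        theta0 -= alpha * grad0 / m
        theta1 -= alpha * grad1 / m
    }
    return theta0 to theta1
}

fun main() {
    // Points on the line y = 2x + 1; the fit should recover intercept ~1, slope ~2.
    val xs = doubleArrayOf(0.0, 1.0, 2.0, 3.0)
    val ys = doubleArrayOf(1.0, 3.0, 5.0, 7.0)
    val (intercept, slope) = gradientDescent(xs, ys)
    println("intercept = $intercept, slope = $slope")
}
```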


Dagger adoption frustrations and how I’d do it differently next time

…in software, feedback cycles tend to be on the order of months, if not years…It’s during the full lifetime of a project that a developer gains experience writing code, source controlling it, modifying it, testing it, and living with previous design and architecture decisions during maintenance phases. With everything I’ve just described, a developer is lucky to have a first try of less than six months…

- Erik Dietrich, “How Developers Stop Learning: Rise of the Expert Beginner”

A few years ago, we started using Dagger 2 in our applications. We saw some quick wins and were able to do some neat things like mock mode for testing and better support our white-labelling process. However, as time went on, several members of our team developed an aversion to working on the Dagger code, and I must admit that even I occasionally found it frustrating to work with. …


I recently delivered a presentation of our (outcome-based) roadmap. Several people approached me after the presentation to tell me that they found it useful and informative, so I thought I’d jot down some of the things that I think contributed to the warm reception of the presentation. Hopefully, these tips will come in handy both for others and for my future self.

Organize by outcome, then by persona

A corollary to the “outcomes, not output” way of thinking is that features should be grouped and presented according to the outcomes we hope they will achieve. One of the opening slides of my presentation simply outlined those outcomes in three short bullets. Once you’ve grouped features by outcome, subdivide those groups by persona. …

About

Matt Dupree

Wannabe philosophy professor turned wannabe tech entrepreneur.
