Because abstractions aren’t free, sometimes we’re better off duplicating code instead of creating them.
If that claim doesn’t make sense to you, read Martin Fowler’s “YAGNI” or Sandi Metz’s “The Wrong Abstraction” or watch Dan Abramov’s “WET Code” talk or Kent C. Dodds’s “AHA Programming” talk.
Each of these programmers gives advice on when to duplicate code vs. create an abstraction, and that advice broadly falls into two camps: either we’re advised to follow some rule of thumb, or we’re told to ignore rules of thumb, trust our feelings, and only introduce abstractions when it “feels right.” Fowler, Metz…
Good programmers are good at forecasting. They can often predict roughly how long it’ll take to accomplish a particular programming task. They can also predict when and to what extent a project will see ROI from a particular technical investment.
Unfortunately, this skill isn’t guaranteed to develop as we gain more experience programming. In Superforecasting, the authors note that many experienced people are surprisingly bad at making forecasts and that time and experience often don’t make them any better.
They present a framework for how to improve as a forecaster, and since I’ve started my new job, I’ve been using…
When programming, always follow the camping rule: Always leave the code base healthier than when you found it.
- Martin Fowler, Refactoring
The Boy Scouts of America have a simple rule that we can apply to our profession. Leave the campground cleaner than you found it.
- Robert Martin, Clean Code
Many of us share the attitude expressed by the above Fowler and Martin quotes. That attitude presumes that the code we’re working on now will change again soon and that we’ll reap the benefits of a refactor when that happens.
Here’s another common attitude: we don’t get enough time to refactor.
…
In my experience, most applications are a mess…Changes are commonly made under urgent time pressure, which drives applications towards disorder…Velocity gradually slows, and everyone comes to hate the application, their job, and their life.
- Sandi Metz, “The Half-Life of Code”
Many of us work in codebases that are not easy to work with, codebases that we want to make better. The way we typically choose which parts of the codebase get improved, however, is suboptimal. The two dominant methods I’ve seen are:
I wrapped up my job search recently, and I’m happy to say that I’ll be joining a YC-backed startup called “Heap.” I thought I’d share a little bit about my job search in case the information may benefit other job-seeking devs. I’ll go over the pipeline of places I applied to and the result of each application. I’ll also talk about things like salary, resume formatting, coding challenge prep, and interviewing. I don’t intend any of this to be advice. …
Testing seems to be like going to the gym. Everyone feels like “yeah. I should be testing. I should be going to the gym every day.”
- Kaushik Gopal, Fragmented, “Episode 13,” 12:01
Remember those gimmicky fitness products that made you think you could “get fit” without actually going to the gym/dieting/etc? Because I live in Orlando and have seen the Carousel of Progress at the Magic Kingdom a bunch of times, the first example of this kind of gimmicky product that comes to mind is a thing called an “exercise belt.” It’s the thing on the right:
I also remember this product…
Gradient descent is an algorithm that’s used to solve supervised learning and deep learning problems. Here I’m going to try to give you an idea of why the algorithm works and how you’d implement it in Kotlin. I’ll also show the algorithm working with a simple Kaggle dataset involving video game sales and ratings.
Everything I cover here is covered in Andrew Ng’s excellent Coursera machine learning course with the exception of the Kotlin implementation of gradient descent. If you really want a clear and definitive introduction to gradient descent, I recommend that course over this article. …
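To give a rough sense of the kind of Kotlin the post builds toward, here’s a minimal sketch of batch gradient descent for a one-feature linear model fit with mean squared error. The function name, hyperparameters, and toy data below are my own illustrative assumptions, not the post’s actual code or its Kaggle dataset.

```kotlin
// Minimal sketch: batch gradient descent fitting prediction = slope * x + intercept
// by minimizing mean squared error. Names and defaults here are illustrative.
fun gradientDescent(
    xs: DoubleArray,
    ys: DoubleArray,
    learningRate: Double = 0.01,
    iterations: Int = 1000
): Pair<Double, Double> {
    var slope = 0.0
    var intercept = 0.0
    val n = xs.size
    repeat(iterations) {
        var slopeGradient = 0.0
        var interceptGradient = 0.0
        for (i in 0 until n) {
            val error = (slope * xs[i] + intercept) - ys[i]
            // Partial derivatives of mean squared error w.r.t. each parameter.
            slopeGradient += (2.0 / n) * error * xs[i]
            interceptGradient += (2.0 / n) * error
        }
        // Step each parameter a small amount against its gradient.
        slope -= learningRate * slopeGradient
        intercept -= learningRate * interceptGradient
    }
    return Pair(slope, intercept)
}

fun main() {
    // Toy data roughly following y = 2x + 1; a real run would use dataset columns instead.
    val xs = doubleArrayOf(1.0, 2.0, 3.0, 4.0, 5.0)
    val ys = doubleArrayOf(3.1, 4.9, 7.2, 9.0, 11.1)
    val (slope, intercept) = gradientDescent(xs, ys)
    println("slope=$slope, intercept=$intercept")
}
```

This is the full-batch variant: every iteration looks at all of the data before taking a step. Stochastic and mini-batch variants change only how many examples the inner loop sees per update.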
…in software, feedback cycles tend to be on the order of months, if not years…It’s during the full lifetime of a project that a developer gains experience writing code, source controlling it, modifying it, testing it, and living with previous design and architecture decisions during maintenance phases. With everything I’ve just described, a developer is lucky to have a first try of less than six months…
- Erik Dietrich, “How Developers Stop Learning: Rise of the Expert Beginner”
A few years ago, we started using Dagger 2 in our applications. We saw some quick wins and were able to do some…
I recently delivered a presentation of our (outcome-based) roadmap. Several people approached me after the presentation to tell me that they found it useful and informative, so I thought I’d jot down some of the things that I think contributed to the warm reception of the presentation. Hopefully, these tips will come in handy both for others and for my future self.
A corollary to the “outcomes, not output” way of thinking is that features should be grouped and presented according to the outcomes we hope those features will achieve. One of the opening slides of my presentation simply outlined…
Wannabe philosophy professor turned wannabe tech entrepreneur.