A concept vital to improving as a programmer is mental modelling. It is something we do without realizing it, but if you start paying attention to it consciously, much can be gained.
Humans in their normal state of consciousness cannot perceive reality as it is. Instead, the best they can do is take sensory input (visual, auditory, etc.) and construct in their brain an image of what they are looking at. This image is a “mental”, i.e. “in your mind”, “model” of reality. It includes both “facts”, or at least the closest we can get to them — raw sensory input — and all the properties derived from those “facts”, like predictions of how an object will behave when acted upon.
It helps to be aware of the quality of your mental models. For every property of an object in question, what’s your certainty about how well that property matches reality? Is it a no-brainer, pretty close to a “fact”, or is it a plausible but somewhat far-fetched assumption about how something works? When troubleshooting, this distinction is critical. Errors in mental modelling can make it difficult to do root cause analysis effectively.
Here’s an example of mental modelling, and certainties, in the real world. Say you’re walking down a dark alley in the shady part of town. You are probably already tense — the mental image of this sort of dark alley is connected with some small chance of having your wallet taken away from you. How likely is this to happen? Are you walking alone, or together with three friends? This changes the probability drastically. Are you walking alone, and suddenly hear footsteps behind you? The probability of getting mugged certainly just increased, but based on footsteps alone, the uncertainty in this model is fairly high. You turn around, and see a grandma carrying a bag of groceries. Now your updated mental model of the situation has much higher certainty, and the probability of getting mugged is much lower. If instead of the grandma it was a rough-looking guy walking quickly and aggressively towards you, the uncertainty would still drop, but the probability of getting mugged would increase. Though who knows, there’s a small chance that the guy just got mugged himself, is stressed out of his mind from the adrenaline, and is looking to borrow a cell phone to call the cops.
So mental modelling is an activity of gathering input, drawing conclusions from it, and labelling those conclusions with levels of certainty. For any conclusion that is multiple hops away from the input data, the certainties of the individual hops multiply, so uncertainty compounds with every hop; in the interest of having useful models that are successful at matching or predicting reality, it’s important to actively work on reducing uncertainty. This is sometimes referred to as derisking.
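To make the multiplicative effect concrete, here is a minimal Python sketch. The numbers are made up, and it assumes the hops are independent: even if each of three reasoning hops is 90% certain on its own, the end-to-end conclusion is only about 73% certain.

```python
from functools import reduce

def chained_certainty(step_certainties):
    # A conclusion holds only if every reasoning hop it depends on holds,
    # so (assuming independent hops) the step certainties multiply.
    return reduce(lambda acc, c: acc * c, step_certainties, 1.0)

# Three hops at 90% certainty each combine to roughly 73% overall.
print(chained_certainty([0.9, 0.9, 0.9]))
```

The independence assumption is optimistic — correlated errors can make the combined certainty even worse — but the basic point stands: every extra hop away from raw input erodes confidence.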
When searching for the root cause of a bug, you might think along the lines of “I know this library has been working fine for months, and nobody has touched it recently. It seems unlikely that something in there would break, even though the symptoms suggest that the bug is related to this library. On the other hand, the code that calls the library was modified yesterday.” You’re building out a mental model of the code base and the development process, along with the crucial input data that determines the likelihood of bugs in various parts of the code. If there’s a module of code that seems related to the bug, and you don’t know whether it was edited recently, the certainty on “this module is fine” should be low. If you check git history and notice that there haven’t been any changes to it in a month, the certainty goes up.
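This kind of certainty update is essentially Bayes’ rule. As a rough sketch with entirely hypothetical numbers: take the hypothesis “the bug is in the library”, and the evidence “the library is untouched for a month while the caller was edited yesterday”. Such evidence is unlikely if the library is at fault and likelier if it isn’t, so the posterior drops.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    # Posterior P(H | E) via Bayes' rule for a binary hypothesis H.
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Hypothetical numbers: before checking git history, a 50/50 guess that the
# bug is in the library. The observed evidence is unlikely (0.1) if the
# library is broken, and fairly likely (0.6) if it isn't.
posterior = bayes_update(prior=0.5, p_evidence_if_true=0.1, p_evidence_if_false=0.6)
print(posterior)  # drops to roughly 0.14
```

You rarely do this arithmetic explicitly while debugging, but it is a useful model of what “the certainty goes up” means: each piece of evidence shifts how much attention each suspect deserves.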
Much of what we do as programmers is about building mental models, verifying assumptions, arriving at conclusions, and reducing uncertainty. We do this subconsciously every day. Thinking of this process consciously will make you that much better at it.