I’ve often struggled to revisit Past Me’s work or thoughts: their lack of quality, or their inaccuracy, seems obvious (in retrospect) whenever I come back to them.
Despite the unpleasantness involved, it seems this result is actually a necessary condition for progress.
If it so happened that I realised I held exactly the same views today as I did, say, a year ago - what might this imply about my conduct up until now? It seems there are a small number of explanations:
Failure to update beliefs, given new information
(Or simply a rank unwillingness to heed the existence of evidence that might not support my existing hypotheses - a possibility that is all the more pertinent given the likelihood of our residing in social media echo chambers. 1)
Each piece of observed evidence contrary to my view should cause me to make a nonzero numerical revision of my belief. Negligence in this regard should lend credence to the possibility that I’m being dogmatic, or am assuming my own infallibility 2.
Why might I refuse to expose myself to contrary opinions? If they happen to be fallacious, engaging with them might give me more reason to be confident in the views I hold - and if they are true, I am all the poorer for not taking heed and updating my beliefs towards a more accurate map of the way things actually are.
If I already hold a true belief, then updating anyway will not matter: given correct revision, accumulating evidence converges on the truth. Refusing to update only impoverishes my model of the true state of things.
But realistically, what is the probability, a priori, that my initial beliefs about a proposition are anywhere close to true? Especially if I have not thought hard about the matter for a day, a month, a year.
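To make “nonzero numerical revision” concrete, here is a minimal Bayesian-updating sketch in Python. The hypothesis, the coin bias, and the toss sequence are all invented for illustration:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) via Bayes' rule: P(E|H)P(H) / P(E)."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Hypothesis H: "this coin is biased towards heads (70% heads)".
# Alternative ~H: the coin is fair (50% heads).
belief = 0.5  # start undecided
for toss in [1, 1, 0, 1, 1, 1, 0, 1]:  # 1 = heads, 0 = tails (made-up run)
    if toss:
        belief = bayes_update(belief, 0.7, 0.5)  # heads nudges belief up
    else:
        belief = bayes_update(belief, 0.3, 0.5)  # tails nudges it down
print(f"posterior credence in H: {belief:.3f}")  # ~0.731
```

Note that every single observation moves the credence - the two tails drag it down, the six heads push it up - which is exactly the nonzero revision described above.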
Holding views that are infinitely certain
It’s actually not uncommon in informal talk to hear somebody say things of the form “X is never 3 permissible!”. This type of view is analogous to holding an opinion with infinite certainty (probability 1, or 0) - viz. there exists no piece of evidence, not even one I could conceive of, that would cause me to revise my view.
It’s constructive (and easy) to reduce such views to absurdity by taking them to their logical extreme - “If rejecting X costs 10m human lives, you are nonetheless logically compelled to reject X, if it is indeed never permissible”. This kind of exercise might seem absurd itself, but it’s useful for pointing out the flaws in absolute credences, and for getting people to hash out exactly what they would consider commensurate evidence to revise their views. (Obviously, 99.9% of the time people don’t actually deeply hold views with probability 1 / 0.)
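Bayes’ rule itself shows why a credence of exactly 0 or 1 is pathological: no evidence, however strong, can ever move it. A small sketch, with made-up likelihoods:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E); returns the prior unchanged if P(E) = 0."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator if denominator else prior

# A merely near-certain prior (0.999) still moves under strong evidence:
assert bayes_update(0.999, 0.99, 0.01) > 0.999
# But probability exactly 1 ignores even overwhelming counter-evidence:
assert bayes_update(1.0, 0.01, 0.99) == 1.0
# ...and probability exactly 0 ignores overwhelming supporting evidence:
assert bayes_update(0.0, 0.99, 0.01) == 0.0
```

The extremes are fixed points of the update rule: once the prior is 0 or 1, the posterior is algebraically forced to equal it, no matter what the likelihoods are.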
The inevitability of being wrong
So, it seems unavoidable that if I wish to refine the accuracy of my views iteratively in this vein, I must concede that I will necessarily sit some distance from the exact truth most of the time, continually finessing and honing my beliefs with the goal of always increasing their accuracy with respect to the real state of things. Looking back and seeing my mistakes is an unfortunate consequence of improvement.
- 1 - This is a genuinely hard issue to combat. Nobody loves following people who hold (what they feel to be) repugnant views. Plus, algorithmically-curated timelines are a fantastic time-saver even if they do have a propensity to perpetuate confirmation bias.
- 2 - Granted, this does presuppose that I am committed to finding truth above all other goals. If, for example, I cared most about aligning my views with *Political Party X* rather than with the truth, the argument above would not apply.
- 3 - Swap out the word "never" for "always" etc., as you please.