On Kilter was created to share ‘the toolkit’, a set of rules and practical guidance for navigating any information hellscape.
The full, original version of the toolkit… is very long. And exacting. It was designed for use by someone who wants to hold themselves to the highest possible standard.
Think of what follows as the speedrun version: it is stripped down to essentials, and focused on the most approachable parts.
We will be talking about this introductory version of the toolkit extensively over the next few weeks.
INTRODUCTION
There is one universal starter goal: Don't get stuck believing wrong things.
Why this goal?
If you care about something (anything at all), then supporting it with your actions ALWAYS requires two things:
Keep your actions aligned with reality
Keep your actions aligned with your goal
If you don't do these things... then when you act, your action may not have the 'cause → effect' you are hoping for. You may fail to support the outcome you care about. You may even work AGAINST what you care about, unknowingly.
So to avoid self-sabotage: 1) we need to always understand 'real' reality, and 2) we need to clearly understand how our actions relate to our goal, so that we can keep them aligned with each other.
But — we have an information problem. Sometimes we may make mistakes. Others may make mistakes, and we may not be in a position to notice. People may lie to us. 'Base facts' may not be confirmable. On and on and on. There are infinity-and-counting ways to mess up, even when we mean well and try hard. So how can we always be aligned with reality, and keep our actions consistent with our goal, when we full-stop can't know everything with certainty all the time?
Well... there is nothing you can do or say or think that will ALWAYS prevent you from making an error. What's worse: what you do to prevent getting fooled in one scenario may make you susceptible to getting fooled the next time. Circumstances and luck are just part of the picture.
So let's drop the goal of total prevention. We can instead replace our two goals with one goal that carries the same intention.
Don't get stuck believing wrong things.
This replacement goal allows room for the idea that we're going to get things wrong sometimes, in little ways or big ones, and says that our goal is just to not get STUCK like that. If we're drifting from reality, or if we're getting inconsistent, then we want to be able to see any warning signs as soon as possible, so that we can correct course.
This, at least, is doable in theory. And it gives us our best chance of acting in a way that actually supports the things we care about — since "know everything and act perfectly all the time" is off the table.
...But this goal, as-is, is not very practical. 'Don't get stuck believing wrong things'. I can say this, and agree with it, but I don't really know how to do it — at least not in a moment-to-moment sense. What should I actually do, what specific actions should I take, if I want to achieve this goal?
That's what the following 'toolkit' is for.
Working off our one goal, 'Don't get stuck believing wrong things', we add smaller, more specific categorical goals that all point in that direction. 'Categorical' meaning that achieving these goals would ALWAYS, INEVITABLY support that main goal. Then we repeat the process, breaking each new goal down into smaller, more specific goals still, until we end up with goals that we can actually see how to achieve, all of which would categorically support the goal we want.
These smaller, more specific goals are 'the rules'. Just apply them, in all circumstances, as often as you can remember to do so. No worries about exceptions, because there are none.
You should inspect the following rules to make sure that they do indeed seem to be categorically bent towards fulfillment of the main goal. They should, in fact, seem necessary for it.
If, on inspection, you agree that following all the rules would be needed for our main goal... and if you agree that 'don't get stuck believing wrong things' is a goal that applies in all circumstances... then these are rules to live by.
PREVIEW OF MAIN TECHNIQUES:
Atomize your judgments
Default to "I don't know"
Bin every claim you hear into either 1) a prompt for a question or 2) emotional white noise
Note explicitly when your interest in a claim or question surges, and when it drops off
Be specific about what question you're answering (especially if/as it shifts)
Map thoroughly every time you want to answer a question
Look for and favor good mapping in your casual sources for info
1 — ATOMIZE YOUR JUDGMENT
1-1 — Atomize your judgment. Handle everything in the smallest units possible. For example, multi-part claims/arguments should be broken down into individual claims.
2 — ON HEARING A NEW CLAIM, WHAT YOU SHOULD DO INTERNALLY:
2-1 — Default to saying/thinking "I don't know".
2-2 — Acknowledge that if you want to change 'ignorance' to 'knowledge', or verify knowledge you think you already have, it will take further particular steps. If you are not willing to consider further steps, go back to 2-1 and stay at "I don't know".
Re-upped rule: 'Atomize your judgment' can apply to "I don't know". You can separate a claim into details you do know for certain, and elements you say "I don't know" to.
3 — ON HEARING A NEW CLAIM, WHAT YOU SHOULD DO WITH THE CLAIM:
3-1 'Bin' the claim into one of two bins: it’s either a prompt for a question, or 'emotional white noise'.
Keep in mind 1-1. You may need to break one claim you've heard into multiple components, each binned on its own.
3-2 If you find a prompt, name the question you have (as specifically as possible).
If you want to turn this prompt/question into 'knowledge', it will take further particular steps. Unless and until you have done that work, you only have a question.
3-3 If you end up with 'emotional white noise', observe and name the emotion(s) involved.
Once you've named them, discard the inputs. Keep in mind 2-1 and 3-4.
3-4 Reassign bins whenever appropriate.
Re-upped rule: 'Atomize your judgment' should apply to binning. You can separate a claim into elements you're interested in turning into a question, and elements that are 'emotional white noise'.
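If it helps to see the binning move laid out concretely, here is a minimal sketch in Python. It is purely illustrative and not part of the toolkit: the names (Bin, ClaimComponent) and the example claim are invented for this sketch. The point is only that one heard claim can be split into components, each binned on its own and open to re-binning later.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Bin(Enum):
    QUESTION_PROMPT = auto()   # 3-2: the component prompts a specific, named question
    EMOTIONAL_NOISE = auto()   # 3-3: the component is 'emotional white noise'

@dataclass
class ClaimComponent:
    text: str                              # one atomized piece of the claim (1-1)
    bin: Bin                               # current bin; can be reassigned later (3-4)
    question: str | None = None            # the question, named as specifically as possible
    emotions: list[str] = field(default_factory=list)  # named emotions, if noise

# A made-up claim, atomized into two components and binned separately:
heard_claim = [
    ClaimComponent(
        text="the new policy passed last week",
        bin=Bin.QUESTION_PROMPT,
        question="Did the legislature pass policy X on date Y?",
    ),
    ClaimComponent(
        text="which proves those people never cared about us",
        bin=Bin.EMOTIONAL_NOISE,
        emotions=["anger", "distrust"],
    ),
]
```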
CONCEPT INTRO: ‘MAPPING’
When you want to move off "I don't know" and get some answers, the critical concept is that 'good information' = 'complete mapping'. 'Complete mapping' means putting ALL possible answers to the question in the same field of vision: you will need to make the strongest possible version of each answer, analyze each individually, and compare them to each other, in order to be able to see the 'map of answers'.
Complete mapping inevitably involves LOTS of research in order to find the 'raw facts' that form the base of each argument, as well as to see how each argument strings together in the eyes of its supporters. Training in research skills, as well as formal logic, will help at this stage. (Both are very learnable, and we can get into them if you ever get stuck or want to supercharge your efforts.) A good general knowledge base is also useful (e.g. learning the 'jargon' around a topic, and as much relevant history as possible), so that you can see the fullest possible context and provide additional facts/arguments for the map.
You 'win' mapping by having the most complete map possible, handled in the most rigorous way. Each argument should be made with high specificity. A person supporting any argument should be able to look at the map, see their position represented, and say "yes, that".
Answers, if any, just shake out from a carefully made map, so the map itself is what needs to be prioritized. Note as well that maps are ALWAYS conditional: if any relevant facts/arguments are discovered or overturned, the map (and any tentative answers) will always have to be revisited.
The good news:
Just caring whether the map is complete, and whether arguments are represented fairly, will go a long, long way towards being good at this.
Mapping allows collaborative research, even among people who disagree. Hypothetically, any people who value accuracy and fairness will ALL CONVERGE ON THE SAME MAP. So the burdens of research can be shared.
Often, the basic facts are NOT in dispute between different positions. So making a map is often a shorter process than it may seem upfront.
To summarize, and really stress: MAPPING. IS. CRITICAL. Prioritize the completeness and the process of the map. Good mapping is doable, and it's the only way to get usable 'information'.
4 — WHEN TO MAP
When you are ready to consider escalating a prompt/specific question into a mapping effort:
4-1 Ask yourself whether you care enough about your prompt/question to put a lot of effort into thoroughly mapping it.
4-2 If you don't care enough to do this:
State to yourself that you don't really care about this.
Return to "I don't know" for your own stance.
Re-bin the claim that prompted your question as 'emotional white noise'. Name the emotion(s) involved. (What was the 'real issue' that made you care about this claim?)
4-3 If yes, acknowledge that unless and until you've actually put in all the work needed to map the question, you still only have a question.
5 — GOOD MAPPING IN PRACTICE
5-1 Specify your question until it is 'researchable'. Always center on ONE specific question.
As you map one question, you are likely to generate more as you go, or have to make a sub-map in order to proceed. It's no problem to shift your question as you go, but always work on ONE question at a time, and be explicit about what that question is at any given moment.
If you move off a question because something else is easier or more urgent to answer, just make a note of the question you want to come back to. Keep a running list/connection web, so that you can map all related questions over time.
5-2 MAP THOROUGHLY.
Mapping recap: A 'map' is all the possible answers to a question, held in the same field of vision. Make the strongest possible version of each possible answer, gather all available evidence, analyze each argument in light of the evidence, and compare them to each other. A person supporting any possible answer should be able to look at the map, see their position represented, and say "yes, that". The map should remain open to all new inputs in future, and should be revisited every time something changes about the relevant information.
To re-stress: MAPPING. IS. CRITICAL. Prioritize the completeness and the process of the map. (CARE whether you have a good map, and that will go a long long way towards having one.)
PS: Since mapping is the crux of the whole 'real information' problem, it's the most 'academic' part. It's not that hard to do once you get the idea, but also — even if it always seems like a slog, it's important to not take shortcuts. If you're inclined to say SNOOZE, and NERD, and OH MY GOD THIS IS ABSURD while doing this part...yes. But push on through.
STEPS FOR MAPPING A QUESTION:
1) Specify your question until it is 'researchable'. Write out your specific question.
For ease, start with the most specific, binary, verifiable-claim question you can that speaks to your core concern.
2) Dump all possible answers to that question onto the field.
This includes answers currently being proposed, and hypothetically-possible answers you arrive at yourself.
3) Working 'answer' by 'answer':
Strip each answer down to its 'arguments' — explicate the logic chain as fully as possible.
Assess which facts are used to support each argument. Highlight these factual claims.
4) Survey the full set of 'facts' (verifiable claims) now on the map.
Which facts are NOT disputed between all answer sets?
Which are disputed?
5) Sub-map each disputed fact/claim.
If clear evidence overturns a 'fact', strike it from the field (no penalty on the argument that provided it).
NOTE: You can be extra stringent, if you want, about overturning a claim. But you SHOULD overturn in cases where, e.g., an eyewitness's report is the totality of the 'input' and they themselves retract it.
If, after research, multiple factual interpretations are still possible, highlight and annotate these claims accordingly.
Think about what deciding evidence would look like, if any.
If more evidence is ever relevant to deciding the question, return to update the map.
6) Survey the arguments on the map in light of the shared/disputed fact set:
Has the assessment of facts changed the strength of any of the arguments?
Did any argument rely totally on facts that are now overturned?
If redeeming evidence is possible, make a note of the "if X, then Y" condition.
If no redeeming evidence is possible, strike that 'answer'.
If an argument relied only partially on facts that are now overturned, re-explicate it in the strongest form possible.
If other evidence may be available, find it and repeat the steps from the top down to this point.
What is the individual assessment of each argument's logical chain?
If it is broken irrevocably, strike it.
If it can be rewritten in a stronger form, do so.
7) Survey the arguments on the map relative to each other, going argument by argument:
Which do all of the following: 1) account for all known verified facts; 2) speak directly to main logic/concerns of other positions; 3) are logically strong and self-consistent themselves? Make note of these favorably.
Which do ANY of the following: 1) fail to account for some of the known facts; 2) rely on facts that were overturned; 3) do not have verified facts (only unverified ones, if any); 4) do not speak to the main logic/concerns of other positions; 5) are logically poor (weak inductive, invalid deductive, or inconsistent internally)?
WARNINGS FOR THE HARDEST PARTS OF THIS PROCESS (from my perspective): separating specific questions; explicating full argument chains AS THEY WERE INTENDED; chasing down input data to make strongest version of each argument.
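For anyone who likes to see the shape of a thing all at once, here is a rough sketch of what a finished map might look like if you recorded it as a data structure. This is one illustration under my own assumptions, written in Python; the toolkit doesn't prescribe any format, and the names (Fact, Answer, QuestionMap) are invented for the sketch. A notebook, spreadsheet, or wall of index cards works just as well.

```python
from dataclasses import dataclass, field

@dataclass
class Fact:
    claim: str                   # a verifiable claim used to support an argument (step 3)
    disputed: bool = False       # disputed between answer sets? (step 4)
    overturned: bool = False     # struck after sub-mapping found clear counter-evidence (step 5)
    notes: str = ""              # annotations, e.g. what deciding evidence would look like

@dataclass
class Answer:
    statement: str               # one possible answer, in its strongest form (step 2)
    argument: list[str] = field(default_factory=list)  # the explicated logic chain (step 3)
    facts: list[Fact] = field(default_factory=list)    # the factual claims it leans on
    struck: bool = False         # removed in step 6 if it cannot be redeemed
    assessment: str = ""         # step 7 notes: accounts for all facts? speaks to other positions?

@dataclass
class QuestionMap:
    question: str                                        # ONE specific, researchable question (step 1)
    answers: list[Answer] = field(default_factory=list)  # ALL possible answers, side by side
    follow_ups: list[str] = field(default_factory=list)  # running list of related questions (5-1)
```

The sketch is only meant to show the shape: every answer sits on the same field, every argument is tied to explicit factual claims, and the disputed/overturned flags are exactly what you revisit whenever new information arrives.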
6 — CURATING SOURCES FOR EMOTIONAL WHITE NOISE (a mapping shortcut)
6-1 Spot-check your most frequently used sources for good/bad mapping.
Make note of which sources achieve complete mapping, and on which topics.
Make note of which sources present faulty/incomplete maps (any topic).
Re-upped rule: 'Atomize your judgment' should apply to sources. For example: you can consider a network → an individual reporter → an individual reporter on one topic → one article that person's done regarding a particular topic → different claims within that article (on a section, paragraph, sentence, or word level)...
RECAP OF MAJOR POINTS
Atomize your judgments
Default to "I don't know"
Bin every claim you hear into either 1) a prompt for a question or 2) emotional white noise
Note explicitly when your interest in a claim or question surges, and when it drops off
Be specific about what question you're answering (especially if/as it shifts)
Map thoroughly every time you want to answer a question
Look for and favor good mapping in your casual sources for info
BONUS CONCEPT: ‘ARATIONAL CORES’
Way, way down, at the very bottom of every 'argument' (i.e. each potential 'answer' provided to a map) is an arational belief. This is a belief that can't be justified any further by logic/factual support, and is simply asserted.
When you hit these beliefs, a rational approach to analyzing them is no longer appropriate. (This is why we use the word 'arational' instead of 'irrational': 'irrational' implies that rationality is the right framework, but is 'being done wrong'.) These beliefs could be thought of as our deepest values, and/or our deepest aspirations for both ourselves and the world: logic genuinely can't be used to support them or tear them down. These are the beliefs we simply have, and will act on. If we're given arguments about them, we will simply ignore those arguments.
Again, EVERY argument terminates in an arational belief. In a given argument, we call this the 'arational core' of that argument.