In trying to learn more about other methodologies that might offer useful processes, concerns, and concepts, I recently learned about “outcome switching.” COMPare: Tracking Switched Outcomes in Clinical Trials tracks outcome switching in clinical trials specifically, as explained in their “Methods”:
Before carrying out a clinical trial, all outcomes that will be measured (e.g. blood pressure after one year of treatment) should be pre-specified in a trial protocol, and on a clinical trial registry.
This is because if researchers measure lots of things, some of those things are likely to give a positive result by random chance (a false positive). A pre-specified outcome is much less likely to give a false-positive result.
Once the trial is complete, the trial report should then report all pre-specified outcomes. Where reported outcomes differ from those pre-specified, this must be declared in the report, along with an explanation of the timing and reason for the change. This ensures a fair picture of the trial results.
However, in reality, pre-specified outcomes are often left unreported, while outcomes that were not pre-specified are reported, without being declared as novel. This is an extremely common problem that distorts the evidence we use to make real-world clinical decisions.
I find this a very useful concept in relation to how we often talk about plans and assessment with SMART goals and the like (SMART being Specific, Measurable, Achievable, Realistic, and Time-bound). I often push back against things like SMART goals because much of my work is highly agile, iterative, and unknown. I’ve described my work, and heard others describe it, as jumping off the cliff and building my wings on the way down, constructing the building and laying flooring in order to walk on it, or designing the plane while flying it. SMART goals are difficult for me to construct in these cases, at least if I want them to be meaningful in terms of outcomes rather than the little parts that may or may not point to the whole.
That said, I don’t think I should allow myself to simply say SMART goals are dumb (anything done poorly can be dumb, but they and their process can be super useful), or to use the complexity of my work as an excuse or opt-out for assessment processes. My work is complex, and that means I need to do the work to make it communicable for optimal success. Viewing assessment as outcomes-as-specific rather than actions-as-specific, where I might vary my actions to reach consistently identified, pre-specified outcomes, is helpful for me in thinking about how I construct programmatic work as well as personal annual activity and assignment documents.
What other concepts are helpful in thinking about how we utilize and incorporate assessment into our work? I’m especially interested in this for new practices in work that is interdisciplinary and public, like digital scholarship.