I wish I could say it’s surprising that this entire Spectrum piece by Isabel Smith, Kate Tsiplova, and Wendy Ungar on "detecting a signal amid noise in autism early-intervention research" never once mentions the single most important question about such research.
The authors focus on “three key points that readers should look out for”:
Was there a suitable study design?
Did children have access to a range of behavioral interventions?
Were measures appropriate and sensitive enough to capture important information about services?
Missing here is any consideration of the moral or ethical value of the interventions themselves. As in: is the intervention being studied in fact not even in the best interests of the autistic person as an autistic person?
Before we even reach the authors’ “three key points,” we need to address that one. If the interventions amount to nothing more than suppressing autistic behaviors, increasing masking, or simply teaching neurotypical social skills (rather than, say, also teaching self-advocacy so that neurotypical people learn to meet autistics at least halfway), none of these other questions matter.
One of the problems with autism early interventions is that they’re often either not truly evidence-based, or the cited evidence amounts to little more than, “The autistic subject was successfully made to behave more neurotypically.”
In their piece, Smith et al. seem, to a degree, to be offering cover for research that doesn’t show evidence of an intervention’s success, suggesting that perhaps the research design simply wasn’t sensitive enough to detect that success. Further, they appear to define such success in terms of “autism characteristics.” As they write: “All of this means that studies of interventions must be designed so that children who receive the treatment can be compared with those who don’t, with regard to autism characteristics as well as personal and family demographics. Studies must also have a large enough sample and long enough study period to detect an effect if one is present.”
They needn’t have framed it the way they did. At the very least, they could have framed it simply as an inability to judge an intervention’s effectiveness either way absent certain research-design criteria. It’s weird instead to emphasize, repeatedly, that “It’s inappropriate to assume that failure to detect a treatment effect necessarily indicates an ineffective treatment.”
Certainly that framing provides more than enough breathing room for parents intent on eradicating their child’s “autism characteristics” to keep trying interventions that lack sufficient evidence.