Just how ‘all that’ are we?

Jordana Cepelewicz’s profile for Quanta Magazine of new discoveries about a tiny, multicellular animal is just the sort of life sciences writing that I’m into: it details findings that demonstrate, in “lower” animals, capacities that we tend to reserve in our thinking and our assumptions for “higher” animals.

In this case, the discovery that through purely biomechanical processes, a placozoan named Trichoplax adhaerens “moves and responds to its environment with agility and seeming purpose, yet it has no neurons or muscles to coordinate its movements”.

For some time now I’ve been fascinated with science that I clumsily (and inaccurately) categorize as We Ain’t All That science. Basically, the idea that just because we evolved complex nervous systems topped by mind-bogglingly complicated brains, it doesn’t necessarily mean that the sorts of things we do are entirely unique. We might do them in some unique ways, but ours aren’t necessarily the only ways nature figured out how to do the same, similar, or same-seeming things.

In a trio of preprints totaling more than 100 pages — posted simultaneously on the arxiv.org server last year — he and Bull showed that the behavior of Trichoplax could be described entirely in the language of physics and dynamical systems. Mechanical interactions that began at the level of a single cilium, and then multiplied over millions of cells and extended to higher levels of structure, fully explained the coordinated locomotion of the entire animal. The organism doesn’t “choose” what to do. Instead, the horde of individual cilia simply moves — and the animal as a whole performs as though it is being directed by a nervous system. The researchers even showed that the cilia’s dynamics exhibit properties that are commonly seen as distinctive hallmarks of neurons.

Cepelewicz makes reference to slime molds, which lately have been something of the go-to example for I guess what you could call “embodied cognition”. (Sidenote: I once compared the way slime molds offload problem solving to their environment through chemical trails and the like to the way I offload memory tasks like reminders and appointments to my cell phone.) I see in my Pocket archive a couple of good articles on slime molds, including Lacy M. Johnson writing for Orion Magazine.

Throughout their lives, myxomycetes only ever exist as a single cell, inside which the cytoplasm always flows—out to its extremities, back to the center. When it encounters something it likes, such as oatmeal, the cytoplasm pulsates more quickly. If it finds something it dislikes, like salt, quinine, bright light, cold, or caffeine, it pulsates more slowly and moves its cytoplasm away (though it can choose to overcome these preferences if it means survival). In one remarkable study published in Science, Japanese researchers created a model of the Tokyo metropolitan area using oat flakes to represent population centers, and found that Physarum polycephalum configured itself into a near replica of the famously intuitive Tokyo rail system. In another experiment, scientists blasted a specimen with cold air at regular intervals, and found that it learned to expect the blast, and would retract in anticipation. It can solve mazes in pursuit of a single oat flake, and later, can recall the path it took to reach it. More remarkable still, a slime mold can grow indefinitely in its plasmodial stage. As long as it has an adequate food supply and is comfortable in its environment, it doesn’t age and it doesn’t die.

Ian McCluskey, writing for OPB News, described a bit of the paces researchers have put slime molds through.

The Japanese researchers tried another experiment. They knew that slime molds avoid unfavorable environmental conditions, such as intense light or salt. They created an unfavorable condition in their lab with pulses from an oscillator as a slime mold moved toward food. As expected, the slime mold reacted by slowing its advance. Then the researchers had the slime mold do it again, but this time didn’t create the pulses. Yet, the slime mold slowed down in the same place the unfavorable condition had been, as if anticipating.

[…]

Dussutour wondered if slime molds could learn to tolerate, even adapt, to uncomfortable conditions. She forced slime molds to cross over various substances they disliked in order to reach food. The first time, it took the slime mold 10 hours to reach the food. After a few days, the slime molds began to pick up their pace. After a few more days, they cruised over them at regular slime mold speed. They had apparently “tuned out” the distraction placed in their path, or what scientists call “habituation,” which is a form of learning.

Slime molds are crazy, but as the Quanta piece shows, they certainly aren’t the only crazy thing out there that seems to exhibit a kind of, as I say, embodied cognition. “The behavior of slime molds seemed inexplicable,” writes McCluskey. “Was it merely a biological stimulus-response mechanism, or some form of intelligence?”

I think this sort of thing grabs me in part because it makes me wonder just to what degree our own behavior is not all we crack it up to be, and makes me think of things like Dan Falk writing for Aeon about consciousness.

Another problem centres on what consciousness actually does. As a philosopher would put it, what causal role does it play? Does it cause matter to move about? Or to put it another way: does consciousness impact behaviour? By Chalmers’s account, the zombie is supposed to behave exactly like us – even though we have conscious experiences and the zombie doesn’t. The implication seems to be that conscious experiences play no causal role in the world. But in that case, why even postulate its existence? The usual response is that consciousness is something we immediately experience; we can’t be wrong when we claim to be conscious. But when we reach for a glass of water, aren’t we doing so because of the conscious experience of being thirsty? If we are, then consciousness does, in fact, seem to impact behaviour; and if we aren’t, then consciousness seems to be nothing more than what philosophers call an epiphenomenon, a kind of secondary phenomenon. As Hanrahan puts it, consciousness would be like the humming sound that your computer makes – it’s always there when the computer is on, but it has no bearing on what the machine is actually computing.

I’ll admit that with no particular sense of the evidence base in any direction, I’m of the personality type (for lack of a better phrase) to be inclined toward this idea of consciousness as epiphenomenon. I feel like there’s a degree to which our sense of ourselves as animals might be overinflated by our compulsion to tell stories about everything. I do, in fact, think that we probably didn’t “decide”, per se, to reach for the glass of water or to go to brunch the other day instead of having a bowl of cereal at home.

Which is where it’s worth bringing in Mark Balaguer writing for The MIT Press Reader about free will, adapted from his recent book on the subject.

Suppose that you’re in an ice cream parlor, waiting in line, trying to decide whether to order chocolate or vanilla ice cream. And suppose that when you get to the front of the line, you decide to order chocolate. Was this choice a product of your free will? Well, if determinism is true, then your choice was completely caused by prior events. The immediate causes of the decision were neural events that occurred in your brain just prior to your choice. […]

I’m not so sure I am a determinist, per se, but when I read things like this I really do just sort of feel the pull of a gut instinct that says neural events happen and then we tell ourselves a story about them after the fact. It’s not so much for me a matter of “free will” versus “determinism” as it’s a matter of our self, whatever that is, being far more (than we’d like to believe) a narrator than an author.

I think that once we evolved the ability to tell stories—it’s a surefire advantage when we can turn to the early hominid next to us and tell them we’ve figured out that every time the mammoth herd visits the local watering hole, that small one strays behind for a bit afterward and we can probably take him—we can’t help but tell stories about very nearly everything, and of course that includes telling stories about what we ourselves do, whether or not they are true.

We tell ourselves and each other a lot of stories, and stories to a large degree are us seeing—or making—patterns out of things. QAnon and Richard Hoagland might tell stories about societal or governmental behavior that sure seem to be picking up on patterns to reveal a hidden truth, but in fact they are complete bullshit. Why not so with what we tell ourselves about ourselves?

I think that in a very many ways, we are full of both ourselves and shit. Nature seems to have found an awful lot of ways to do complex and complicated things with “lesser” brains and nervous systems and even with no brains or nervous systems at all. When we write rapturously about octopus intelligence or whale communication, I think we feel as if we somehow are “elevating” them, whereas I feel like maybe the lesson is that we are not as much of a “higher life form” as we tell ourselves.

As I say, I guess I’m just the personality type to believe that much of what we find so great about ourselves might be a post-hoc fabrication. This is not even dime-store philosophy here: it’s just (yes) the story to which I’m the most drawn, maybe because I’m fond of us being put in our place, which we so often aren’t.

Tiny animals “decide” where to go, fungi “solve” mazes, and forests “talk” to themselves. I’m not entirely sure why those behaviors deserve a quotation quarantine while our own do not, except that it’s the story we prefer to tell.

The real impairment.

For the most part I’m looking to avoid single-source reaction posts here, but I’ve been struck by an eighteen-year-old piece by Sunny Taylor on impairment, disability, and work. It came to my attention from that piece in The Baffler about burnout that I addressed a couple days ago. It’s hard to tease out just one or two things but her discussion of institutionalization leapt out at me.

Crippled and elderly people have an especially precarious relationship to the machine that is production and consumption. People work hard, they age, their efficiency inevitably lessens and, unless they are fortunate enough to have some savings stashed away, they are too often put in nursing homes where their new value will be as “beds.” As Marta Russell has astutely pointed out, the institutionalization of disabled people “evolved from the cold realization that people with disabilities could be commodified…People with disabilities are ‘worth’ more to the Gross Domestic Product when occupying a bed in an institution than when they’re living in their own homes.”

This really is striking. Having decided that someone’s value and worth isn’t inherent and natural but arises only from being economically useful, we insist that when they are disabled they must be made to be of productive economic use to someone. So we warehouse them for the financial benefit of institutions. Taylor points out that it’s more expensive overall to pay for the services of an institutionalized person than for one living independently, which means we’re more interested as a society in transferring wealth to the institutionalized care industry than in letting people live on their own but “on the dole”.

Taylor also provides one of the most useful and explicative outlines I’ve come across of the social model of disability by drawing the distinction between “impairment” and “disability”.

Disability theorists make this clear by making a subtle but significant distinction between disability and impairment. The state of being mentally or physically challenged is what they term being impaired; with impairment comes personal challenges and drawbacks in terms of mental processes and physical mobility. To be impaired is to be missing a limb or born with a birth defect; it is a state of embodiment. Being impaired is hard. Without a doubt, it makes things harder than if one is not impaired. However, more often than not, the individual accommodates for this impairment and adapts to the best of their ability. For example, I am impaired by arthrogryposis, which limits the use of my arms, but I make up for this in many ways by using my mouth.

Disability, in contrast, is the political and social repression of impaired people. This is accomplished by making them economically and socially isolated. Disabled people have limited housing options, are socially and culturally ostracized, and have very few career opportunities. The disabled community argues that these disadvantages are thus not due to impairment by its nature, but due to a cultural aversion to impairment, a lack of productive opportunity in the current economy for disabled people, and the multi-billion dollar industry that houses and “cares” for the disabled population that has developed as a consequence of this economic disenfranchisement. This argument is known as the social model of disability. Disablement is a political state and not a personal one and thus needs to be addressed as a civil rights issue.

I don’t want to spend too much time on this here, because I think there’s not much more to say about it. What I want to pull out of the above is this idea of our “cultural aversion to impairment”. It’s as if the presumptive idleness of the disabled puts the lie to the mythology telling us that working hard will get you ahead (despite the literal everyday in-our-faces evidence that this simply is not true for the vast majority of people), and we just can’t bring ourselves to acknowledge that lie.

Too many of us, I assume, have so deeply internalized the intrinsic field that, rather than seeing the predicament of the disabled as evidence that there’s something fundamentally flawed about the way we’ve allowed things to be constructed around us (something that, if fixed, would benefit everyone), we instead dig in our heels and ask why someone else should get “something for nothing” when our own lives aren’t a picnic either.

The point is that it shouldn’t be this hard for anyone, and if we properly addressed the way we misconsider and misconstrue impairment and disability, and the degree to which the impaired and disabled might need assistance to live dignified and independent lives (and deserve that assistance because our human worth isn’t calculated by our contribution to the GDP), we’d discover that maybe the rest of us shouldn’t have to work so hard to live such lives, either.

Than to fade away.

In the post with which I rebooted, once again, this blog, I talked about how an offhand remark by someone I know, as Ukraine was being invaded, that “going from Trump to Covid to this is a lot” made me realize once more that I’ve been in a state of hypervigilance since 2015, while those without my privilege set have been in one for much longer.

It’d already been rolling around in my head a bit because of this Kelsey McKinney piece for Defector about how after two years of the pandemic going “back to normal” simply wasn’t enough, even though it probably was what the powers-that-be were seeking. McKinney made me realize that it wasn’t just that I was suffering a resurgent autistic burnout (although I was, and anyway it also had never fully abated), but that everything that was happening made you want to wave your arms around explaining that what was wrong was, you know, all of this.

This notion of a ceaseless hypervigilance is in the air, as evidenced by today’s Anne Helen Petersen newsletter.

In many cases — including the current one — we don’t actually leave the previous crisis behind; it just wanes in urgency, with a promise that it will certainly wax again. It demands a sort of cyclical vigilance — and it’s been the norm for the last two pandemic years, with their ongoing waves of high-alert anxiety, but it’s also characteristic of the ongoing climate catastrophe, of the erosion of voting rights, of the threats to trans kids and the families and health care professionals and educators who affirm them, of outbursts of horrific racist violence, of school shootings, of giant steps back when it comes to women’s bodily autonomy. It happens, then it happens again, then it just keeps happening.

Petersen is careful, as we must try to be, to underscore that people less privileged than she (or I) are not only now waking up into such a state of hypervigilance, and also runs down the helpful and distressing list of “all of this” type of things that in fact were happening before Trump was running for president.

It’s the way things happen sometimes, but I can’t help but notice that this hypervigilance discourse simply cannot be divorced from the burnout discourse, and not just because I’ve once again linked Petersen. All of this is connected, and in precisely the sorts of ways, I argue, that explode the lie that burnout somehow purely is an occupational phenomenon.

(To some degree, as highlighted by Whizy Kim last year for Refinery29, it was only when more privileged people started to experience some degree of chronic stress over the state of things that a more visible conversation started.)

Steven David Hitchcock, writing for The Conversation back when Petersen’s book about “millennial burnout” came out, notes that in reality “medical experts are starting to see burnout as a society-wide issue” and “mental health groups have identified burnout as a product of long-term, or chronic, stress”—and “not necessarily a product of the workplace specifically”.

Kim’s piece focuses on work but, like many such pieces on occupational burnout, can’t help but use language that, when tweaked just a bit (illustrated here with bracketed words), yields the idea that burnout isn’t just about work.

Maybe a telltale sign of burnout is when you start thinking in such extreme terms, ruminating on life and death as it pertains to your [life] satisfaction. If you’re wondering what would happen if you died tomorrow, and weighing how deeply your [friends and family] would feel the loss of you, you’re not just tired. You’re preoccupied with existential questions related to meaning and purpose. And they’re all related to your [life].

Burnout is not depression, per se, although there’s a degree of overlap in symptomatology and certainly they can co-occur. As I posited yesterday, effectively burnout is neurasthenia, in which your nervous system and your psyche simply overload like an electrical circuit with too much plugged into it. Electrical wiring is rated for its load capacity. Humans don’t have a mathematical calculation to tell us when our load is too high, but we do have circuit breakers that trip and shut things down. We have burnout.

Shannon Palus, writing for Slate in 2019, about how burnout is not just a millennial thing, tried to zoom the picture out a bit.

Ultimately, burnout isn’t a “millennial condition,” as Petersen argues. It’s the condition of being human in a capitalist society. The specifics may be new (Slack allows for nonstop work, Instagram makes the fruits of your work feel small and dull), but I have trouble believing that my great-grandmother, working on a farm in Pennsylvania with a dozen-ish kids, never had tiny to-do list items that rolled over from one week to the next—what Petersen dubs “errand paralysis.” I have trouble believing she never felt the day-to-day of running a household went underrecognized in those years way before The Feminine Mystique sought to blow the lid off of unrecognized labor.

Emphasis added, although I think there is a historical difference between our great-grandparents and us: they were not being bombarded with updates on their world, whether those updates personally impacted them or not, twenty-four hours a day, seven days a week, three-hundred and sixty-five days a year. While for a great many people the work world arguably improved, the size of the world itself also dramatically increased, as did its access to our capacity for attention.

Work can cause burnout on its own, but unless we happen to live in the world of the Apple TV+ series Severance, work also does not exist separate and completely apart from everything else trying to plug into our load-limited circuitry.

The condition of being human in a (neoliberal) capitalist society is one, per L. M. Sacasas citing Jacques Ellul, which “is constructed in such a way that the human being qua human being becomes an impediment, a liability to the functioning of the system”. The reason I talk about an intrinsic field guiding us to conform and perform is because it’s systemic in a way that transcends work—or, if you like, it turns all of our waking moments (and perhaps our sleeping ones, too) into a kind of work.

Not to strain credulity by haphazardly cobbling together a theory of everything, but I think this all is partly why Arthur C. Brooks’s myopic celebration of seeking to be an outsider rankled me so: the intrinsic field already is a kind of self-othering force, alienating us from ourselves “in such a way that the human being qua human being becomes an impediment”.

Outsiderness isn’t a vacation. We live our every day and the everyday under the chronic stress of a “technique” which—falsely—entices us with the prospect, finally, of getting to be inside, forcing us to become outsiders even from our innate sense of our own value and worth as individual human beings.

McKinney’s piece, the one that made me gesture flailingly at everything, is a reminder that getting back to normal might be the worst thing we possibly could do. It’s mathematician and astrophysicist Tricia McMillan telling passengers, “We have normality. I repeat, we have normality. Anything you still can’t cope with is therefore your own problem.”

Why wouldn’t we burn out? Why shouldn’t we?

What’s of use.

As I established earlier, I’m interested in the conversation around so-called occupational burnout primarily because I’m sensitive to whether it will elevate or obscure the conversation about autistic burnout, but also because I worry that the focus on work belies that our culture generally breeds a disconnect between the mythology of what we think is good for us and what actually is good for us.

To repeat myself: our society’s innate and implicit peer pressure to be “productive”—to conform and perform—results in a fundamental disquiet that reaches across populations and surrounds far more than our work lives.

Over at The Baffler, Charlie Tyson examines how burnout “became the buzzword of the moment”, using as a jumping-off point Jonathan Malesic’s recent book, The End of Burnout. Tyson notes that many popular “journalistic treatments of burnout […] tend to emphasize the heroic exertions of the burned-out worker”. This not only valorizes acceding to the pressure to conform and perform but also continues narrowly to focus only on occupational burnout, failing to grapple with whether or not it’s merely an aspect of a much more structural psychological malady.

The psychologist Christina Maslach, a foundational figure in burnout research—the Maslach Burnout Inventory is the standard burnout assessment—sees burnout as having three components: exhaustion; cynicism or depersonalization (detectable in doctors, for example, who see their patients as “problems” to be solved, rather than people to be treated); and a sense of ineffectiveness or futility. Exhaustion is easy to brag about, inefficacy less so. Accounts of the desperate worker as labor-hero ignore the important fact that burnout impairs your ability to do your job. A “precise diagnostic checklist” for burnout, Malesic writes, would curtail loose claims of fashionable exhaustion, while helping people who suffer from burnout seek medical treatment.

Last time, I talked a bit about how a list of burnout symptoms accurately reflected my four decades living unknowingly as an autistic person, and certainly Maslach’s trinity of “exhaustion; cynicism or depersonalization […]; and a sense of ineffectiveness or futility” perfectly encapsulates that life experience. It’s my experience with autistic burnout which informs my engagement with all discussions of occupational burnout and which leads me to consider that they each are but one tentacle of a deeper Eldritch horror undulating beneath society as a whole.

Continuing to focus on work, Tyson writes, “If burnout stems, as Malesic says, from the discrepancy between the ideal and the real, then burnout is punishment for idealists.” A framing of the ideal versus the real might make sense when restricting ourselves to talking about work, but I question its fit when talking about that intrinsic field pushing us to conform and perform more generally.

It isn’t idealism to want to be healthy. It’s a natural drive toward claiming a natural right. This is one of the things we don’t notice if we limit our discussions of burnout to the world of work.

Tyson is right that much of the burnout discourse centers an elite for whom it “resonates with affluent professionals who fetishize overwork”, but I think he miscues when, citing Malesic, he compares burnout (which he deems “a transitional term”) to an earlier “historical parallel”: neurasthenia. The miscue isn’t in the comparison, but in the blithe dismissal of the utility of “burnout” because of its elite connotations, like neurasthenia before it.

He quotes American Nervousness, a 19th-century work on neurasthenia by George M. Beard, which compared “the human nervous system to an electrical circuit”. It’s worth taking a minute to read the entirety of the long, somewhat repetitious passage from which Tyson quotes.

Edison’s electric light is now sufficiently advanced in an experimental direction to give us the best possible illustration of the effects of modern civilization on the nervous system. An electric machine of definite horse-power, situated at some central point, is to supply the electricity needed to run a certain number of lamps—say one thousand, more or less. If an extra number of lamps should be interposed in the circuit, then the power of the engine must be increased; else the light of the lamps would be decreased, or give out. This has been mathematically calculated, so that it is known, or believed to be known, by those in charge, just how much increase of horse-power is needed to each increase in the number of lamps. In all the calculations, however widely they may differ, it is assumed that the force supplied by any central machine is limited, and cannot be pushed beyond a certain point; and if the number of lamps interposed in the circuit be increased, there must be a corresponding increase in the force of the machine. The nervous system of man is the centre of the nerve-force supplying all the organs of the body. Like the steam engine, its force is limited, although it cannot be mathematically measured—and unlike the steam engine, varies in the amount of force with the food, the state of health and external conditions, varies with age, nutrition, occupation, and numberless factors. 
The force in this nervous system can, therefore, be increased or diminished by good or evil influences, medical or hygiene, or by the natural evolutions—growth, disease and decline; but none the less it is limited; and when new functions are interposed in the circuit, as modern civilization is constantly requiring us to do, there comes a period, sooner or later, varying in different individuals, and at different times of life, when the amount of force is insufficient to keep all the lamps actively burning; those that are weakest go out entirely, or, as more frequently happens, burn faint and feebly—they do not expire, but give an insufficient and unstable light—this is the philosophy of modern nervousness.

Six months ago, I mentioned to my therapist that I’d thought of a new metaphor to describe my autistic life. I’ve been trying to find useful metaphors in part because at some point I am going to have to convince the nation’s disability system that I am, in fact, disabled. I’d been using the metaphor of having built the foundation for a house (my ability to live independently) but my inability to build anything atop it (for example, employment and so economic self-sufficiency), lest the foundation collapse.

The new metaphor I’d hit upon for my autistic experience and the limitations it places upon me was that of a house’s electrical wiring.

What I mean by all of this is not that we need to pay more attention to me, or to autistic burnout, but that we need to release the burnout discourse from the shackles of our (pre)occupation with work. What’s really happening is both more pressing and more basic than simply re-engineering the way we think about our jobs. It’s (excuse me) baffling that Tyson calls upon the spirit of neurasthenia only to undermine burnout merely as an elite phenomenon rather than to recognize in each the signs of a more fundamentally unhealthy one.

The body, both politic and personal, suffers from always teetering on the edge of an overload. Some of us keep falling in. Defining the value of our lives only by how little we deviate from some normative standard of how much use we can be cannot help but keep us there.

You won’t know it if you’re doing it right.

Joe Pinsker writes for The Atlantic that “the narrative structure of COVID—defined by its false endings, exhausting duration, and inscrutable villain, a virus—would be unwatchable” as a movie, notwithstanding, per psychology professor Monisha Pasupathi, “a taste for the avant-garde”.

The coronavirus’s volatile arc has thwarted a basic human impulse to storify reality—instinctively, people tend to try to make sense of events in the world and in their lives by mapping them onto a narrative. If we struggle to do that, researchers who study the psychology of narratives told me, a number of unpleasant consequences might result: stress, anxiety, depression, a sense of fatalism, and, as one expert put it, “feeling kind of crummy.”

This must be especially true when this unnarratable experience also has been serving as backdrop to accumulating traumas (the pandemic, but also Trump; the pandemic, but also police shootings of Black people; the pandemic, but also the invasion of Ukraine by a nuclear power). “If dealing with an ongoing pandemic and the rippling effects of an overseas war seems like too much,” Alexandra Frost writes for Popular Science, “it’s because it is.” Just so.

“When you want reality to match a story line you’re accustomed to but reality doesn’t comply,” notes Pinsker, “that’s stressful.” He adds that Angus Fletcher, a professor of story science (a thing I did not know existed), “said that this idea—that we’ve been deprived of the life story we wanted to be living—stresses us out because it implies a loss of authorship over our personal narrative.”

Which brings me to Megan Marz writing for Real Life about the purported “decline of ‘storytelling’ or ‘narrative’ itself” and a recollection of reading blogs in the mid-2000s.

I don’t really remember the specifics of their posts. I certainly don’t remember the overarching plots, because there were none. There were voices and there was a sense of ongoingness. […] The action took place in real time, in the world I knew, and it wasn’t always “action.” […] I didn’t know where it was going, and they didn’t know either. Reading a blog wasn’t something you could do over a weekend, like reading a novel. It was part of your daily life, until it wasn’t. 

Blogs, for Marz, “appeared to leak literary expression back into the daily flow, making everyday life, for a minute here and there, feel as meaningful as art”. For a time, didn’t the onset of the pandemic have a similar effect?

Marz also talks about “the mass of retrievable data” about one’s life and the world, and whether they only “could be meaningfully apprehended […] through […] database logic”. This tension between “topic” and “story” somewhat defines what eventually propelled me away from social media, whose feeds I’ve elsewhere described as the overwhelm of a database as compared to the structure of narrative. Surely, per both Pinsker and Frost, this only has exacerbated the mental health stresses of life during the pandemic.

When so much of what you’re taking in about a continuous event comes “surrounded by jokes, lists, random thoughts, impromptu book and movie reviews, and recipes” from a context-collapsed mass of people (things that for Marz, and for me, made a more narrative type of sense on the blog of an individual), it’s no wonder we can’t seem to form a cohesive, comprehensible pandemic narrative.

Citing psychology professor Dan McAdams, Pinsker notes “that people, and perhaps Americans especially, have a strong desire for, even an expectation of, ‘redemptive’ narratives […] but the pandemic’s story has withheld that positive resolution and refused to end, let alone end well.”

McAdams thinks that instead of grasping for a redemptive story to tell about the pandemic overall, we might be more at peace if we select a frame that’s humble and realistic. “I like this idea that we’re going to have to ‘learn to live with the virus.’ I think that’s right—it’s not like a war that’s going to end and we’re the victors,” he said. Instead, we can acknowledge “that there will always be adversity and that we need to be clear-eyed about that, and learn to manage adversity when it cannot be fully overcome.” Accepting that story, even if it’s bittersweet, beats holding out for a Hollywood ending that will never arrive.

Which brings me, at the last, to Tanya Lewis writing at Scientific American, who cautions that the answer to when the pandemic is over “may lie more in sociology than epidemiology”.

“I believe that pandemics end partially because humans declare them at an end,” says Marion Dorsey, an associate professor of history at the University of New Hampshire, who studies past pandemics, including the devastating 1918 influenza pandemic. […] “Every time people walk into stores without masks or even just walk into stores for pleasure, they’re indicating they think the pandemic is winding down, if not over,” Dorsey says. Whether or not there is an official declaration of some kind, “I don’t think anything really has a meaning until, as a society…, we act as if it is.”

Living in a pandemic isn’t something you can do over a weekend, like reading a novel. It’s part of your daily life, until it isn’t.

Intrinsic field subtractor.

It didn’t occur to me until later, but my thoughts about the burnout discourse could have used a reference to L. M. Sacasas and his observations late last year, expressly citing “the writing of Jonathan Malesic and Anne Helen Petersen”, around the idea that you can’t optimize for rest.

This is yet another example of the pattern I sought to identify in a recent installment: the human-built world is not built for humans. In that essay, I was chiefly riffing on Illich, who argued that “contemporary man attempts to create the world in his image, to build a totally man-made environment, and then discovers that he can do so only on the condition of constantly remaking himself to fit it.”

Illich is echoing the earlier work of the French polymath Jacques Ellul, to whom Illich acknowledged his debt in a 1994 talk I’ve cited frequently. In his best known book, The Technological Society, Ellul argued that by the early 20th century Western societies had become structurally inhospitable to human beings because technique had become their ordering principle. These days I find it helpful to gloss what technique meant for Ellul as the tyrannical imperative to optimize everything.

So, recall Petersen’s observation about the robot being the ideal worker. It’s a remarkably useful illustration of Ellul’s thesis. It’s not that any one technology has disordered the human experience of work. Rather, it’s that technique, the ruthless pursuit of efficiency or optimization, as an ordering principle has determined how specific technologies and protocols are to be developed and integrated into the work environment. The resulting system, reflecting the imperatives of technique, is constructed in such a way that the human being qua human being becomes an impediment, a liability to the functioning of the system. He or she must become mechanical in their performance in order to fit the needs of the system, be it a warehouse floor or a byzantine bureaucracy. It’s the Taylorite fantasy of scientific management now abetted by a vastly superior technical apparatus. […]

Emphasis added (it’s the section, in fact, I highlighted when I sent Sacasas an email about autistic burnout) because that right there is both aspect and avatar of our society’s implicit and innate peer pressure to conform and perform. It surrounds us always, not just at work; a kind of intrinsic field permeates our everyday.

He quotes Ellul:

The human being is ill at ease in this strange new environment, and the tension demanded of him weighs heavily on his life and being. […] But the new technological society has foresight and ability enough to anticipate these human reactions. It has undertaken, with the help of techniques of every kind, to make supportable what was not previously so, and not, indeed, by modifying anything in man’s environment but by taking action upon man himself.

That’s the intrinsic field working. This idea, certainly, that it’s not our environment that must change but us is something with which actually-autistic people (and people of color, and gay people, and trans people, and et cetera, nearly ad infinitum) are intimately familiar.

When I use terms like “implicit”, “innate”, and “intrinsic”, it runs the risk of communicating that this peer pressure somehow is a natural force at work in our lives. This is not the case. It’s entirely a construct of the people and systems which shape the society in which we live. It’s entirely an artifice, and like any such can be undone and remade.

Long Covid to the rescue?

I’m too wrapped up, I think, in my own experience of being autistic to know exactly where I come down on Nancy Doyle’s depiction of Long Covid as “acquired neurodivergence”. I know I’m just sort of automatically inclined toward irritation, but something sure seems to be happening to people’s brains.

A team of doctors working in the UK were able to analyse before / after neuroimages from a reasonably large cohort of 785 adults, of whom 401 had received a positive diagnosis for covid-19. These were compared with the 384 who had not. Even after the 15 severe (hospitalized) cases were removed, there were still noticeable shrinkages for the covid-positive individuals in areas affecting the processing of smell and taste, as well as an overall decrease in brain volume of 2%. The team also found evidence of significant cognitive decline for the covid group.

(I should also say that Doyle’s piece literally is the very first place I’ve ever seen anyone discuss “brain fog” as a problem with executive function. A few quick searches show this isn’t an uncommon way to talk about brain fog, but it never seems to make it into general reporting. I’ve never considered the two to be related, but now it feels like it should always be talked about this way. Seemingly everyone at this point grasps “brain fog”; no one ever knows what “executive function” means.)

Long Covid reporting is all the rage right now, I guess intentionally coincident with the “end” of most pandemic restrictions. Just in the past couple of weeks, I’ve seen six stories about Long Covid, not including Doyle’s; I’ve got one more in my queue (from Katherine J. Wu at The Atlantic, which I’m anxious to read).

To be perfectly clear: it’s good that attention is being paid to Long Covid, but I haven’t yet felt settled in how I feel about it. Will the attention paid to it yield results for people who’ve been dealing with analogous conditions all along, such as people dealing with chronic pain or chronic fatigue, or autistic people like me? Or will Long Covid suck all the air out of the room, leaving these others, again—still—behind?

These are ugly thoughts, but we have them because our society is structured in such a way as to place us all too often in a zero-sum game. Attention and resources for thee means less attention and fewer resources, if any, for me.

My thoughts about Long Covid are complicated partly because many of the symptoms are things I deal with as a matter of routine (although possibly less severely than many of the Long Covid cases that make it into all this reporting), and partly because I’m absolutely certain that at some point I had a mild or asymptomatic infection and Long Covid hasn’t been limited to people with advanced or severe cases.

I keep wondering if my cognitive struggles over the past two years have been the natural result of an already struggling autistic brain being in a global pandemic, or have they been the result of a Covid infection? Really, if the latter it wouldn’t just be Covid: it would be a one-two punch of autism and Covid.

There’s all sorts of talk about Long Covid and the potential for increased risk of dementia. As an autistic person, I already carry an increased risk of dementia. How many increased risks can pile atop one another before I have to start thinking about whether or not I’m rapidly going to go downhill, and when?

Will all these people struck with Long Covid get disability benefits, when I know I’m in for a long, uphill struggle to get my own due? Or will Long Covid be the rising, salving tide that lifts all our long-suffering boats?

Stay gold, Ponyboy.

In retrospect, it should have been glaringly obvious what was nagging at me about Arthur C. Brooks’ paean to becoming an outsider: only someone whom normative society does not consider already to be an outsider can contemplate the idea of freely choosing to become one. It’s not that there are no benefits to being an outsider (especially when the relevant insides, whatever they may be, are rotten), it’s that being an outsider is not some kind of rejuvenating, spiritual idyll.

Brooks does take a paragraph out of his rhapsody in order briefly to glance at this flip side of being an unintentional or circumstantial outsider.

Outsiders do tend to face particular genres of hardships, especially distrust by insiders. Despite the biblical injunction “Do not oppress a foreigner,” even believers often disregard friendliness in favor of tribal instinct when it comes to immigrants. You don’t have to move to a new place to feel the ill effects. People at the margins of society, by virtue of the language they speak or the lifestyle they choose, often bear the brunt of hostility. Joseph Stalin, for example, felt a particular animosity for the people he and his supporters called “rootless cosmopolitans”—generally, Jewish intellectuals, who he considered to live outside of mainstream Soviet society despite the fact that they lived in Soviet cities.

(“Particular animosity” here a bit underplays the concerted campaign to harass, fire, ban, and kill said rootless cosmopolitans, which included among other things a fabricated accusation resulting in the dismissal, arrest, and torture of a number of doctors, and the execution of a dozen poets. I only bring this up because I suspect that other readers might, like me, have had only the loosest, most general idea of Stalin’s actions and Brooks doesn’t bother to fill them in.)

We’re meant to be convinced, in part, by Brooks’ own experience, which he doesn’t relate until the end.

In candor, I’m approaching this topic with a fair amount of bias. Being an outsider early in my adulthood was the most positive experience I have ever had. At 25, I moved to a foreign country where I didn’t speak a word of the language, and knew not one soul save for a woman I hoped to marry, but who spoke little English. It was brutal, but life-changing in the best way. After a few years, I had lost my fear of new things, whether it was an unfamiliar language, working with strangers, new love, or a community hostile to foreigners.

I’m going to go out on a limb and say that however personally challenging this might have been, moving to Spain when you’re twenty-five is not quite the same level or kind of outsiderness experienced by any number of less privileged people and populations even when they never stray from home. However hard it might have been to know “not one soul” in Spain (actually, according to Wikipedia, he was “the associate principal French hornist with the City Orchestra of Barcelona”), his path led him not only to the American Enterprise Institute, but to Harvard’s Kennedy and Business schools, and to the pages of The Atlantic.

Not quite the life of most outsiders. Outsiderness isn’t some sort of life hack or wellness strategy. True outsiderness is an almost complete lack of both power and the opportunity or invitation to access it—something even more foreign to Arthur C. Brooks than the city of Barcelona.

Abort, retry, ignore, fail?

In an incisive newsletter edition about cancel culture, Charlie Jane Anders says a thing that had it come just a day or two earlier would have made it into the post with which I restarted this blog, about living in an age of hypervigilance.

It turns out that living through a plague, a barbaric war, a slow slide into climate apocalypse and an ongoing attempt to subvert our democracy is fucking stressful. Who knew? People are on edge, and it’s harder and harder to believe that we can do anything to fix these messes before it’s too late. Traumatized, anxious people don’t always handle provocation well. Shocking, I know.

I don’t especially want to rehash any arguments about cancel culture itself, and I don’t especially have any quibble with what Anders writes about it. I’m distracted, though, by that word “provocation”.

Once upon a time, on an earlier blog (I can’t even remember which incarnation or which domain) I complained about a comic book writer who wanted to be able to write a provocative story but not have to deal with readers being provoked by it. This is a frequent mindset of self-described provocateurs who are “just telling stories” or “just telling jokes” or whatever “just” they think serves as their particular insulation.

(The particular bugbear at issue for me in that instance was, and is, a long-standing one: if you’re going to tell serialized stories, you have to accept serialized reaction.)

I’ve started to wonder if, in fact, many of these provocateurs don’t actually understand that provocation literally comes from the word provoke. Maybe it’s that one has a “c” and one a “k” that confuses them. Do they somehow just not know what the word means?

Anders, to get back to the topic, dislikes the passive phrasing common to cancel culture in which someone “has been canceled”, suggesting instead saying that someone has disgraced themselves, if only because “disgrace implies the possibility of grace”.

I’m not sure why, but the entire thing randomly made me think of an old MS-DOS drive error.

General failure reading Disgraced Person.
Abort, Retry, Ignore, Fail?

Thanks to this, I’ve started re-reading the classic 13th Gen: Abort, Retry, Ignore, Fail? by William Strauss and Neil Howe (which never has been released as an ebook), available as a scan on the Internet Archive. Memory is not my strong suit, but I’m pretty sure I was fairly deeply enamored of this book. I do know for a fact that I wrote to Strauss and Howe about it, although I’ve no idea what I said.

It’s worth noting here that in some ways 13th Gen is a provocation, and one they anticipate: it’s answered by the book’s third pseudonymous author, Ian Williams, who as the provoked online persona and Gen-Xer known only as “crasher” interjects throughout the book.

Lest you think that surely I can’t manage to tie all this together: America’s right-wing media somehow just last year implored Gen-X to save the world from cancel culture, a suggestion which landed on members of my generation—a generation which in many ways itself was canceled—just about how you’d expect.

The burnout discourse.

In a previous incarnation of this blog, I took issue with Anne Helen Petersen’s well-circulated “millennial burnout” article for BuzzFeed News. Well, more like umbrage. To be completely fair, I was a snot about it.

While I stand by my critique of its framing of burnout almost as a lifestyle issue (Petersen called some tasks she had to accomplish “boring”, which rankled me given my own very real neurodevelopmental difficulties with executive function), it’s a stance less reflected in Petersen’s later writings on the topic. In the end, I regret at least somewhat not modulating my tone.

That piece’s greatest contribution, perhaps, was helping to open up the conversation about burnout. Whatever “exaggerations” might have come since, there’s a clear through-line from Petersen to Eve Ettinger in Bustle, in which they concern themselves with the possibility that burnout actually might be better called by another name: trauma—and, yes, I understand that this term also has been getting something of a workout lately.

Here’s the thing that seems always to escape the discourse around burnout: despite the official definitions, burnout is not about work. It’s about our society’s implicit and innate peer pressure to conform and to be (seen as) productive. Naturally, this pressure can exert itself pretty forcibly in the realm of work, but it’s hardly limited to work.

My therapist has been a strong believer in the idea that my experience of unknowingly living as an autistic until midlife amounted to a very real kind of trauma. We’ve talked a lot about PTSD, but I’ve been resistant mostly because of its element of recurrence. Due to my aphantasia and severely deficient autobiographical memory, I simply don’t relive past experiences.

Without the recurrence element, considering PTSD would leave me feeling like I was intruding upon a domain to which I have no claim. The lack of recurrence, however, doesn’t mean that what we legitimately can call, at least, the chronic stress of living unknowingly as an autistic wasn’t somehow nonetheless laid down in my brain.

When my vocational rehabilitation job placement shortly after diagnosis regularly sent me scurrying to the men’s room for fifteen-minute sobbing fits, that wasn’t “occupational burnout” (the typical framing) per se. It was autistic burnout—but the underlying dynamics echo each other: the stresses to conform and perform were beyond my neurodevelopmental capacities. That is about the autism, but it’s also about that innate peer pressure which affects everyone.

Whatever the pathways and mechanisms, in the months after quitting that job placement, for example, any trip I took on public transit which involved the same routes would spark my anxiety response. Recurrence? No. Some form of impressed trauma? I’d think so.

Just today I saw one list of burnout symptoms that read: being easily frustrated; sadness, depression, or apathy; blaming others; irritability; poor hygiene; feeling tired, exhausted, or overwhelmed; lacking feelings, or being indifferent; feeling like a failure; feeling there is nothing you can do that will help; feeling a need for alcohol or drugs to cope; and feeling unable to do your job well.

This literally describes my entire adult experience of unknowingly being autistic (for values of “job” equaling both “job” and “life”), as well as the experience of what you might think would be a post-diagnosis period of recovery. The damage done by four decades of that implicit peer pressure to conform and be productive was so profound that even a part-time job at an accommodating place with the resources of vocational rehabilitation at my disposal was not enough to prevent a near-total breakdown of capacity.

Recently on a subreddit for autistic adults, someone posted a meme. The first panel of a school bus driving along is labelled, “Me doing well in school thinking it would translate to career success.” The second panel shows the bus being totaled by an oncoming freight train. The label: “Lack of people skills.”

My response suggested that the second also could just as easily be “neoliberal capitalism”. It’s true that people with neurodevelopmental conditions face a unique set of challenges in this regard, but my point, really, was that maybe more things about our society contribute to chronic stress than we are permitted to recognize in polite company.

Jonathan Malesic isn’t wrong that use of the word “burnout” in some quarters has descended into a sort of mad, exaggerated trendiness (although that itself could be a kind of gallows humor, as we impotently and plaintively gesture at everything), but at our own peril do we let that obscure that something very real is happening—and that it’s happening across populations and extends well beyond the realm of work.

Whether we want to call it burnout or C-PTSD or “just” chronic stress, what needs to win out in the burnout discourse is the idea not just that it’s real but that it’s pervasive, and that it’s built into the fabric of the society in which we live, rendering it beyond the capacity of any individual to solve for themselves.