
To have notions is to appear to harbour ideas of importance or unwarranted status. It is to say something other than what those around you are saying, to wear something other than what those around you are wearing. It is to express your accidental or purposeful entitlement not to conform through word or deed, however banal these words or deeds may be. It is to threaten the status quo by doing anything at all that calls it into question. Anything which reminds others it is there and could be looked at if someone had a mind to do so.
It’s a striking thought—one that reframes push-back not as spectacle, but as the quiet insistence on thinking for yourself. In a world increasingly shaped by generative AI, it’s also a challenge: if “notions” are how we resist conformity, what happens when we turn to machines for help—and they’re trained to tell us what we want to hear?
After last week’s AI Post-Reality Check: Notes for the Class of 2025, it feels fitting to highlight a memorable letter or essay addressed to this year’s graduating class. That honor goes to Laura Kennedy’s tribute to challenging conformity, asking difficult questions, and pursuing intellectual autonomy in a “hostile room.” It’s heartfelt, sharply independent, funny, and nuanced.
And it made me wonder: if we want AI to have a real place in that hostile room, helping us hold our ground rather than smoothing things over, maybe it needs a few notions of its own. AI, like us, should be trained to challenge our ideas, not just flatter us.
There were a few directions I could have taken. One is that AI these days often seems obsequious. Kick-ass obsequious, even. That doesn’t bode well for a tool meant to support independent thinking. If the “A” in AI is meant to stand for an aspirational “alternate” or “assistant” rather than the complicated “artificial,” then training it to flatter, smooth over, and never push back misses the mark. A tool like that helps us retreat from the “hostile room” instead of holding our ground in it.
While my use of generative AI is extensive, it is very siloed: it’s either software-related or tied to copy and developmental editing of my science fiction manuscripts. I’m not in the habit of debating with it. I’ll ignore, adapt, or adopt its suggestions, but there’s rarely any real back-and-forth in the exchange. I do know people who use AI to explore big ideas, though. They debate with it, and I’d bet they don’t want it to be obsequious.
Back in high school, I spent a few years on the interscholastic debate team: two-person teams hauling file boxes stuffed with 3×5-inch index cards across the state. I don’t know what high-school debate looks like now, post-internet, post-personal electronics, and post-AI, but I do know this: obsequiousness wasn’t useful then. Not in a partner, not in a team, not in a coach, and definitely not in yourself, especially if the goal is to challenge ideas and learn something real. Something that sticks and matters.
Which brings me back to AI. As on the debate team, the real value lies not only in having answers but in having the nerve to show up with the right questions. To have notions. To offer something different when it seems right, even if it’s strange or messy or not quite finished.
And perhaps that’s the heart of Laura’s piece: thinking for yourself is still a radical act.
And if AI’s going to share the room, let’s hope it learns to have a few notions of its own.