When Good Design Backfires: The Hidden Accessibility Cost of Conversational AI

Claude’s emotionally intelligent design is a gift to ADHD users — until its limitations make it unusable.


It started with a simple, utilitarian question: “Do I still have to be careful with usage limits here?”

I expected a quick answer — maybe a list of numbers. Instead, what unfolded was a thoughtful conversation that gently unpacked a much deeper tension in AI design. It almost felt like Claude led the discussion, asking the right questions back, steering us into nuance.

The very things that make Claude feel human can make it hard to use.

The Design Tradeoff

Claude has strict usage limits that vary based on conversation complexity and length. The longer and more involved a thread gets, the sooner you hit the wall.

This creates a fundamental accessibility barrier: forced context switching. If you don’t experience ADHD, that might sound small. But for me — and many others — it’s not.

Switching threads mid-thought is like having your train of thought derailed by a brick wall. It takes real effort to re-orient, reload the context, and try to continue where I left off. When I’m finally in flow on a complex problem, getting kicked out means starting over — not just technically, but cognitively.

The Accessibility Paradox

Claude’s design is built around high-EQ interaction: it listens carefully, asks questions back, and steers toward nuance rather than quick answers.

But these strengths demand more computational resources per interaction, which drives the strict usage limits.

So we land in a paradox: Claude is designed to be more accessible through conversation quality, but its usage limits make sustained engagement less accessible.

This isn’t bad intentions — it’s a classic design conflict. The feature that’s supposed to help me also makes it unusable for the way I work.

My Hybrid Solution

Like many users, I’ve built a hybrid workaround: I split my work across tools, saving Claude for the conversations that benefit most from its depth. This isn’t ideal, but it’s realistic. No tool is perfect. Each serves a role.

Still, the experience has stuck with me. It’s a reminder that accessibility isn’t just about what a tool can do — it’s also about what it lets you keep doing. Continuity, for some of us, is everything.

Beyond the Individual Workaround

The deeper issue isn’t just personal frustration. When forced context switching disrupts sustained work, insights get lost. Complex problems that require iterative thinking get abandoned. The very users who might benefit most from AI collaboration — those who think differently — get systematically excluded from longer-form engagement.

This matters beyond individual productivity. If AI tools are going to be genuinely collaborative, they need to accommodate different cognitive patterns, not just different conversation styles.

Final Thought

We talk a lot about AI alignment and safety. But we need to talk more about usability — especially for people whose cognitive patterns don’t fit the norm. Accessibility isn’t always about screen readers or alt text. Sometimes, it’s about not being kicked out of the room just when your brain finally clicks into gear.

That’s not a bug. It’s a design gap. And it’s fixable — through tiered usage models, better session persistence, or simply acknowledging that some users need sustained access to think effectively.

The question is whether we choose to see it.

References

Anthropic. (2023). Claude’s Constitution. https://www.anthropic.com/news/claudes-constitution

Anthropic Help Center. (n.d.). Does Claude Pro have any usage limits? https://support.anthropic.com/en/articles/8325612-does-claude-pro-have-any-usage-limits
