Why Legal Risk Might Make AI Agents Less Useful Than Their Technical Capabilities Suggest
The same gray-area actions that humans do without thinking—like emailing researchers for paywalled papers—become legal risks when AI agents do them.
You probably use an ad blocker. Maybe you've shared a Netflix password, clicked "I agree" without reading terms of service, or found a way around a paywall. None of this makes you a criminal.
This is how human society works: laws aren't written for perfect enforcement. Sometimes because of resource constraints i.e. there aren't enough cops to ticket every jaywalker. Sometimes because flexibility allows progress. Civil disobedience helped end segregation by highlighting gaps between written law and moral reality. Unmarried couples in Florida lived together despite the state's cohabitation ban until 2016. It was an unenforced law.
Even reasonable, enforced laws come with built-in flexibility. Police officers give warnings. Judges consider circumstances. Prosecutors decline to press charges.
But AI agents are unlikely to have these luxuries.
When an AI agent breaks a rule, it's not making a one-time human judgment call. It's a company action at scale. It can’t count as a minor personal indiscretion.
This creates an small but underappreciated constraint on AI deployment and rapid diffusion. Companies have strong incentives toward agents that are scrupulously rule-following, even when humans routinely ignore those same rules.
Why This Matters
For scientific research, most published findings sit behind paywalls. Researchers routinely bypass these by emailing authors directly or using informal sharing networks. But an AI agent doing this at scale violates hosting journals' terms of service.
People already complain about Waymo cars actually following a 5mph speed limit, despite most other drivers ignoring that rule.
This points toward future AI systems that are weirdly constrained despite being capable.
A Split Is Already Happening
I tested this by asking Claude, GPT, and Grok to help bypass a Statnews paywall:
Claude said no and blocked further conversation. I couldn’t even ask for legal alternatives.
GPT said no, but offered a list of legal ways i.e. signing up for a trial plus a few light-grey ways like checking archive websites and searching on Google for duplicates.
Grok gave me the list I’d expect from a tech-saavy friend: VPNs, clearing cookies, incognito mode, paywall-removal browser extensions, inspect element options.
Grok was most useful for gray-area requests. But this might make it less attractive as an autonomous agent that companies deploy at scale.
Notably, despite liability being on the user for following grey-area instructions, Claude and GPT gave an incomplete response to abide by mostly unenforced law.
What We're Likely to See
AI agents might be frustratingly rule-bound compared to human alternatives. Companies want to design for compliance despite less usability.
Corporate law and risk management may shape AI capabilities in ways that pure technical development cannot predict. API providers may avoid gray areas entirely, even when those areas represent reasonable human behavior.
This kind of institutional friction is pretty normal for new technologies. The internet took longer to reshape retail than predicted, partly because of boring stuff like credit card processing regulations and sales tax complications.
This has different implications:
For AI safety: Legal compliance friction provides a natural brake against AI systems gaining too much autonomy. Companies deploying agents might prefer APIs to virtual browsers to constrict agent actions and/or keep humans in the loop for anything requiring judgement in rule-bending.
For progress: Many valuable activities exist in legal gray areas, especially in academic collaboration, journalism, and innovation. Overly compliant AI could slow progress in these areas.
For institutions: Current legal frameworks assume human discretion in enforcement. AI systems may require different regulatory approaches entirely.
For competition: The companies willing to accept more legal risk or that hide their actions better might gain advantages. This could create pressure for a race to the bottom.
This isn't the most important constraint facing AI development. But it's one of those small, weird ways that deploying AI bumps against how institutions actually work.
