Discussion about this post

Aaron Scher:

I claim that "AIs will be good at using APIs before they are good at computer use" is true but mostly irrelevant to existential risk. I agree that it's convenient for some current stuff around AI reliability, but I struggle to see a story where it reduces risk from misaligned superintelligence. If you want to make some broader claim like "unexpected things happen and they're sometimes good for safety", sure. Are you trying to make a stronger claim vis-à-vis x-risk?

On a different note:

> Granular permissions, human-readable audit logs, and explicit action approval workflows could become the norm without safety regulations, but because they're what developers and organizations want and use. For example, Claude and OpenAI’s API options have taken off much faster than OpenAI’s Operator, their AI agent that uses a virtual browser that still sits behind a $200 a month paywall.

I don't think that example is a fair comparison. The Chat APIs have been around for like 2+ years, Operator for 6 months, and they're totally different use cases, which I think is basically what explains the difference in use. I don't disagree with the claim 'Granular permissions, human-readable audit logs, and explicit action approval workflows will be desired by customers', but this example is just not very relevant.

Ken Kovar:

I wish the term superintelligence would go away. It's a trendy cocktail party term. I read Bostrom's book and it's pretty much just garbage.

