Modiste is our platform for learning policies that determine when the use of AI agents is appropriate, given user state, task requirements, and organizational constraints. We modulate access to agents by combining dynamic pricing, richer objective functions, and purposeful friction so that assistance aligns with safety, equity, and cost goals.
Key Initiatives
01
Modiste Platform
Learning policies for when AI assistance is appropriate
02
Access Modulation
Using dynamic pricing, richer objectives, and purposeful friction to align AI assistance with safety, equity, and cost goals
03
Humans as Tools
Information acquisition to improve agent performance and oversight
Research Questions
When is the use of AI agents appropriate given user state, task requirements, and organizational constraints?
How can we modulate access to agents to align with safety, equity, and cost goals?
How can agents acquire information from humans to improve performance and strengthen oversight?
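The access-modulation idea above can be illustrated with a minimal sketch: a toy policy that prices a request by task risk and inserts a friction step for risky, low-proficiency requests. All field names, thresholds, and the pricing rule here are illustrative assumptions, not Modiste's actual policy.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_skill: float   # 0..1, estimated user proficiency (assumed feature)
    task_risk: float    # 0..1, harm potential of the task (assumed feature)
    org_budget: float   # remaining budget in dollars (assumed constraint)

def access_decision(req: Request, base_price: float = 0.10) -> dict:
    """Toy access policy (illustrative only): price scales with task risk,
    and high-risk requests from low-skill users get a friction step."""
    # Dynamic price: riskier tasks cost more, discouraging casual use.
    price = base_price * (1.0 + 2.0 * req.task_risk)
    if price > req.org_budget:
        return {"grant": False, "price": price, "friction": None}
    # Purposeful friction: require a confirmation step when a low-skill
    # user requests assistance on a high-risk task.
    needs_friction = req.task_risk > 0.7 and req.user_skill < 0.4
    return {
        "grant": True,
        "price": price,
        "friction": "confirm_step" if needs_friction else None,
    }

print(access_decision(Request(user_skill=0.9, task_risk=0.2, org_budget=5.0)))
print(access_decision(Request(user_skill=0.3, task_risk=0.9, org_budget=5.0)))
```

A learned policy would replace the hand-set thresholds with parameters fit to safety, equity, and cost objectives, but the interface stays the same: user state and constraints in, an access decision with price and friction out.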