A recent academic paper about the economic significance of the “up or out” employment system used by famous consulting firms got me thinking about the economic liberty of employees in the age of artificial intelligence. Consulting is labor-intensive and thus very attractive to both consulting firms and AI tool-builders, but the early returns have been mixed and the longer-term future is uncertain.
We often hear about the massive investments made by all kinds of corporations in artificial intelligence, and sometimes we also learn that these massive investments aren’t paying off. Why should businesses want to adopt AI? In short, because they think they can use it to boost productivity by getting more from their workforce, or getting the same level of output from a smaller workforce.
Many enterprise AI adoptions are top-down. Management says “adopt AI,” and engineering leaders or consultants set about finding models to deploy, decisions to automate, and documents to load into a vector database. The assumption here is that by foisting new systems upon workers, they’ll become more productive. Nothing about organizational change management suggests that this works very well.
In fact, the Beckhard-Harris change management formula predicts that a top-down AI initiative should fail. Its key components are dissatisfaction with the status quo, the desirability of the proposed end-state, and the practicality of the change; together, these must outweigh the cost of the change.
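For reference, the change equation (in the popular formulation of Beckhard and Harris's model) is usually written as:

```
D × V × F > R
```

where D is dissatisfaction with the status quo, V is the vision or desirability of the end-state, F is the feasibility of first practical steps, and R is the resistance (the cost) of the change. Because the left side is a product, any factor near zero drags the whole thing to zero — which is exactly what a top-down mandate produces when workers feel no dissatisfaction and see nothing desirable in the end-state.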
Top-down rollouts in general are demoralizing to workers, because they signal that the enterprise would like to employ fewer people. In many organizations this might not be surprising, but it’s still an unpleasant message to receive. Such rollouts also signal that a worker’s own perspective on the job matters less than their manager’s — an especially frustrating circumstance for knowledge workers, some of whom hold jobs that are essentially theirs alone.
Is there a better way? Maybe. What if we give our team members the tools and some constraints, continue to hold them accountable for their output, and judge them on their productivity as we always have? Here’s an idea:
- Tell your team that they can use whatever AI tools they think would make them more productive. Seed the conversation with some suggestions and your own ideas for experiments.
- Be clear about business constraints, especially around what material can and can’t be used to prompt AI.
- Stress that they’re responsible for their own output, whether they got help from an AI or not.
- Explicitly encourage knowledge-sharing and show-and-tell. If some individuals or teams seem like they’ve made a giant leap, you can investigate federating their work, but don’t rush into it.
- Implicitly encourage knowledge-sharing by rewarding productivity the same as you always have. If the best route to productivity is through AI, workers will be incentivized to copy strategies that are working and avoid strategies that are not. If the best route to productivity avoids AI, workers will limit their losses.
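The copy-what-works dynamic described above is essentially an imitation process, and a toy simulation makes the intuition concrete. Everything here is hypothetical: the strategy names, the productivity multipliers, and the starting mix are illustrative placeholders, not measurements.

```python
import random

def simulate_adoption(rounds=20, n_workers=10, seed=0):
    """Toy imitation dynamic: each round, every worker compares output with
    a randomly chosen peer and copies the peer's strategy if that peer is
    more productive. Payoff values are assumed, not empirical."""
    rng = random.Random(seed)
    # Hypothetical productivity multipliers for two strategies.
    productivity = {"ai_assisted": 1.3, "status_quo": 1.0}
    # Start with two early adopters out of ten workers.
    strategies = ["ai_assisted"] * 2 + ["status_quo"] * (n_workers - 2)
    for _ in range(rounds):
        for i in range(n_workers):
            peer = rng.randrange(n_workers)
            if productivity[strategies[peer]] > productivity[strategies[i]]:
                strategies[i] = strategies[peer]
    # Fraction of the team now using the more productive strategy.
    return strategies.count("ai_assisted") / n_workers

share = simulate_adoption()
print(f"Share on the more productive strategy after 20 rounds: {share:.0%}")
```

Because workers only ever copy strategies that outperform their own, the better strategy spreads monotonically — no central mandate required. The same logic runs in reverse: if the AI-assisted strategy had the lower multiplier, nobody would copy it, and the organization's exposure would stay limited to the early experimenters.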
There’s an experimental and economic-evolutionary aspect to this that top-down strategies completely miss. In fact, the adoption strategy itself is nearly an agentic loop. This empowers team members to find strategies that work for them, improves buy-in, and limits the downside risk of spending a huge amount on a centralized AI project.
There’s a second advantage to this approach — management does not have to develop a strong thesis about AI in their organization, and thus does not have to bear the risk of being wrong.






