
Algorithms of Anxiety: Is smart technology making us nervous?
Far from a dystopian takedown, Algorithms of Anxiety by Anthony Elliott examines how AI is subtly infiltrating our emotions, choices, and leadership culture.
In both boardrooms and business schools, AI is still largely framed in terms of scale, speed, and productivity; powered by data, this ever-evolving suite of technologies brings clarity to known unknowns, and has the power to reveal certain unknown unknowns, too. Yet Anthony Elliott’s Algorithms of Anxiety puts forward a foil to this framing: the same tools designed to eliminate uncertainty may also be driving it in new and insidious ways.
Elliott, a sociologist with a track record of decoding technological shifts, argues that automation is not just changing how we work—it is changing how we think, feel, and relate to ourselves. Algorithms are no longer just behind the scenes in operations; they are becoming embedded in the everyday, influencing everything from hiring decisions to mental health apps, online dating to battlefield drones. The book asks a serious question: Are we gaining control, or giving it away?
It is an important palate cleanser for the C-suite leader in 2025.
What this book isn’t is a rant against AI. Elliott refuses to indulge in doomsday speculation. Instead, he draws on social theory and cultural criticism to explore the unintended consequences of life increasingly lived by algorithm. From the addictive loops of Netflix to the predictive nudges of Amazon, the book examines how these systems reshape behaviour—not just in consumers, but in organisations and entire industries.
At the heart of Elliott’s argument is what he calls “outsourced autonomy.” Algorithms promise frictionless decision-making. But in delegating choices (about what to read, who to trust, how to evaluate risk) we may lose something vital: the active, sometimes uncomfortable role of human judgment. Convenience is not always convenient. Leaders who assume automation is purely liberating, for example, may find this counterpoint sobering.
In framing both the technical risks of AI and its psychological and emotional toll, Elliott’s book is a useful addition to the landscape and literature of a nascent professional culture grappling with AI. His message will land hard in some quarters, but it lands. Instead of asking only “How can AI improve our efficiency?”, he asks, “What kind of people and workplaces are we becoming in the process?” It is a question that opens up deeper lines of inquiry around leadership, ethics, and the long-term viability of trust in systems that are becoming increasingly opaque.
One interesting section explores the myth of the algorithm as neutral. Myths matter, not least because, left unchecked, they quickly become story, and then narrative. Machine learning systems absorb human biases, then scale them. The book points to examples such as biased hiring algorithms, facial recognition failures, and social media platforms that elevate outrage because of its clickability.
For C-suite leaders, the risks are reputational and strategic: an algorithmic misstep can destroy brand trust in days.
The book balances theory with practical reflection. It suggests that real leadership in the age of AI means designing systems not just for performance, but for dignity. It means balancing smart automation with what Elliott calls “emotional resilience”: creating space for human agency in a world increasingly shaped by machine logic.
What Algorithms of Anxiety is light on is any suggestion of frameworks for action. Business readers looking for toolkits, case studies, or playbooks will need to look elsewhere. This may be intentional. Elliott is not trying to answer how we solve algorithm-induced anxiety; he is trying to make sure we see it.