We are at the cusp of a paradigm shift in how AI is used in the workplace, with Large Language Models (LLMs) like OpenAI’s ChatGPT moving from a mere fascination to a core part of the workflow for many professions.
We are closing in on a world where digital assistants anticipate and complete critical tasks, seamlessly transitioning between applications with intelligence that rivals human capability.
This is no longer science fiction but an unfolding reality, one that promises to rewrite the rules of how we work, create, and interact with technology. The core question for us is: if and when will we allow LLMs to do our work for us?

Automation in the current world
We’ve already seen glimmers of this interconnected future in today’s technology. Apple’s personal assistant Siri uses Siri Shortcuts to connect and complete workflows across different apps, letting users automate simple tasks like sending a message after a calendar event or opening a particular set of apps when they start work.
Similarly, platforms like IFTTT (If This Then That) allow users to automate repetitive tasks, from managing smart home devices to cross-posting on social media.
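Under the hood, these automations boil down to trigger-action rules. The sketch below is a minimal, hypothetical Python version of an “if this then that” rule; the event shape and the `send_message` helper are illustrative stand-ins, not any real Shortcuts or IFTTT API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """An 'if this then that' automation: a trigger predicate plus an action."""
    trigger: Callable[[dict], bool]   # inspects an incoming event
    action: Callable[[dict], None]    # runs when the trigger matches

def send_message(event: dict) -> None:
    # Illustrative stand-in for a messaging integration.
    print(f"Reminder sent: '{event['title']}' starts at {event['start']}")

# "When a calendar event begins, send a message" -- the kind of rule
# Siri Shortcuts or IFTTT lets users build without writing code.
calendar_rule = Rule(
    trigger=lambda e: e.get("type") == "calendar_event_started",
    action=send_message,
)

incoming_event = {"type": "calendar_event_started", "title": "Standup", "start": "09:00"}
if calendar_rule.trigger(incoming_event):
    calendar_rule.action(incoming_event)
```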
These early systems are just a peek at what may come next: a world where digital tools begin not only to execute commands but also to understand user context and intent.
Foundations for the next wave of automation
Anthropic made a giant leap toward this future when it released “tool calling” for its state-of-the-art Large Language Model, Claude.
Tool calling is the ability for the AI to reach out and use different digital utilities on the user’s behalf, effectively stitching together otherwise unlinked tasks.
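Concretely, a tool-calling request with Anthropic’s Python SDK looks roughly like the minimal sketch below. The `get_calendar_events` tool is a hypothetical example of mine, and the model name is a placeholder; in a full workflow, your application would execute the tool itself and return the result to the model in a follow-up message.

```python
import anthropic

client = anthropic.Anthropic()  # reads the ANTHROPIC_API_KEY environment variable

# Describe a tool the model is allowed to call. This particular tool is hypothetical;
# the application, not the model, is responsible for actually running it.
calendar_tool = {
    "name": "get_calendar_events",
    "description": "Return the user's calendar events for a given date.",
    "input_schema": {
        "type": "object",
        "properties": {"date": {"type": "string", "description": "Date in YYYY-MM-DD format"}},
        "required": ["date"],
    },
}

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=1024,
    tools=[calendar_tool],
    messages=[{"role": "user", "content": "What is on my calendar tomorrow?"}],
)

# If Claude decides the tool is needed, it replies with a tool_use block
# naming the tool and the arguments it wants to pass.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```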
In a subsequent release, Anthropic also introduced “computer use”, which allows Claude to actually take control of the user’s computer: Claude takes a screenshot of the entire screen, works out what is on it, and then decides what to do next, such as typing or clicking on a specific part of the screen. The loop then repeats, much like the way a human operates their computer.
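Conceptually, that loop is simple: observe the screen, ask the model what to do, carry out the action, repeat. The sketch below is an illustrative Python version, not Anthropic’s actual computer-use API; `decide_next_action` stands in for a call to a vision-capable model, and the screen interaction uses the third-party pyautogui library.

```python
import pyautogui  # third-party library for programmatic mouse/keyboard control

def decide_next_action(screenshot):
    """Hypothetical stand-in for a vision-capable LLM call that looks at the
    screenshot and returns the next step, e.g. {"kind": "click", "x": 120, "y": 300},
    {"kind": "type", "text": "hello"}, or {"kind": "done"}."""
    raise NotImplementedError

def control_loop(max_steps: int = 20) -> None:
    # The same observe -> decide -> act cycle a human goes through.
    for _ in range(max_steps):
        screenshot = pyautogui.screenshot()      # observe: capture the full screen
        action = decide_next_action(screenshot)  # decide: ask the model what to do
        if action["kind"] == "done":
            break
        if action["kind"] == "click":            # act: click a specific point
            pyautogui.click(action["x"], action["y"])
        elif action["kind"] == "type":           # act: type text
            pyautogui.write(action["text"])
```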
It’s also rumored that OpenAI is working on a similar project called “Operator” that could let its LLMs take more comprehensive control of our computers.
There is also a plethora of startups focused on being an AI work buddy. For example, Focus Buddy, a Y Combinator-backed startup, promises to be an AI co-pilot that stays on call with you to manage your to-do list, help you avoid procrastination, and find the behavioral patterns that are holding you back.
What an intelligent workflow looks like
What these upcoming technologies bring to the table is a leap beyond the simple shortcuts Apple currently offers: they can see what’s actually on the user’s screen and, through natural language, understand the user’s intent.
Imagine waking up, opening your laptop, and having an LLM summarize your overnight emails and note which ones require urgent responses.
It then interfaces with your calendar to schedule new meetings, updates your team on what they need to know, and builds visuals to accompany a report you need to present later that day.
Then, when you sit down to work, you chat with the LLM to catch up on everything it has done and start your day. The technology is evolving from assistant to proactive coworker.
Not only can LLMs help with the day’s tasks, they get better with each passing day as they come to understand your work, and with that improvement the scope of workflows that can be automated also expands.
Imagine an AI that can peer at the raw data you have entered in an Excel spreadsheet, discern the trends, and then assemble a polished new PowerPoint presentation based not just on the spreadsheet but also on your work history and style.
It’s not just about generating content, but deeply integrating into your workflow to the point that the AI understands why you need a certain slide or how a particular graph should best represent your data.
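As a rough illustration of the plumbing (not the intelligence), the sketch below uses pandas and python-pptx to turn a hypothetical sales.xlsx into a one-slide deck. In the scenario described above, an LLM would be deciding what to summarize and how to present it, rather than this hard-coded groupby; the file and column names here are made up.

```python
import pandas as pd
from pptx import Presentation
from pptx.util import Inches, Pt

# Hypothetical spreadsheet with "region" and "revenue" columns.
df = pd.read_excel("sales.xlsx")
summary = df.groupby("region")["revenue"].sum()

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[5])  # layout 5: title-only slide
slide.shapes.title.text = "Revenue by region"

# Add a text box listing the per-region totals the analysis surfaced.
box = slide.shapes.add_textbox(Inches(1), Inches(2), Inches(8), Inches(3))
frame = box.text_frame
for region, revenue in summary.items():
    paragraph = frame.add_paragraph()
    paragraph.text = f"{region}: {revenue:,.0f}"
    paragraph.font.size = Pt(18)

prs.save("report.pptx")
```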
An example transformation
Take customer support, for example. An AI powerful enough to control computers could make the process of resolving customer issues almost trivial.
An LLM could analyze the complaint, find the customer’s records, search for common solutions, and apply them, all in a matter of seconds.
Instead of a long back-and-forth on a support ticket with human agents, users can converse with the AI instantaneously. As these systems improve, the dream of consistently excellent customer service moves closer to reality.
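A hypothetical triage pipeline of that shape might look like the sketch below; `lookup_customer`, `search_knowledge_base`, and `draft_reply_with_llm` are illustrative stand-ins for the CRM lookup, knowledge-base search, and model call such a system would stitch together.

```python
def lookup_customer(email: str) -> dict:
    """Illustrative stand-in for a CRM lookup."""
    return {"email": email, "plan": "pro", "open_tickets": 1}

def search_knowledge_base(complaint: str) -> list[str]:
    """Illustrative stand-in for retrieving known fixes."""
    return ["Clear the app cache", "Re-sync the account from Settings > Account"]

def draft_reply_with_llm(complaint: str, customer: dict, fixes: list[str]) -> str:
    """Illustrative stand-in for an LLM call that drafts the reply."""
    steps = "\n".join(f"- {fix}" for fix in fixes)
    return f"Hi! Sorry about the trouble with your {customer['plan']} account.\nPlease try:\n{steps}"

def handle_ticket(email: str, complaint: str) -> str:
    # The same analyze -> look up -> search -> respond flow described above,
    # compressed into seconds instead of a multi-day ticket thread.
    customer = lookup_customer(email)
    fixes = search_knowledge_base(complaint)
    return draft_reply_with_llm(complaint, customer, fixes)

print(handle_ticket("user@example.com", "The app keeps logging me out."))
```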
Risks of LLM-based workflows
With every great innovation come risks. The integration of generative technology into computer control is no exception. At the core, LLMs are generative models; they create, innovate, and try to imagine.
While these attributes are often beneficial in and of themselves, they can be quite problematic when tied to full control over our devices.
If we give LLMs the authority to control our workflows, we have to confront issues related to safety, privacy, and unintended consequences.
Think of an AI misunderstanding an urgent email and destroying sensitive information rather than preserving it, or generating an image or a document that, even if unintended, is ethically disturbing.
The risk of creative automation without solid checks and balances is something that has to be addressed, at all costs, before LLMs can take on such major responsibilities.
To mitigate these risks, companies like Anthropic are focused on designing “safe” AI systems, embedding robust guardrails into the AI’s architecture to prevent misuse or mishap.
Even so, as computer control becomes more advanced, user oversight and clear guidelines will be critical. LLMs need to be companions rather than leaders, tools rather than fully autonomous agents.
Can humans still be creative?
Perhaps one of the most polarizing aspects of LLM-driven automation is its impact on human creativity. By offloading the drudge work, LLMs promise to give us more time to think, create, and innovate.
But there’s also the question of whether we lose a certain edge if we get too comfortable with work produced by AI. Creativity often blooms from adversity: wrestling to find the right phrase for an email or finding the best way to present data in a presentation.
If LLMs take those struggles away, do we lose that opportunity to learn, grow, and innovate as individuals? Or does automating such laborious processes free us to tackle even bigger creative challenges we otherwise wouldn’t have bandwidth for? The answer will lie largely in how we choose to use these tools: as partners in productivity or as replacements for human effort.
Despite the grand new capabilities emerging in AI, at least currently, a human element remains essential. In my opinion, these AI models are best regarded as collaborative technologies that supplement, rather than supplant, human creativity and insight.
The most successful professionals of the future will be those who learn how to work alongside AI with an appropriate sense of what it can and cannot do.
Path ahead
It’s a fundamentally philosophical challenge: how do we make the automation created by LLMs amplify rather than diminish our humanity? Finding the balance will be critical.
The best-case scenario is that, with LLMs handling the tedious, repetitive, and tiring aspects of our tasks, we will be better positioned to focus on what counts: deeper connection with others, profound thinking, complex problem solving, and, pivotally, boundary-pushing innovation.
The next few years will be critical in determining how deeply AI embeds itself into our professional workflows. We’re talking about a wholesale reimagining of how work gets done.
Be it a tiny startup or a multinational corporation, every organization will have to figure out how to effectively integrate these intelligent systems into its mainstream operations.
Ultimately, whether we will allow LLMs to do our jobs for us will boil down to trust: trust in the technology, trust in the safeguards built around it, and perhaps most important of all, trust in our own judgment in using these tools.
The future of work is coming and it’s up to us to decide how much of that work will be our own, and how much will be shared with the AI companions we create.
About the author:
Rishab Mehra, CTO / Co-founder at Pinnacle Intelligence
- Deep research experience in computer vision in healthcare at Stanford, advised by Fei-Fei Li, published in top venues such as Nature, NeurIPS, etc.
- Led features for Apple Intelligence and On-device ML at Apple, published 20+ patents
- Currently founder and CTO of Pinnacle, a disruptive AI startup in the Mental Performance Space, raised pre-seed funding