How to increase your job security as AI reshapes the workplace
The key is to foster the right mindset
When OpenAI finally released GPT-5, it didn’t take long for IT professionals in my circles who had been worrying about generative AI’s impact on their job security to feel a sense of relief.
Instead of a leap in capabilities comparable to what was seen in previous major releases like GPT-3 and GPT-4, the initial reaction to the new version was best summarized by Gary Marcus: “overdue, overhyped and underwhelming.”
Yet, despite the shortcomings and high cost of these tools, many companies continue to find more and more ways to fit AI agents into their workflows. In my last post, I talked about the great opportunities this brings for talented business analysts willing to put their critical thinking skills to good use helping organizations adapt to their new AI-powered workflows.
But what does it mean in practice? How can a person’s critical thinking skills make them more valuable (and potentially indispensable) for an employer using generative AI?
Critical thinking is the most powerful tool companies have against wrong assumptions about AI, the kind of assumptions that end up disrupting business operations and causing financial setbacks.
Take, for example, the case of the law firm whose lawyers were sanctioned earlier this year for a motion that cited nonexistent cases. There is abundant evidence that today’s generative AI tools can’t reliably produce content free from errors or bogus citations (even those with astonishing performance on bar exams). That means it’s entirely unreasonable to assume that their “cognitive abilities” can be trusted without strong supervision and oversight.
A critical thinker in charge of investigating the process changes required to responsibly adopt AI in a law firm would be able to identify opportunities to take advantage of the technology. Potential uses include saving time and improving outcomes by having AI generate legal document drafts and find pieces of evidence in large volumes of content. A firm line, however, would be drawn at court filings: AI-supported briefs would be submitted only after every fact and citation had been checked and every piece of legal reasoning and argumentation reviewed.
Besides helping companies avoid misusing AI technology, critical thinkers also help them avoid the mistake others make at the opposite end of the spectrum: assuming that AI tools are useless in real-world use cases because of their tendency to be “confidently wrong.”
Whether we like it or not, even with its “hallucinations” and unreliable outputs, general-purpose AI has proven to be a valuable tool for increasing productivity and enhancing the quality of work outputs.
Properly deployed, enterprise-sanctioned AI tools (whether based on a generalist model or a specialized solution) can help an organization develop sustainable advantages. For instance:
Law case strategy. Use AI to brainstorm potential arguments, anticipate counter-points, and draft discovery plans, treating it as a tool to assist and enhance, not replace, one’s professional judgment and due diligence.
Training materials. Use AI to generate material to train employees in complex manufacturing processes. Rather than directly using the AI output in training artifacts, treat it as a draft to be reviewed by at least two knowledgeable employees before the final version is released.
Software localization. Use AI to produce translations and culturally appropriate images and gestures to make a software application feel native to local users. Put knowledgeable professionals (who wouldn’t necessarily be good at the translation task but can be reliable reviewers) in charge of proofreading and fixing localization bugs.
What all these scenarios have in common is the human supervision and oversight layer. (It’s possible that at some point we’ll have enough evidence that an independent AI tool can successfully take this task over as well, but until then, human guidance is the only acceptable path.)
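That supervision layer can be sketched as a simple release gate in code. Everything below is hypothetical and illustrative: the names (`generate_draft`, `approve`, `can_release`) and the two-reviewer threshold are my own, chosen to mirror the training-materials example above, and the AI call is a stand-in, not a real API.

```python
from dataclasses import dataclass, field

# Hypothetical policy: no AI-generated artifact ships without this many
# distinct human sign-offs (mirrors the "at least two knowledgeable
# employees" rule from the training-materials scenario).
REQUIRED_REVIEWERS = 2

@dataclass
class Draft:
    content: str
    approved_by: list = field(default_factory=list)

def generate_draft(prompt: str) -> Draft:
    # Stand-in for a call to an enterprise-sanctioned AI tool.
    return Draft(content=f"AI draft for: {prompt}")

def approve(draft: Draft, reviewer: str) -> None:
    # Each reviewer counts once, no matter how many times they sign off.
    if reviewer not in draft.approved_by:
        draft.approved_by.append(reviewer)

def can_release(draft: Draft) -> bool:
    # The firm line: the gate stays closed until enough humans have reviewed.
    return len(draft.approved_by) >= REQUIRED_REVIEWERS
```

The design choice worth noting is that the gate is structural, not advisory: the AI output is typed as a `Draft` and nothing in the workflow can promote it to a releasable artifact except accumulated human approvals.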
Some may challenge the benefits of adopting costly technology that can only work well under the strict direction and guidance of humans. But again, the answer here is to use your critical thinking to question assumptions, recognize your biases, and consider all the evidence before forming any conclusions.
For instance, I’ve been involved in multiple software projects requiring localization. In many of them, launches were severely delayed because we struggled to find culturally aware translators in various languages. In one case, because I was born in Brazil, I was in charge of supervising the localization work for Brazilian Portuguese.
We managed to find one great translator, but the other hires performed badly. I had to spend a lot of time fixing the less competent translators’ mistakes, which caused my other tasks to suffer. If I were doing this work now, I’d definitely find the budget to introduce an enterprise AI solution to help expedite the process. It wouldn’t be cheap, nor would it yield perfect translations; but neither was the constant effort of recruiting and training more translators, and, more importantly, neither was the opportunity cost of a delayed time-to-market. Our localized launches, and our path to revenue growth, would have been greatly expedited. (Speaking as someone who has done the job, manually reviewing and improving mediocre translations, whether created by humans or AI, is far faster and more scalable than producing them.)
Critical thinking requires us to consider evidence and avoid letting emotions cloud our judgement. If you foster this mindset, you’ll develop a superior ability to evaluate arguments for and against the use of generative AI, and will be well-positioned to find suitable workarounds for the limitations of the technology. For anyone interested in becoming an irreplaceable linchpin in the age of AI, the disposition to think critically when others might not is still your best bet.
