We're getting a glimpse into the future of business analysis
AI tools are reshaping the BA role, but the core BA skill set hasn't changed
The business analyst role is changing, and fast.
Many BAs are no longer writing the steps in every use case, or drawing every process map. Instead, they’re focusing primarily on defining tasks or questions for an AI assistant, providing context to anchor the model's response, and adjusting the wording and re-testing when an output doesn’t meet expectations.
Some analysts I know are being proactive. They’ve voluntarily started using ChatGPT or perplexity.ai to expedite tasks such as preparing for requirements workshops, documenting workflows, or summarizing findings about business problems.
A second group is responding to a request from their organization to start demonstrating the use of a tool such as ChatGPT or Microsoft Copilot in their routine work, as part of an initiative to increase productivity.
In both circumstances, managing the risks and ensuring a positive outcome is no easy feat.
Let’s look at a couple of examples:
“My stakeholders were ignoring my requests to meet to discuss their needs, so I ended up asking our enterprise AI assistant for ideas on how to improve a business process, and used its answers in my presentation to our decision-makers.”
Here, the main problem is clear: if the AI assistant wasn’t fed with input from users explaining their pain points, chances are that its output had little to do with the real challenges the team was facing.
As we all know, process improvement is a deeply collaborative activity. AI cannot replace the value of real conversations to uncover hidden assumptions and build consensus. The lack of context, combined with the “people pleasing” tendencies of most large language models, is likely to result in well-written recommendations that sound plausible but completely overlook the real business need.
“My boss has been encouraging me to start using the free version of ChatGPT to help develop the test cases for our new applications.”
This second scenario highlights the risk of inadvertently exposing sensitive information to a third-party provider. The manager making this recommendation is probably unaware that, due to the less robust data controls of ChatGPT’s free version, business content shared with the tool may by default be used to train OpenAI models unless the user has explicitly opted out of training. Of great concern here is the potential for trade secrets or personally identifiable information (PII) to be accidentally added to the provider’s training data—and later exposed in outputs for different users.

The solution, however, can’t be the same as the one adopted by Samsung in 2023: banning the use of ChatGPT among employees after sensitive internal source code was uploaded to the tool.
Rather than simply creating policies that prohibit the use of AI (both difficult to enforce and an impediment to leveraging the technology to improve business operations), organizations must focus on ensuring they have the right frameworks to enable its effective and secure use.
Ultimately, AI assistants can enable a wide range of benefits capable of revolutionizing business operations. To help materialize such gains, business analysts will have to stay informed about topics such as best practices for prompt engineering, memory management, and AI risk management.
The good news, for talented business analysts who still feel a bit intimidated by AI, is that the skill set they developed to excel at the role will remain invaluable for the foreseeable future:
Critical thinking to frame the problem for deep analysis, create the right context by knowing what to emphasize and what to omit, and provide appropriate human oversight to protect the business against unexpected AI failures.
Strong communication skills to efficiently and effectively delegate tasks, refine prompts, and create feedback loops that ensure reliable outcomes.
Skilled BAs can also become instrumental in helping organizations create a robust framework to ensure that:
Anyone working with an AI assistant understands the limitations of large language models, how the quality of their prompts directly influences the relevance and accuracy of the outputs generated, and the risks of unintended data exposure.
AI usage is properly monitored, sensitive information is submitted only to enterprise versions of AI assistants with strong built-in data protection features, and potential threats are met with quick responses to prevent data leaks.
