With all eyes on generative AI (GenAI) this year, Australian executives are scrambling to make sense of how the technology can create real value in their organisations, all while assessing a still-unclear risk profile and learning the basics of the underlying technology itself.
Despite its impressive advances, it’s important to remember that GenAI is a maturing technology, one that creates an enormous opportunity for any aspiring digital business to create and dominate new GenAI market categories.
Don’t limit yourself to thinking of GenAI as something that can be used to improve operations, but rather how your organisation can lead in and with the technology.
The right balance needs to be struck between the broadest possible efforts to innovate and the need to manage the most pressing risks.
Significant improvements are being realised with every major revision of top large language models (LLMs).
The competitive landscape is highly dynamic: mechanisms for commercialising the technology remain a work in progress, as does the potential impact of open-source offerings on the viability of commercial GenAI. The partner ecosystem around the major LLM providers is also still forming.
In addition, government regulation is still emerging, with significant differences in approach across jurisdictions.
The immaturity of the market is reflected in the fact that most organisations are still investigating what’s possible. Instead, they should be thinking about the contribution they’ll make to the effective use of GenAI.
At the same time, they’ll need to avoid the temptation of seeing themselves as mere technology consumers, focused only on the risks associated with vendor-supplied GenAI solutions.
Doing so gives their organisation the possibility of maximising value through their own innovations.
Speedy GenAI Innovation
According to a Gartner survey, AI is the technology executives believe will have the most significant impact on their industry over the next three years. Efforts to drive growth through innovation will therefore focus on ways to use GenAI, given its high profile.
Of course, an important aspect of any immature technology is managing emerging, ambiguous risk, and organisations are rushing to implement usage policies. Yet one risk is consistently neglected.
Almost none have considered the risk of not being an AI market leader.
This point reveals a limited perspective on the potential of GenAI among those drafting AI policies. Historically, businesses have viewed new technology as a means to improve operational efficiency.
Modern businesses are just as inclined to look for ways to improve customer experience and drive revenue.
The potential mistake executives can make is to pursue these results solely through the use of technology vendor solutions. But there’s no longer a hard line that delineates a technology provider from a technology-consuming enterprise.
Leaders recognise their ability to create and commercialise digital assets, not simply manage the supply and use of providers’ digital assets.
They must strive toward GenAI leadership, but without a risk-benefit analysis that accurately assesses the potential benefits, the entire process is a wasted effort.
GenAI market leadership is a real possibility, but requires managing technical complexity and incurring costs, which will vary depending on the deployment approach.
It doesn’t require building an LLM from scratch, nor adopting an existing open-source LLM, although both are real options in the right circumstances.
Data assets can be used to fine-tune models and create new solutions, either for internal use or as market-facing digital products. Prompt design skills, properly directed and protected, can be a source of competitive differentiation.
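To make the idea concrete, prompt design can be as simple as codifying proprietary know-how into reusable, guarded templates. The sketch below is illustrative only: the context string, the redaction patterns and the template are hypothetical examples, standing in for whatever curated data assets and house style an organisation actually holds.

```python
# Sketch: a proprietary prompt template with a redaction guardrail.
# The template and guardrail, not the model, are the differentiating assets.

import re

# Hypothetical patterns for material that must never leave the organisation.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{16}\b"),        # card-like 16-digit numbers
    re.compile(r"(?i)internal[- ]only"),
]

# A house prompt template encoding how this (hypothetical) business
# wants curated internal context presented to a model.
PROMPT_TEMPLATE = (
    "You are an analyst for an Australian retailer.\n"
    "Context (curated internal data):\n{context}\n\n"
    "Task: {task}\n"
    "Answer concisely and cite which context lines you used."
)

def redact(text: str) -> str:
    """Strip patterns that must never reach an external chatbot."""
    for pattern in CONFIDENTIAL_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def build_prompt(context: str, task: str) -> str:
    """Apply the guardrail first, then the house template."""
    return PROMPT_TEMPLATE.format(context=redact(context), task=task)

prompt = build_prompt(
    context="Q3 sales up 12%. internal-only: supplier ref 4111111111111111",
    task="Summarise the quarter for the board.",
)
print(prompt)
```

The resulting string would then be passed to whichever model API the organisation has approved; the point is that the template and guardrail remain in-house intellectual property regardless of the model behind them.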
Many categories of GenAI will emerge, and there’s nothing stopping a modern enterprise from becoming a category creator.
JPMorgan Chase, Bloomberg, and Thomson Reuters, for example, see their existing business as a way to become GenAI providers, rather than only being served by other providers.
As more businesses come to this understanding, the nature of what defines the GenAI market will be reshaped.
Speed of innovation is of the essence, and GenAI requires bottom-up innovation to achieve it: the ability of any team, in any department, business unit or subsidiary, to experiment within reasonable risk guardrails.
This isn’t to say there aren’t risks. Some are legitimate and must be dealt with now, such as entering confidential corporate information into publicly available chatbots or using GenAI outputs without proper checks.
Don’t Stifle Bottom-Up Innovation
Recognise that an element of public fear-mongering is amplifying risk concerns that are, in some cases, hyperbolic (mass unemployment, the end of humanity) and generating widespread calls for regulation.
We’re hard-pressed to recall any previous advance in digital technology that has followed this path so early in its development.
If the underlying theme of GenAI value realisation is speedy, bottom-up innovation, avoid the temptation to mitigate risk by creating AI policies and steering committees that effectively centralise the approval of GenAI initiatives.
Naturally, the creators of these policies and committees want to cover their organisation for anything that could go wrong. While that’s laudable at one level, it’s unrealistic on another.
As experimentation continues, GenAI risk will move from broad-brush concerns into nuanced categories. There’s no reason to believe a policy can reasonably cover all possibilities, or that a steering committee can stay abreast of them.
Increased effort to do so will come at the cost of the autonomy and flexibility needed to support bottom-up innovation.
The scope of potential GenAI solutions spans every single facet of an enterprise. For innovation to gain traction, experimentation should be allowed to flourish with the widest possible risk management guardrails.
Don’t create new AI policies where existing data privacy or information security policies can be updated for GenAI. Don’t impose strict risk management conditions on experiments; instead, require teams to identify, manage and report any risks they encounter.
Finally, position steering committees as centres of collaboration, rather than centres of control.
About the author
Brian Prentice is a VP analyst at Gartner in the Executive Leaders of Digital Business practice. His focus is on digital geopolitics, digital productisation and how organisations can create business value with generative AI.