‘Early days’: Salesforce SVP talks OpenAI turbulence and the future of AI


As news of OpenAI CEO Sam Altman’s ousting – and subsequent hiring at Microsoft – spreads, Forbes Australia sat down with Salesforce SVP of AI and Machine Learning, Jayesh Govindarajan, to find out what this means for the broader industry.

It’s been a turbulent week for OpenAI, which ousted its CEO Sam Altman, hired an interim CEO (for 24 hours), faced backlash from its major backers and found a new CEO in former Twitch boss Emmett Shear (who now might be resigning, according to fresh reports).

Roughly 700 of OpenAI’s approximately 770 employees – the overwhelming majority of its staff – said they’d quit if Altman wasn’t reinstated, but now that the former CEO has found a new home at Microsoft in a newly formed advanced AI research unit, they’re in limbo.

And Marc Benioff, CEO of the US$31 billion-revenue tech company Salesforce, took it as an opportunity to poach a few skilled workers for its own research team, to work on the company’s Einstein AI tech. Benioff reached out to OpenAI staff on X, offering to match the pay of any researchers who resigned and joined Salesforce.

“Certainly it’s a great opportunity for talent,” Salesforce’s SVP of AI and Machine Learning, Jayesh Govindarajan, tells Forbes Australia of Benioff’s opportunistic play. “But it’s early days. We’ll have to see how things pan out, but we’re sure it’ll land pretty well.”

Govindarajan joined Salesforce through the acquisition of his data science company MinHash in 2016. Now, he’s working on the company’s Einstein tech, which has integrated with OpenAI to bring generative AI capabilities to Salesforce customers.

“We feel confident that the turbulence will settle down one way or another,” Govindarajan says of the recent leadership shuffle at OpenAI.

“But what it is broadly indicative of is… it’s one thing to build a great consumer application, and something else entirely to build systems that people can use at work to get things done – and the level of trust and control that one needs to bring with this technology to the workforce.”

Trust and safety a ‘top priority’

On this point, Govindarajan says ethics and responsibility must remain a “top priority” when building generative AI assistants like Salesforce’s own Einstein Copilot, which launched in September this year as a way for enterprise users to drive productivity.

The new tech enables users to ask questions in natural language and receive relevant answers grounded in secure, proprietary company data from Salesforce Data Cloud. Govindarajan talks a lot about grounding: giving large language models access to use-case-specific information that is not part of their training data.

(For Salesforce, that means accessing its enterprise clients’ data – with permission, of course – to bring more context to the generative AI applications they employ, which ultimately makes features like Copilot more reliable.)
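Grounding is easiest to see in code. The sketch below is a minimal, hypothetical illustration of the pattern – retrieve use-case-specific records, then inject them into the prompt – and none of the names reflect Salesforce’s actual APIs:

```python
# Illustrative sketch of "grounding": retrieve use-case-specific company
# records and inject them into the prompt, so the model answers from
# enterprise data rather than from its training data alone.
# All names here are hypothetical, not Salesforce's actual API.

from dataclasses import dataclass

@dataclass
class Record:
    source: str
    text: str

def retrieve(query: str, store: list[Record], k: int = 3) -> list[Record]:
    """Toy keyword-overlap retriever; production systems use vector search."""
    terms = set(query.lower().split())
    return sorted(store, key=lambda r: -len(terms & set(r.text.lower().split())))[:k]

def grounded_prompt(question: str, records: list[Record]) -> str:
    """Embed the retrieved records so the answer is grounded in them."""
    context = "\n".join(f"[{r.source}] {r.text}" for r in records)
    return (
        "Answer using ONLY the context below. If the answer is not there, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

store = [
    Record("crm", "Acme Corp renewal date is 2024-03-01; account owner is J. Lee."),
    Record("support", "Acme Corp logged two P1 tickets about login latency."),
]
print(grounded_prompt("When does Acme Corp renew?", retrieve("Acme Corp renewal", store)))
```

The point of the pattern is that the model is steered toward the retrieved context rather than free-associating from its training data – which is what makes an assistant dependable enough for work.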

Einstein Copilot, an evolution of Einstein GPT, can generate everything from marketing copy to code, but Govindarajan is acutely aware that this kind of tech can pose issues – like bias.

“It’s important for organisations to understand that bias can happen, and to take that data-first approach to building AI systems,” he says. “It’s a new world, to be entirely honest. There are all kinds of things that technologies can lean into.”

Jayesh Govindarajan, SVP of AI and Machine Learning at Salesforce. Image source: Supplied

But the technology poses another risk, and it’s not AI becoming sentient (that doesn’t keep him up at night at all, he says). It’s that AI may not respect company restrictions the way humans do.

“Actions and data have restrictions, there’s a governance model,” Govindarajan says.

“There are company rules codified in the enterprise on who can act on what, and who can access what controls. But in this new world of large language models, are these restrictions respected? In the Salesforce ecosystem, we of course strive to ensure customers have the knowledge to avoid unintended consequences, but how that works in the consumer context is a little bit unclear.”
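One way to read that concern as code: an assistant can apply the same entitlement checks a human user faces before any record is allowed into the model’s context. A hedged sketch, with the roles, sources and policy table invented for illustration:

```python
# Sketch of permission-aware grounding: the governance rules that gate
# human access are enforced BEFORE any record reaches the model's
# context. Roles, sources and records are illustrative assumptions.

ACCESS_POLICY: dict[str, set[str]] = {
    "sales_rep": {"crm"},
    "support_agent": {"crm", "support"},
}

RECORDS = [
    {"source": "crm", "text": "Acme Corp renewal date is 2024-03-01."},
    {"source": "finance", "text": "Acme Corp discount floor is 18%."},
]

def visible_to(role: str, records: list[dict]) -> list[dict]:
    """Filter records down to the sources this role is allowed to read."""
    allowed = ACCESS_POLICY.get(role, set())
    return [r for r in records if r["source"] in allowed]

# The sales rep's request is grounded only in CRM data; the finance
# record can never leak into the generated answer.
print(visible_to("sales_rep", RECORDS))
```

Filtering before generation, rather than trusting the model to withhold what it has seen, is what keeps the governance model intact.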

From generative content to generative experiences

On upcoming trends, Govindarajan says that while 2023 was the year of ‘generate-only experiences’, like ChatGPT writing EDM copy or a high-school research paper, 2024 will be the year of ‘generate-and-execute experiences’, like giving instructions or code to a GPT-style system and having it perform a task for you.
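In code terms, the shift is from a model that returns text to one whose output the surrounding system executes. A minimal sketch under that framing (the model call is stubbed and every tool name here is hypothetical):

```python
# Minimal sketch of a "generate-and-execute" loop: the model proposes a
# structured action; the host system validates and runs it. The model
# call is stubbed and the tool names are invented for illustration.

import json

TOOLS = {
    "send_email": lambda to, body: f"email queued for {to}",
    "create_task": lambda title: f"task created: {title!r}",
}

def model_propose(instruction: str) -> str:
    """Stand-in for an LLM call that returns a JSON-encoded action."""
    return json.dumps({"tool": "create_task", "args": {"title": instruction}})

def execute(instruction: str) -> str:
    action = json.loads(model_propose(instruction))
    tool = TOOLS.get(action["tool"])
    if tool is None:  # never run an action the host hasn't allow-listed
        raise ValueError(f"unknown tool: {action['tool']}")
    return tool(**action["args"])

print(execute("Follow up with Acme Corp about renewal"))
```

Keeping execution behind an explicit allow-list is the ‘control’ half of the trust-and-control problem Govindarajan describes.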

“You scale the models, the models get large and new functions and capabilities start to emerge,” Govindarajan says. “You can see the trajectory of new tasks that these systems are learning – and they’re starting to get multimodal.”

Over the next 12 to 18 months, Govindarajan says he’s most excited about building an enterprise stack for generative AI, adding that he believes Salesforce has “the three big things that matter to win in the space”.

According to the SVP, those three things are: a deep sense of what enterprise users do at work; the ability to access enterprise data (with permission) to ground and give context to generative AI applications; and the ability to plonk the two together so that its generative AI applications complete relevant tasks.

“Salesforce has amazing Trailheads, which is how we teach people how to get something done on the Salesforce platform – imagine teaching an engine how to do that on their behalf.”

