IBM wants a bigger slice of the enterprise AI pie, and its latest pitch is a trio of generative AI products under its Watsonx brand, now generally available. They’re called watsonx.ai, watsonx.data, and watsonx.governance — and yes, they sound like product demos that escaped from a B2B keynote. But IBM insists these tools are designed to help companies build, manage, and audit their own generative AI workflows, without handing the keys to the hyperscalers.
The company’s bet is clear: while Google and Microsoft go all-in on Copilot-style productivity AI, IBM is chasing a different beast — the CIO who doesn’t care about ChatGPT in Slack but is terrified of generative models going off the rails in customer service, legal, or finance environments.
Each of the watsonx products targets a different slice of that stack. watsonx.ai is the “builder” piece, aimed at helping developers and data scientists fine-tune and deploy open-source or proprietary models. watsonx.data acts as a bridge between data lakes and AI tools, promising governed access to both structured and unstructured datasets. And watsonx.governance is meant to keep everything above board — it lets teams set and track compliance guardrails for how AI systems make decisions.
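To make that division of labor concrete, here is a minimal sketch of what calling a hosted Granite model through watsonx.ai might look like from an enterprise application. The endpoint path, version string, payload fields, and model ID below are assumptions for illustration rather than confirmed API details; treat it as the rough shape of an integration, not working code against IBM's current interface.

```python
# Hypothetical sketch: calling a hosted Granite model through a
# watsonx.ai-style text-generation endpoint. The URL, payload fields,
# and model ID are illustrative assumptions, not confirmed API details.
import os
import requests

WATSONX_URL = os.environ.get(
    "WATSONX_URL", "https://us-south.ml.cloud.ibm.com"  # assumed regional endpoint
)
IAM_TOKEN = os.environ["WATSONX_IAM_TOKEN"]    # bearer token from IBM Cloud IAM
PROJECT_ID = os.environ["WATSONX_PROJECT_ID"]  # watsonx project the model runs in


def generate(prompt: str, max_new_tokens: int = 200) -> str:
    """Send a prompt to the assumed text-generation endpoint and return the output."""
    resp = requests.post(
        f"{WATSONX_URL}/ml/v1/text/generation",
        params={"version": "2023-05-29"},           # assumed API version string
        headers={"Authorization": f"Bearer {IAM_TOKEN}"},
        json={
            "model_id": "ibm/granite-13b-chat-v2",  # assumed Granite model ID
            "input": prompt,
            "parameters": {"max_new_tokens": max_new_tokens},
            "project_id": PROJECT_ID,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"][0]["generated_text"]


if __name__ == "__main__":
    print(generate("Summarize this insurance claim in two sentences: ..."))
```

In principle, pointing the same request at an on-prem or hybrid deployment would mean little more than swapping the base URL, which is the portability pitch IBM keeps coming back to.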
IBM is clearly hoping that a modular, enterprise-friendly AI architecture will win over the Fortune 500 — especially those skittish about plugging ChatGPT into their workflows. To that end, these products are supposed to help businesses “go beyond the chatbot,” in IBM’s words, and build genAI into things like customer service ticketing systems, claims automation, or HR tools.
But here’s the thing: IBM’s enterprise AI strategy has always sounded great on paper. In practice, Watson — the brand, the platform, and the promise — has had a checkered history. The original Watson was famously pitched as an AI system that would revolutionise healthcare. It didn’t. IBM eventually sold off its Watson Health assets in 2022. Since then, it’s retooled Watsonx as a new foundation for its AI ambitions, rooted in open models and Red Hat-style hybrid infrastructure.
And yet, there’s still little evidence that IBM’s AI tools have reached meaningful scale outside a handful of regulated sectors. In a market where Microsoft has baked Copilot directly into Office 365 and Google is aggressively bundling Gemini into Workspace and Cloud, IBM is still selling toolkits. Not experiences. That’s a hard sell — even if those toolkits are better suited to regulated industries.
To be fair, IBM does have some real traction. The company claims that over 20 million predictions are now being served monthly through watsonx.ai and its underlying Granite models. That’s not trivial, but it’s also not on the same planet as the usage numbers OpenAI and Microsoft are touting. IBM’s differentiator, it says, is that its models can run anywhere — cloud, on-prem, or hybrid — and can be tuned on a company’s private data, with guardrails in place.
That approach appeals to banks, insurers, and governments, where AI hallucinations can’t be laughed off as “creative answers.” IBM is already working with companies like Citi, Truist, and NatWest, and with U.S. government agencies via its watsonx.governance layer. But it’s still early days. Building AI workflows in large enterprises takes months — sometimes years — and the feedback loop is slower than in consumer AI.
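To make the “guardrails” idea less abstract, here is a deliberately simple, hypothetical sketch of the kind of policy check a governance layer formalizes: screen the prompt and the model output against rules before anything reaches a customer. The blocked patterns and the generate() callable are illustrative assumptions, not watsonx.governance’s actual mechanics.

```python
# Hypothetical illustration only: a minimal guardrail wrapper of the kind
# a governance layer is meant to formalize. The policy rules and the
# generate() callable are assumptions, not IBM's implementation.
import re
from typing import Callable

BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US Social Security numbers
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # likely payment card numbers
]


def guarded_generate(prompt: str, generate: Callable[[str], str]) -> str:
    """Run a model call only if input and output pass simple compliance checks."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt rejected: possible PII in input.")

    output = generate(prompt)

    for pattern in BLOCKED_PATTERNS:
        if pattern.search(output):
            # Withhold the response rather than return possibly regulated data.
            return "[response withheld: output failed compliance check]"
    return output
```

In a real regulated deployment, those rules would live in an auditable policy catalogue with logging and human review, rather than as inline regular expressions; the point of a governance product is to make that catalogue, and its evidence trail, a first-class artifact.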
The broader question is whether IBM can make Watsonx more than just a product bundle. Generative AI is moving fast, and the companies winning so far are the ones that abstract the complexity away. IBM, by contrast, is leaning into complexity — and trying to make it manageable, not invisible.
For some enterprises, that might be the right approach. But for IBM, the risk is that while it builds safe, enterprise-ready AI plumbing, someone else is already selling the water.