There’s a line in the AWS re:Invent announcements that feels less like marketing and more like a map to where enterprise computing is actually heading: “The more your agents experience, the smarter they become.” It’s AWS describing its new AgentCore Memory capabilities, but it’s also describing the fundamental shift happening across cloud infrastructure, from systems that respond to systems that remember.
AWS re:Invent 2025, which ran from 1 to 5 December in Las Vegas, wasn’t subtle about its central thesis. Across keynote stages, customer showcases, and dozens of technical announcements, the message was consistent: personalisation at scale isn’t a nice-to-have anymore. It’s the infrastructure layer that determines whether your AI investment returns anything more than a glorified search bar.
Memory as infrastructure
The most revealing announcement came buried in Amazon Bedrock’s updates: episodic memory for AgentCore. It’s a capability that sounds incremental until you consider what it actually enables. AI agents can now build logs of user behaviour over time (flight preferences, hotel choices, purchasing patterns) and use that accumulated knowledge to inform future decisions without being explicitly told each time.
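AWS hasn’t published AgentCore’s internals, but the shape of an episodic memory is easy to sketch. The Python below is a purely illustrative stand-in, with every name hypothetical; it is not AgentCore’s actual API, just the record-then-recall pattern the announcement describes:

```python
from collections import Counter, defaultdict
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Episode:
    kind: str          # e.g. "flight_booking" (hypothetical event type)
    attributes: dict   # e.g. {"cabin": "business"}
    when: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class EpisodicMemory:
    """Accumulates user interactions and surfaces recurring preferences."""

    def __init__(self):
        self._episodes = defaultdict(list)  # user_id -> [Episode]

    def record(self, user_id: str, episode: Episode) -> None:
        self._episodes[user_id].append(episode)

    def preference(self, user_id: str, kind: str, attribute: str):
        """Most frequent value of `attribute` across past episodes of `kind`."""
        counts = Counter(
            ep.attributes.get(attribute)
            for ep in self._episodes[user_id]
            if ep.kind == kind and attribute in ep.attributes
        )
        return counts.most_common(1)[0][0] if counts else None

memory = EpisodicMemory()
for cabin in ["economy", "business", "business"]:
    memory.record("user-42", Episode("flight_booking", {"cabin": cabin}))
memory.preference("user-42", "flight_booking", "cabin")  # → "business"
```

The point of the pattern is the last line: nobody told the agent this user prefers business class; the preference emerges from accumulated episodes.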
Swami Sivasubramanian, AWS’s Vice President of Agentic AI, framed it as agents that “truly understand user behaviour and recognise patterns across similar situations.” That’s the polite corporate version. The less polite version is that AWS is building the infrastructure for AI that tracks everything you do, learns from it, and predicts what you’ll want before you ask. Whether that’s helpful or dystopian depends entirely on who’s building the application and what they’re optimising for.
The practical application showed up in customer implementations. Epsilon, the data-driven marketing firm, deployed over 20 agents for audience segmentation, campaign creation, and performance optimisation. They’re reporting a 30% reduction in campaign setup time and a 20% increase in personalisation capacity. Those aren’t aspirational metrics; they’re production numbers from systems already running.
Adobe’s one-to-one gambit
Adobe CEO Shantanu Narayen used his keynote slot to announce something that matters more than most of the technical releases: Adobe Experience Platform is now available on AWS. That might sound like a straightforward cloud partnership, but it’s actually Adobe admitting that enterprise data gravity lives on AWS, and if they want to power personalised customer experiences at scale, they need to operate where that data already sits.
Anjul Bhambhri, senior vice president of Adobe Experience Cloud, put it plainly: “Delivering one-to-one personalisation across a myriad of digital channels is quickly becoming table stakes for brands.” Table stakes. Not competitive advantage. Not differentiation. The baseline expectation.
The technical implication is significant. Organisations storing customer data across AWS services like S3, Redshift, or DynamoDB can now activate that data for personalisation without the latency and complexity of cross-cloud transfers. It’s the difference between personalisation as a batch process that runs overnight and personalisation as a real-time decision layer that operates at the moment of interaction.
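That real-time decision layer can be sketched in a few lines. Everything here is hypothetical: the in-memory dict stands in for a low-latency profile store, where a production system would do a key-value lookup (against DynamoDB, say) at the moment of interaction rather than reading a nightly batch export:

```python
# Hypothetical profiles; a production system would fetch these from a
# low-latency store at request time, not from an overnight batch job.
PROFILES = {
    "cust-7": {"segment": "frequent_traveller", "preferred_channel": "email"},
}
DEFAULT_PROFILE = {"segment": "new", "preferred_channel": "web"}

def decide_offer(customer_id: str, context: dict) -> dict:
    """Personalise a response at the moment of interaction."""
    profile = PROFILES.get(customer_id, DEFAULT_PROFILE)
    offer = (
        "lounge_upgrade"
        if profile["segment"] == "frequent_traveller"
        else "welcome_discount"
    )
    return {
        "offer": offer,
        "channel": profile["preferred_channel"],
        "page": context.get("page"),  # live context from the current session
    }

decide_offer("cust-7", {"page": "checkout"})
```

The business logic is trivial on purpose; the architectural difference is that the decision happens inside the request path, which is exactly what co-locating the data and the personalisation engine makes cheap.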
Adobe’s customer list for Experience Platform reads like a roll call of brands trying to figure out how to stay relevant: The Coca-Cola Company, Dick’s Sporting Goods, Major League Baseball, Marriott International, Panera Bread, Prudential Financial, Qualcomm, U.S. Bank. These aren’t experimental deployments. They’re production systems handling billions of interactions.
Sony’s 760-terabyte emotional engine
Sony took a different approach to the personalisation conversation. Instead of talking about efficiency or cost savings, Chief Digital Officer John Kodera framed it around “Kando”, a Japanese concept describing deep emotional connection. That’s a notably human framing for what’s essentially a massive data processing operation.
Sony Data Ocean, running on AWS infrastructure, processes up to 760 terabytes of data from more than 500 different sources across Sony’s businesses. That’s not just PlayStation gaming data or streaming analytics. It’s electronics, music, movies, anime, and everything else Sony produces, all being correlated to understand what individual fans enjoy and deliver more of it.
The new Engagement Platform Sony announced will connect their entire portfolio, making it easier for fans to discover content across all of Sony’s offerings. In practical terms, that means if you watch a certain anime on Sony’s streaming service, the system might surface a related PlayStation game, recommend a soundtrack on their music platform, and suggest merchandise from their retail operations. All personalised. All automated. All running on pattern recognition across 760 terabytes of your behaviour.
Whether that creates “Kando” or just creates more effective product recommendations is probably a matter of perspective and implementation. Sony’s betting it’s the former. The technology certainly enables the latter.
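One plausible mechanism for that cross-portfolio discovery is simple tag overlap across business units. This is a guess at the shape of such a system, not Sony’s implementation; all item names, units, and tags below are invented:

```python
# Hypothetical catalogue spanning business units, with shared theme tags.
CATALOGUE = [
    {"id": "anime-1", "unit": "streaming",   "tags": {"mecha", "sci-fi"}},
    {"id": "game-9",  "unit": "playstation", "tags": {"mecha", "action"}},
    {"id": "ost-3",   "unit": "music",       "tags": {"mecha", "soundtrack"}},
    {"id": "film-2",  "unit": "pictures",    "tags": {"romance"}},
]

def cross_portfolio_recs(watched_tags: set, exclude_unit: str, top_n: int = 3):
    """Rank items from *other* business units by tag overlap with what the fan watched."""
    scored = [
        (len(watched_tags & item["tags"]), item["id"])
        for item in CATALOGUE
        if item["unit"] != exclude_unit and watched_tags & item["tags"]
    ]
    return [item_id for _, item_id in sorted(scored, reverse=True)[:top_n]]

cross_portfolio_recs({"mecha", "sci-fi"}, exclude_unit="streaming")
```

A fan who watched a mecha anime gets the game and the soundtrack surfaced, while unrelated titles and the originating unit are filtered out; at Sony’s scale the same idea presumably runs over learned embeddings rather than hand-written tags.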
Amazon Connect gets aggressive about cross-selling
Amazon Connect received 29 new agentic AI capabilities, and the most telling one is AI-powered product recommendations during customer service interactions. By combining real-time clickstream data with customer history, AI agents and representatives can deliver “highly personalised product suggestions at exactly the right moment.”
James Boswell from Centrica explained how they’re using Amazon Bedrock to analyse call transcripts and automatically wrap up conversations, reducing talk time by 30 seconds per call. What they’re doing with those extra 30 seconds is revealing: “What we’ve been able to do with the extra 30 seconds is cross-sell.”
That’s personalisation in practice: shave efficiency off operational processes, reinvest the time savings into revenue generation, frame it as better customer service. The AI isn’t just answering questions more efficiently; it’s identifying sales opportunities in real time and executing on them within the conversation flow.
Companies like Zepz are deflecting 30% of customer contacts whilst processing US$16 billion in transactions. TUI Group migrated 10,000 agents across 12 European markets and cut operating costs by 10%. These aren’t pilot programs. They’re scaled deployments handling material business volume.
The infrastructure underneath
All of this personalisation requires significant computational resources, which explains some of the hardware announcements. Trainium3 UltraServers, built on AWS’s first 3nm AI chip, deliver up to 4.4 times higher performance and four times better performance per watt. That’s not just marketing fluff: training personalisation models at scale is computationally expensive, and better performance per watt translates directly to lower operational costs.
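The cost argument is simple arithmetic. With assumed numbers (none of these figures come from AWS), a 4x improvement in performance per watt means the same training workload draws a quarter of the energy:

```python
# Back-of-envelope with assumed numbers, not AWS pricing.
baseline_cost = 1000.0              # hypothetical energy cost (USD) of one training run
improved_cost = baseline_cost / 4   # same work at 4x performance per watt
savings = baseline_cost - improved_cost
savings  # → 750.0: three quarters of the energy bill, per run, recurs every run
```

The per-run number is small; the reason it matters is that personalisation models get retrained continuously, so the saving compounds.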
Amazon Nova 2 Sonic, a new speech-to-speech model for conversational AI, enables more natural voice interactions with multilingual capabilities. Combined with Nova Forge, which lets organisations build custom frontier models on their proprietary data, enterprises can now create deeply personalised AI that understands their specific domain and customer base without starting from scratch.
For enterprises requiring data sovereignty whilst pursuing personalisation, AWS introduced AI Factories, deployments that transform existing infrastructure into high-performance AI environments with AWS networking and services but residing in the customer’s data centres. It’s AWS acknowledging that regulatory requirements and data residency rules are real constraints, not problems to be solved by “moving everything to the cloud.”
The 100% human, 100% agentic paradox
Pasquale DeMaio, Vice President of Amazon Connect, outlined what he called the “messy middle” approach: businesses don’t need to be 100% agentic or 100% human. They can be somewhere in between, which is “more likely” and “effective.” That’s a notably pragmatic position from a company trying to sell AI infrastructure.
The idea, as DeMaio framed it, is that AI handles personalisation at the data and pattern recognition level, freeing humans to build deeper emotional connections with customers. Whether that’s actually what happens or whether it’s just humans managing the edge cases AI can’t handle is an open question. The technology certainly enables both scenarios.
What’s actually being built
Strip away the keynote polish and what AWS demonstrated at re:Invent 2025 is a coherent infrastructure stack for AI that doesn’t just respond but anticipates. Memory isn’t a feature anymore; it’s a fundamental capability. Systems that can’t remember user preferences, learn from behavioural patterns, and proactively act on accumulated knowledge aren’t personalised. They’re just customisable.
The business case is straightforward: Epsilon cut campaign setup time by 30%, increased personalisation capacity by 20%. Centrica found 30 seconds per call to reinvest in cross-selling. Zepz deflects 30% of contacts whilst handling billions in transactions. These aren’t vanity metrics. They’re measurable returns on infrastructure investment.
But the technical capabilities being deployed also enable something else entirely. When Adobe talks about “one-to-one personalisation across a myriad of digital channels” becoming “table stakes,” they’re describing a future where every interaction is monitored, every behaviour is logged, every pattern is analysed, and every response is optimised. The infrastructure AWS announced makes that technically and economically feasible at enterprise scale.
It’s worth remembering that AWS built an entire fragrance lab to demonstrate hyper-personalised marketing capabilities. That wasn’t about selling perfume; it was about proving the infrastructure could handle the data processing and real-time decision-making required for true one-to-one personalisation. What seemed like an eccentric marketing stunt was actually a proof of concept for exactly the kind of systems being deployed now.
Whether personalised AI creates better customer experiences or just more sophisticated manipulation techniques depends entirely on how it’s implemented and who’s making those decisions. The technology is neutral. The applications won’t be.
What’s clear from re:Invent 2025 is that the race to build AI memory infrastructure is already underway, and the companies investing in it aren’t treating personalisation as a feature to bolt on later. They’re treating it as the foundation layer that determines whether their AI systems deliver anything more valuable than an expensive autocomplete function. That distinction matters more than most of the other announcements combined.


