Why AI Simulation Is Changing Corporate Learning Forever
Discover how AI simulation tools like CAISY are shifting corporate learning from content consumption to active practice, exposing critical gaps in L&D strategy.
The bit of the Skillsoft demo that should make every Head of Learning pause is not the personalised learning paths or the dashboard for spotting skills gaps. It is CAISY, Skillsoft's AI-powered simulation tool, which lets employees practise real-world business scenarios, including leadership conversations and responsible AI usage, with dynamic feedback and scoring. That is a quiet but important shift. For thirty years, corporate learning has mostly been about consuming content. Watch the module, click next, take the quiz, get the certificate, forget it within a fortnight. Practice has been reserved for the lucky few who could afford coaches, role-plays or expensive off-sites. If simulation at scale becomes the default, the centre of gravity in L&D moves from "did you complete the course" to "can you actually do the thing".
Which brings us to the question CTOs and CDOs should be asking, but rarely are. If the half-life of a skill is now shorter than the time it takes to commission, build and roll out a training programme, your library is out of date the day it goes live. Annual content refresh cycles made sense when the underlying tools changed slowly. They do not make sense for prompting, agent design, model evaluation, or the ethics of AI in regulated work. By the time the procurement form has been signed off, the model has had three updates and the vendor has changed its pricing.
So the operational question is not "should we buy an AI-native platform". The question is: who owns the trigger to update our curriculum, and how often do they pull it?
In most organisations I work with, the honest answer is "nobody, and rarely". L&D owns the budget but not the technical fluency to know when something has shifted. The data team has the fluency but no remit over learning. The CIO has the remit but is buried in infrastructure. So the curriculum drifts, quietly, while the slide deck still says "AI strategy refreshed Q1".
There is a more interesting move buried in the same demo. Sastry showed how the Skillsoft platform lets enterprises generate new training materials from their own internal documents, including policies and domain-specific knowledge. Treat that capability seriously and you stop buying training as a finished product and start treating it as a pipeline. Your policy updates, your incident reviews, your post-mortems, your customer service transcripts, your engineering RFCs, all of it becomes raw material for the next module. The teachable practice is being created every day by the people doing the work. The job of L&D becomes curation, governance and quality control, not authoring.
That is a genuinely different operating model. It also exposes a problem most organisations have not solved: the people using these tools every day are usually not the people writing the training. The frontline analyst who has worked out, by trial and error, that the model hallucinates on a particular type of contract clause is the person whose insight should be in next week's module. Right now, that knowledge dies in a Slack thread.
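To make that concrete, here is a minimal sketch of what "training as a pipeline" might look like, with exactly that kind of frontline insight as the raw material. Everything in it is illustrative: the generate_draft function is a stand-in for whatever generation capability your platform exposes, the names are mine, and none of this describes Skillsoft's actual implementation. It is a picture of the operating model, not a blueprint.

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable

# Sketch of the operating model: internal artefacts go in, draft modules
# come out, and nothing ships without sign-off from a named owner.
# generate_draft is a hypothetical placeholder, not a real vendor API.

@dataclass
class SourceArtefact:
    kind: str       # "policy", "post-mortem", "rfc", "transcript", ...
    title: str
    body: str
    updated: date

@dataclass
class DraftModule:
    source: SourceArtefact
    outline: str
    owner: str      # the named person who pulls the update trigger
    approved: bool = False

def build_drafts(artefacts: list[SourceArtefact],
                 generate_draft: Callable[[str], str],
                 owner: str,
                 max_age_days: int = 90) -> list[DraftModule]:
    """Turn recent internal artefacts into draft modules awaiting review."""
    today = date.today()
    drafts = []
    for a in artefacts:
        if (today - a.updated).days > max_age_days:
            continue  # stale material never reaches the curriculum
        outline = generate_draft(f"{a.kind}: {a.title}\n\n{a.body}")
        drafts.append(DraftModule(source=a, outline=outline, owner=owner))
    return drafts

def approve(draft: DraftModule, reviewer: str) -> DraftModule:
    """Curation and quality control: a human signs off before publication."""
    if reviewer != draft.owner:
        raise PermissionError("Only the named curriculum owner can approve.")
    draft.approved = True
    return draft

if __name__ == "__main__":
    # Placeholder generator; in practice this would call your platform or model.
    fake_generate = lambda text: f"Draft outline based on: {text[:40]}..."
    artefacts = [SourceArtefact("post-mortem", "Contract clause hallucination",
                                "Model fails on clause type X...", date.today())]
    for d in build_drafts(artefacts, fake_generate, owner="Head of Learning"):
        approve(d, reviewer="Head of Learning")
        print(d.outline, "| approved:", d.approved)
```

The detail that matters in the sketch is the approval gate, not the generation step. Drafting is cheap and continuous; the named owner's sign-off is where curation, governance and quality control actually live.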
A few things worth thinking about before you sign the next platform contract.
Who pulls the update trigger. Name a person, not a committee. Give them a quarterly review cadence at minimum, monthly for anything touching generative AI tools or regulated workflows. If no one owns the trigger, no one will pull it.
Where your teachable practice comes from. If your training material is still being written exclusively by external instructional designers, you are paying twice. Once for the content, and once more in the gap between what the content says and what your people actually do. Build a route for practitioner insight to feed the curriculum, with proper review.
What counts as evidence of learning. Completion rates are vanity. Behaviour change in the work itself is the only metric that matters, and simulation tools like CAISY at least gesture in that direction by scoring practice rather than recall.
The deeper point, and the one I keep coming back to with the leaders I advise, is that AI does not make training obsolete. It makes the cost of bad training visible. When the tools change every quarter and the workforce is using them whether you have trained them or not, an out-of-date learning programme is no longer a small inefficiency. It is a governance problem with your name on it.
So the question for Monday morning is small and specific: when was the last time someone reviewed your AI curriculum, and who decided it was still fit for purpose?

