DTX Manchester hears debate on the future of work as well as warnings about over-reliance on technology
We’ve all heard the apocalyptic predictions that AI is going to take millions of jobs.
That’s hard to hear for those of us with a few years of work under our belts. But it’s really very bleak indeed for young people looking to start their careers, only to read every five minutes that there apparently won’t be any careers to have.
So is it true? And if it is – even only partly – what on earth can we do to make sure young people can start their careers?
The Digital Transformation Expo Manchester, known as DTX, is all about working out what technological change means for businesses and their employees. And so on the main stage, a group of experts gathered to debate the “elephant in the room” – if AI really does eliminate junior and entry roles where people learn on the job, then how can companies develop the next generation of leaders?
Host Gwyn Slee, chief technology officer and “AI Evangelist” at G-Star Intelligence, said that using AI in the short term might make sense because it’s “faster and cheaper”. But, he asked, is that storing up problems down the line because young skilled people aren’t being hired?
Richard Whittle, professor of artificial intelligence and public policy at the University of Salford, was blunt – our current way of training and developing experts, he said, is over.
“There is little point in your organisation hiring as you were 10 or 20 years ago…” he added. Instead of recruiting young graduates as generalists, companies might have to move to “deep specialisation”, hiring people with very specific and often very technical skills.
Keeley Crockett, professor in computational intelligence at Manchester Metropolitan University, said young recruits needed ethical skills and the ability to verify information.
She said the UK needed to offer hope to its young people as we need a “productive workforce of all ages” with opportunities open to all.
Caroline Ellis, AI & data ethics lead at NatWest Group, agreed firms needed to rethink what “entry level” means, to ensure there are still career opportunities for young people.
She asked bosses: “How is your business going to run in a few years’ time?” If there are no junior or mid-managers, who will run those businesses once existing leaders move on or retire?
The panel later debated how AI should be used in organisations, and what skills were most vital to make it work.
Keeley said AI could be well used as a tool to “augment” human work, but that it needed to be used by people with the right core knowledge and cognitive skills.
She said: “Unless you know how machine learning models work… how can you possibly verify what that AI comes out with? I’m very worried about complacency down the line.”
Caroline agreed companies needed to have the right culture of AI use, ensuring they were using “the right tools in the right place”.
Host Gwyn added: “AI can be superhuman in some areas… but if you want it to tell you a joke, it’s absolute rubbish.”
Richard then asked the audience to take a step back and ask why AI is being used, particularly when it can give a worse outcome than human output. “Because,” he said, “it’s really cheap”.
He added: “Why are we doing this? I would argue it’s because of the wider economic forces all our businesses are facing at the moment… This is why we’re seeing an absolute explosion in very poor but very cheap AI output.”
Richard said firms needed to assess where the “humans in the loop” should be, and to make realistic cost assessments.
Keeley added that companies needed to stay accountable to all their stakeholders in the age of AI. If businesses lose core skills through job cuts or a lack of recruitment, and then an AI gets something wrong, “where is the liability”?
One upcoming issue for AI users will be the costs of those services. In the early days of mass AI adoption, those tools have been cheap or even free to use. But in due course, and once AI is embedded in business, that will change.
Gwyn asked what will happen to AI use once that cheap pricing ends. Richard reflected on the way that other tech businesses, such as ride sharing apps, have relied on using cheap prices to grab market share before raising prices.
He said that when AI prices rise, businesses will need to reflect on how much they are using it.
“We are using AI for weird things we wouldn’t be using it for”, he said, asking whether, for example, we would still use AI to generate simple emails or social media posts if each use cost more. If people were paying the true cost of those services, he said, they wouldn’t use them as much.
So what happens if companies have shed jobs and abandoned graduate training schemes, hoping to save cash, “and then the price goes up 10 or 15 times?” Unless you’ve thought about your organisation’s long-term future and the skills you will need, Richard said, you may end up with a total dependency on AI when “the pricing is outside your control”.
Caroline agreed, saying: “We are paying less than we should and they are going to put the prices up.” She added: “If you were paying 12 times (what you pay now), would you still choose to deploy it?”
Richard added that he had “honestly not seen” a long-term financial forecast of AI use that was correct, because people assume AI will stay free or cheap. So organisations should assess carefully what they need to use AI for and what still needs people.
Later, he said it would be hard and expensive for firms to develop human expertise inside an organisation once it’s gone. He referenced EM Forster’s prophetic science fiction story The Machine Stops, in which people become dependent on technology, and said: “Part of the answer is – how do we value future expertise now?”
Keeley said organisations needed to value their people and the skills and deep knowledge they have.
She said: “If you lose that, maybe short term you’ll have a solution, but in three to five years’ time you won’t.”
And Caroline added: “Plus if AI takes all the jobs, who’s going to buy your products and services?”
Lessons learned from the Post Office scandal
A sobering reminder of what can go wrong when companies rely on tech came later when Bryan Glick, editor of Computer Weekly, spoke about the Post Office scandal, when hundreds of people were wrongly prosecuted for failures of the Horizon IT system.
For many people the sheer scale of the scandal only became clear in 2024 when the TV drama Mr Bates vs The Post Office aired, leading to then Prime Minister Rishi Sunak announcing new measures to compensate the hundreds of sub-postmasters and sub-postmistresses who were wrongfully convicted.
But Bryan pointed out this was “a scandal whose roots went back 24 years” and that Computer Weekly had been reporting on it for all that time, recognising the “noxious whiff” of a serious problem, with officials refusing to accept that there could be anything wrong with the Horizon system despite mounting evidence of issues.
The 2025 report into the scandal, by Sir Wyn Williams, said the Post Office “maintained the fiction that its data was always accurate”.
Bryan said that the Post Office scandal, and other high-profile scandals involving government, seemed like perfect storms in their own right. It would, he said, be easy to look at each individual scandal and say that it couldn’t happen again. He warned: “I’m here now to say to you, think again”.
It will be harder in future to solve any tech crises involving AI, he said, as we often don’t know exactly what is going on inside those models.
The key issue for organisations using AI and other tech suites is accountability. Leaders and employees alike need to know when to ask questions of the technology and of each other, and must not assume technology is infallible.
Bryan was speaking on stage to Sharron Gunn, chief executive of BCS, The Chartered Institute for IT. She said that with AI there always needed to be a “human in the loop”, with everyone at a company, including board members, needing to know what to ask about technology. All directors, she suggested, should have basic tech training.
Asked about what lessons he thought government and business could take from the Post Office scandal, Bryan said organisations needed to understand that “technology is a tool” and “not a magic solution to all your economic and social problems”.
And, he said, people were vital to the success of any implementation of technology. “AI can help,” he said, “but government needs to listen to the experiences of the people who implement it” – not just to tech firms.