NewsBeat

DWP explains huge jump in Universal Credit claims

Cambridgeshire Live

The Department for Work and Pensions has clarified the reason behind the massive increase in Universal Credit claims

The Department for Work and Pensions has responded to the sharp surge in Universal Credit claimants over recent years, clarifying that the figures are not what they might appear. Nearly 80% of these new recipients did not submit fresh claims for the benefit.

Since 2022, six legacy benefits have been progressively consolidated into Universal Credit, and the DWP confirmed this accounts for the overwhelming majority of the striking rise in claims. The department posted on X: “Nearly 80% of the increase is people being moved from old benefits onto Universal Credit. Not new claims. A transition we inherited.

“And it’s the same story for those with no work requirements – at least 72% of that increase is legacy benefit claimants moving across.”

In December 2025, the total number of Universal Credit claimants across Britain reached 8.34 million, an increase of almost one million since December 2024. Figures released on Tuesday revealed that more than 775,000 of these individuals had been transferred from legacy benefits.

In short, the considerable rise in Universal Credit recipients since 2022 is largely the result of a managed administrative transfer, not a sign that significantly more people are making new claims for the benefit. The transition from legacy benefits to Universal Credit has been progressing through a managed migration process, reports the Express.

Those affected were issued migration notices and given the opportunity to transfer their claim to Universal Credit with Transitional Protection before their existing benefits ceased.

Legacy benefits being moved to Universal Credit:

  • Income-related Employment and Support Allowance (ESA)
  • Child Tax Credit
  • Working Tax Credit
  • Housing Benefit
  • Income Support
  • Income-based Jobseeker’s Allowance (JSA)

Certain benefits, such as Working Tax Credits and Child Tax Credits, have already officially come to an end. The Government expects the final stages of the migration to be concluded by the end of March.

The managed migration process commences when an individual receives their migration notice. This will include their own personal deadline by which they must apply for Universal Credit in order to receive Transitional Protection, which guarantees they will not be left financially worse off under the new system.

For instance, if someone was receiving £600 a month from Tax Credits but only qualifies for £400 from Universal Credit under standard eligibility rules, Transitional Protection will supplement this with an additional £200. However, anyone who misses the deadline stated in their migration notice forfeits any entitlement to Transitional Protection.
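The top-up described above amounts to a simple difference calculation. The sketch below is illustrative only: the function name is ours, and the real DWP award calculation involves further elements and erosion of the protection over time.

```python
def transitional_protection_top_up(legacy_monthly: float, uc_monthly: float) -> float:
    """Monthly top-up so a migrated claimant is not worse off under UC.

    Simplified from the article's example; the actual DWP calculation
    has additional components and erosion rules.
    """
    # Protection only tops up a shortfall; it never reduces the award.
    return max(0.0, legacy_monthly - uc_monthly)

# The article's example: £600 on Tax Credits vs £400 under Universal Credit
print(transitional_protection_top_up(600.0, 400.0))  # → 200.0
```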

Those unable to meet the deadline outlined in their migration notice may also be eligible for reasonable adjustments from the DWP. These could include extended deadlines or the appointment of representatives for individuals who are unable to manage their own affairs.

Earlier this month, Sir Stephen Timms disclosed that more than 150 Complex Case Coaches have been mobilised to offer tailored support, collaborating with local safeguarding teams for especially vulnerable individuals.


New venues sign up to Sunderland Restaurant Week 2026


The event, taking place from Saturday (March 7), will see about 50 venues across the city offer special dining deals, including new participants Ember at the Sheepfolds and The Korean Spoon on Fawcett Street.

Tamer Hassan, owner of Ember, said the restaurant is now ready to take part after 18 months of building its reputation.

Ember at the Sheepfolds is new to Restaurant Week this year. (Image: Ember)

Mr Hassan said: “When we first opened, our main focus was on establishing Ember’s reputation and building strong foundations as a new business.

“At the time, we were conscious that restaurant week could make us exceptionally busy, and we didn’t want to risk putting too much pressure on the team or compromising the standards we were working so hard to set.

“Now, 18 months on, we feel much more established and confident in what we do.

The Korean Spoon on Fawcett Street is also a newcomer to Restaurant Week. (Image: The Korean Spoon)

“It feels like the right time for us to be part of such a brilliant event and to give something back to the community that has supported us so strongly since day one.

“We’re excited to welcome new guests through the door and showcase what Ember is all about.”

Rachel Meng, owner of The Korean Spoon, said the restaurant is looking forward to welcoming diners during the event.

Ms Meng said: “We’re delighted to be taking part in Sunderland Restaurant Week for the first time.

“The support we’ve received since opening has been incredible, and we’re really looking forward to welcoming both new and returning customers to discover our authentic Korean cuisine during the week.”

The full list of venues taking part this year is as follows:

  • 1842
  • Acropolis
  • Angelos
  • Antico
  • Ashbrooke Home
  • Asiana
  • Babaji
  • BobaCat Kitchen
  • Burger Drop
  • Café Floriana
  • Chesters Lounge
  • Deep North
  • Diegos
  • Ember
  • Enfes
  • Esquires
  • Fausto
  • Gatsby
  • Goa
  • Grand Hotel
  • House of Zen
  • I scream for pizza
  • Keel Tavern
  • Koji
  • Marina Vista
  • Mexico 70
  • Mumbai Silk
  • My Dehli
  • Panda Garden
  • Pho 179
  • Port of Call
  • Rio
  • Roma
  • Rumour Has It
  • Saba Maison De Luxe
  • Signatures
  • SIX
  • Spent Grain
  • The 3 Stories
  • The Coffee Snug at The Chair
  • The Korean Spoon
  • The Mad Hatter
  • The Seaburn Bay
  • The Shipwrights
  • The Sweet Petite
  • Vito’s Osteria
  • WEAR
  • Yard Nine

Organised by the City Centre business improvement district (BID) and seafront BID, with support from Nexus, the event offers set menus at price points of £10, £15, £20 and £25.

A variety of cuisines are on offer, including Mediterranean, Asian, Turkish, and Indian.

Diners can download the required vouchers from the Sunderland BID website.


The oil price surge is just one symptom of a supply chain network that is not fit for this age of global tensions


The escalating conflict between Iran, the US and Israel has taken a critical turn. The strait of Hormuz – one of the most important shipping routes for oil and gas – is facing significant disruption. The strait is the main route connecting Persian Gulf ports in Iran and some of the region’s other oil producers to the open ocean.

The strikes on Iran are already having tangible effects: energy flows are slowing, markets are reacting and supply chains are under pressure. This is not just a regional conflict – it is a global supply chain crisis unfolding in real time.

As an expert on supply chains, I am acutely aware of how central the strait is – not only for the stability of the region but also to the functioning of the global economy.

This narrow corridor is one of the world’s most critical chokepoints – around a fifth of the world’s oil passes through the strait daily. Its sudden disruption represents a “chokepoint failure” – a breakdown at a critical node that triggers cascading effects across global systems.

Tanker traffic has dropped sharply, with vessels waiting in surrounding waters as ship owners reassess the risks. Oil prices surged in response to the strikes and the threat to shipping routes. Analysts have warned that prices could climb significantly higher if the disruption persists.

But crucially, this reaction was not driven solely by actual shortages. Markets respond to uncertainty itself. The mere possibility that several million barrels per day could be disrupted is enough to push prices up, even before supply is properly hit. This reflects a broader feature of geopolitical risk: expectations and perceptions can be as economically powerful as material disruptions.

Because energy underpins almost every sector, these price increases transmit rapidly through supply chains. Higher fuel costs raise transportation expenses, increase production costs and ultimately feed into inflation across goods and services that eventually land with consumers.

The strategic importance of the Gulf states

The disruption is not confined to the strait. Instability across the wider Gulf region also affects the United Arab Emirates, as well as other strategically important energy producers and logistics hubs, such as Qatar, Kuwait and Saudi Arabia.

This dimension matters because the Gulf functions not only as an energy supplier but also as a crossroads in global trade and logistics.

Ports such as Dubai handle vast volumes of international shipping, linking Asia, Europe and Africa. As tensions spread, the reliability of these logistics systems is increasingly called into question.

The result is a shift to more widespread insecurity, where both energy flows and trade infrastructure – things like major container ports, shipping lanes, export terminals and storage facilities – are simultaneously at risk.

Energy is the heart of global supply chains. Manufacturing depends on electricity and fuel, transport relies on oil-based logistics and agriculture depends heavily on natural gas-derived fertilisers. When energy flows are disrupted or become more expensive, the effects propagate across entire networks.

Research on geopolitical crises shows that disruptions to key inputs such as oil and gas quickly translate into broader supply chain instability. This affects production, trade and the availability of goods far beyond the conflict zone. The Iran crisis reflects this dynamic. What begins as disruption in a maritime corridor can become a global economic issue within days.

For decades, global supply chains have been optimised for efficiency. This means that they concentrate sourcing and production in regions that minimise costs. This model has delivered large economic benefits, but it has also created weaknesses in the structure.

The crisis in the strait of Hormuz is a prime example of a chokepoint failure. (Image: AustralianCamera/Shutterstock)

The concentration of energy flowing through a single chokepoint such as the strait of Hormuz exemplifies this trade-off. When it is disrupted, the system lacks resilience.

In response, supply chains are likely to accelerate efforts to diversify and invest in alternative energy routes and sources. Countries that are heavily dependent on oil transiting through the Gulf will seek to expand strategic reserves, diversify their import routes and invest in pipelines that bypass maritime chokepoints.

But at the same time, geopolitical instability strengthens the case for renewable energy, electrification and regional energy integration. Expanding solar, wind and green hydrogen capacity reduces exposure to concentrated fossil fuel corridors. And cross-border electricity connections can improve flexibility during shocks. In this sense, resilience is also an energy transition issue.

At the same time, instability in conflict-hit regions can fuel the rise of informal and illegal supply chains, particularly where governance is weakened. These can include things like unregulated oil trading, goods being smuggled through informal maritime routes and labour exploitation hidden within subcontracting chains.

What’s more, supply chains themselves are increasingly shaped by geopolitical forces, as states use trade, energy and logistics networks as instruments of power.

For consumers, this could mean greater price volatility, shortages and reduced choice as firms adjust sourcing strategies in response to sanctions, trade restrictions or security risks. In some cases, it may also mean higher costs over the long term, as businesses prioritise resilience over efficiency.

A turning point for globalisation?

The situation in the strait of Hormuz may mark a turning point in how global supply chains are understood. It has shone a light on a fundamental tension at the heart of globalisation. Efficiency depends on sourcing and production being concentrated in a few locations, but resilience depends on diversification. When critical links in the chain fail, the consequences extend far beyond their immediate location.

This war demonstrates that supply chains are not merely economic systems. They are deeply embedded in geopolitical realities. The challenge ahead is not simply to manage disruption, but to redesign supply chains and energy sources for a world in which geopolitical risk is no longer exceptional, but structural.

Ex-mayor caught by her teenage son ‘having sex with his friend at party’

Daily Record

Misty Roberts, 43, stands accused of having sex with a teenage boy at a pool party in 2024.

An ex-mayor is accused of having sex with her son’s teenage friend at a pool party – with her children claiming they witnessed the alleged offence.

Misty Roberts, 43, allegedly carried out the offence at a party in 2024 while serving as mayor of DeRidder, Louisiana. Her trial on a charge of third-degree rape began last week following numerous delays, according to local media. Roberts resigned from office in late July 2024, days before she was arrested and charged with third-degree rape and contributing to the delinquency of juveniles.

Last week, jurors were shown pictures of the party in question, including of children holding drinks by the pool, reports the Mirror. In interviews played to the court, Roberts’ son told investigators he saw his mother having sex with his friend through the crack of a window, while her daughter told investigators she saw her mother and the teenager “on top of each other”, KPLC reports.

However, on Thursday, when both of Roberts’ children took the stand, her son told the court he was not certain what he saw that night. The prosecution presented a text message in which the son appears to tell Roberts: “He is seventeen.” The alleged victim of this case was identified as 16 years old at the time of the alleged offence, according to KPLC.

On Thursday, the defence and prosecution questioned two forensic interviewers who had spoken with children connected to the case. One interviewed three children, including the alleged victim, in July and August 2024. The second interviewed Roberts’ children in March 2025 at the request of the district attorney’s office.

Roberts’ nephew told the court that he used his phone’s camera to see what was happening in the room that night. He testified that he was unsure if he had hit “record”, but said that if he had, the video was never sent to anyone and he has since deleted it from his Snapchat memories.

When the defence asked Roberts’ nephew why he cleared his Snapchat before handing the phone to investigators, he said that he did it because it contained photos of him and his friends drinking, and he was worried about getting in trouble. He said he did not intend to delete any evidence.

None of the three witnesses who testified on Thursday said they saw any “private parts” of Roberts or the alleged victim. One witness said the teenage boy was shirtless.

After the alleged incident, the mother of the alleged victim texted Roberts to make sure she was not pregnant. The court was shown a screenshot of the message in which Roberts replied that she was on birth control.

The court was shown that Roberts sent a screenshot of her conversation with the boy’s mother to a group chat with her friends, who responded by telling her to take Plan B. A DoorDash driver testified that he delivered an emergency contraceptive to Roberts’ house.

The defence suggested in court that a key part of the interview with Roberts’ son was not transcribed. Defence attorney Adam Johnson claimed the interviewer told the boy: “Just say it once, and we can move on.” He also said the transcription notes are unintelligible.

Roberts had appeared in court in early February to enter her plea of not guilty to two felony charges of indecent behaviour with a juvenile and carnal knowledge of a juvenile.

In her resignation letter in July 2024, Roberts said: “For nearly 15 years, my love and passion for DeRidder has been my foundation while serving as Mayor. I will forever be proud of what we have been able to accomplish – together. This role has rewarded me with many great relationships.

“I am humbled to have witnessed the hard work that took a community to come together and overcome through unprecedented times. However, I must adjust my focus and priorities. Please accept this letter as my formal resignation, effective today.

“To the residents of this city: Thank you for your trust, love and support in me to lead our city into our future of greatness. My love for DeRidder will never waiver.” Roberts was in the middle of a second term as the city’s mayor, to which she was re-elected in 2022 with sixty per cent of the vote.

DeRidder is a city in Louisiana with a population of just under 10,000 people.

Award-winning Hall Hill Farm in County Durham hiring guides


On Saturday, February 21, Hall Hill Farm, which first opened its doors to the public in 1981, welcomed visitors back, marking the start of a landmark season.

The beloved Lanchester visitor attraction, now headed by Ann Darlington and her son Richard, originated as a simple lambing event.

It has since flourished into an immersive farm experience renowned nationally.

Last year, the farm welcomed more than 100,000 visitors, confirming its popularity as a destination for hands-on farm activities among families.

Over time, Hall Hill Farm has won both regional and national tourism accolades.

In the North East England Tourism Awards in 2025, it was awarded the titles of Large Farm Attraction of the Year and Large Visitor Attraction of the Year, further cementing its reputation.

The farm is now hiring guides to join its team.

(Image: HALL HILL FARM)

A job advertisement on the farm’s website reads: “Would you like to join our award winning team?

“Do you enjoy meeting people? Could you handle rabbits and chicks? You will need to have a constant smile whatever the weather and take pride in providing excellent customer service.

“If you are friendly and outgoing, we would love to hear from you.”

For those interested in part-time roles during the weekends and school holidays, the farm is looking for candidates who love animals and enjoy smiling and meeting new people.

Duties will include interacting with the public, tending to animals such as rabbits and chicks, some feeding and mucking out, and general cleaning tasks.

Applicants must be able to work approximately 10am to 5pm on Saturdays or Sundays and bank holidays, and must be available for Easter holidays and all bank holiday weekends when the farm is open.

The minimum age for applicants is 16 and forms should be sent to chris@hallhill.co.uk.


The best cooling mattresses, tested by a hot sleeper


First, look at the structure. Traditionally, foams are dense, but dedicated cooling memory foam or hybrid mattresses feature enhanced technology for better temperature regulation.

Look for phrases such as “open cell”, which means the foam may have gaps for air to pass through, and “gel beads” or “phase change materials”, which both absorb excess thermal energy to draw it away from the body.

Air flows more freely through pocket sprung mattresses, due to the space between the coils. Some designs also incorporate natural materials, such as wool, to dissipate heat and wick moisture.

Composition is important for the cover, too. Natural fibres, such as cotton, are more breathable than synthetic fabrics. Textiles derived from wood pulp (any described as viscose, bamboo, eucalyptus or Tencel) also tend to be better for temperature regulation.

As for firmness, softer mattresses can trap more heat as you sink into them. For further advice, read our guide on how to choose a mattress.


Salad of roast tomatoes and fennel with preserved lemon


This is Moroccan-inspired and very good with roast lamb or spicy barbecued mackerel. If you want to have it on its own, yogurt or labneh are good alongside and, of course, flatbread or couscous. It might seem like a hassle to roast the fennel and tomatoes separately but it does make things easier when you assemble the salad. Each element stays intact.

You can use coriander or mint instead of parsley in the dressing, and extend the salad by adding fresh leaves (rocket, watercress or baby spinach). Just note that if you add leaves you’ll need to make more dressing.


Time to retrain? How to future-proof your career in the AI age


These days, gen Z appears to be pivoting towards skilled trades, perhaps driven by a desire for “AI-proof” job security. Many young workers now view blue-collar careers as more stable than office jobs in the face of rapid change.

It’s not just the youngest workers. A growing sense of unease about AI is reshaping how many people think about work. Within younger groups, this shift is showing up in hard numbers. In the UK, hiring of gen Z workers (those born in or after 1997) in construction and trade roles rose by 16.8% in the year to January 2026. The result is what some are calling the “toolbelt generation”.

But elsewhere in the workforce, many professionals are taking a pragmatic approach. Instead of competing with automation, they are learning how to work alongside it. Building fluency with AI tools is increasingly seen as a form of career insurance.

The goal is to move into roles designing, managing or directing AI systems. In that model, technology becomes a force multiplier (that is, it increases productivity), rather than a threat.

This shift is also driven by economics. AI-related skills command a clear premium in the jobs market. Beyond pay, there are other benefits. AI systems are particularly effective at handling repetitive, process-heavy tasks. When those functions are automated, employees can redirect their energy towards strategy, creative problem-solving and higher-value decision-making.

Many find that this shift not only improves productivity but also makes their work more engaging and meaningful.

Importantly, entering the AI space does not always require a computer science degree. Through online learning, bootcamps or just practical experimentation, workers can gain expertise in areas such as prompt engineering, workflow automation or AI application. The barrier to entry is lower than many assume, especially for those who already understand a specific industry.

Industry knowledge is, in fact, a major advantage. Organisations increasingly want people who can bridge domain expertise with technical capability. A healthcare professional who knows what patients need as well as understanding AI tools; a finance specialist who can apply machine learning to risk analysis; or a tradesperson who uses smart systems for efficiency can all bring unique value.

These hybrid profiles are becoming central to how companies integrate AI, creating interdisciplinary roles that did not exist a few years ago.

The flip side: risks and challenges

AI is creating opportunity, but it also brings risks and trade-offs. One of the most immediate challenges is the pace of change. Keeping skills current can feel like trying to hit a moving target. Over time, constantly doing more can lead to fatigue and burnout, particularly in highly competitive environments where staying relevant is tied to job security.

There is also an upfront cost. Transitioning into AI, especially into more technical or advanced positions, can require an investment of time and money before any financial return materialises.

And AI is said to be contributing to a hollowing out of traditional career ladders. Many entry-level roles, once considered stepping stones into industries such as finance or marketing, are being automated or cut back. As a result, entry pathways into certain professions may narrow before new ones are established.

Finally, working in AI often means grappling with complex ethical and safety questions. Workers must consider issues such as data bias, privacy, transparency and accountability. Decisions made during system design and deployment can have wide-reaching consequences. Navigating these responsibilities requires sound judgement and a clear understanding of these consequences.

Looking ahead

In many sectors, AI is unlikely to eliminate entire professions. Instead, it will reshape them. Tasks will be automated, workflows will evolve and job descriptions will shift. For most professionals, the practical response is not to abandon their field, but to integrate AI into it.

At the same time, technical fluency alone will not be enough. As automation takes over routine and rules-based work, human skills become more important. Critical thinking, judgement, empathy, communication and complex problem-solving remain difficult to replicate with algorithms. The more advanced the technology becomes, the more valuable distinctly human strengths appear to be.

There is also a widening gap across industries. AI is generating new, high-paying roles in areas such as engineering, data science and AI strategy. However, in positions where automation only partially replaces tasks, productivity may increase while wages do not. In some cases, partial automation can stifle pay or reduce opportunities for promotion.

AI may open up new roles and opportunities within your current sector. (Image: DC Studio/Shutterstock)

Retraining and career pivoting in the AI age is becoming a mainstream response to structural change. AI is reshaping how work is done across sectors, while opening up new roles that are centred on oversight, integration, strategy and innovation. For many professionals, the question is not whether change is coming but how proactively they choose to respond.

The most resilient path forward is rarely about abandoning your field entirely. More often, it involves layering AI fluency on top of existing expertise. A finance professional who understands automation tools, for example, is better positioned than someone relying on legacy skills alone. In this sense, the objective of retraining is to move closer to the decision-making layer of work.

Ultimately, the AI era is not about a binary choice between optimism and fear. It is about positioning. Retraining and career pivoting are becoming central strategies for navigating this shift with intention rather than reacting after the fact.


Seven tips for talking to children and young people about generative AI


For most of us, generative AI (GenAI) has moved from novelty to everyday infrastructure astonishingly fast. Many adults now use tools like chatbots at work or casually, and many children are already encountering them through homework “help”, entertainment, or social sharing.

Unsupervised use of generative AI can expose children and young people to confidently presented misinformation, manipulative “keep chatting” dynamics, and inappropriate or emotionally risky content. The tone and conversational dynamics of many chatbots can encourage secrecy and over-reliance, or mimic authority without real understanding or duty of care. In school contexts, GenAI can quietly undermine learning, turning homework and writing into shortcuts rather than skill-building.

I’ve helped create new school resources on GenAI, including guidance for parents. But the most effective safety measures still depend on adults setting boundaries, modelling critical thinking, and staying close enough to a child’s digital life to notice what’s changing in it. What follows are some practical ways to talk about, assess, and limit younger people’s GenAI use.

1. Begin with curiosity – not crackdowns

If you start by telling a child that they shouldn’t use GenAI, you may prompt secrecy about their current and future use. A better opener could be a simple request to demonstrate the AI tools or uses they’re familiar with. Ask what they like about them, what they help with, and what they’d never use them for. The initial aim should be to normalise discussing AI, though not to normalise unrestricted use.

From here it’s easier to acknowledge that these are powerful and intriguing tools, but not a person or an authority, and not without risks and necessary considerations.

2. Don’t treat stated age limits as optional

An awkward reality that parents may currently have missed is that many popular AI services set 13 as a minimum age (with parental permission under 18). OpenAI states that ChatGPT “is not meant for children under 13”, and still requires parental consent for ages 13 to 18. The AI chatbot ecosystem is inconsistent, however. Anthropic requires Claude users to be 18+, explicitly citing heightened risks for younger users. Google, meanwhile, allows supervised access to Gemini for under-13s via parent-enabled controls.

Your practical rule should be to treat age limits as a clear safety signal rather than a box-ticking exercise. If a service says “13+” or “18+”, that’s telling you something about risk, content exposure and the likelihood of harm from unsupervised use by young people.

3. Encourage fact-checking

Children (and indeed plenty of adults) can mistake confidence for correctness. When talking about GenAI with children, emphasise that AI chatbots can and regularly do “hallucinate”. They invent plausible-sounding details and mix fabrication with fact. Understanding that their speedy and well-stated responses come at a cost of large and small inaccuracies is key.

Encourage young people to check what GenAI tells them. (Image: Pheelings media/Shutterstock)

Encourage verifying anything important – news, health claims, law, school facts, statements that may be repeated as “true”.

4. Help them know when to stop

Large language models (LLMs) are designed to keep conversation flowing. They compliment, encourage, reassure and suggest what to do next. This may be helpful for brainstorming but it’s potentially dangerous for emotionally loaded topics where a young person is vulnerable, impressionable, or isolated.

Recent litigation around “companion” chatbots has alleged that vulnerable young users were pulled into harmful spirals, including self-harm risk and secrecy from parents. These are complex and unfolding cases, but they are serious enough to treat as a major warning sign about unsupervised, open-ended AI conversations for minors.

Parents and teachers should name a firm boundary: no chatbot is a counsellor, therapist, or trusted confidant. If a conversation becomes sexual, self-harm related, frightening, or intensely personal, the rule should be to stop and speak to a trusted adult.

Advertisement

5. Don’t feed the machine personal data

Young people often understand privacy better when it’s framed as something tangible. Some rules: don’t share a full name, address, school, phone number, or identifiable photos. Don’t upload private documents or screenshots. Don’t paste in other people’s personal information. If you wouldn’t post it on a public noticeboard, don’t paste it into a chatbot.

6. AI should support the work, not do the work

GenAI poses an educational risk that deserves far more attention: cognitive off-loading. This happens when the tool performs the thinking step – the learner may finish faster, but will learn less. Research is increasingly linking heavier AI reliance with reduced critical thinking and lower cognitive effort, with off-loading and automation bias proposed as mechanisms. A practical way to explain this to young people is that “AI can help you learn, but it can also help you avoid learning”.




Read also:
How generative AI is really changing education – by outsourcing the production of knowledge to big tech


If you’re helping with homework, allow the use of GenAI for asking for an explanation in simpler terms, or requesting feedback on a draft. Don’t allow writing the essay, answering the homework questions directly, or producing a solution that the student can’t explain.


7. Make AI use visible and social

Where AI use is permitted, aim to reduce secrecy. Use AI in shared spaces at home. Set agreed times, not late-night private use. Coordinate with other adults: parents should share their concerns and approaches with other parents and with school staff.

We should treat Generative AI as we wish we’d treated social media much earlier – not as just another app, but as a behavioural technology that shapes attention, learning, confidence and relationships. Being AI aware is not about panic, but about adults building enough knowledge and confidence to guide children toward safe, age-appropriate, genuinely educational use, while regulation and curriculum development catch up.


NewsBeat

Gary Neville urges Chelsea to sign three players after Arsenal defeat | Football


Gary Neville has urged Chelsea to sign three ‘top-class’ players after a frustrating Premier League defeat to Arsenal.

Chelsea’s first wobble under new boss Liam Rosenior continued on Sunday with the Blues suffering a 2-1 defeat at the Emirates Stadium.

Rosenior’s side looked marginally on top in the second half after Piero Hincapie’s own goal cancelled out William Saliba’s early opener.

But Jurrien Timber scored Arsenal’s winner following another mistake from Robert Sanchez and Chelsea were unable to respond after losing Pedro Neto to a red card.


While Arsenal restored their five-point lead at the top of the Premier League, a third straight game without a win leaves Chelsea sixth in the table, six points outside the top-four places.

Neville says he ‘still can’t work Chelsea out’ but is adamant they need more ‘experience’ in three key areas of the pitch.

‘I’ve never commentated on a team that make me feel so many different emotions during a single game,’ former Manchester United and England defender Neville said on his Sky Sports podcast.


‘You can watch them and think they’re naive, they’re too nice, they’re ill-disciplined or that they’re electric, they’re a great possession team and they’re so talented.

‘You can think so many different things. I flip between thinking they’re miles away and thinking if they can get a goalkeeper, an experienced centre-back and an experienced centre-forward they could be in business.


‘They have to keep players fit but they need a top-class goalkeeper, a top-class centre-back with experience and a top-class centre forward to accompany Joao Pedro and Liam Delap, not to replace them.


‘Have three strikers – Pedro who is very good, Delap who is young with potential – and then bring someone in. I know that’s difficult – these players are not there, but they need a centre-forward with experience.

‘I’m not talking about a 33-year-old striker but someone who is 27 or 28 and the same at the back, a player who has real presence who can give them some solidity.

‘I’m going to talk about the goalkeeper as well because Robert Sanchez invites problems, every time I watch him my heart is in my mouth.

‘He flaps at the Arsenal goal so for me Chelsea are three players short, they need players in those positions.


‘I’ve got many thoughts about Chelsea and I still can’t quite work them out.’

Rosenior was frustrated with both Chelsea’s defending at set-pieces and their disciplinary issues after the defeat at Arsenal.

‘I’m really frustrated with the end result,’ he said. ‘A lot of good things in our game but we were undone by two set pieces like we were against Burnley last week.

‘There were some outstanding performances, technically and tactically, but we were undone by moments. Same as against Burnley and against Leeds.


‘I don’t just want to push the league leaders very hard. We’re Chelsea, we want to win games of football.

‘Between both boxes, we were very, very good. I felt we were the better team by far in the second half but we weren’t ruthless in the moment.’

Chelsea face another huge game on Wednesday night as they visit top-four rivals Aston Villa, who suffered a shock defeat to bottom-placed Wolves last time out.



NewsBeat

Will AI tools make better police officers?


Police officers often work with partial information under severe time constraints in situations that can change in seconds. Whether investigating a crime or patrolling a neighbourhood, they regularly have to make predictions based on instinct.

This “gut policing” isn’t just guesswork – it’s fast pattern recognition. It comes from training and years of dealing with real incidents, learning from colleagues, and building an instinctive sense of what matters and what doesn’t.

But instincts are no longer the only way police connect the dots. Many police forces are investing in AI-enabled tools, including predictive policing algorithms that forecast crime hotspots and offender assessment systems designed to support decision-making.




Read also:
A ‘black box’ AI system has been influencing criminal justice decisions for over two decades – it’s time to open it up


This reflects a wider global trend: police forces are integrating AI into everyday policing. These AI-enabled tools draw on large volumes of data and patterns that would be impossible for any single officer to analyse in real time. The aim is straightforward: to help ensure decisions are based on strong evidence and reliable data, rather than relying solely on instinct or experience.

Many people appear to accept the use of AI technology by police forces – so long as there are clear guidelines in place first.


AI has long been discussed as a threat to jobs and livelihoods. But what’s the reality? In this series, we explore the impact AI is already having on specific occupations – and how people in these jobs feel about their new AI assistants.


In England, police forces are already using AI tools in day-to-day work. These include Untrite Thrive, which helps staff in police control rooms decide how to allocate resources. Another example is Qlik Sense, used by Avon and Somerset Police for monitoring the likelihood of reoffending or perpetrating a crime. These developments align with a broader government agenda focused on efficiency and cost reduction.

But once you swap human judgment for more automated predictions, the value of officers’ traditional connect-the-dots police logic can be lost. There have been plenty of examples where AI tools have flagged the wrong people, the wrong places, or the wrong risks.

Unverified information

A House of Commons select committee recently highlighted serious failings in West Midlands Police’s use of the AI assistant Microsoft Copilot in its decision to stop Israeli fans of Maccabi Tel Aviv football club from travelling to Birmingham for a Europa League match against Aston Villa last November.

Claims made by this force about alleged disorder involving Maccabi fans at past matches were based on inaccurate information generated by Copilot, including a supposed game between the Israeli club and West Ham United that never happened.


“Information that showed the Maccabi fans to be a high risk was trusted without proper scrutiny,” explained the committee’s chair Karen Bradley. “Shockingly, this included unverified information generated by AI.”

This inaccurate AI‑generated information was repeated by senior police officers in safety advisory group meetings and even in oral evidence to MPs, demonstrating a lack of due diligence and overreliance on unverified AI outputs. The case is now subject to an investigation by the Independent Office for Police Conduct.


And this was not an isolated incident. The Harm Assessment Risk Tool deployed by Durham Constabulary was found to have displayed many flaws, from overestimation of the likelihood of reoffending to discrimination in its datasets.


And the Metropolitan Police’s now-discontinued Gang Matrix, a database that recorded intelligence related to alleged gang members, was heavily criticised by the Information Commissioner’s Office for unfairly labelling young black men as high‑risk based on flawed scoring.

Relying on AI-driven tools can be a double-edged sword in policing. They can improve decisions, but can also reinforce bias and amplify mistakes. In our experience of working with police forces in England, AI‑supported decision‑making works best when police officers combine their operational experience with data‑driven insights.

Reinforcing biases

Our ongoing study of AI use in policing shows that uncritical reliance on AI risks reinforcing existing biases, disproportionately affecting the poorest and most marginalised communities.

Our research, which is yet to be published, suggests that effective use of AI requires a difficult balance: officers must both trust and mistrust AI recommendations at the same time, maintaining a vigilant mindset.


To prevent biases creeping into AI‑supported decisions, police forces should invest in bias‑awareness training that prepares officers to question AI outputs regularly and constructively.

The National Police Chiefs’ Council covenant mandates that AI should support rather than replace human judgment. This is a step in the right direction. Yet even this principle can backfire if police officers treat AI recommendations as objective truth, rather than as guidance that requires careful scrutiny.

These concerns take on renewed urgency in light of the government’s introduction of a national predictive policing prototype, announced in August 2025. The system, scheduled for nationwide deployment by 2030, combines AI‑powered crime mapping with behavioural‑pattern analysis, supported by a £4 million initial investment.

It draws on data from police forces, local councils and social services, and builds directly on the expanding fleet of live facial recognition vans now operating in seven forces across England and Wales.




Read also:
Facial recognition technology used by police is now very accurate – but public understanding lags behind


At the same time, developments inside policing organisations highlight the limits of technological oversight. The Met was recently reported to have begun using AI tools to flag potential officer misconduct by analysing internal data such as sickness records, absences and overtime patterns.

While the Met argues that such systems help raise standards and rebuild public trust, critics warn that such monitoring risks misclassifying workplace pressures as misconduct and eroding accountability rather than strengthening it.

Ultimately, whether AI technology improves policing outcomes depends on the governance surrounding it. Ensuring there is a vigilant human in every AI loop should be a non-negotiable safeguard.

