Plans show 14 tables would be installed at the historic Grecian Mills complex
Part of a listed textile mill in Bolton is set to be converted into a snooker hall.
Plans published within the past week seek to transform Catherine House on Lever Street. The building forms part of the Grade II listed Grecian Mills complex, south of Bolton town centre.
Floor plans published as part of the planning application show that 14 full-size snooker tables would be installed in the building over two floors.
A planning statement in support of the plans on behalf of applicant Yasar Wasim has been published on the council’s planning portal.
It said: “The proposal seeks change of use and works to facilitate a snooker club as the primary and predominant use across both the ground and first floors.
“A small café and kitchen facility is included at ground-floor level solely as an ancillary refreshment offer for patrons of the snooker club.”
Catherine House is a two-storey red brick building with a slate roof, fronting Lever Street. It is currently vacant.
The only external alteration within the plans is the installation of an extractor fan on the rear ground floor to serve the proposed kitchen.
The planning statement said that the council had advised that, in principle, a snooker hall and café may be capable of justification given the mixed-use nature of Grecian Mills, but identified key requirements to be addressed through supporting information.
The statement adds: “In economic terms, the proposal will re-activate a redundant building, support business investment and create employment.
“In social terms, [it] will provide an indoor recreation facility which, by its nature, supports social interaction and contributes to a ‘strong, vibrant and healthy’ community function.”
The Grecian Mills complex, of which Catherine House is a part, is Grade II listed and was formerly a cotton spinning and doubling works. The main spinning mill dates from 1845, with other buildings dating from the 1850s and 1860s.
Catherine House is noted for its heavy Italianate detailing.
A heritage report, supporting the application, said: “As part of a large mill complex, the building provides evidence of the rise of industry in Bolton through the mid to late 19th century, and along with the remainder of the complex serves as a reminder to the historic industry, which was responsible for much of the growth of the area.
“In essence the key drivers of the building’s significance will be conserved, change being of a very low level and directed to areas of lower significance.”
Planners in Bolton will consider the application in the coming weeks.
First, look at the structure. Traditionally, foams are dense, but dedicated cooling memory foam or hybrid mattresses feature enhanced technology for better temperature regulation.
Look for phrases such as “open cell”, which means the foam may have gaps for air to pass through, and “gel beads” or “phase change materials”, which both absorb excess thermal energy to draw it away from the body.
Air flows more freely through pocket sprung mattresses, due to the space between the coils. Some designs also incorporate natural materials, such as wool, to dissipate heat and wick moisture.
Composition is important for the cover, too. Natural fibres, such as cotton, are more breathable than synthetic fabrics. Textiles derived from wood pulp (any described as viscose, bamboo, eucalyptus or Tencel) also tend to be better for temperature regulation.
As for firmness, softer mattresses can trap more heat as you sink into them. For further advice, read our guide on how to choose a mattress.
This is Moroccan-inspired and very good with roast lamb or spicy barbecued mackerel. If you want to have it on its own, yogurt or labneh are good alongside and, of course, flatbread or couscous. It might seem like a hassle to roast the fennel and tomatoes separately but it does make things easier when you assemble the salad. Each element stays intact.
You can use coriander or mint instead of parsley in the dressing, and extend the salad by adding fresh leaves (rocket, watercress or baby spinach). Just note that if you add leaves you’ll need to make more dressing.
These days, gen Z appears to be pivoting towards skilled trades, perhaps driven by a desire for “AI-proof” job security. Many young workers now view blue-collar careers as more stable than office jobs in the face of rapid change.
It’s not just the youngest workers. A growing sense of unease about AI is reshaping how many people think about work. Within younger groups, this shift is showing up in hard numbers. In the UK, hiring of gen Z workers (those born in or after 1997) in construction and trade roles rose by 16.8% in the year to January 2026. The result is what some are calling the “toolbelt generation”.
But elsewhere in the workforce, many professionals are taking a pragmatic approach. Instead of competing with automation, they are learning how to work alongside it. Building fluency with AI tools is increasingly seen as a form of career insurance.
The goal is to move into roles designing, managing or directing AI systems. In that model, technology becomes a force multiplier (that is, it increases productivity), rather than a threat.
This shift is also driven by economics. AI-related skills command a clear premium in the jobs market. Beyond pay, there are other benefits. AI systems are particularly effective at handling repetitive, process-heavy tasks. When those functions are automated, employees can redirect their energy towards strategy, creative problem-solving and higher-value decision-making.
Many find that this shift not only improves productivity but also makes their work more engaging and meaningful.
Importantly, entering the AI space does not always require a computer science degree. Through online learning, bootcamps or just practical experimentation, workers can gain expertise in areas such as prompt engineering, workflow automation or AI application. The barrier to entry is lower than many assume, especially for those who already understand a specific industry.
Industry knowledge is, in fact, a major advantage. Organisations increasingly want people who can bridge domain expertise with technical capability. A healthcare professional who knows what patients need as well as understanding AI tools; a finance specialist who can apply machine learning to risk analysis; or a tradesperson who uses smart systems for efficiency can all bring unique value.
These hybrid profiles are becoming central to how companies integrate AI, creating interdisciplinary roles that did not exist a few years ago.
The flip side: risks and challenges
AI is creating opportunity, but it also brings risks and trade-offs. One of the most immediate challenges is the pace of change. Keeping skills current can feel like trying to hit a moving target. Over time, constantly doing more can lead to fatigue and burnout, particularly in highly competitive environments where staying relevant is tied to job security.
There is also an upfront cost. Transitioning into AI, especially into more technical or advanced positions, can require an investment of time and money before any financial return materialises.
And AI is said to be contributing to a hollowing out of traditional career ladders. Many entry-level roles, once considered stepping stones into industries such as finance or marketing, are being automated or cut back. As a result, entry pathways into certain professions may narrow before new ones are established.
Finally, working in AI often means grappling with complex ethical and safety questions. Workers must consider issues such as data bias, privacy, transparency and accountability. Decisions made during system design and deployment can have wide-reaching consequences, and navigating these responsibilities requires sound judgement and a clear understanding of what is at stake.
Looking ahead
In many sectors, AI is unlikely to eliminate entire professions. Instead, it will reshape them. Tasks will be automated, workflows will evolve and job descriptions will shift. For most professionals, the practical response is not to abandon their field, but to integrate AI into it.
At the same time, technical fluency alone will not be enough. As automation takes over routine and rules-based work, human skills become more important. Critical thinking, judgement, empathy, communication and complex problem-solving remain difficult to replicate with algorithms. The more advanced the technology becomes, the more valuable distinctly human strengths appear to be.
There is also a widening gap across industries. AI is generating new, high-paying roles in areas such as engineering, data science and AI strategy. However, in positions where automation only partially replaces tasks, productivity may increase while wages do not. In some cases, partial automation can stifle pay or reduce opportunities for promotion.
Retraining and career pivoting in the AI age is becoming a mainstream response to structural change. AI is reshaping how work is done across sectors, while opening up new roles that are centred on oversight, integration, strategy and innovation. For many professionals, the question is not whether change is coming but how proactively they choose to respond.
The most resilient path forward is rarely about abandoning your field entirely. More often, it involves layering AI fluency on top of existing expertise. A finance professional who understands automation tools, for example, is better positioned than someone relying on legacy skills alone. In this sense, the objective of retraining is to move closer to the decision-making layer of work.
Ultimately, the AI era is not about a binary choice between optimism and fear. It is about positioning. Retraining and career pivoting are becoming central strategies for navigating this shift with intention rather than reacting after the fact.
For most of us, generative AI (GenAI) has moved from novelty to everyday infrastructure astonishingly fast. Many adults now use tools like chatbots at work or casually, and many children are already encountering them through homework “help”, entertainment, or social sharing.
Unsupervised use of generative AI can expose children and young people to confidently presented misinformation, manipulative “keep chatting” dynamics, and inappropriate or emotionally risky content. The tone and conversational dynamics of many chatbots can encourage secrecy and over-reliance, or mimic authority without real understanding or duty of care. In school contexts, GenAI can quietly undermine learning, turning homework and writing into shortcuts rather than skill-building.
I’ve helped create new school resources on GenAI, including guidance for parents. But the most effective safety measures still depend on adults setting boundaries, modelling critical thinking, and staying close enough to a child’s digital life to notice what’s changing in it. What follows are some practical ways to talk about, assess, and limit younger people’s GenAI use.
1. Begin with curiosity – not crackdowns
If you start by telling a child that they shouldn’t use GenAI, you may prompt secrecy about their current and future use. A better opener could be a simple request for them to show you the AI tools or uses they’re familiar with. Ask what they like about it, what it helps with, and what they’d never use it for. The initial aim should be to normalise discussing AI, though not to normalise unrestricted use.
From here it’s easier to acknowledge that these are powerful and intriguing tools, but not a person or an authority, and not without risks and necessary considerations.
2. Don’t treat stated age limits as optional
An awkward reality that parents may have missed is that many popular AI services set 13 as a minimum age (with parental permission required for under-18s). OpenAI states that ChatGPT “is not meant for children under 13”, and still requires parental consent for ages 13 to 18. The AI chatbot ecosystem is inconsistent, however. Anthropic requires Claude users to be 18+, explicitly citing heightened risks for younger users. Google, meanwhile, allows supervised access to Gemini for under-13s via parent-enabled controls.
Your practical rule should be to treat age limits as a clear safety signal rather than a box-ticking exercise. If a service says “13+” or “18+”, that’s telling you something about risk, content exposure and the likelihood of harm from unsupervised use by young people.
3. Encourage fact-checking
Children (and indeed plenty of adults) can mistake confidence for correctness. When talking about GenAI with children, emphasise that AI chatbots can and regularly do “hallucinate”: they invent plausible-sounding details and mix fabrication with fact. Understanding that these speedy, confidently worded responses can contain inaccuracies both large and small is key.
Encourage verifying anything important – news, health claims, law, school facts, statements that may be repeated as “true”.
4. Help them know when to stop
Large language models (LLMs) are designed to keep conversation flowing. They compliment, encourage, reassure and suggest what to do next. This may be helpful for brainstorming but it’s potentially dangerous for emotionally loaded topics where a young person is vulnerable, impressionable, or isolated.
Recent litigation around “companion” chatbots has alleged that vulnerable young users were pulled into harmful spirals, including self-harm risk and secrecy from parents. These are complex and unfolding cases, but they are serious enough to treat as a major warning sign about unsupervised, open-ended AI conversations for minors.
Parents and teachers should name a firm boundary: no chatbot is a counsellor, therapist, or trusted confidant. If a conversation becomes sexual, self-harm related, frightening, or intensely personal, the rule should be to stop and speak to a trusted adult.
5. Don’t feed the machine personal data
Young people often understand privacy better when it’s framed as something tangible. Some rules: don’t share a full name, address, school, phone number, or identifiable photos. Don’t upload private documents or screenshots. Don’t paste in other people’s personal information. If you wouldn’t post it on a public noticeboard, don’t paste it into a chatbot.
6. AI should support the work, not do the work
GenAI poses an educational risk that deserves far more attention: cognitive off-loading. This happens when the tool performs the thinking step – the learner may finish faster, but will learn less. Research is increasingly linking heavier AI reliance with reduced critical thinking and lower cognitive effort, with off-loading and automation bias proposed as mechanisms. A practical way to explain this to young people is that “AI can help you learn, but it can also help you avoid learning”.
If you’re helping with homework, allow the use of GenAI for asking for an explanation in simpler terms, or requesting feedback on a draft. Don’t allow writing the essay, answering the homework questions directly, or producing a solution that the student can’t explain.
7. Make AI use visible and social
Where AI use is permitted, aim to reduce secrecy. Use AI in shared spaces at home. Set agreed times, not late-night private use. Coordinate with other adults: parents should share their concerns and approaches with other parents and with school staff.
We should treat generative AI as we wish we’d treated social media much earlier – not as just another app, but as a behavioural technology that shapes attention, learning, confidence and relationships. Being AI aware is not about panic, but about adults building enough knowledge and confidence to guide children toward safe, age-appropriate, genuinely educational use, while regulation and curriculum development catch up.
Rosenior’s side looked marginally on top in the second half after Piero Hincapie’s own goal cancelled out William Saliba’s early opener.
But Jurrien Timber scored Arsenal’s winner following another mistake from Robert Sanchez and Chelsea were unable to respond after losing Pedro Neto to a red card.
While Arsenal restored their five-point lead at the top of the Premier League, a third straight game without a win leaves Chelsea sixth in the table, six points outside the top-four places.
Neville says he ‘still can’t work Chelsea out’ but is adamant they need more ‘experience’ in three key areas of the pitch.
‘I’ve never commentated on a team that make me feel so many different emotions during a single game,’ former Manchester United and England defender Neville said on his Sky Sports podcast.
‘You can watch them and think they’re naive, they’re too nice, they’re ill-disciplined or that they’re electric, they’re a great possession team and they’re so talented.
‘You can think so many different things. I flip between thinking they’re miles away and thinking if they can get a goalkeeper, an experienced centre-back and an experienced centre-forward they could be in business.
‘They have to keep players fit but they need a top-class goalkeeper, a top-class centre-back with experience and a top-class centre forward to accompany Joao Pedro and Liam Delap, not to replace them.
‘Have three strikers – Pedro who is very good, Delap who is young with potential – and then bring someone in. I know that’s difficult – these players are not there, but they need a centre-forward with experience.
‘I’m not talking about a 33-year-old striker but someone who is 27 or 28 and the same at the back, a player who has real presence who can give them some solidity.
‘I’m going to talk about the goalkeeper as well because Robert Sanchez invites problems, every time I watch him my heart is in my mouth.
‘He flaps at the Arsenal goal so for me Chelsea are three players short, they need players in those positions.
‘I’ve got many thoughts about Chelsea and I still can’t quite work them out.’
‘I’m really frustrated with the end result,’ he said. ‘A lot of good things in our game but we were undone by two set pieces like we were against Burnley last week.
‘There were some outstanding performances, technically and tactically, but we were undone by moments. Same as against Burnley and against Leeds.
‘I don’t [just] want to push the league leaders very hard. We’re Chelsea, we want to win games of football.
‘Between both boxes, we were very, very good. I felt we were the better team by far in the second half but we weren’t ruthless in the moment.’
Chelsea face another huge game on Wednesday night as they visit top-four rivals Aston Villa, who suffered a shock defeat to bottom-placed Wolves last time out.
Police officers often work with partial information under severe time constraints in situations that can change in seconds. Whether investigating a crime or patrolling a neighbourhood, they regularly have to make predictions based on instinct.
This “gut policing” isn’t just guesswork – it’s fast pattern recognition. It comes from training and years of dealing with real incidents, learning from colleagues, and building an instinctive sense of what matters and what doesn’t.
This reflects a wider global trend: police forces are integrating AI into everyday policing. These AI-enabled tools draw on large volumes of data and patterns that would be impossible for any single officer to analyse in real time. The aim is straightforward: to help ensure decisions are based on strong evidence and reliable data, rather than relying solely on instinct or experience.
AI has long been discussed as a threat to jobs and livelihoods. But what’s the reality? In this series, we explore the impact AI is already having on specific occupations – and how people in these jobs feel about their new AI assistants.
In England, police forces are already using AI tools in day-to-day work. These include Untrite Thrive, which helps staff in police control rooms decide how to allocate resources. Another example is Qlik Sense, used by Avon and Somerset Police for monitoring the likelihood of reoffending or perpetrating a crime. These developments align with a broader government agenda focused on efficiency and cost reduction.
But once you swap human judgment for more automated predictions, the value of officers’ traditional connect-the-dots police logic can be lost. There have been plenty of examples where AI tools have flagged the wrong people, the wrong places, or the wrong risks.
Unverified information
A House of Commons select committee recently highlighted serious failings in West Midlands Police’s use of the AI assistant Microsoft Copilot in its decision to stop Israeli fans of Maccabi Tel Aviv football club from travelling to Birmingham for a Europa League match against Aston Villa last November.
Claims made by this force about alleged disorder involving Maccabi fans at past matches were based on inaccurate information generated by Copilot, including a supposed game between the Israeli club and West Ham United that never happened.
“Information that showed the Maccabi fans to be a high risk was trusted without proper scrutiny,” explained the committee’s chair Karen Bradley. “Shockingly, this included unverified information generated by AI.”
This inaccurate AI‑generated information was repeated by senior police officers in safety advisory group meetings and even in oral evidence to MPs, demonstrating a lack of due diligence and overreliance on unverified AI outputs. The case is now subject to an investigation by the Independent Office for Police Conduct.
And this was not an isolated incident. The Harm Assessment Risk Tool deployed by Durham Constabulary was found to have displayed many flaws, from overestimation of the likelihood of reoffending to discrimination in its datasets.
And the Metropolitan Police’s now-discontinued Gang Matrix, a database that recorded intelligence related to alleged gang members, was heavily criticised by the Information Commissioner’s Office for unfairly labelling young black men as high‑risk based on flawed scoring.
Relying on AI-driven tools can be a double-edged sword in policing. They can improve decisions, but can also reinforce bias and amplify mistakes. In our experience of working with police forces in England, AI‑supported decision‑making works best when police officers combine their operational experience with data‑driven insights.
Reinforcing biases
Our ongoing study of AI use in policing shows that uncritical reliance on AI risks reinforcing existing biases, disproportionately affecting the poorest and most marginalised communities.
Our research, which is yet to be published, suggests that effective use of AI requires a difficult balance: officers must both trust and mistrust AI recommendations at the same time, maintaining a vigilant mindset.
To prevent biases creeping into AI‑supported decisions, police forces should invest in bias‑awareness training that prepares officers to question AI outputs regularly and constructively.
The National Police Chiefs’ Council covenant mandates that AI should support rather than replace human judgment. This is a step in the right direction. Yet even this principle can backfire if police officers treat AI recommendations as objective truth, rather than guidance that requires careful scrutiny.
These concerns take on renewed urgency in light of the government’s introduction of a national predictive policing prototype, announced in August 2025. The system, scheduled for nationwide deployment by 2030, combines AI-powered crime mapping with behavioural-pattern analysis, supported by a £4 million initial investment.
It draws on data from police forces, local councils and social services, and builds directly on the expanding fleet of live facial recognition vans now operating across seven forces in England and Wales.
At the same time, developments inside policing organisations highlight the limits of technological oversight. The Met was recently reported to have begun using AI tools to flag potential officer misconduct by analysing internal data such as sickness records, absences and overtime patterns.
While the Met argues that such systems help raise standards and rebuild public trust, critics warn that such monitoring risks misclassifying workplace pressures as misconduct and eroding accountability rather than strengthening it.
Ultimately, whether AI technology improves policing outcomes depends on the governance surrounding it. Ensuring there is a vigilant human in every AI loop should be a non-negotiable safeguard.
WASHINGTON (AP) — Videos of former President Bill Clinton and former Secretary of State Hillary Clinton answering questions about convicted sex offender Jeffrey Epstein were released Monday by a House committee investigating the late financier.
The recordings of the depositions, which spanned hours over two days last week, show how both Clintons distanced themselves from Epstein. Bill Clinton told the committee that he had ended his relationship with Epstein years before the financier entered a guilty plea in 2008 to soliciting prostitution from an underage girl.
The former Democratic president said he first remembered meeting Epstein when he flew aboard the financier’s private jet in 2002 for the Clintons’ humanitarian work, and that they parted ways the following year.
“There’s nothing that I saw when I was around him that made me realize he was trafficking women,” Bill Clinton told the committee.
Epstein visited the White House numerous times during Clinton’s presidency and there are photos of them shaking hands, but Bill Clinton said he did not recall those interactions.
Hillary Clinton said she did not recall ever meeting Epstein.
Still, they faced hours of questioning under oath from lawmakers who are seeking accountability from anyone who was aware of, or ignored, Epstein’s abuse of underage girls.
The next phase of development at the Jade Business Park in Murton is set to proceed after the latest proposal for three units was supported by Durham County Council.
Located near the A19, the council-owned scheme was created to provide space for distribution, technology, and advanced manufacturing businesses.
Currently, six out of the seven units built during phase one are occupied, with 149 jobs on-site. A lease agreement is underway for the remaining property.
Last October, funding previously allocated by Durham County Council for the business park was pulled from its capital programme.
An allocation of £2.6 million, funded by corporate borrowing, was outlined for enabling works to build additional industrial space as part of phase two at the Jade Business Park next to Dalton Park outlet centre.
Yet, the local authority said the scheme, alongside a similar development in Bishop Auckland, hasn’t been “completely banished” and will continue to be assessed for alternative investment and funding opportunities, minimising the need for council borrowing.
A Durham County Council economy and enterprise scrutiny committee was told that staff are in contact with several companies about future opportunities at the site.
While its landfall is not directly on the Jade site, the Eastern Green Link 1 – a high-voltage electrical connection providing a marine cable link between Scotland and a landfall point north of Seaham – could make use of the site.
New data centres and energy storage companies have also been linked with the development.
The Murton site could also be home to an on-land substation for the proposed Morven Wind Farm – a significant offshore wind project by bp and EnBW off the Aberdeenshire coast.
“The extent of the proposed development and land requirement is still to be determined but it may be situated on the Jade Business Park.
“Durham County Council are having ongoing and regular contact with Morven about this project and any particular impacts on its viability as a strategic employment location,” a council report said.
The Lady is a gripping four-part royal true crime drama about the former Duchess of York’s dresser in the 1980s and 1990s.
ITV’s The Lady concludes tonight, leaving fans curious about how the royal true crime drama wrapped up.
For nine years, Jane Andrews (portrayed by Mia McKenna-Bruce) lived what seemed an idyllic existence, working in close proximity to Sarah Ferguson (Natalie Dormer), the former Duchess of York.
However, years after she was made redundant from the position, Jane faced accusations of killing her partner Thomas Cressman (Ed Speleers).
She had struck him on the head with a cricket bat before delivering a fatal stab wound at their London residence, then fled the scene, remaining at large for several days until police apprehended her.
Jane maintained that he had been violent towards her previously, forcing her down the stairs and restraining her to the bed to assault her, telling the court that she attempted to defend herself but ultimately inflicted the fatal injury on her partner.
What happened to Jane Andrews?
ITV’s The Lady concludes with Jane Andrews being convicted of murdering her partner Thomas Cressman.
After receiving a life sentence, Jane is shown in prison consulting a psychiatrist who agrees with her assessment that she has Borderline Personality Disorder.
The four-part series then jumps forward to Jane speaking with her parents by telephone at Christmas, with her mother expressing worry about her being isolated.
However, Jane swiftly dismisses this concern, revealing she had acquired a new correspondent.
In the closing moments, Jane is depicted writing to someone with a newspaper cutting on her desk bearing the headline “King of the Wing”, accompanied by a photograph of a man. The closing credits state: “In 2003, Jane Andrews appealed against her conviction on the grounds of fresh psychiatric evidence.
“The appeal was refused and her claims of childhood sexual abuse remain unproven.
“She was released on licence in 2015 but recalled to prison in 2018, following allegations of harassment from a former boyfriend.
“No evidence was found to support the allegations but she remained in prison until 2019.”
Who is ‘King of the Wing’?
The identity of the “King of the Wing” remains unexplained, with no indication whether he was linked to American politics or served as a prison governor.
The newspaper cutting displays the surname Affcott, though no records exist of Jane corresponding with anyone bearing this name.
She did, however, maintain contact with a pen pal named Mark Ellson, who allegedly began writing to her whilst he was imprisoned for fraud.
The Mirror previously reported Ellson describing her as “obsessive”, before adding: “She is a difficult person to understand but I have seen how erratic she can be. Others need to be aware of this too.”
Is The Lady ending accurate?
The Lady remains largely faithful to events, though like many true crime dramas, certain scenes and characters have been fictionalised for dramatic effect.
For example, before the guilty verdict is delivered, Jane is depicted in the programme becoming light-headed outside the courtroom and collapsing, resulting in hospital treatment. Whilst Jane did visit hospital during the actual trial, it followed an emotional collapse rather than a fainting episode.
The character Aleksandra (Ophelia Lovibond) is also fictional, meaning Andrews did not in reality lodge with her at a London residence throughout the trial proceedings.
However, she was subsequently diagnosed with Borderline Personality Disorder, though this diagnosis didn’t assist Andrews in winning her appeal.
Natural Resources Wales has confirmed that a large part of Brechfa Forest in Carmarthenshire has been closed off after the incident
Pictures have revealed the damage caused to a 475ft-tall wind turbine at a Welsh forest which lost one of its giant blades.
A large part of Brechfa Forest in Carmarthenshire has been closed off for safety reasons after one of three blades became detached from a turbine in a picturesque area used by walkers, horse riders and mountain bikers. The incident is believed to have happened last week at the wind farm north of the village of Brechfa, around 15 miles north-east of Carmarthen.
There are 28 wind turbines at the site in total, each with a tip height of 145 metres and a rotor diameter of more than 92 metres.
Brechfa Forest in its entirety covers around 6,500 hectares of land and is managed by Natural Resources Wales (NRW).
Officers have been at the site and signs have been erected advising people that rights of access have been excluded for a week for the purpose of “avoiding danger to the public”.
However, NRW has said the closure of the forest will remain in place “until it is confirmed that the area can be safely reopened”.
Images taken at the forest show one of the giant turbines with only two blades. It is unclear how the blade became detached, or whether and when it might be reattached.
A spokeswoman for NRW said: “We have temporarily closed access to parts of the forest around Brechfa Forest West Wind Farm as a safety precaution while the operator, RWE, investigates the cause of a blade detachment at one of the turbines.
“The closure covers the area shown on the published map and restricts public access to the affected section of land.
“Ensuring appropriate measures are in place to keep visitors safe is NRW’s priority, and the closure will remain in place until it is confirmed that the area can be safely reopened.
“NRW is in close communication with RWE as they continue their investigation into this matter. Further updates will be issued as more information becomes available.”