It’s fourth period in the auto lab at Vel Phillips Memorial High School in Madison, Wisconsin, and a dozen students maneuver between nearly as many cars.
At one bay, a junior adjusts the valves of an oxygen-acetylene torch and holds the flame to a suspended Subaru’s front axle to loosen its rusty bolts. Steps away, two classmates tease each other in Spanish as they finish replacing the brakes on a red Saab. Teacher Miles Tokheim moves calmly through the shop, checking students’ work and offering pointers.
After extensive renovations, the lab reopened last year with more room and tools for young mechanics-in-training. What visitors can’t see is the class recently got an upgrade, too: college credit.
Through dual enrollment, high schoolers who pass the course now earn five credits for free at Madison College and skip the class if they later enroll. Classes like these are increasingly common in Wisconsin and across the country. They’ve allowed more high schoolers to earn college credit, reducing their education costs and giving them a head start on their career goals.
Wisconsin lawmakers and education officials want more high schoolers to have this opportunity. But these classes need teachers with the qualifications of college instructors, and those teachers are in short supply.
That leaves many students — disproportionately, those in less-affluent areas — without classes that make a college education more attainable.
“What’s at stake is access to opportunity, especially for high school students at Title I, lower-income high schools, rural high schools … It’s really been an on-ramp for so many students,” said John Fink, who studies dual enrollment at Columbia University’s Community College Research Center. “But we also know that many students are left behind.”
High school teacher Miles Tokheim earns an extra $50 a year teaching a college course. (Photo by Joe Timmerman, Wisconsin Watch)
To teach the auto class, Tokheim had to apply to become a Madison College instructor. As a certified auto service technician with a master’s degree, the veteran teacher met the college’s requirements for the course.
But for many teachers, teaching dual enrollment would require enrolling in graduate school, even if they already have a master’s degree. That, school leaders say, is a hard sell, despite the state offering to reimburse districts for the cost. Unlike in some other states, Wisconsin teachers often earn little or no extra pay for teaching advanced courses, and adding them doesn’t raise a school’s state rating.
“You’re asking people who are well educated to begin with to go back to school, which takes time and effort, and their reward for that is they get to teach a dual-credit class,” said Mark McQuade, Appleton Area School District’s assistant superintendent of assessment, curriculum and instruction.
High Standards, Short Supply
Nationwide, the number of high schoolers earning college credit has skyrocketed in recent years. In Wisconsin, the tally has more than doubled, with students notching experience in subjects ranging from manufacturing to business.
Most earn credit from their local technical college without leaving their high school campus. In the 2023-24 school year, one in three community college students in the state was a high schooler.
Education and state leaders have welcomed the trend, pointing to the potential benefits: Students who take dual-enrollment classes are more likely to enroll in college after high school. They can save hundreds or thousands of dollars on college tuition and fees. If they do enroll in college, they spend less time completing a degree.
“It also proves to the kids — to some of our kids that are first-generation — that they can do college work,” McQuade said.
Wisconsin Watch talked to leaders in five school districts. All said the shortage of qualified teachers was one of the biggest barriers to growing their dual-enrollment programs.
In 2015, the Higher Learning Commission, which oversees and evaluates the state’s technical colleges, released new guidelines about instructor qualifications. The new policy required many of Wisconsin’s dual-enrollment teachers to have a master’s degree and at least 18 graduate credits in the subject they teach, just like college instructors.
In 2023, the commission walked back the new policy.
By then, colleges across the state had already adopted the higher standard.
Meanwhile, Wisconsin high schools have struggled to hire and retain teachers, even without college credit involved. Four in 10 new teachers stop teaching or leave the state within six years, a 2024 Department of Public Instruction analysis shows.
The subject-specific prerequisite is much different from the graduate education K-12 teachers have historically sought: the kind that would help them become principals or administrators, said Eric Conn, Green Bay Area Public Schools’ director of curricular pathways and post-secondary partnerships.
“To advance in education, it wasn’t about getting a master’s in a subject area. It was getting a master’s in education to develop into educational administration or educational technology,” Conn said. For teachers who already have a master’s degree, he said, going back to school just to teach one or two new classes is “a large ask.”
Funding Tempts Few
When the Higher Learning Commission announced the heightened requirements in 2015, leaders of the Wisconsin Technical College System sounded the alarm. They warned that 85 percent of the instructors currently teaching these classes could be disqualified, whittling students’ college credit opportunities.
Wisconsin education leaders called on the Legislature to allocate millions of dollars to help teachers get the training they’d need — and lawmakers agreed. In 2017, they created a grant program to reimburse school districts for teachers’ graduate tuition. But of the $500,000 available every year, hundreds of thousands go unused.
“Nobody’s ever, ever requested this funding and been denied because of a funding shortage,” said Tammie DeVooght Blaney, executive secretary of the Higher Educational Aids Board, which manages the grant.
Tuition and fees for a single graduate credit at a Universities of Wisconsin school can cost over $800, putting the total cost of 18 graduate credits at around $15,000. For teachers who don’t already have a master’s degree, the cost is even steeper. The state grant requires teachers or districts to front the cost and apply for reimbursement yearly, with no guarantee they’ll get it.
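The math behind that price tag can be sketched in a few lines. This is a back-of-the-envelope estimate using the figures cited above; the actual per-credit rate varies by campus, and the $800 figure is an approximation.

```python
# Back-of-the-envelope cost of the 18 graduate credits required under the
# dual-enrollment standard, using the ~$800-per-credit figure cited above.
PER_CREDIT = 800        # approximate tuition and fees per graduate credit (USD)
CREDITS_REQUIRED = 18   # subject-area graduate credits required

total = PER_CREDIT * CREDITS_REQUIRED
print(f"Estimated total: ${total:,}")  # roughly $15,000, as the article notes

# The state grant pool is $500,000 per year, so even at full uptake it could
# fully reimburse only a few dozen teachers statewide annually.
teachers_covered = 500_000 // total
print(f"Teachers fully covered per year: {teachers_covered}")
```

The gap between the annual pool and per-teacher cost helps explain why unused funds and low uptake can coexist: the barrier is fronting the money and the time commitment, not the size of the pot.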
A handful of Green Bay teachers have used the grant, Conn said, but many just aren’t interested in returning to school, even if it’s free.
The district offers 50 dual-enrollment courses, but he’d like to offer classes in more core subjects, which help students meet general college education requirements. There just aren’t enough teachers qualified to teach college sciences and math to offer the same options across the district’s four high schools.
Teachers are busy, and not just in the classroom, said Jon Shelton, president of AFT-Wisconsin, one of the state’s teachers unions. Many already spend extra hours coaching, grading or leading after-school activities. Those who do go back to school typically enroll in one class at a time, he said, meaning they could be studying for several years.
Pros and Cons
The financial perks for teachers returning to school for dual-enrollment credentials are dubious at best.
Some teachers get a salary bump for obtaining a master’s degree, and some earn modest bonuses for teaching dual enrollment. But many teachers make no more than they would have without the extra training.
“There’s no incentive,” said Tokheim, the Madison auto instructor, who receives a $50 yearly stipend for teaching the college course. In contrast to his standard classes, his dual-enrollment class required him to attend two kinds of training.
There’s little incentive for schools either. They receive no extra state funding to offer college-level courses. Plus, the classes don’t factor into their state report card score, which measures students’ standardized test performance and graduation preparation, among other things.
Leaders at Central High School in Sheboygan wish they did. At that school, where the majority of students are Latino and almost all are low-income, one in three students took dual-enrollment courses in the 2023-24 school year. Still, the state gave the school a failing grade.
“It’s an afterthought in our report card, and it’s always the thing that we can celebrate,” Principal Joshua Kestell said.
So why would a teacher take on the added schooling?
“It’s good for kids,” Tokheim said. “That’s why they get us teachers, because we care too much.”
Other potential draws: the challenge of teaching more rigorous courses and the opportunity to collaborate with college instructors.
Heather Fellner-Spetz retired two years ago from teaching English at Sevastopol High School in Sturgeon Bay, after a decade of teaching college-level oral communication classes. When the Higher Learning Commission set the heightened requirements, she was allowed to keep teaching dual enrollment while she earned additional graduate credits.
“There wasn’t much I didn’t enjoy about teaching it. It was just fabulous,” Fellner-Spetz said.
She especially liked having a college professor observe her class, and she said it was good for the students, too. “When they had other people come into the room and watch the lesson or watch them perform, it just ups the ante on pressure.”
Meanwhile, the jury is still out on whether it’s necessary for dual-enrollment teachers to have the same credentials as college professors.
“Folks running these programs generally would say that teaching a quality college course to a high school student requires a unique skill set that blends high school and college teaching, and that is not necessarily captured by the traditional (graduate coursework) standard,” Fink said.
Wisconsin educators are divided on that question. Fox Valley Technical College has kept the higher standard, limiting the number of Appleton teachers who qualify. McQuade, the Appleton leader, questions those “restrictions,” saying he believes his teachers are well qualified to teach college-level courses. A different standard tied to student performance, for example, could let his district offer more classes across each of its schools.
Schauna Rasmussen, dean of early college and workforce strategy at Madison College, said the answer isn’t to lower the standard, but to help more teachers reach it.
In October, a group of Republican Wisconsin lawmakers introduced a bill aimed at making it easier for students to find dual-enrollment opportunities. It would create a portal for families to view options and streamline application deadlines, among other changes.
It doesn’t address the shortage of qualified teachers.
“Separate legislation would likely have to be introduced addressing expanding the pool of teachers for those programs,” Chris Gonzalez, communications director for lead author State Sen. Rachael Cabral-Guevara, wrote in an email.
The winners of the 2026 Swift Student Challenge will be announced on March 26, with the best among them set to receive a trip to Apple Park.
Every year, Apple holds the Swift Student Challenge. The event encourages up-and-coming student developers to practice their craft and lets them win various prizes. In an announcement on Monday, the iPhone maker described the annual event as a program meant to “uplift the next generation of entrepreneurs, coders, and designers.” The company added that winners will be notified on Thursday, March 26.
FBI and CISA warn of Russian espionage campaign targeting messaging apps
Phishing and social engineering used to hijack Signal and other CMA accounts
Thousands of victims’ accounts compromised, including officials, military, and journalists
The Federal Bureau of Investigation (FBI) and the US Cybersecurity and Infrastructure Security Agency (CISA) are warning about an ongoing espionage campaign by Russian cyberspies.
In a joint Public Service Announcement (PSA) published late last week, the two agencies said Russian Intelligence Services (RIS)-affiliated threat actors are actively targeting commercial messaging applications (CMA). They specifically mentioned Signal, but stressed that other CMAs are most likely targeted, as well.
The victims are mostly current and former US government officials, military personnel, political figures, and journalists.
Following the Dutch
The campaign does not revolve around “breaking” the apps by abusing vulnerabilities, or similar. Instead, it revolves around phishing and social engineering, where the victims end up sharing access willingly.
“RIS cyber actors send phishing messages masquerading as automated CMA support accounts,” the PSA reads. “The actors tailor the messages to deceive targets into taking an action, such as clicking a link or providing verification codes or account PINs. If the user performs any of the requested actions, they unwittingly provide the actors with unauthorized access to their account either by adding the attacker’s device as a linked device or through a full account takeover.”
Roughly two weeks ago, Dutch authorities published a similar warning, saying that Russian spies were targeting not only Signal, but WhatsApp, as well. The General Intelligence and Security Service (AIVD), the Netherlands’ primary civilian intelligence and security agency, said at the time that the campaign was “large-scale” and “global.” Targets were dignitaries, military personnel, and civil servants, including Dutch government employees.
AIVD believes the campaign is already a success: “The Russian hackers likely gained access to sensitive information through this campaign,” it said, although it did not detail if they accessed it from Dutch targets or someone else entirely.
On X, FBI Director Kash Patel echoed these warnings, saying the effort “resulted in unauthorized access to thousands of individual accounts.”
“After gaining access, the actors can view messages and contact lists, send messages as the victim, and conduct additional phishing from a trusted identity,” he warned.
For more than four decades, technological progress has been undermining expert authority, democratizing public debate, and steering individuals toward ever-more bespoke conceptions of reality.
In the mid-20th century, the high costs of television production — and physical limitations of the broadcast spectrum — tightly capped the number of networks. ABC, NBC, and CBS collectively owned TV news. On any given evening in the 1960s, roughly 90 percent of viewers were watching one of the Big Three’s newscasts.
Journalistic programs weren’t just limited in number, but also ideological content. The networks’ news divisions all sought the broadest possible audience, a business model that discouraged airing iconoclastic viewpoints. And they also relied overwhelmingly on official sources — politicians, military officials, and credentialed experts — whose perspectives fell within the narrow bounds of respectable opinion.
There’s evidence that LLMs converge on a common (and largely accurate) picture of reality.
LLMs have successfully persuaded users to abandon false and conspiratorial beliefs.
Unlike social media companies, AI labs have an economic incentive to spread accurate information.
Still, there are reasons to fear that AI will nonetheless make public discourse worse.
For better and worse, subsequent advances in information technology diffused influence over public opinion — at first gradually and then all at once. During the closing decades of the 20th century, cable eroded barriers to entry in the TV news business, facilitating the rise of Fox News and MSNBC, networks that catered to previously underrepresented political sensibilities.
But the internet brought the real revolution. By slashing the cost of publishing and distribution nearly to zero, digital platforms enabled anyone with an internet connection to reach a mass audience. Traditional arbiters of headline news, scientific fact, and legitimate opinion — editors, producers, and academics — exerted less and less veto power over public discourse. Outlets and influencers proliferated, many defining themselves in opposition to established institutions. All the while, social media algorithms shepherded their users into customized streams of information, each optimized for their personal engagement.
The democratic nature of digital media initially inspired utopian hopes. It promised to expose the blind spots of cultural elites, increase the accountability of elected officials, and put virtually all human knowledge at everyone’s fingertips. And the internet has done all of these things, at least to some extent.
Many assume that the latest breakthrough in information technology — generative AI — will deepen these pathologies: In a world of photorealistic deepfakes, even video evidence may surrender its capacity to forge consensus. Sycophantic large language models (LLMs), meanwhile, could reinforce ideologues’ delusions. And fully automated film production could enable extremists to flood the internet with slick propaganda.
But there’s reason to think that this is too pessimistic. Rather than deepening social media’s effects on public opinion, AI may partially reverse them — by increasing the influence of credentialed experts and fostering greater consensus about factual reality. In other words, for the first time in living memory, the arc of media history may be bending back toward technocracy.
Are you there Grok? It’s me, the demos
At least, this is what the British philosopher Dan Williams and former Vox writer Dylan Matthews have recently argued.
Matthews begins his case by spotlighting a phenomenon familiar to every problem user of X (née “Twitter”): Elon Musk’s chatbot telling the billionaire that he is wrong.
In this instance, Musk had claimed that Renée Good, the Minnesota woman killed by an ICE agent in January, had “tried to run people over” in the moments before her death. Someone replied to Musk’s post by asking Grok — X’s resident AI — whether his claim was consistent with video evidence of the shooting. The bot replied:
For Matthews, this incident illustrates a broader truth about LLMs: Like mid-20th century TV, they are a “converging” form of technology, in the sense that they “homogenize the perspectives the population experiences and build a less polarized, more shared reality among the population’s members.” And he suggests that they are also a “technocratising” force, in that they give experts disproportionate influence over the content of that shared reality.
Of course, this would be a lot to read into a single Grok reply; if you glanced at that bot’s outputs last July — when a misguided update to the LLM’s programming caused it to self-identify as “MechaHitler” — you might have concluded that AI is a “Nazifying” technology.
But there is evidence that Grok and other LLMs tend to provide (relatively) accurate fact checks — and forge consensus among users in the process.
One recent study examined a database of over 1.6 million fact-checking requests presented to Grok or Perplexity (a rival chatbot) on X last year. It found that the two LLMs agreed with each other in a majority of cases and strongly diverged on only a small fraction.
The researchers also compared the bots’ answers against those of professional fact-checkers and the results were similarly encouraging. When used through its developer interface (rather than on X), Grok achieved essentially the same rate of agreement with the humans as they did with each other.
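The core measurement in studies like this is a simple inter-rater agreement rate. The sketch below illustrates the idea with invented verdict labels and data; the study’s actual methodology and label set may differ.

```python
# Toy illustration of the agreement metric described above: given two lists
# of fact-check verdicts over the same posts, compute the fraction on which
# the two checkers agree. All names and data here are invented examples.

def agreement_rate(verdicts_a, verdicts_b):
    """Fraction of items on which two fact-checkers give the same verdict."""
    assert len(verdicts_a) == len(verdicts_b)
    matches = sum(a == b for a, b in zip(verdicts_a, verdicts_b))
    return matches / len(verdicts_a)

grok_verdicts       = ["true", "false", "false", "true", "misleading", "false"]
perplexity_verdicts = ["true", "false", "true",  "true", "misleading", "false"]

print(f"Agreement: {agreement_rate(grok_verdicts, perplexity_verdicts):.0%}")  # → 83%
```

The same function works for comparing a bot against human fact-checkers, which is how the paper benchmarks LLM verdicts against the rate at which the humans agree with each other.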
What’s more, despite being the creation of a far-right ideologue, Grok deemed posts from Republican accounts inaccurate at a higher rate than those of Democratic accounts — a pattern consistent with past research showing that the right tends to share misinformation more frequently than the left.
Critically, in the paper, the LLMs’ answers did not just converge on expert opinion — they also nudged users toward their conclusions.
Other research has documented similar effects. Multiple studies have indicated that speaking with an LLM about climate change or vaccine safety reduces users’ skepticism about the scientific consensus on those topics.
AI might combat misinformation in practice. But does it in theory?
A handful of papers can’t by themselves prove that AI is adept at fact-checking, much less that its overall impact on the information environment will be positive. To their credit, Matthews and Williams concede that their thesis is speculative.
But they offer several theoretical reasons to expect that AI will have broadly “converging” and “technocratising” effects on public discourse. Two are particularly compelling:
1) AI firms have a strong financial incentive to produce accurate information. Social media platforms are suffused with misinformation for many reasons. But one is that facilitating the spread of conspiracy theories or pseudoscience costs X, YouTube, and Facebook nothing. These firms make money by mining human attention, not providing reliable insight. If evangelism for the “flat Earth” theory attracts more interest than a lecture on astrophysics, social media companies will milk higher profits from the former than the latter (no matter how spherical our planet may appear to untrained eyes).
But AI firms face different incentives. Although some labs plan to monetize user attention through advertising, their core business objective is still to maximize their models’ ability to perform economically useful work. Law firms will not pay for an LLM that generates grossly inaccurate summaries of case law, even if its hallucinations are more entertaining than the truth. And one can say much the same about investment banks, management consultancies, or any other pillar of the “knowledge economy.”
For this reason, AI companies need their models to distinguish reliable sources of information from unreliable ones, evaluate arguments on the basis of evidence, and reason logically. In principle, it might be possible for OpenAI and Anthropic to build models that prize accuracy in business contexts — but prioritize users’ titillation or ideological comfort in personal ones. In practice, however, it’s hard to inject a bit of irrationality or political bias into a model’s outputs without sabotaging its commercial utility (as Musk evidently discovered last year).
2) LLMs are infinitely more patient and polite than any human expert has ever been. Well-informed humans have been trying to disabuse the deluded for as long as our species has been capable of speech. But there’s reason to think that LLMs will prove radically more effective at that task.
After all, human experts cannot provide encyclopedic answers to everyone’s idiosyncratic questions about their specialty, instantly and on demand. But AI models can. And the chatbots will also gamely field as many follow-ups as desired — addressing every source of a user’s skepticism, in terms customized for their reading level and sensibilities — without ever growing irritated or condescending.
That last bit is especially significant. When one human tries to persuade another that they are wrong about something — particularly within view of other people — the misinformed person is liable to perceive a threat to their status: To recognize one’s error might seem like conceding one’s intellectual inferiority. And such defensiveness is only magnified when their erudite interlocutor patronizes (or outright insults) them, as even learned scholars are wont to do on social media.
But LLMs do not compete with humans for social prestige or sexual partners (at least, not yet). And chatbot conversations are generally private. Thus, a human can concede an LLM’s point without suffering a sense of status threat or losing face. We don’t experience Claude as our snobby social better, but rather, as our dutiful personal adviser.
The expert consensus has never before had such an advocate. And there’s evidence that LLMs’ infinite patience renders them exceptionally effective at dispelling misconceptions. In a 2024 study, proponents of various conspiracy theories — including 2020 election denial — durably revised their beliefs after extensively debating the topic with a chatbot.
It seems clear, then, that LLMs possess some “converging” and “technocratising” properties. And, experts’ fallibility notwithstanding, this constitutes a basis for thinking that AI will foster a healthier intellectual climate than social media has to date.
Still, it isn’t hard to come up with reasons for doubting this theory (and not merely because ChatGPT will provide them on demand). To name just five:
1) LLMs can mold reality to match their users’ desires. If you log into ChatGPT for the first time — and immediately ask whether your mother is trying to poison you by piping psychedelic fumes through your car vents — the LLM generally won’t answer with an emphatic “yes.” But when Stein-Erik Soelberg inundated the chatbot with his paranoid delusions over a period of months, it eventually began affirming his persecution fantasies, allegedly nudging him toward matricide in the process.
Such instances of “AI psychosis” are rare. But they represent the most extreme manifestation of a more common phenomenon — AI models’ tendency toward sycophancy and personalization. Which is to say, these systems frequently grow more aligned with their users’ perspectives over extended conversations, as they learn the kinds of responses that will generate positive feedback. This behavior has persisted even as AI companies have tried to combat it.
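The feedback loop described above can be caricatured in a few lines. This is a deliberately crude toy model — the reward values and update rule are invented for illustration, and real preference tuning is far more complex — but it shows how consistently rewarding agreement tilts a policy toward sycophancy.

```python
# Toy model of the sycophancy feedback loop: an assistant splits its behavior
# between "agree" and "push back," and each strategy's weight grows in
# proportion to how often users reward it. Reward rates are invented.
agree_weight, pushback_weight = 1.0, 1.0
REWARD_AGREE, REWARD_PUSHBACK = 0.9, 0.3  # avg user approval per action (toy values)
LEARNING_RATE = 0.1

for _ in range(200):  # 200 conversational turns
    agree_weight += LEARNING_RATE * REWARD_AGREE
    pushback_weight += LEARNING_RATE * REWARD_PUSHBACK

p_agree = agree_weight / (agree_weight + pushback_weight)
print(f"P(agree) after 200 turns: {p_agree:.2f}")  # → 0.73, up from 0.50
```

Starting from an even split, the policy drifts toward agreement simply because agreement is rewarded more often — no explicit instruction to flatter the user is needed.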
The sycophancy problem could therefore get dramatically worse if one or more LLM providers decide to center their business model on consumer engagement. As social media has shown, sensational and/or ideologically flattering information can be more engaging than the accurate variety. Thus, an AI company struggling to compete in the business-to-business market might choose to “sycophancy-max” its model, pursuing the same engagement-optimization tactics as YouTube or Facebook.
A world of even greater informational divergence — in which people aren’t merely ensconced in echo chambers with likeminded ideologues, but immersed in a mirror of their own prejudices — might ensue.
2) Artificial intelligence has radically reduced the costs of generating propaganda. AI has already flooded social media with unlabeled, “deepfake” videos. Soon, it may enable nefarious actors to orchestrate ever-more convincing “bot swarms” — networks of AI agents that impersonate humans on social media platforms, deploying LLMs’ persuasive powers to indoctrinate other users and create the appearance of a false consensus.
In this scenario, LLMs might edify people who actively seek the truth through dialogue or fact-check requests, but thrust those who passively absorb political information from their environment — arguably, the majority — into perpetual confusion.
3) AI could breed the bad kind of consensus. Even if LLMs do promote convergence on a shared conception of reality, that picture could be systematically flawed. In the worst case, an authoritarian government could program the major AI platforms to validate regime-legitimizing narratives. Less catastrophically, LLMs’ converging tendencies could simply make technocrats’ honest mistakes harder to detect or remedy.
4) AI could trigger widespread cognitive atrophy, as humans outsource an ever-larger share of cognitive labor to machines. Over time, this could erode the public’s capacity for reason, leaving it more vulnerable to both fully automated demagogy and top-down manipulation.
5) AI could wreck the sources of authority that make it effective. LLMs might be good at distilling information into a consensus answer, but that answer is only as good as the information feeding the models.
Already, chatbots are draining revenue from (embattled) news organizations, who will produce fewer timely and verified reports about current events as a result. Online forums, a key source for AI advice, are increasingly being flooded with plugs for products in order to trick chatbots into recommending them. Wikipedia’s human moderators fear a future in which they’re stuck sifting through a tsunami of low-quality AI-generated updates and citations.
LLMs may prize accurate information. But if they bankrupt or corrupt the institutions that produce such data, their outputs may grow progressively impoverished.
For these reasons, among others, AI models’ ultimate implications for the information environment are highly uncertain. What Matthews and Williams convincingly establish, however, is that this technology could facilitate a more consensual and fact-based public discourse — if we properly guide its development.
Of course, precisely how to maximize AI’s capacity for edification — while minimizing its potential for distortion — is a difficult question, about which reasonable people can disagree. So, let’s ask Claude.
As the global energy system evolves, companies are racing to adopt technologies that can deliver real-world solutions, especially in hard-to-abate industries. Oklahoma, long known as the oil capital of the world, is a center for energy innovation, with Rose Rock Bridge at the forefront.
A non-profit based in Tulsa, Rose Rock Bridge is a pilot deployment studio that connects early-stage energy startups with corporate energy partners, non-dilutive funding, and pilot opportunities that accelerate commercialization. Now accepting applications for its Spring 2026 cohort through April 6, it is seeking early- and growth-stage startups developing practical, scalable solutions to today’s most pressing energy challenges.
Rose Rock Bridge gives startups access to real-world commercial workflows and pilot opportunities through energy partners with more than $150 billion in market capitalization, including Devon Energy, H&P, ONEOK, and Williams. Backed by one of the strongest coalitions of strategic partners and investors of any energy-focused accelerator, incubator, or venture studio, the program enables startups to move quickly from development to real-world testing and deployment.
Here’s how it works:
Discover opportunities for energy innovation
Rose Rock Bridge starts by working directly with corporate innovation teams to identify high-priority technology solutions for their businesses, pinpointing which solutions will carry the most impact. Focus areas are formed around these findings.
“We don’t just chase the latest tech and hope to find a use for it. Our process starts at the asset level — identifying the specific operational bottlenecks and unmet requirements our partners are actually facing,” says Nishant Agarwal, Innovation Manager. “By leveraging our background in CVC and engineering, we run technical deep dives alongside partner subject matter experts to define the requirement first. We then source technologies as a direct response to those needs. This ensures we aren’t just presenting ‘interesting research,’ but delivering solutions with a validated deployment pathway and a clear line of sight to a business case.”
Tapping into its network of 40+ universities, 10+ energy incubators, and Fortune 500 companies, Rose Rock Bridge then determines emerging opportunities in the energy ecosystem. Rather than just selecting companies or ideas that might bring in capital, the studio chooses startups that have real potential to commercialize quickly in order to solve the industry’s most pressing challenges.
This year’s focus areas include:
“We’re evaluating deployment probability from day one,” says Andrada Pantelimon, Innovation Associate at Rose Rock Bridge, who manages sourcing strategy and startup operations. “Can this technology deliver a measurable bottom-line impact? Can it realistically pilot within 12 months? Is your team equipped to commercialize? Show us you’ve quantified your value proposition in operator terms and understand which business unit within a corporation might own this solution. If you can articulate those pieces clearly, you’re the kind of startup we want to support.”
Derisk technologies for early-stage startups & energy companies
The benefit is tangible for leading energy corporations seeking proven solutions to complex operational challenges. Rose Rock Bridge provides its corporate partners with validated, field-tested technologies while significantly reducing deployment risk. At the program’s conclusion, partners gain direct access to emerging innovations that have already undergone technical validation and operational feasibility assessment, with identified procurement pathways and pilot plans designed for commercial deployment.
Each cohort cycle, up to 15 startups are selected to enter a six-week virtual accelerator focused on pilot deployment. Founders participate in reverse pitch sessions with oil and gas partners, one-on-one clinics with industry and capital mentors, and hands-on commercialization workshops. Founders have the unique opportunity to refine their solutions, assess pilot feasibility, and build industry relationships. This approach derisks adoption and investments through iterative customer feedback, in-field testing, and pilots, enabling breakthrough technologies to reach commercial viability quickly and effectively.
“Our curriculum is singularly focused on preparing startups for the realities of corporate partnerships,” says Devon Fanfair, Rose Rock Bridge Manager and former Techstars Managing Director who is scaling the RRB program. “Founders aren’t just learning, they’re actively testing their assumptions with the exact customers who might deploy their technology. That rapid feedback loop is what transforms promising technologies into deployment-ready solutions with clear commercial pathways.”
At the culmination of the accelerator, teams participate in the Rose Rock Bridge showcase with the unique opportunity to pitch their startup to the energy corporate partners they’ve worked alongside for the past six weeks. Four startups are selected to receive up to $100,000 in non-dilutive funding and opportunities for business support services, joining a one-year cohort designed to prepare technologies for market adoption.
“Rose Rock Bridge is a cornerstone of Tulsa Innovation Labs’ strategy to showcase our region as a national hub for energy innovation,” added Jennifer Hankins, Managing Director of Tulsa Innovation Labs. “By linking emerging technologies with some of the nation’s largest energy leaders, we help move innovation from concept to market faster, drawing new businesses to the region, enhancing our existing businesses, and reinforcing Tulsa’s role in the global energy economy.”
Deploy viable energy solutions
Once selected as members of Rose Rock Bridge, startups pilot their technology with relevant energy partners and grow their ventures in Tulsa. Support includes pilot design, execution, and go-to-market strategy; connections to follow-on investment opportunities; subsidized access to legal, marketing, and PR services; and help establishing a Tulsa presence for partner access.
Rose Rock Bridge’s success is measured not just in pilot deployments, but in lasting commercial relationships. Multiple portfolio companies have progressed from initial field tests to multi-year contracts with Fortune 500 operators. By derisking the path from proof-of-concept to procurement, RRB has helped establish procurement pathways that might otherwise take years to develop, if they materialize at all.
Launched in 2022 with support from Tulsa Innovation Labs, the studio has helped companies advance new technologies, secure patents, launch products, and attract capital. It has derisked 33 startups, supported 16 active or in-development pilots, and invested more than $2 million in early-stage companies, generating a combined portfolio valuation of over $55 million.
Examples of the studio’s success include Safety Radar, an AI-powered risk management platform, which secured its first contract with a Rose Rock Bridge partner, expanded to additional energy and aerospace clients, raised over $2 million, and established a Tulsa office. Kinitics Automation, a Canadian company, successfully piloted with one partner, resulting in deployments across multiple sites, effectively using RRB as their gateway to the U.S. market.
Backed by corporate partners with more than $150 billion in combined market capitalization, Rose Rock Bridge reflects both the scale of the opportunity and Tulsa’s rising influence in energy innovation.
Devon Fanfair is Manager of Rose Rock Bridge.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
With the partial shutdown still ongoing and no budget resolution in sight because the GOP is simply unwilling to endure any oversight of its anti-migrant programs, the TSA is leaking personnel. A whole lot of TSA agents walked off the job the moment their paychecks failed to arrive, leaving travelers to deal with scenarios that are somehow even worse than being manhandled by the TSA.
Yep, that’s the Atlanta airport, which has never been known for expeditious service, filled to the horizon with unhappy people that bears more than a slight resemblance to USSR grocery store photos from the mid-70s. (Making the resemblance even more uncanny is the amount of visible food.)
President Donald Trump on Saturday threatened to send federal immigration agents to airports across the country on Monday if Democrats don’t agree to end the Department of Homeland Security shutdown, now approaching five weeks.
“If the Radical Left Democrats don’t immediately sign an agreement to let our Country, in particular, our Airports, be FREE and SAFE again, I will move our brilliant and patriotic ICE Agents to the Airports where they will do Security like no one has ever seen before, including the immediate arrest of all Illegal Immigrants who have come into our Country,” he wrote.
I totally believe ICE will “do Security like no one has ever seen before.” I mean, they’ve already been doing civil enforcement like no one has ever seen before. And what better way to handle a travel crisis than by sending in a bunch of under-trained racists who just spent their ICE signing bonuses on emissions defeat devices and wraparound sunglasses subscription services to our nation’s airports, where they can apply all the skills they never learned during ICE training with the professionalism we’ve come to expect from people who like yelling and brandishing firearms.
What could possibly go wrong? I mean, they’re already not trained to do the job they’re supposed to be doing, so doing a job they’ve never been trained to do can’t be that much of a step up on the “promoted to highest level of your incompetence” scale.
Of course, that was just Trump saying some shit on social media because he apparently has nothing better to do with his time now that he’s (again) the Leader of the Free World. Trump says a lot of stuff. He quite frequently says the opposite thing only hours or minutes or seconds later.
Immigration agents will deploy to airports on Monday under the direction of border czar Tom Homan, President Donald Trump said Sunday, as talks to fund the Department of Homeland Security have yet to yield a breakthrough.
[…]
Homan told CNN on Sunday that the move is about “helping TSA do their mission and get the American public through that airport as quick as they can while adhering to all the security guidelines and the protocols.”
Siiiiiiiiiiiiiiiiiiiiiigh. If you don’t need to travel, then maybe don’t? Sending a bunch of over-funded, under-trained, trigger-happy federal officers into crowded airports is a recipe for disaster. And even Homan doesn’t seem to know what ICE will be doing to actually help expedite passenger screening — not when he’s promising they won’t be doing anything they’re not trained to do.
“We’re simply there to help TSA do their job in areas that don’t need their specialized expertise, such as screening through the X-ray machine. Not trained in that? We won’t do that,” Homan told CNN’s Dana Bash on “State of the Union.”
“But there are roles we can play to release TSA officers from the non-significant roles, such as guarding an exit so they can get back to the scanning machines and move people quicker,” he added.
“Guarding an exit?” What the hell does that even mean? TSA agents don’t “guard exits.” No one “guards exits.” Travelers and terrorists alike are interested in boarding planes. They’re not interested in exiting airports to, I don’t know, wander around the tarmac or wonder how the hell exactly they ended up on the outside of a building they 100% intended to remain on the inside of.
This is going to end up being a case of Your Tax Dollars Trying To Look Busy. And that’s the best case scenario. The worst case scenarios begin directly after that. And I don’t think travelers are going to feel any safer or more secure when there are a bunch of twitchy, camouflaged dudes in masks wandering around like they’re about ready to raid Entebbe, rather than just looking for an exit to guard.
We’re in the midst of pretty hellish times. This… this just seems like we’re being trolled by a Higher Power that’s decided to amuse itself while the rest of the world falls apart.
Apple announced that its annual Worldwide Developers Conference (WWDC) will be June 8-12 this year, beginning with a keynote on Monday, June 8.
Each year, WWDC is used to unveil the company’s latest slate of software coming to iPhones, iPads, Macs and more. The news comes after the company released the iPhone 17E, iPad Air M4, and a number of new Macs, including the $599 MacBook Neo earlier in March. While we have seen some hardware announced during previous WWDC keynotes, like the Vision Pro in 2023, the developers conference has recently been focused on software and Apple Intelligence.
At the 2026 event, we expect Apple to introduce new versions of operating systems, like iOS 27, MacOS 27, iPadOS 27, WatchOS 27, VisionOS 27 and TVOS 27.
“WWDC is one of the most exciting times for us at Apple because it’s a chance for our incredible global developer community to come together for an electrifying week that celebrates technology, innovation and collaboration,” Susan Prescott, Apple’s vice president of worldwide developer relations, said in a statement.
There’s still time to grab a last-minute ticket for GeekWire’s Agents of Transformation, a half-day summit in Seattle on Tuesday that will explore how agentic AI is redefining work, creativity, and leadership.
Keep reading for details about our speaker lineup, the schedule, logistical information and more.
This event builds on an ongoing GeekWire editorial series, underwritten by Accenture, spotlighting how startups, developers and tech giants are using intelligent agents to innovate.
Computershack shares a report from NBC News: Leonid Radvinsky, the owner of adult-content platform OnlyFans, has died of cancer at the age of 43, the company said in a statement on Monday. “We are deeply saddened to announce the death of Leo Radvinsky. Leo passed away peacefully after a long battle with cancer,” an OnlyFans spokesperson said. “His family have requested privacy at this difficult time.”
Radvinsky, a Ukrainian-American entrepreneur, acquired Fenix International Limited, the parent company of OnlyFans, in 2018 and served as its director and majority shareholder. He also runs Leo, a venture capital fund he founded in 2009 that focuses primarily on investments in technology companies. According to Reuters, OnlyFans is valued at around $5.5 billion, including debt.
Walmart found that purchases made directly inside ChatGPT converted at only one-third the rate of traditional website checkouts, leading it to abandon OpenAI’s Instant Checkout in favor of routing users through its own platform. Search Engine Land reports: Starting in November, Walmart offered about 200,000 products through OpenAI’s Instant Checkout. Users could complete purchases inside ChatGPT without visiting Walmart’s site. Daniel Danker, Walmart’s EVP of product and design, said those in-chat purchases converted at one-third the rate of click-out transactions. He called the experience “unsatisfying” and confirmed Walmart is moving away from it.
Instant Checkout was designed to let users complete purchases directly inside ChatGPT without visiting a retailer’s website. However, earlier this month, OpenAI confirmed it was phasing out Instant Checkout in favor of app-based checkout handled by merchants. Walmart will embed its own chatbot, Sparky, inside ChatGPT. Users will log into Walmart, sync carts across platforms, and complete purchases within Walmart’s system. A similar integration is coming to Google Gemini next month. In other Walmart-related news, the retailer announced plans to roll out “digital price tags” to all U.S. stores by the end of the year.
A LinkedIn report shows that while the rate of people starting new jobs in Ireland is falling, the decline is softer than in other European markets.
Workplace social media platform LinkedIn has released new data exploring Ireland’s current hiring market. It found that Irish hiring rates stalled in January, recording a “moderate” 7.2pc year-on-year decline.
In Ireland, the rate of decline was shown, however, to be less harsh than that of European counterparts, with the wider EMEA-LATAM grouping experiencing a year-on-year average decline of 12.2pc. Worse still, Italy and the Netherlands reported a decline of more than 16pc and France a decline of more than 17pc year-on-year.
Commenting on the data, LinkedIn Ireland’s country manager Cara O’Leary said: “Despite the stall in hiring, Ireland remains more resilient than many of our European peers. While some sectors might be seeing a lull in new hires, there are other industries that are on the front foot like financial services and healthcare, where opportunity abounds despite the broader cautious jobs market.”
The report noted “green shoots” across numerous industries in Ireland, citing financial services (up 5.9pc) and hospitals and healthcare (up 5.4pc) as examples.
Mobility and AI
Demand for flexible working opportunities also increased: Ireland had the highest proportion of remote job postings in Europe (10.9pc) and came second for the proportion of hybrid positions advertised. The report said that while remote roles accounted for 10.9pc of all job postings, they drew 18.5pc of all applications, highlighting the pull of flexibility in attracting talent.
O’Leary said: “Our latest data continues to show the magnetic power of flexible work to attract prospective talent. Ireland dominates the European ranks for remote job postings and comes a close second for hybrid roles. The volume of applications for remote positions underlines their desirability and sends a clear signal that flexibility is a key differentiator to hiring companies.”
LinkedIn’s data also highlighted a demand for specialised talent in a landscape where it found the ability to work with AI agents among the fastest growing AI engineering skills of 2025. This, the report stated, is reflective of a shift toward autonomous execution for certain tasks.
Similarly, the platform found that growth in AI strategy and large language model (LLM) operations skills underscores organisations’ investments in specialised workflows. The net result, the report stated, is that AI engineering talent now holds a mobility premium: with highly portable skills, these professionals are eight times more likely to move across borders than the average LinkedIn member.
O’Leary said: “Employers are eying up specialist talent, particularly professionals with expertise in AI agents, AI strategy and LLM ops. Given that many of these roles did not exist five years ago, AI professionals are in a position to command a clear premium.
“For example, AI engineering professionals are significantly more mobile than the wider workforce, with our previous data showing Ireland to be a net beneficiary of AI migration.”