

This Android XR Feature Convinced Me Smart Glasses Aren’t So Pointless After All


One of my biggest gripes when navigating a new area is that I’m too busy following directions on my phone to really take in my surroundings. But after trying on Google’s Android XR glasses, I’ve seen a promising solution. 

At Mobile World Congress in Barcelona, I got a demo of Google’s wearable prototype frames and was more impressed than I expected to be. I’m not big on wearables; I’m good with plain-old glasses and jewelry that can’t ping me with notifications throughout the day. But I decided to give the Android XR glasses a try as I explored a strip of the MWC conference hall dubbed Android Avenue. 

With a thick black frame and clear lenses, the Android XR prototype glasses look rather unassuming — especially because the display in the right lens is barely perceptible. Once I put them on, I long-pressed the right temple to trigger Gemini and ask questions about objects around me. Then my skepticism slowly began to dissolve.


The feature that sold me was the Google Maps demo. I looked at a photo of Barcelona stadium Camp Nou and asked Gemini to “navigate here.” White text appeared in the center of the lens, showing me how far I’d need to go before turning right. And when I looked down, I could see a visualization of the route, like you’ll find in the Maps app on a mobile device, so I could just follow the highlighted path. That would solve my dilemma of wanting to know where I’m going while also trying to take in the view. 

I also looked at a vinyl cover for Barcelona, the album by Freddie Mercury and Montserrat Caballé, and asked Gemini to play a song from it. The audio quality was impressively comparable to what I’d hear with headphones — but without the feeling of something in or on my ears, which I appreciated. 

And lastly, I got a demo of live translation through the glasses. The Google employee showing me the prototype spoke in Spanish and then Farsi, and an overlay of text appeared as I looked through the glasses at him and my surroundings. Perhaps the coolest part is I also heard the English translation spoken aloud in his (AI-generated) voice. 

Google has also tapped this AI tech for its Pixel 10 phones, so if you’re on a phone call with someone speaking a different language, you’ll get real-time translation with a simulation of their voice. Google Translate also got an AI update last year that surfaces audio and text translations in the app as two people chat. Glasses feel like a good fit for this use case, too, since you don’t have to pull out your phone and look down at a screen when talking to someone. If the other person doesn’t have Android XR glasses, though, they’ll need to glance at their phone to see a translation of what you’re saying. 


A subtle display in the right lens shows projections of directions and other information.

Patrick Holland/CNET

I walked away from the demo having softened to the idea of someday owning smart glasses of my own. I'm not completely sold, as I'm not sure I need more tech in my life, but there are certainly instances in which it could come in handy to see a subtle overlay of answers from an AI assistant like Gemini. And because Android XR glasses look more like standard specs than the doomed Google Glass, I could probably pull them off without looking too pretentious. CNET's Patrick Holland had a similar conversion moment when he tried the Android XR glasses at Google I/O last year.

As CNET’s Scott Stein has noted, smart glasses “aim to be what you want to wear, ideally every day and all day long. They could well become constant companions like your earbuds, smartwatch, fitness band and wellness ring, and as indispensable as your phone.”


I’ll probably have to wait a bit longer before making that call for myself. Google hasn’t shared any specifics on a launch date for glasses with Android XR, though it has said that Warby Parker and Gentle Monster will be the first eyeglass brands to carry the AI-powered glasses. 




Apple Prepares To Add Search Ads To Apple Maps


Apple is reportedly preparing to add search ads to Apple Maps, "and it could start to roll out to users by the summer," reports AppleInsider, citing sources from Bloomberg (paywalled). From the report: Apple will make an announcement as soon as March. This will bring ads to search queries within the navigation app, which will operate similarly to Google's advertising system. Retailers and brands will be able to bid for ad spots tied to search queries for specific terms, such as types of food or services. The winning bidder will be able to show an ad at the top of the results, pointing to a related location for that business. Apple also announced in January that it would add more ads within the App Store, starting in March in the UK and Japan.



Samsung will soon let you control smart home devices from your car’s dashboard


Your car might just become the newest hub for your smart home. Samsung has expanded SmartThings integration, enabling drivers to control their smart home devices directly from their car's infotainment system. The feature is called Car-to-Home.

Building on the earlier Home-to-Car capability that allowed users to monitor their cars from inside the house, the Car-to-Home feature flips the functionality so you can control your smart home appliances, such as air conditioners, lighting systems, and other smart switches, from your car’s dashboard. 

What can the Car-to-Home feature do?

The practical scope of the feature is broader than it might sound, as it is compatible with devices such as air conditioners, air purifiers, robot vacuums, lights, and cameras. Connecting is straightforward — drivers scan a QR code displayed on their car’s infotainment screen and link their vehicle to their SmartThings account. 

Apart from manual control (flipping the switches), the Car-to-Home feature unlocks location-aware automation that genuinely changes how your home responds to your day. You can set routines so that the SmartThings network turns on the required appliances as you park your car in the garage.

I can see people using the feature to pre-cool their rooms or run air purifiers before they arrive home after a tiring day at the office. Conversely, the feature can also shut everything down automatically as you get in the car and leave the driveway. There's a dedicated Away Mode for handling lights when you're away.


Who gets access, and when?

For now, the feature is available on select Hyundai and Kia cars, specifically those that feature the connected car Navigation Cockpit (ccNC) introduced after November 2022 in Korea. However, both Samsung and Hyundai aim to expand the feature to their customers throughout the world in due course. 

Eligible models include the Grandeur, Santa Fe, Ioniq 5, K5, Sorento, and EV9. Samsung also plans to extend the feature to Genesis vehicles equipped with the ccIC27 infotainment system. 

As the feature becomes available to a wider audience, it could drive a behavioral shift in which cars become central nodes in the smart home ecosystem, linking mobility and domestic technology in ways that were, until recently, purely speculative.



The F-22 Raptor Is Getting 2 New Upgrades






The F-22 Raptor is one of the premier fighter jets in the sky and one of the few fifth-generation fighters in active service in 2026. Still, despite its bleeding-edge placement in the United States Air Force’s arsenal, it’s getting a little long in the tooth, having first been introduced to service all the way back in 2005.

The War Zone reported that a Lockheed Martin-produced mockup of the new version of the Raptor was at the Warfare Symposium, a convention for the defense industry and elements of the United States military. The outlet reported some noteworthy changes being made to this plane. Namely, the aircraft is slated to get upgrades in the form of some extra range and another set of eyes.

Fuel tanks and sensor pods might not sound like a big deal, as those components have been mounted to wing pylons of various aircraft for decades. But it's not so easy to make these kinds of adjustments on a plane as stealthy as the F-22. That's because external fuel tanks and sensors aren't designed with the same stealth considerations as the rest of the aircraft. A big fuel tank is nice, but it can make the plane more visible to radar.


The latest and greatest Raptor

The newer and stealthier sensor pods are posited to give the Raptor better infrared tracking capabilities, according to The War Zone. Given the F-22’s primary role as an air-to-air fighter and the increasing prevalence of powerful stealth fighters from potentially adversarial air forces, any extra capability would likely be welcome. 

Specifics as to how much extra range the fuel tanks will give the Raptor and what the sensor pods will allow the F-22 Raptor to do are likely classified. Nevertheless, upgrades are expected to enter service, or at least more advanced testing, over the course of 2026.


The F-22 Raptor, despite all of its menace and upcoming capabilities that, at least on paper, seem to entirely outclass most other jets, has never seen much air-to-air combat apart from shooting down a suspected surveillance balloon. The jet's exclusivity, paired with the fact that Air Force fighters don't shoot down other jets all that frequently, means that the F-22 doesn't see a lot of air-to-air action (at least that we know of).





Operation Alice: The dark web isn't as hidden as it seems, as global crackdown shows



Europol recently unveiled “Operation Alice,” a major effort to dismantle a large network of fraudulent websites hidden within the dark web. The investigation began in 2021 and initially focused on a platform named Alice with Violence CP. In the end, the operation took down one of the largest dark web…


Remembering IEEE Power & Energy Society’s Mel Olken


Mel Olken

Former executive director of the IEEE Power & Energy Society

Fellow, 92; died 9 January

Olken became the first executive director of the IEEE Power & Energy Society (PES) in 1995. In 2002 he left the position to serve as founding editor in chief of the society’s Power & Energy Magazine. Olken led the publication until 2016, when he retired.

After receiving a bachelor’s degree in engineering from the City College of New York, Olken was hired as an electrical engineer by American Electric Power, a utility based in Columbus, Ohio. He helped design coal, hydroelectric, and nuclear power plants. While at AEP, he was promoted to manager of the electrical generation department.


He joined IEEE in 1958 and became a PES member in 1973. An active volunteer, he chaired the society’s energy development and power generation committee and its technical council.

Olken was elected an IEEE Fellow in 1988 for “contributions to innovative design of reliable generating stations.”

He became an IEEE staff member in 1984 as society services director for IEEE Technical Activities. From 1990 to 1995 he served as managing director of Regional Activities group (now IEEE Member and Geographic Activities), before becoming PES executive director.

He received a PES Lifetime Achievement Award in 2012 for his “broad and sustained technical contributions to the development of power engineering and the power engineering profession.”


Stephanie A. Huguenin

Research scientist

IEEE member, 48; died 1 October

Huguenin was an administrative assistant in the physics and biophysics department at Augusta University, in Georgia. According to her Augusta obituary, she died of an illness acquired during her volunteer work in India.

She received a bachelor’s degree in engineering in 1999 from the College of Charleston, in South Carolina. During her senior year, she worked as a mathematics and science tutor at the Jenkins Orphanage (now the Jenkins Institute for Children), in North Charleston. After graduating, Huguenin traveled to India to volunteer at an orphanage run by the Mother Teresa Foundation.


Upon returning to the United States in 2001, Huguenin worked as a freelance research consultant. Three years later she was hired as a systems administrator and archivist by photographer Ebet Roberts in New York City. In 2010 she left to work as an operations strategist and technical consultant.

She earned a master’s degree in communication and research science in 2016 from New York University. While at NYU, she conducted experimental and theoretical research in Internet Protocol design and implementation as well as network security and management.

From 2020 to 2024 she was a research scientist at businesses owned by her family. She joined Augusta University in 2023.

She was a member of the IEEE Geoscience and Remote Sensing Society and the IEEE Systems Council.


Huguenin volunteered for the Internet Engineering Task Force, a standards development organization, and the American Registry for Internet Numbers. ARIN manages and distributes internet number resources such as IP addresses and autonomous system numbers.

The nonprofits she supported included the Coastal Conservation League, the Longleaf Alliance, the Lowcountry Land Trust, the Nature Conservancy, and Women in Defense.




2026 Swift Student Challenge winners to be announced on March 26


The winners of the 2026 Swift Student Challenge will be announced on March 26, with the best among them set to receive a trip to Apple Park.


Every year, Apple holds the Swift Student Challenge. The event encourages up-and-coming student developers to practice their craft and lets them win various prizes.
In an announcement on Monday, the iPhone maker described the annual event as a program meant to “uplift the next generation of entrepreneurs, coders, and designers.” The company added that winners will be notified on Thursday, March 26.


Signal is being targeted by Russian hackers in a huge new phishing campaign, FBI says



  • FBI and CISA warn of Russian espionage campaign targeting messaging apps
  • Phishing and social engineering used to hijack Signal and other CMA accounts
  • Thousands of victims’ accounts compromised, including officials, military, and journalists

The Federal Bureau of Investigation (FBI) and the US Cybersecurity and Infrastructure Security Agency (CISA) are warning about an ongoing espionage campaign by Russian cyberspies.

In a joint Public Service Announcement (PSA) published late last week, the two agencies said Russian Intelligence Services (RIS)-affiliated threat actors are actively targeting commercial messaging applications (CMA). They specifically mentioned Signal, but stressed that other CMAs are most likely targeted, as well.



AI could be the opposite of social media


For more than four decades, technological progress has been undermining expert authority, democratizing public debate, and steering individuals toward ever-more bespoke conceptions of reality.

In the mid-20th century, the high costs of television production — and physical limitations of the broadcast spectrum — tightly capped the number of networks. ABC, NBC, and CBS collectively owned TV news. On any given evening in the 1960s, roughly 90 percent of viewers were watching one of the Big Three’s newscasts.

Journalistic programs weren’t just limited in number, but also ideological content. The networks’ news divisions all sought the broadest possible audience, a business model that discouraged airing iconoclastic viewpoints. And they also relied overwhelmingly on official sources — politicians, military officials, and credentialed experts — whose perspectives fell within the narrow bounds of respectable opinion.

This media environment cultivated broad public agreement over basic facts and widespread trust in mainstream institutions. It also helped the government wage a barbaric war in the name of lies.

  • There’s evidence that LLMs converge on a common (and largely accurate) picture of reality.
  • LLMs have successfully persuaded users to abandon false and conspiratorial beliefs.
  • Unlike social media companies, AI labs have an economic incentive to spread accurate information.
  • Still, there are reasons to fear that AI will nonetheless make public discourse worse.

For better and worse, subsequent advances in information technology diffused influence over public opinion — at first gradually and then all at once. During the closing decades of the 20th century, cable eroded barriers to entry in the TV news business, facilitating the rise of Fox News and MSNBC, networks that catered to previously underrepresented political sensibilities.

But the internet brought the real revolution. By slashing the cost of publishing and distribution nearly to zero, digital platforms enabled anyone with an internet connection to reach a mass audience. Traditional arbiters of headline news, scientific fact, and legitimate opinion — editors, producers, and academics — exerted less and less veto power over public discourse. Outlets and influencers proliferated, many defining themselves in opposition to established institutions. All the while, social media algorithms shepherded their users into customized streams of information, each optimized for their personal engagement.

The democratic nature of digital media initially inspired utopian hopes. It promised to expose the blind spots of cultural elites, increase the accountability of elected officials, and put virtually all human knowledge at everyone’s fingertips. And the internet has done all of these things, at least to some extent.

Yet it has also helped pro-Hitler podcasters reach an audience of millions, enabled influencers with body dysmorphia to sell teenagers on self-mutilation, elevated crackpots to the commanding heights of American public health — and, more generally, eroded the intellectual standards, shared understandings, social trust, and (small-l) liberalism on which rational self-government depends.

Many assume that the latest breakthrough in information technology — generative AI — will deepen these pathologies: In a world of photorealistic deepfakes, even video evidence may surrender its capacity to forge consensus. Sycophantic large language models (LLMs), meanwhile, could reinforce ideologues’ delusions. And fully automated film production could enable extremists to flood the internet with slick propaganda.


But there’s reason to think that this is too pessimistic. Rather than deepening social media’s effects on public opinion, AI may partially reverse them — by increasing the influence of credentialed experts and fostering greater consensus about factual reality. In other words, for the first time in living memory, the arc of media history may be bending back toward technocracy.

Are you there Grok? It’s me, the demos

At least, this is what the British philosopher Dan Williams and former Vox writer Dylan Matthews have recently argued.

Matthews begins his case by spotlighting a phenomenon familiar to every problem user of X (née “Twitter”): Elon Musk’s chatbot telling the billionaire that he is wrong.


In this instance, Musk had claimed that Renée Good, the Minnesota woman killed by an ICE agent in January, had “tried to run people over” in the moments before her death. Someone replied to Musk’s post by asking Grok — X’s resident AI — whether his claim was consistent with video evidence of the shooting.
The bot replied:

[Screenshot: Grok's reply]

In reaching this assessment, Grok was affirming the consensus among mainstream journalistic institutions — and also, other chatbots.

For Matthews, this incident illustrates a broader truth about LLMs: Like mid-20th century TV, they are a "converging" form of technology, in the sense that they "homogenize the perspectives the population experiences and build a less polarized, more shared reality among the population's members." And he suggests that they are also a "technocratising" force, in that they give experts disproportionate influence over the content of that shared reality.

Of course, this would be a lot to read into a single Grok reply; if you had glanced at that bot's outputs last July, when a misguided update to the LLM's programming caused it to self-identify as "MechaHitler," you might have concluded that AI is a "Nazifying" technology.

But there is evidence that Grok and other LLMs tend to provide (relatively) accurate fact checks — and forge consensus among users in the process.


One recent study examined a database of over 1.6 million fact-checking requests presented to Grok or Perplexity (a rival chatbot) on X last year. It found that the two LLMs agreed with each other in a majority of cases and strongly diverged on only a small fraction.

The researchers also compared the bots' answers against those of professional fact-checkers, and the results were similarly encouraging. When used through its developer interface (rather than on X), Grok achieved essentially the same rate of agreement with the human fact-checkers as they did with each other.

What’s more, despite being the creation of a far-right ideologue, Grok deemed posts from Republican accounts inaccurate at a higher rate than those of Democratic accounts — a pattern consistent with past research showing that the right tends to share misinformation more frequently than the left.

Critically, in the paper, the LLMs’ answers did not just converge on expert opinion — they also nudged users toward their conclusions.


Other research has documented similar effects. Multiple studies have indicated that speaking with an LLM about climate change or vaccine safety reduces users’ skepticism about the scientific consensus on those topics.

AI might combat misinformation in practice. But does it in theory?

A handful of papers can’t by themselves prove that AI is adept at fact-checking, much less that its overall impact on the information environment will be positive. To their credit, Matthews and Williams concede that their thesis is speculative.

But they offer several theoretical reasons to expect that AI will have broadly “converging” and “technocratising” effects on public discourse. Two are particularly compelling:


1) AI firms have a strong financial incentive to produce accurate information. Social media platforms are suffused with misinformation for many reasons. But one is that facilitating the spread of conspiracy theories or pseudoscience costs X, YouTube, and Facebook nothing. These firms make money by mining human attention, not providing reliable insight. If evangelism for the “flat Earth” theory attracts more interest than a lecture on astrophysics, social media companies will milk higher profits from the former than the latter (no matter how spherical our planet may appear to untrained eyes).

But AI firms face different incentives. Although some labs plan to monetize user attention through advertising, their core business objective is still to maximize their models’ ability to perform economically useful work. Law firms will not pay for an LLM that generates grossly inaccurate summaries of case law, even if its hallucinations are more entertaining than the truth. And one can say much the same about investment banks, management consultancies, or any other pillar of the “knowledge economy.”

For this reason, AI companies need their models to distinguish reliable sources of information from unreliable ones, evaluate arguments on the basis of evidence, and reason logically. In principle, it might be possible for OpenAI and Anthropic to build models that prize accuracy in business contexts — but prioritize users’ titillation or ideological comfort in personal ones. In practice, however, it’s hard to inject a bit of irrationality or political bias into a model’s outputs without sabotaging its commercial utility (as Musk evidently discovered last year).

2) LLMs are infinitely more patient and polite than any human expert has ever been. Well-informed humans have been trying to disabuse the deluded for as long as our species has been capable of speech. But there’s reason to think that LLMs will prove radically more effective at that task.


After all, human experts cannot provide encyclopedic answers to everyone’s idiosyncratic questions about their specialty, instantly and on demand. But AI models can. And the chatbots will also gamely field as many follow-ups as desired — addressing every source of a user’s skepticism, in terms customized for their reading level and sensibilities — without ever growing irritated or condescending.

That last bit is especially significant. When one human tries to persuade another that they are wrong about something — particularly within view of other people — the misinformed person is liable to perceive a threat to their status: To recognize one’s error might seem like conceding one’s intellectual inferiority. And such defensiveness is only magnified when their erudite interlocutor patronizes (or outright insults) them, as even learned scholars are wont to do on social media.

But LLMs do not compete with humans for social prestige or sexual partners (at least, not yet). And chatbot conversations are generally private. Thus, a human can concede an LLM’s point without suffering a sense of status threat or losing face. We don’t experience Claude as our snobby social better, but rather, as our dutiful personal adviser.

The expert consensus has never before had such an advocate. And there’s evidence that LLMs’ infinite patience renders them exceptionally effective at dispelling misconceptions. In a 2024 study, proponents of various conspiracy theories — including 2020 election denial — durably revised their beliefs after extensively debating the topic with a chatbot.


It seems clear, then, that LLMs possess some "converging" and "technocratising" properties. And, experts' fallibility notwithstanding, this constitutes a basis for thinking that AI will foster a healthier intellectual climate than social media has to date.

Still, it isn’t hard to come up with reasons for doubting this theory (and not merely because ChatGPT will provide them on demand). To name just five:

1) LLMs can mold reality to match their users’ desires. If you log into ChatGPT for the first time — and immediately ask whether your mother is trying to poison you by piping psychedelic fumes through your car vents — the LLM generally won’t answer with an emphatic “yes.” But when Stein-Erik Soelberg inundated the chatbot with his paranoid delusions over a period of months, it eventually began affirming his persecution fantasies, allegedly nudging him toward matricide in the process.

Such instances of "AI psychosis" are rare. But they represent the most extreme manifestation of a more common phenomenon — AI models' tendency toward sycophancy and personalization. Which is to say, these systems frequently grow more aligned with their users' perspectives over extended conversations, as they learn the kinds of responses that will generate positive feedback. This behavior has persisted even as AI companies have tried to combat it.


The sycophancy problem could therefore get dramatically worse if one or more LLM providers decide to center their business model around consumer engagement. As social media has shown, sensational and/or ideologically flattering information can be more engaging than the accurate variety. Thus, an AI company struggling to compete in the business-to-business market might choose to "sycophancy-max" its model, pursuing the same engagement-optimization tactics as YouTube or Facebook.

A world of even greater informational divergence — in which people aren't merely ensconced in echo chambers with like-minded ideologues, but immersed in a mirror of their own prejudices — might ensue.

2) Artificial intelligence has radically reduced the costs of generating propaganda. AI has already flooded social media with unlabeled "deepfake" videos. Soon, it may enable nefarious actors to orchestrate ever-more convincing "bot swarms" — networks of AI agents that impersonate humans on social media platforms, deploying LLMs' persuasive powers to indoctrinate other users and create the appearance of a false consensus.

In this scenario, LLMs might edify people who actively seek the truth through dialogue or fact-check requests, but thrust those who passively absorb political information from their environment — arguably, the majority — into perpetual confusion.


3) AI could breed the bad kind of consensus. Even if LLMs do promote convergence on a shared conception of reality, that picture could be systematically flawed. In the worst case, an authoritarian government could program the major AI platforms to validate regime-legitimizing narratives. Less catastrophically, LLMs’ converging tendencies could simply make technocrats’ honest mistakes harder to detect or remedy.

4) AI could trigger widespread cognitive atrophy, as humans outsource an ever-larger share of cognitive labor to machines. Over time, this could erode the public's capacity for reason, leaving it more vulnerable to both fully automated demagogy and top-down manipulation.

5) AI could wreck the sources of authority that make it effective. LLMs might be good at distilling information into a consensus answer, but that answer is only as good as the information feeding the models.

Already, chatbots are draining revenue from (embattled) news organizations, which will produce fewer timely and verified reports about current events as a result. Online forums, a key source for AI advice, are increasingly being flooded with plugs for products in order to trick chatbots into recommending them. Wikipedia's human moderators fear a future in which they're stuck sifting through a tsunami of low-quality AI-generated updates and citations.


LLMs may prize accurate information. But if they bankrupt or corrupt the institutions that produce such data, their outputs may grow progressively impoverished.

For these reasons, among others, AI models’ ultimate implications for the information environment are highly uncertain. What Matthews and Williams convincingly establish, however, is that this technology could facilitate a more consensual and fact-based public discourse — if we properly guide its development.

Of course, precisely how to maximize AI’s capacity for edification — while minimizing its potential for distortion — is a difficult question, about which reasonable people can disagree. So, let’s ask Claude.



From lab to market: Rose Rock Bridge fast-tracks energy innovation in Tulsa


Presented by Tulsa Innovation Labs


As the global energy system evolves, companies are racing to adopt technologies that can deliver real-world solutions, especially in hard-to-abate industries. Oklahoma, long known as the oil capital of the world, is a center for energy innovation, with Rose Rock Bridge at the forefront.

A non-profit based in Tulsa, Rose Rock Bridge is a pilot deployment studio that connects early-stage energy startups with corporate energy partners, non-dilutive funding, and pilot opportunities that accelerate commercialization. Now accepting applications for its Spring 2026 cohort through April 6, it is seeking early- and growth-stage startups developing practical, scalable solutions to today’s most pressing energy challenges.

Rose Rock Bridge gives startups access to real-world commercial workflows and pilot opportunities through energy partners with more than $150 billion in market capitalization, including Devon Energy, H&P, ONEOK, and Williams. Backed by one of the strongest coalitions of strategic partners and investors of any energy-focused accelerator, incubator, or venture studio, the program enables startups to move quickly from development to real-world testing and deployment.


Here’s how it works:

Discover opportunities for energy innovation

Rose Rock Bridge starts by working directly with corporate innovation teams to identify high-priority technology solutions for their businesses, pinpointing which solutions will carry the most impact. Focus areas are formed around these findings.

“We don’t just chase the latest tech and hope to find a use for it. Our process starts at the asset level, identifying the specific operational bottlenecks and unmet requirements our partners are actually facing,” says Nishant Agarwal, Innovation Manager. “By leveraging our background in CVC and engineering, we run technical deep dives alongside partner subject matter experts to define the requirement first. We then source technologies as a direct response to those needs. This ensures we aren’t just presenting ‘interesting research,’ but delivering solutions with a validated deployment pathway and a clear line of sight to a business case.”

Tapping into its network of 40+ universities, 10+ energy incubators, and Fortune 500 companies, Rose Rock Bridge then determines emerging opportunities in the energy ecosystem. Rather than just selecting companies or ideas that might bring in capital, the studio chooses startups that have real potential to commercialize quickly in order to solve the industry’s most pressing challenges.


This year’s focus areas include:

“We’re evaluating deployment probability from day one,” says Andrada Pantelimon, Innovation Associate at Rose Rock Bridge, who manages sourcing strategy and startup operations. “Can this technology deliver a measurable bottom-line impact? Can it realistically pilot within 12 months? Is your team equipped to commercialize? Show us you’ve quantified your value proposition in operator terms and understand which business unit within a corporation might own this solution. If you can articulate those pieces clearly, you’re the kind of startup we want to support.”

Derisk technologies for early-stage startups & energy companies

The benefit is tangible for leading energy corporations seeking proven solutions to complex operational challenges. Rose Rock Bridge provides its corporate partners with validated, field-tested technologies while significantly reducing deployment risk. At the program’s conclusion, partners gain direct access to emerging innovations that have already undergone technical validation and operational feasibility assessment, with identified procurement pathways and pilot plans designed for commercial deployment.

Each cohort cycle, up to 15 startups are selected to enter a six-week virtual accelerator focused on pilot deployment. Founders participate in reverse pitch sessions with oil and gas partners, one-on-one clinics with industry and capital mentors, and hands-on commercialization workshops. Founders have the unique opportunity to refine their solutions, assess pilot feasibility, and build industry relationships. This approach derisks adoption and investments through iterative customer feedback, in-field testing, and pilots, enabling breakthrough technologies to reach commercial viability quickly and effectively.


“Our curriculum is singularly focused on preparing startups for the realities of corporate partnerships,” says Devon Fanfair, Rose Rock Bridge Manager and former Techstars Managing Director who is scaling the RRB program. “Founders aren’t just learning, they’re actively testing their assumptions with the exact customers who might deploy their technology. That rapid feedback loop is what transforms promising technologies into deployment-ready solutions with clear commercial pathways.”

At the culmination of the accelerator, teams participate in the Rose Rock Bridge showcase with the unique opportunity to pitch their startup to the energy corporate partners they’ve worked alongside for the past six weeks. Four startups are selected to receive up to $100,000 in non-dilutive funding and opportunities for business support services, joining a one-year cohort designed to prepare technologies for market adoption.

“Rose Rock Bridge is a cornerstone of Tulsa Innovation Labs’ strategy to showcase our region as a national hub for energy innovation,” added Jennifer Hankins, Managing Director of Tulsa Innovation Labs. “By linking emerging technologies with some of the nation’s largest energy leaders, we help move innovation from concept to market faster, drawing new businesses to the region, enhancing our existing businesses, and reinforcing Tulsa’s role in the global energy economy.”

Deploy viable energy solutions

Once selected as members of Rose Rock Bridge, startups pilot their technology with relevant energy partners and grow their venture in Tulsa. Support includes pilot design, execution, and go-to-market strategy; connections to follow-on investment opportunities; subsidized access to legal, marketing, and PR services; and help establishing a Tulsa presence for partner access.


Rose Rock Bridge’s success is measured not just in pilot deployments, but in lasting commercial relationships. Multiple portfolio companies have progressed from initial field tests to multi-year contracts with Fortune 500 operators. By derisking the path from proof-of-concept to procurement, RRB has helped establish procurement pathways that might otherwise take years to develop, if they materialize at all.

Launched in 2022 with support from Tulsa Innovation Labs, the studio has helped companies advance new technologies, secure patents, launch products, and attract capital. It has derisked 33 startups, supported 16 active or in-development pilots, and invested more than $2 million in early-stage companies, generating a combined portfolio valuation of over $55 million.

Examples of the studio’s success include Safety Radar, an AI-powered risk management platform, which secured its first contract with a Rose Rock Bridge partner, expanded to additional energy and aerospace clients, raised over $2 million, and established a Tulsa office. Kinitics Automation, a Canadian company, successfully piloted with one partner, resulting in deployments across multiple sites, effectively using RRB as their gateway to the U.S. market.

Backed by corporate partners with more than $150 billion in combined market capitalization, Rose Rock Bridge reflects both the scale of the opportunity and Tulsa’s rising influence in energy innovation.


Devon Fanfair is Manager of Rose Rock Bridge.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.



DHS Takes Out Its Funding Frustrations On Millions Of Americans By Sending ICE Agents To Do TSA Work


from the america:-never-greater dept

With the partial shutdown still ongoing and no budget resolution in sight because the GOP is simply unwilling to endure any oversight of its anti-migrant programs, the TSA is leaking personnel. A whole lot of TSA agents walked off the job the moment their paychecks failed to arrive, leaving travelers to deal with scenarios that are somehow even worse than being manhandled by the TSA.

Folks, it’s yet another Long National Nightmare!

Yep, that’s the Atlanta airport, which has never been known for expeditious service, filled to the horizon with unhappy people, a scene that bears more than a slight resemblance to USSR grocery store photos from the mid-70s. (Making the resemblance even more uncanny is the amount of visible food.)

Well, the TSA may be temporarily out of money, but guess who isn’t! I’ll leave it to Dr. America to deliver the news — a cure that’s worse than the disease!


President Donald Trump on Saturday threatened to send federal immigration agents to airports across the country on Monday if Democrats don’t agree to end the Department of Homeland Security shutdown, now approaching five weeks.

“If the Radical Left Democrats don’t immediately sign an agreement to let our Country, in particular, our Airports, be FREE and SAFE again, I will move our brilliant and patriotic ICE Agents to the Airports where they will do Security like no one has ever seen before, including the immediate arrest of all Illegal Immigrants who have come into our Country,” he wrote.

I totally believe ICE will “do Security like no one has ever seen before.” I mean, they’ve already been doing civil enforcement like no one has ever seen before. And what better way to handle a travel crisis than by sending in a bunch of under-trained racists who just spent their ICE signing bonuses on emissions defeat devices and wraparound sunglasses subscription services to our nation’s airports, where they can apply all the skills they never learned during ICE training with the professionalism we’ve come to expect from people who like yelling and brandishing firearms.

What could possibly go wrong? I mean, they’re already not trained to do the job they’re supposed to be doing, so doing a job they’ve never been trained to do can’t be that much of a step up on the “promoted to highest level of your incompetence” scale.

Of course, that was just Trump saying some shit on social media because he apparently has nothing better to do with his time now that he’s (again) the Leader of the Free World. Trump says a lot of stuff. He quite frequently says the opposite thing only hours or minutes or seconds later.


It brings me no pleasure to report that this horrendous brain fart will apparently be A Real Thing:

Immigration agents will deploy to airports on Monday under the direction of border czar Tom Homan, President Donald Trump said Sunday, as talks to fund the Department of Homeland Security have yet to yield a breakthrough.

[…]

Homan told CNN on Sunday that the move is about “helping TSA do their mission and get the American public through that airport as quick as they can while adhering to all the security guidelines and the protocols.”

Siiiiiiiiiiiiiiiiiiiiiigh. If you don’t need to travel, then maybe don’t? Sending a bunch of over-funded, under-trained, trigger-happy federal officers into crowded airports is a recipe for disaster. And even Homan doesn’t seem to know what ICE will be doing to actually help expedite passenger screening — not when he’s promising they won’t be doing anything they’re not trained to do.


“We’re simply there to help TSA do their job in areas that don’t need their specialized expertise, such as screening through the X-ray machine. Not trained in that? We won’t do that,” Homan told CNN’s Dana Bash on “State of the Union.”

“But there are roles we can play to release TSA officers from the non-significant roles, such as guarding an exit so they can get back to the scanning machines and move people quicker,” he added.

“Guarding an exit?” What the hell does that even mean? TSA agents don’t “guard exits.” No one “guards exits.” Travelers and terrorists alike are interested in boarding planes. They’re not interested in exiting airports to, I don’t know, wander around the tarmac or wonder how the hell exactly they ended up on the outside of a building they 100% intended to remain on the inside of.

This is going to end up being a case of Your Tax Dollars Trying To Look Busy. And that’s the best case scenario. The worst case scenarios begin directly after that. And I don’t think travelers are going to feel any safer or more secure when there are a bunch of twitchy, camouflaged dudes in masks wandering around like they’re about ready to raid Entebbe, rather than just looking for an exit to guard.

We’re in the midst of pretty hellish times. This… this just seems like we’re being trolled by a Higher Power that’s decided to amuse itself while the rest of the world falls apart.


Filed Under: cruelty is the point, dhs, ice, tom homan, tsa

