
Tech

Motorola’s Upcoming Razr Fold Pairs a Massive Battery With a Sleek Design


Motorola is slowly teasing more details about its upcoming Razr Fold, including its battery capacity, thickness and durability. I got an early look at the phone at Mobile World Congress in Barcelona, before it debuts in North America this summer.

In January, the company shared a handful of Razr Fold specs, including that it’ll have a 6.6-inch external display and an 8.1-inch internal screen — making it slightly bigger than the Samsung Galaxy Z Fold 7 and Google Pixel 10 Pro Fold.

We now also know the Razr Fold will be 4.6mm thick when open and 9.9mm thick when closed, weighing 243 grams. That places it firmly between Samsung’s and Google’s foldable offerings. 


In my hand, the Razr Fold felt similar to the Z Fold 7 in terms of its sleekness. The cover display is a comfortable and viable option for tasks like texting and scrolling. When you open the Razr, you can multitask with up to three apps. The incremental size-up compared with Samsung’s and Google’s foldables is hardly noticeable, and it should place the phone safely within their orbit.

The Razr Fold will also pack a triple 50-megapixel camera system, along with a 32-megapixel selfie camera on the cover and a 20-megapixel selfie inside. 


Despite its sleeker frame, the Razr Fold will have an impressively large 6,000-mAh battery. It’ll also support 80-watt wired charging and 50-watt wireless charging. That should help it stand out, especially from the 4,400-mAh battery on the Galaxy Z Fold 7. It’s powered by the Snapdragon 8 Gen 5 processor to boost performance and efficiency and to power AI features.


Motorola also shared more details about the Razr Fold’s durability. It’ll have IP48 and IP49 ratings, meaning it can withstand a meter of water for 30 minutes and handle high water pressure. But, unlike the Pixel 10 Pro Fold, it’s not dust-resistant. The Razr Fold will be the first smartphone to feature Corning’s Gorilla Glass Ceramic 3 on the cover.

Like most premium Android phones, the Razr Fold will come with seven years of software and security updates. There are two color options: Pantone blackened blue, which has a more textured back, and Pantone lily white, which is smoother and matte. Both backs are made of vegan leather and offer a more luxurious feel than the glass on most premium phones (not to mention the relative lack of fingerprints). 


The two biggest questions still loom: price and availability. Motorola says it’ll share information on both as the summer release window approaches.



Remembering IEEE Power & Energy Society’s Mel Olken


Mel Olken

Former executive director of the IEEE Power & Energy Society

Fellow, 92; died 9 January

Olken became the first executive director of the IEEE Power & Energy Society (PES) in 1995. In 2002 he left the position to serve as founding editor in chief of the society’s Power & Energy Magazine. Olken led the publication until 2016, when he retired.

After receiving a bachelor’s degree in engineering from the City College of New York, Olken was hired as an electrical engineer by American Electric Power, a utility based in Columbus, Ohio. He helped design coal, hydroelectric, and nuclear power plants. While at AEP, he was promoted to manager of the electrical generation department.


He joined IEEE in 1958 and became a PES member in 1973. An active volunteer, he chaired the society’s energy development and power generation committee and its technical council.

Olken was elected an IEEE Fellow in 1988 for “contributions to innovative design of reliable generating stations.”

He became an IEEE staff member in 1984 as society services director for IEEE Technical Activities. From 1990 to 1995 he served as managing director of the Regional Activities group (now IEEE Member and Geographic Activities), before becoming PES executive director.

He received a PES Lifetime Achievement Award in 2012 for his “broad and sustained technical contributions to the development of power engineering and the power engineering profession.”


Stephanie A. Huguenin

Research scientist

IEEE member, 48; died 1 October

Huguenin was an administrative assistant in the physics and biophysics department at Augusta University, in Georgia. According to her Augusta obituary, she died of an illness acquired during her volunteer work in India.

She received a bachelor’s degree in engineering in 1999 from the College of Charleston, in South Carolina. During her senior year, she worked as a mathematics and science tutor at the Jenkins Orphanage (now the Jenkins Institute for Children), in North Charleston. After graduating, Huguenin traveled to India to volunteer at an orphanage run by the Mother Teresa Foundation.


Upon returning to the United States in 2001, Huguenin worked as a freelance research consultant. Three years later she was hired as a systems administrator and archivist by photographer Ebet Roberts in New York City. In 2010 she left to work as an operations strategist and technical consultant.

She earned a master’s degree in communication and research science in 2016 from New York University. While at NYU, she conducted experimental and theoretical research in Internet Protocol design and implementation as well as network security and management.

From 2020 to 2024 she was a research scientist at businesses owned by her family. She joined Augusta University in 2023.

She was a member of the IEEE Geoscience and Remote Sensing Society and the IEEE Systems Council.


Huguenin volunteered for the Internet Engineering Task Force, a standards development organization, and the American Registry for Internet Numbers. ARIN manages and distributes internet number resources such as IP addresses and autonomous system numbers.

The nonprofits she supported included the Coastal Conservation League, the Longleaf Alliance, the Lowcountry Land Trust, the Nature Conservancy, and Women in Defense.



2026 Swift Student Challenge winners to be announced on March 26


The winners of the 2026 Swift Student Challenge will be announced on March 26, with the best among them set to receive a trip to Apple Park.


Every year, Apple holds the Swift Student Challenge. The event encourages up-and-coming student developers to practice their craft and lets them win various prizes.
In an announcement on Monday, the iPhone maker described the annual event as a program meant to “uplift the next generation of entrepreneurs, coders, and designers.” The company added that winners will be notified on Thursday, March 26.



Signal is being targeted by Russian hackers in a huge new phishing campaign, FBI says



  • FBI and CISA warn of Russian espionage campaign targeting messaging apps
  • Phishing and social engineering used to hijack Signal and other CMA accounts
  • Thousands of victims’ accounts compromised, including officials, military, and journalists

The Federal Bureau of Investigation (FBI) and the US Cybersecurity and Infrastructure Security Agency (CISA) are warning about an ongoing espionage campaign by Russian cyberspies.

In a joint Public Service Announcement (PSA) published late last week, the two agencies said Russian Intelligence Services (RIS)-affiliated threat actors are actively targeting commercial messaging applications (CMA). They specifically mentioned Signal, but stressed that other CMAs are most likely targeted as well.



AI could be the opposite of social media


For more than four decades, technological progress has been undermining expert authority, democratizing public debate, and steering individuals toward ever-more bespoke conceptions of reality.

In the mid-20th century, the high costs of television production — and physical limitations of the broadcast spectrum — tightly capped the number of networks. ABC, NBC, and CBS collectively owned TV news. On any given evening in the 1960s, roughly 90 percent of viewers were watching one of the Big Three’s newscasts.

Journalistic programs weren’t just limited in number, but also ideological content. The networks’ news divisions all sought the broadest possible audience, a business model that discouraged airing iconoclastic viewpoints. And they also relied overwhelmingly on official sources — politicians, military officials, and credentialed experts — whose perspectives fell within the narrow bounds of respectable opinion.

This media environment cultivated broad public agreement over basic facts and widespread trust in mainstream institutions. It also helped the government wage a barbaric war in the name of lies.

  • There’s evidence that LLMs converge on a common (and largely accurate) picture of reality.
  • LLMs have successfully persuaded users to abandon false and conspiratorial beliefs.
  • Unlike social media companies, AI labs have an economic incentive to spread accurate information.
  • Still, there are reasons to fear that AI will nonetheless make public discourse worse.

For better and worse, subsequent advances in information technology diffused influence over public opinion — at first gradually and then all at once. During the closing decades of the 20th century, cable eroded barriers to entry in the TV news business, facilitating the rise of Fox News and MSNBC, networks that catered to previously underrepresented political sensibilities.

But the internet brought the real revolution. By slashing the cost of publishing and distribution nearly to zero, digital platforms enabled anyone with an internet connection to reach a mass audience. Traditional arbiters of headline news, scientific fact, and legitimate opinion — editors, producers, and academics — exerted less and less veto power over public discourse. Outlets and influencers proliferated, many defining themselves in opposition to established institutions. All the while, social media algorithms shepherded their users into customized streams of information, each optimized for their personal engagement.

The democratic nature of digital media initially inspired utopian hopes. It promised to expose the blind spots of cultural elites, increase the accountability of elected officials, and put virtually all human knowledge at everyone’s fingertips. And the internet has done all of these things, at least to some extent.

Yet it has also helped pro-Hitler podcasters reach an audience of millions, enabled influencers with body dysmorphia to sell teenagers on self-mutilation, elevated crackpots to the commanding heights of American public health — and, more generally, eroded the intellectual standards, shared understandings, social trust, and (small-l) liberalism on which rational self-government depends.

Many assume that the latest breakthrough in information technology — generative AI — will deepen these pathologies: In a world of photorealistic deepfakes, even video evidence may surrender its capacity to forge consensus. Sycophantic large language models (LLMs), meanwhile, could reinforce ideologues’ delusions. And fully automated film production could enable extremists to flood the internet with slick propaganda.


But there’s reason to think that this is too pessimistic. Rather than deepening social media’s effects on public opinion, AI may partially reverse them — by increasing the influence of credentialed experts and fostering greater consensus about factual reality. In other words, for the first time in living memory, the arc of media history may be bending back toward technocracy.

Are you there Grok? It’s me, the demos

At least, this is what the British philosopher Dan Williams and former Vox writer Dylan Matthews have recently argued.

Matthews begins his case by spotlighting a phenomenon familiar to every problem user of X (née “Twitter”): Elon Musk’s chatbot telling the billionaire that he is wrong.


In this instance, Musk had claimed that Renée Good, the Minnesota woman killed by an ICE agent in January, had “tried to run people over” in the moments before her death. Someone replied to Musk’s post by asking Grok — X’s resident AI — whether his claim was consistent with video evidence of the shooting.
The bot replied:

[Screenshot of Grok’s reply]

In reaching this assessment, Grok was affirming the consensus among mainstream journalistic institutions — and also, other chatbots.

For Matthews, this incident illustrates a broader truth about LLMs: Like mid-20th century TV, they are a “converging” form of technology, in the sense that they “homogenize the perspectives the population experiences and build a less polarized, more shared reality among the population’s members.” And he suggests that they are also a “technocratising” force, in that they give experts disproportionate influence over the content of that shared reality.

Of course, this would be a lot to read into a single Grok reply; if you had glanced at that bot’s outputs last July, when a misguided update to the LLM’s programming caused it to self-identify as “MechaHitler,” you might have concluded that AI is a “Nazifying” technology.

But there is evidence that Grok and other LLMs tend to provide (relatively) accurate fact checks — and forge consensus among users in the process.


One recent study examined a database of over 1.6 million fact-checking requests presented to Grok or Perplexity (a rival chatbot) on X last year. It found that the two LLMs agreed with each other in a majority of cases and strongly diverged on only a small fraction.

The researchers also compared the bots’ answers against those of professional fact-checkers, and the results were similarly encouraging. When used through its developer interface (rather than on X), Grok achieved essentially the same rate of agreement with the humans as they did with each other.
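The headline numbers in studies like this are simple pairwise agreement rates: the fraction of items on which two raters return the same verdict. As a rough illustration — with made-up verdicts, not the paper’s actual data — the computation looks like this:

```python
def agreement_rate(labels_a, labels_b):
    """Fraction of items on which two raters give the same verdict."""
    assert len(labels_a) == len(labels_b)
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Hypothetical verdicts for six claims, simplified to three categories.
grok       = ["false", "true", "false", "unverifiable", "true", "false"]
perplexity = ["false", "true", "true",  "unverifiable", "true", "false"]
humans     = ["false", "true", "false", "false",        "true", "false"]

print(agreement_rate(grok, perplexity))  # bot-vs-bot: 5 of 6 match
print(agreement_rate(grok, humans))      # bot-vs-human: 5 of 6 match
```

The same style of comparison supports the study’s two findings: the two bots mostly agree with each other, and (via the developer interface) a bot agrees with human fact-checkers about as often as the humans agree among themselves.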

What’s more, despite being the creation of a far-right ideologue, Grok deemed posts from Republican accounts inaccurate at a higher rate than those of Democratic accounts — a pattern consistent with past research showing that the right tends to share misinformation more frequently than the left.

Critically, in the paper, the LLMs’ answers did not just converge on expert opinion — they also nudged users toward their conclusions.


Other research has documented similar effects. Multiple studies have indicated that speaking with an LLM about climate change or vaccine safety reduces users’ skepticism about the scientific consensus on those topics.

AI might combat misinformation in practice. But does it in theory?

A handful of papers can’t by themselves prove that AI is adept at fact-checking, much less that its overall impact on the information environment will be positive. To their credit, Matthews and Williams concede that their thesis is speculative.

But they offer several theoretical reasons to expect that AI will have broadly “converging” and “technocratising” effects on public discourse. Two are particularly compelling:


1) AI firms have a strong financial incentive to produce accurate information. Social media platforms are suffused with misinformation for many reasons. But one is that facilitating the spread of conspiracy theories or pseudoscience costs X, YouTube, and Facebook nothing. These firms make money by mining human attention, not providing reliable insight. If evangelism for the “flat Earth” theory attracts more interest than a lecture on astrophysics, social media companies will milk higher profits from the former than the latter (no matter how spherical our planet may appear to untrained eyes).

But AI firms face different incentives. Although some labs plan to monetize user attention through advertising, their core business objective is still to maximize their models’ ability to perform economically useful work. Law firms will not pay for an LLM that generates grossly inaccurate summaries of case law, even if its hallucinations are more entertaining than the truth. And one can say much the same about investment banks, management consultancies, or any other pillar of the “knowledge economy.”

For this reason, AI companies need their models to distinguish reliable sources of information from unreliable ones, evaluate arguments on the basis of evidence, and reason logically. In principle, it might be possible for OpenAI and Anthropic to build models that prize accuracy in business contexts — but prioritize users’ titillation or ideological comfort in personal ones. In practice, however, it’s hard to inject a bit of irrationality or political bias into a model’s outputs without sabotaging its commercial utility (as Musk evidently discovered last year).

2) LLMs are infinitely more patient and polite than any human expert has ever been. Well-informed humans have been trying to disabuse the deluded for as long as our species has been capable of speech. But there’s reason to think that LLMs will prove radically more effective at that task.


After all, human experts cannot provide encyclopedic answers to everyone’s idiosyncratic questions about their specialty, instantly and on demand. But AI models can. And the chatbots will also gamely field as many follow-ups as desired — addressing every source of a user’s skepticism, in terms customized for their reading level and sensibilities — without ever growing irritated or condescending.

That last bit is especially significant. When one human tries to persuade another that they are wrong about something — particularly within view of other people — the misinformed person is liable to perceive a threat to their status: To recognize one’s error might seem like conceding one’s intellectual inferiority. And such defensiveness is only magnified when their erudite interlocutor patronizes (or outright insults) them, as even learned scholars are wont to do on social media.

But LLMs do not compete with humans for social prestige or sexual partners (at least, not yet). And chatbot conversations are generally private. Thus, a human can concede an LLM’s point without suffering a sense of status threat or losing face. We don’t experience Claude as our snobby social better, but rather, as our dutiful personal adviser.

The expert consensus has never before had such an advocate. And there’s evidence that LLMs’ infinite patience renders them exceptionally effective at dispelling misconceptions. In a 2024 study, proponents of various conspiracy theories — including 2020 election denial — durably revised their beliefs after extensively debating the topic with a chatbot.


It seems clear then that LLMs possess some “converging” and “technocratizing” properties. And, experts’ fallibility notwithstanding, this constitutes a basis for thinking that AI will foster a healthier intellectual climate than social media has to date.

Still, it isn’t hard to come up with reasons for doubting this theory (and not merely because ChatGPT will provide them on demand). To name just five:

1) LLMs can mold reality to match their users’ desires. If you log into ChatGPT for the first time — and immediately ask whether your mother is trying to poison you by piping psychedelic fumes through your car vents — the LLM generally won’t answer with an emphatic “yes.” But when Stein-Erik Soelberg inundated the chatbot with his paranoid delusions over a period of months, it eventually began affirming his persecution fantasies, allegedly nudging him toward matricide in the process.

Such instances of “AI psychosis” are rare. But they represent the most extreme manifestation of a more common phenomenon — AI models’ tendency toward sycophancy and personalization. Which is to say, these systems frequently grow more aligned with their users’ perspectives over extended conversations, as they learn the kinds of responses that will generate positive feedback. This behavior has surfaced, even as AI companies have tried to combat it.


The sycophancy problem could therefore get dramatically worse if one or more LLM providers decide to center their business model on consumer engagement. As social media has shown, sensational and/or ideologically flattering information can be more engaging than the accurate variety. Thus, an AI company struggling to compete in the business-to-business market might choose to “sycophancy-max” its model, pursuing the same engagement-optimization tactics as YouTube or Facebook.

A world of even greater informational divergence — in which people aren’t merely ensconced in echo chambers with like-minded ideologues, but immersed in a mirror of their own prejudices — might ensue.

2) Artificial intelligence has radically reduced the costs of generating propaganda. AI has already flooded social media with unlabeled “deepfake” videos. Soon, it may enable nefarious actors to orchestrate ever more convincing “bot swarms” — networks of AI agents that impersonate humans on social media platforms, deploying LLMs’ persuasive powers to indoctrinate other users and create the appearance of a false consensus.

In this scenario, LLMs might edify people who actively seek the truth through dialogue or fact-check requests, but thrust those who passively absorb political information from their environment — arguably, the majority — into perpetual confusion.


3) AI could breed the bad kind of consensus. Even if LLMs do promote convergence on a shared conception of reality, that picture could be systematically flawed. In the worst case, an authoritarian government could program the major AI platforms to validate regime-legitimizing narratives. Less catastrophically, LLMs’ converging tendencies could simply make technocrats’ honest mistakes harder to detect or remedy.

4) AI could trigger widespread cognitive atrophy, as humans outsource an ever-larger share of cognitive labor to machines. Over time, this could erode the public’s capacity for reason, leaving it more vulnerable to both fully-automated demagogy and top-down manipulation.

5) AI could wreck the sources of authority that make it effective. LLMs might be good at distilling information into a consensus answer, but that answer is only as good as the information feeding the models.

Already, chatbots are draining revenue from (embattled) news organizations, which will produce fewer timely and verified reports about current events as a result. Online forums, a key source of AI advice, are increasingly being flooded with plugs for products in order to trick chatbots into recommending them. Wikipedia’s human moderators fear a future in which they’re stuck sifting through a tsunami of low-quality AI-generated updates and citations.


LLMs may prize accurate information. But if they bankrupt or corrupt the institutions that produce such data, their outputs may grow progressively impoverished.

For these reasons, among others, AI models’ ultimate implications for the information environment are highly uncertain. What Matthews and Williams convincingly establish, however, is that this technology could facilitate a more consensual and fact-based public discourse — if we properly guide its development.

Of course, precisely how to maximize AI’s capacity for edification — while minimizing its potential for distortion — is a difficult question, about which reasonable people can disagree. So, let’s ask Claude.



From lab to market: Rose Rock Bridge fast-tracks energy innovation in Tulsa


Presented by Tulsa Innovation Labs


As the global energy system evolves, companies are racing to adopt technologies that can deliver real-world solutions, especially in hard-to-abate industries. Oklahoma, long known as the oil capital of the world, is a center for energy innovation, with Rose Rock Bridge at the forefront.

A non-profit based in Tulsa, Rose Rock Bridge is a pilot deployment studio that connects early-stage energy startups with corporate energy partners, non-dilutive funding, and pilot opportunities that accelerate commercialization. Now accepting applications for its Spring 2026 cohort through April 6, it is seeking early- and growth-stage startups developing practical, scalable solutions to today’s most pressing energy challenges.

Rose Rock Bridge gives startups access to real-world commercial workflows and pilot opportunities through energy partners with more than $150 billion in market capitalization, including Devon Energy, H&P, ONEOK, and Williams. Backed by one of the strongest coalitions of strategic partners and investors of any energy-focused accelerator, incubator, or venture studio, the program enables startups to move quickly from development to real-world testing and deployment.


Here’s how it works:

Discover opportunities for energy innovation

Rose Rock Bridge starts by working directly with corporate innovation teams to identify high-priority technology solutions for their businesses, pinpointing which will carry the most impact. Focus areas are formed around these findings.

“We don’t just chase the latest tech and hope to find a use for it. Our process starts at the asset level identifying the specific operational bottlenecks and unmet requirements our partners are actually facing,” says Nishant Agarwal, Innovation Manager. “By leveraging our background in CVC and engineering, we run technical deep dives alongside partner subject matter experts to define the requirement first. We then source technologies as a direct response to those needs. This ensures we aren’t just presenting ‘interesting research,’ but delivering solutions with a validated deployment pathway and a clear line of sight to a business case.”

Tapping into its network of 40+ universities, 10+ energy incubators, and Fortune 500 companies, Rose Rock Bridge then determines emerging opportunities in the energy ecosystem. Rather than just selecting companies or ideas that might bring in capital, the studio chooses startups that have real potential to commercialize quickly in order to solve the industry’s most pressing challenges.


This year’s focus areas include:

“We’re evaluating deployment probability from day one,” says Andrada Pantelimon, Innovation Associate at Rose Rock Bridge, who manages sourcing strategy and startup operations. “Can this technology deliver a measurable bottom-line impact? Can it realistically pilot within 12 months? Is your team equipped to commercialize? Show us you’ve quantified your value proposition in operator terms and understand which business unit within a corporation might own this solution. If you can articulate those pieces clearly, you’re the kind of startup we want to support.”

Derisk technologies for early-stage startups & energy companies

The benefit is tangible for leading energy corporations seeking proven solutions to complex operational challenges. Rose Rock Bridge provides its corporate partners with validated, field-tested technologies while significantly reducing deployment risk. At the program’s conclusion, partners gain direct access to emerging innovations that have already undergone technical validation and operational feasibility assessment, with identified procurement pathways and pilot plans designed for commercial deployment.

Each cohort cycle, up to 15 startups are selected to enter a six-week virtual accelerator focused on pilot deployment. Founders participate in reverse pitch sessions with oil and gas partners, one-on-one clinics with industry and capital mentors, and hands-on commercialization workshops. Founders have the unique opportunity to refine their solutions, assess pilot feasibility, and build industry relationships. This approach derisks adoption and investments through iterative customer feedback, in-field testing, and pilots, enabling breakthrough technologies to reach commercial viability quickly and effectively.


“Our curriculum is singularly focused on preparing startups for the realities of corporate partnerships,” says Devon Fanfair, Rose Rock Bridge Manager and former Techstars Managing Director who is scaling the RRB program. “Founders aren’t just learning, they’re actively testing their assumptions with the exact customers who might deploy their technology. That rapid feedback loop is what transforms promising technologies into deployment-ready solutions with clear commercial pathways.”

At the culmination of the accelerator, teams participate in the Rose Rock Bridge showcase with the unique opportunity to pitch their startup to the energy corporate partners they’ve worked alongside for the past six weeks. Four startups are selected to receive up to $100,000 in non-dilutive funding and opportunities for business support services, joining a one-year cohort designed to prepare technologies for market adoption.

“Rose Rock Bridge is a cornerstone of Tulsa Innovation Labs’ strategy to showcase our region as a national hub for energy innovation,” added Jennifer Hankins, Managing Director of Tulsa Innovation Labs. “By linking emerging technologies with some of the nation’s largest energy leaders, we help move innovation from concept to market faster, drawing new businesses to the region, enhancing our existing businesses, and reinforcing Tulsa’s role in the global energy economy.”

Deploy viable energy solutions

Once selected to become members of Rose Rock Bridge, startups then pilot their technology with relevant energy partners and grow their venture in Tulsa. Support includes pilot design, execution, and go-to-market strategy; connections to follow-on investment opportunities; subsidized access to services including legal, marketing, and PR; and support establishing a Tulsa presence for partner access.


Rose Rock Bridge’s success is measured not just in pilot deployments, but in lasting commercial relationships. Multiple portfolio companies have progressed from initial field tests to multi-year contracts with Fortune 500 operators. By derisking the path from proof-of-concept to procurement, RRB has helped establish procurement pathways that might otherwise take years to develop, if they materialize at all.

Launched in 2022 with support from Tulsa Innovation Labs, the studio has helped companies advance new technologies, secure patents, launch products, and attract capital. It has derisked 33 startups, supported 16 active or in-development pilots, and invested more than $2 million in early-stage companies, generating a combined portfolio valuation of over $55 million.

Examples of the studio’s success include Safety Radar, an AI-powered risk management platform, which secured its first contract with a Rose Rock Bridge partner, expanded to additional energy and aerospace clients, raised over $2 million, and established a Tulsa office. Kinitics Automation, a Canadian company, successfully piloted with one partner, resulting in deployments across multiple sites, effectively using RRB as their gateway to the U.S. market.

Backed by corporate partners with more than $150 billion in combined market capitalization, Rose Rock Bridge reflects both the scale of the opportunity and Tulsa’s rising influence in energy innovation.
Devon Fanfair is Manager of Rose Rock Bridge.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.


Tech

DHS Takes Out Its Funding Frustrations On Millions Of Americans By Sending ICE Agents To Do TSA Work


from the america:-never-greater dept

With the partial shutdown still ongoing and no budget resolution in sight because the GOP is simply unwilling to endure any oversight of its anti-migrant programs, the TSA is leaking personnel. A whole lot of TSA agents walked off the job the moment their paychecks failed to arrive, leaving travelers to deal with scenarios that are somehow even worse than being manhandled by the TSA.

Folks, it’s yet another Long National Nightmare!

Yep, that’s the Atlanta airport, which has never been known for expeditious service, filled to the horizon with unhappy people — a scene that bears more than a slight resemblance to USSR grocery store photos from the mid-’70s. (Making the resemblance even more uncanny is the amount of visible food.)

Well, the TSA may be temporarily out of money, but guess who isn’t! I’ll leave it to Dr. America to deliver the news — a cure that’s worse than the disease!
President Donald Trump on Saturday threatened to send federal immigration agents to airports across the country on Monday if Democrats don’t agree to end the Department of Homeland Security shutdown, now approaching five weeks.

“If the Radical Left Democrats don’t immediately sign an agreement to let our Country, in particular, our Airports, be FREE and SAFE again, I will move our brilliant and patriotic ICE Agents to the Airports where they will do Security like no one has ever seen before, including the immediate arrest of all Illegal Immigrants who have come into our Country,” he wrote.

I totally believe ICE will “do Security like no one has ever seen before.” I mean, they’ve already been doing civil enforcement like no one has ever seen before. And what better way to handle a travel crisis than by sending in a bunch of under-trained racists who just spent their ICE signing bonuses on emissions defeat devices and wraparound sunglasses subscription services to our nation’s airports, where they can apply all the skills they never learned during ICE training with the professionalism we’ve come to expect from people who like yelling and brandishing firearms.

What could possibly go wrong? I mean, they’re already not trained to do the job they’re supposed to be doing, so doing a job they’ve never been trained to do can’t be that much of a step up on the “promoted to the highest level of your incompetence” scale.

Of course, that was just Trump saying some shit on social media because he apparently has nothing better to do with his time now that he’s (again) the Leader of the Free World. Trump says a lot of stuff. He quite frequently says the opposite thing only hours or minutes or seconds later.
It brings me no pleasure to report that this horrendous brain fart will apparently be A Real Thing:

Immigration agents will deploy to airports on Monday under the direction of border czar Tom Homan, President Donald Trump said Sunday, as talks to fund the Department of Homeland Security have yet to yield a breakthrough.

[…]

Homan told CNN on Sunday that the move is about “helping TSA do their mission and get the American public through that airport as quick as they can while adhering to all the security guidelines and the protocols.”

Siiiiiiiiiiiiiiiiiiiiiigh. If you don’t need to travel, then maybe don’t? Sending a bunch of over-funded, under-trained, trigger-happy federal officers into crowded airports is a recipe for disaster. And even Homan doesn’t seem to know what ICE will be doing to actually help expedite passenger screening — not when he’s promising they won’t be doing anything they’re not trained to do.
“We’re simply there to help TSA do their job in areas that don’t need their specialized expertise, such as screening through the X-ray machine. Not trained in that? We won’t do that,” Homan told CNN’s Dana Bash on “State of the Union.”

“But there are roles we can play to release TSA officers from the non-significant roles, such as guarding an exit so they can get back to the scanning machines and move people quicker,” he added.

“Guarding an exit?” What the hell does that even mean? TSA agents don’t “guard exits.” No one “guards exits.” Travelers and terrorists alike are interested in boarding planes. They’re not interested in exiting airports to, I don’t know, wander around the tarmac or wonder how the hell exactly they ended up on the outside of a building they 100% intended to remain on the inside of.

This is going to end up being a case of Your Tax Dollars Trying To Look Busy. And that’s the best case scenario. The worst case scenarios begin directly after that. And I don’t think travelers are going to feel any safer or more secure when there are a bunch of twitchy, camouflaged dudes in masks wandering around like they’re about ready to raid Entebbe, rather than just looking for an exit to guard.

We’re in the midst of pretty hellish times. This… this just seems like we’re being trolled by a Higher Power that’s decided to amuse itself while the rest of the world falls apart.
Filed Under: cruelty is the point, dhs, ice, tom homan, tsa


Tech

Apple’s WWDC 2026 Developer Event Is Set for Early June


Apple announced that its annual Worldwide Developers Conference (WWDC) will be June 8-12 this year, beginning with a keynote on Monday, June 8. 

Each year, WWDC is used to unveil the company’s latest slate of software coming to iPhones, iPads, Macs and more. The news comes after the company released the iPhone 17E, iPad Air M4, and a number of new Macs, including the $599 MacBook Neo earlier in March. While we have seen some hardware announced during previous WWDC keynotes, like the Vision Pro in 2023, the developers conference has recently been focused on software and Apple Intelligence.

At the 2026 event, we expect Apple to introduce new versions of operating systems, like iOS 27, MacOS 27, iPadOS 27, WatchOS 27, VisionOS 27 and TVOS 27. 
“WWDC is one of the most exciting times for us at Apple because it’s a chance for our incredible global developer community to come together for an electrifying week that celebrates technology, innovation and collaboration,” Susan Prescott, Apple’s vice president of worldwide developer relations, said in a statement.


Tech

GeekWire’s AI summit is Tuesday: What to know if you’re attending our ‘Agents of Transformation’ event


There’s still time to grab a last-minute ticket for GeekWire’s Agents of Transformation, a half-day summit in Seattle on Tuesday that will explore how agentic AI is redefining work, creativity, and leadership.

Keep reading for details about our speaker lineup, the schedule, logistical information and more.

We look forward to seeing you at the event!

When: Tuesday, March 24, 1 – 6:30 p.m.

Location: Block 41 | 115 Bell St., Seattle, 98121
Schedule:

  • 1:00 PM – Doors open: check out the AWS Marketplace AI Innovator Spotlight Studio and the Startup Zone, and grab your barista-bot coffee.
  • 1:40 PM – Main stage program begins.
  • 5:00 PM – Reception – appetizers, drinks, and networking while exploring the Startup Zone demos and robotic cocktail bar.
  • 6:30 PM – Event concludes.

Parking: Multiple parking lots are available within a 3-block radius of Block 41.

What’s included:

  • Four fireside chats featuring leaders from Microsoft, AWS, OpenAI, and more.
  • Expert panel on practical uses of AI agents.
  • Startup Zone with live pitches from emerging AI companies.
  • AWS Marketplace AI Innovator Spotlight Studio (live thought leadership recordings).
  • Networking reception hosted by Nebius with appetizers and beverages.

Speakers:

  • Charles Lamanna, President of Business & Industry Copilot, Microsoft.
  • Julia White, VP & CMO, AWS.
  • Vijaye Raji, CTO of Applications, OpenAI.
  • Deepak Singh, VP of Kiro, AWS.
  • Expert panel: Angela Garinger (Outreach), Jeremy Tryba (AI2), Liat Ben-Zur (LBZ Advisory).

Tickets: A limited number of tickets are available here.

Questions? Contact us at events@geekwire.com.

This event builds on an ongoing GeekWire editorial series, underwritten by Accenture, spotlighting how startups, developers and tech giants are using intelligent agents to innovate.
Thanks to presenting sponsor Accenture; gold sponsors Nebius and AWS Marketplace; and silver sponsors Prime Team Partners, Astound Business Solutions, Pay-i and Cascade for helping to make the event possible. For sponsorship opportunities or any other inquiries about the event, contact events@geekwire.com.


Tech

OnlyFans Owner Dies At 43


Computershack shares a report from NBC News: Leonid Radvinsky, the owner of adult-content platform OnlyFans, has died of cancer at the age of 43, the company said in a statement on Monday. “We are deeply saddened to announce the death of Leo Radvinsky. Leo passed away peacefully after a long battle with cancer,” an OnlyFans spokesperson said. “His family have requested privacy at this difficult time.”

Radvinsky, a Ukrainian-American entrepreneur, acquired Fenix International Limited, the parent company of OnlyFans, in 2018 and served as its director and majority shareholder. He also runs Leo, a venture capital fund he founded in 2009 that focuses primarily on investments in technology companies. According to Reuters, OnlyFans is valued at around $5.5 billion, including debt.

Source link

Continue Reading

Tech

Walmart: ChatGPT Checkout Converted 3x Worse Than Website


Walmart found that purchases made directly inside ChatGPT converted at only one-third the rate of traditional website checkouts, leading it to abandon OpenAI’s Instant Checkout in favor of routing users through its own platform. Search Engine Land reports: Starting in November, Walmart offered about 200,000 products through OpenAI’s Instant Checkout. Users could complete purchases inside ChatGPT without visiting Walmart’s site. Daniel Danker, Walmart’s EVP of product and design, said those in-chat purchases converted at one-third the rate of click-out transactions. He called the experience “unsatisfying” and confirmed Walmart is moving away from it.

Instant Checkout was designed to let users complete purchases directly inside ChatGPT without visiting a retailer’s website. However, earlier this month, OpenAI confirmed it was phasing out Instant Checkout in favor of app-based checkout handled by merchants. Walmart will embed its own chatbot, Sparky, inside ChatGPT. Users will log into Walmart, sync carts across platforms, and complete purchases within Walmart’s system. A similar integration is coming to Google Gemini next month. In other Walmart-related news, the retailer announced plans to roll out “digital price tags” to all U.S. stores by the end of the year.



Copyright © 2025