
Tech

How to Earn Gold Efficiently in WoW Classic Season of Discovery


If you have spent some time in World of Warcraft Classic: Season of Discovery, you already know that everything ties back to how much gold you have in your bags. Your entire experience depends on it: learning new skills, upgrading gear, buying consumables, and keeping up with the rapidly ramping in-game economy. The good news is that a few smart habits can make earning gold a natural part of your gameplay, so you never have to choose between affording your next upgrade and actually enjoying the game.

Gold Matters More Than Ever in the Season of Discovery


The Season of Discovery introduces new mechanics and class changes designed to encourage experimentation and exploration. The importance of gold, however, remains largely unchanged: mount training, profession leveling, and endgame preparation all require significant amounts of it. Players are naturally looking for reliable ways to earn gold alongside their regular play. Below, we cut through the noise and focus on simple, smart practices that make earning gold part of your gameplay while you discover everything this season has to offer.

The first step to having more gold isn’t farming more; it’s understanding where your gold disappears and wasting less of it. The biggest drains are vendor gear purchases, excessive auction house spending, and poorly planned profession leveling. Make the most of quest rewards and drops, and don’t squander gold on random items while leveling. Manage your gold well early on, and affording the game’s necessities later becomes much simpler.

Smart Ways to Save and Earn Gold


A simple way to save gold is to stop buying gear so often. Quest rewards and dungeon drops can cover most of your needs while leveling. Instead of spending gold on incremental upgrades, save it for the things that really matter, like mounts and abilities. Experienced players gear up from quests and dungeons and bank the difference toward their mount purchases. This one habit lets you save more while still progressing well.

If you want a straightforward way to earn gold, start with the right professions. Herbalism, Mining, and Skinning let you collect materials as you explore the world, and those materials are always in demand. Sell them on the auction house and you’ll make a steady amount of gold with little extra effort.

Timing matters, too. The value of many materials shifts with what players are doing in the game: when raids become popular or players gear up for group content, demand and prices rise. The same material that gathered dust on the auction house can triple in price as endgame activities approach; that’s basic economics, and good profit. Watch these swings and sell when demand is high, and you’ll earn more gold from the same items.


Another reliable method is farming dungeons and valuable mobs. This strategy works if you stay consistent: loot everything you can, sell the surplus, and prioritize items that fetch higher prices. Some mobs also have particularly good drop tables. It takes time, but it’s dependable gold.

Earning gold is not only about collecting things and selling them; selling at the right time matters just as much. Some items are worth more right before raids, and items in steady demand always hold their value. Keep an eye on these patterns and you’ll earn more gold without doing more work.

Trading, or flipping, is another good way to earn gold without farming all day. Buy items when prices are low and resell them when prices rise. It takes a little practice to read the market, but once you do, it’s a great source of extra gold.
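To see why flipping only pays when the price swing is large enough, it helps to account for the auction house’s cut of every sale. A minimal sketch, assuming the classic 5% faction auction house fee; the prices below are hypothetical examples:

```python
# Flipping math with a 5% auction house cut on successful sales
# (the classic faction AH rate; the neutral AH takes more).

AH_CUT = 0.05

def min_sell_price(buy_price: float) -> float:
    """Lowest sale price that still breaks even after the AH takes its cut."""
    return buy_price / (1 - AH_CUT)

def profit(buy_price: float, sell_price: float) -> float:
    """Gold actually kept after the AH cut is deducted from the sale."""
    return sell_price * (1 - AH_CUT) - buy_price

# Buying a stack at 10g means reselling above ~10.53g just to break even;
# selling at 12g nets 1.4g of real profit after the cut.
print(round(min_sell_price(10.0), 2))  # 10.53
print(round(profit(10.0, 12.0), 2))    # 1.4
```

In other words, a flip is only worthwhile when the expected price rise comfortably exceeds the fee, which is why selling into pre-raid demand spikes beats churning small margins.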

Final Thoughts


Although Season of Discovery encourages players to experiment with new activities while earning gold along the way, not everyone wants to dedicate hours to farming resources when they could be exploring the game. Some player communities discuss alternatives, such as options related to SoD gold. Platforms like Eldorado can help players purchase gold for use in-game when they feel stuck between limited playtime and the season’s gold requirements.

Ultimately, players who build smart gold-management habits can unlock the season’s true potential. Refine your strategies to adapt to the in-game economy and enjoy the game without unnecessary limitations. Beyond gear, consumables, mounts, and abilities, gold this season also buys something more valuable: the freedom to discover and experiment.





TeamPCP deploys Iran-targeted wiper in Kubernetes attacks



The TeamPCP hacking group is targeting Kubernetes clusters with a malicious script that wipes all machines when it detects systems configured for Iran.

The threat actor is responsible for the recent supply-chain attack on the Trivy vulnerability scanner, as well as for an NPM-based campaign dubbed ‘CanisterWorm,’ which started on March 20.

Selective destruction payload

Researchers at application security company Aikido say that the campaign targeting Kubernetes clusters uses the same command-and-control (C2), backdoor code, and drop path as seen in the CanisterWorm incidents.

However, the new campaign differs in that it includes a destructive payload targeting Iranian systems and installs the CanisterWorm backdoor on nodes in other locales.


“The script uses the exact same ICP canister (tdtqy-oyaaa-aaaae-af2dq-cai[.]raw[.]icp0[.]io) we documented in the CanisterWorm campaign. Same C2, same backdoor code, same /tmp/pglog drop path,” Aikido says.

“The Kubernetes-native lateral movement via DaemonSets is consistent with TeamPCP’s known playbook, but this variant adds something we haven’t seen from them before: a geopolitically targeted destructive payload aimed specifically at Iranian systems.”

According to Aikido researchers, the malware is built to destroy any machine whose timezone and locale match Iran’s, regardless of whether Kubernetes is present.

If both conditions are met, the script deploys a DaemonSet named ‘Host-provisioner-iran’ in ‘kube-system’, which uses privileged containers and mounts the host root filesystem into /mnt/host.


Each pod runs an Alpine container named ‘kamikaze’ that deletes all top-level directories on the host filesystem, and then forces a reboot on the host.

If Kubernetes is present but the system is identified as not Iranian, the malware deploys a DaemonSet named ‘host-provisioner-std’ using privileged containers with the host filesystem mounted.

Instead of wiping data, each pod writes a Python backdoor onto the host filesystem and installs it as a systemd service so it persists on every node.

On Iranian systems without Kubernetes, the malware deletes every file on the machine accessible to the current user, including system data, by running the rm -rf / command with the --no-preserve-root flag. If root privileges are not available, it attempts passwordless sudo.

TeamPCP wiping Iranian systems with no Kubernetes (source: Aikido)

On systems where none of the conditions are met, no malicious action is taken, and the malware just exits.

Aikido reports that a recent version of the malware, which uses the same ICP canister backdoor, has omitted the Kubernetes-based lateral movement and instead uses SSH propagation, parsing authentication logs for valid credentials, and using stolen private keys.

The researchers highlighted some key indicators of this activity, including outbound SSH connections with StrictHostKeyChecking=no from compromised hosts, outbound connections to the Docker API on port 2375 across the local subnet, and privileged Alpine containers launched via an unauthenticated Docker API with / mounted as a hostPath.
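The privileged-container-plus-host-root pattern behind those indicators can be checked mechanically. Below is a minimal, hypothetical sketch (the helper name and sample spec are illustrative, not taken from Aikido’s report); in practice you would feed it DaemonSet objects parsed from kubectl get daemonsets -A -o json:

```python
# Illustrative only: flag a DaemonSet spec that combines privileged containers
# with a hostPath volume mounting the node's root filesystem ("/"), the
# combination reported in this campaign. The sample spec is hypothetical.

def is_suspicious(daemonset: dict) -> bool:
    """True if any container runs privileged AND any volume mounts host '/'."""
    pod = daemonset.get("spec", {}).get("template", {}).get("spec", {})
    privileged = any(
        c.get("securityContext", {}).get("privileged") is True
        for c in pod.get("containers", [])
    )
    mounts_host_root = any(
        v.get("hostPath", {}).get("path") == "/"
        for v in pod.get("volumes", [])
    )
    return privileged and mounts_host_root

# Hypothetical spec matching the pattern attributed to the campaign.
sample = {
    "metadata": {"name": "host-provisioner-std", "namespace": "kube-system"},
    "spec": {"template": {"spec": {
        "containers": [{"name": "worker",
                        "securityContext": {"privileged": True}}],
        "volumes": [{"name": "host", "hostPath": {"path": "/"}}],
    }}},
}

print(is_suspicious(sample))  # True
```

A check like this only detects the DaemonSet after creation; an admission policy that rejects privileged pods mounting / in kube-system would block it before any pods are scheduled.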





Apple Prepares To Add Search Ads To Apple Maps


Apple is reportedly preparing to add search ads to Apple Maps, “and it could start to roll out to users by the summer,” reports AppleInsider, citing sources from Bloomberg (paywalled). From the report: Apple will make an announcement as soon as March. This will bring ads to search queries within the navigation app, operating much like Google’s advertising system. Retailers and brands will be able to bid for ad placements tied to specific search terms, such as types of food or services. The winning bidder will be able to show an ad at the top of the results, pointing to a related location for that business. Apple also announced in January that it would add more ads within the App Store, starting in March in the UK and Japan.



Samsung will soon let you control smart home devices from your car’s dashboard


Your car might just become the new smart home hub for your house. Samsung has expanded SmartThings integration, enabling drivers to control their smart home devices directly from their car’s infotainment system. It’s called Car-to-Home. 

Building on the earlier Home-to-Car capability that allowed users to monitor their cars from inside the house, the Car-to-Home feature flips the functionality so you can control your smart home appliances, such as air conditioners, lighting systems, and other smart switches, from your car’s dashboard. 

What can the Car-to-Home feature do?

The practical scope of the feature is broader than it might sound, as it is compatible with devices such as air conditioners, air purifiers, robot vacuums, lights, and cameras. Connecting is straightforward — drivers scan a QR code displayed on their car’s infotainment screen and link their vehicle to their SmartThings account. 

Apart from manual control (flipping the switches), the Car-to-Home feature unlocks location-aware automation that genuinely changes how your home responds to your day. You can set routines so that the SmartThings network turns on the required appliances as you park your car in the garage.

I can see people using the feature to pre-cool their rooms or run air purifiers before they arrive home after a tiring day at the office. Conversely, the feature can also shut everything down automatically as you get in the car and leave the driveway. There’s a dedicated Away Mode for handling lights while you’re gone.


Who gets access, and when?

For now, the feature is available on select Hyundai and Kia cars, specifically those that feature the connected car Navigation Cockpit (ccNC) introduced after November 2022 in Korea. However, both Samsung and Hyundai aim to expand the feature to their customers throughout the world in due course. 

Eligible models include the Grandeur, Santa Fe, Ioniq 5, K5, Sorento, and EV9. Samsung also plans to extend the feature to Genesis vehicles equipped with the ccIC27 infotainment system. 

As the feature becomes available to a wider audience, it could drive a behavioral shift in which cars become central nodes in the smart home ecosystem, linking mobility and domestic technology in ways that were, until recently, purely speculative.




The F-22 Raptor Is Getting 2 New Upgrades






The F-22 Raptor is one of the premier fighter jets in the sky and one of the few fifth-generation fighters in active service in 2026. Still, despite its bleeding-edge placement in the United States Air Force’s arsenal, it’s getting a little long in the tooth, having first been introduced to service all the way back in 2005.

The War Zone reported that a Lockheed Martin-produced mockup of the new version of the Raptor was at the Warfare Symposium, a convention for the defense industry and elements of the United States military. The outlet reported some noteworthy changes being made on this plane. Namely, the aircraft is slated to get upgrades in the form of some extra range and another set of eyes.

Fuel tanks and sensor pods might not sound like a big deal, as those components have been mounted to wing pylons of various aircraft for decades. But it’s not so easy to make these kinds of adjustments on a plane as stealthy as the F-22. That’s because external fuel tanks and sensors don’t have the same stealth considerations as the rest of the aircraft. A big fuel tank is nice, but it can make the plane more visible to radar.


The latest and greatest Raptor

The newer and stealthier sensor pods are posited to give the Raptor better infrared tracking capabilities, according to The War Zone. Given the F-22’s primary role as an air-to-air fighter and the increasing prevalence of powerful stealth fighters from potentially adversarial air forces, any extra capability would likely be welcome. 

Specifics as to how much extra range the fuel tanks will give the Raptor and what the sensor pods will allow the F-22 Raptor to do are likely classified. Nevertheless, upgrades are expected to enter service, or at least more advanced testing, over the course of 2026.


The F-22 Raptor, despite all of its menace and upcoming capabilities that, at least on paper, seem to outclass most other jets entirely, has never seen much air-to-air combat apart from shooting down a suspected surveillance balloon. The jet’s exclusivity, paired with the fact that Air Force fighters rarely shoot down other aircraft, means the F-22 doesn’t see much air-to-air action (at least that we know of).







Operation Alice: The dark web isn't as hidden as it seems, as global crackdown shows



Europol recently unveiled “Operation Alice,” a major effort to dismantle a large network of fraudulent websites hidden within the dark web. The investigation began in 2021 and initially focused on a platform named Alice with Violence CP. In the end, the operation took down one of the largest dark web…



Remembering IEEE Power & Energy Society’s Mel Olken


Mel Olken

Former executive director of the IEEE Power & Energy Society

Fellow, 92; died 9 January

Olken became the first executive director of the IEEE Power & Energy Society (PES) in 1995. In 2002 he left the position to serve as founding editor in chief of the society’s Power & Energy Magazine. Olken led the publication until 2016, when he retired.

After receiving a bachelor’s degree in engineering from the City College of New York, Olken was hired as an electrical engineer by American Electric Power, a utility based in Columbus, Ohio. He helped design coal, hydroelectric, and nuclear power plants. While at AEP, he was promoted to manager of the electrical generation department.


He joined IEEE in 1958 and became a PES member in 1973. An active volunteer, he chaired the society’s energy development and power generation committee and its technical council.

Olken was elected an IEEE Fellow in 1988 for “contributions to innovative design of reliable generating stations.”

He became an IEEE staff member in 1984 as society services director for IEEE Technical Activities. From 1990 to 1995 he served as managing director of Regional Activities group (now IEEE Member and Geographic Activities), before becoming PES executive director.

He received a PES Lifetime Achievement Award in 2012 for his “broad and sustained technical contributions to the development of power engineering and the power engineering profession.”


Stephanie A. Huguenin

Research scientist

IEEE member, 48; died 1 October

Huguenin was an administrative assistant in the physics and biophysics department at Augusta University, in Georgia. According to her Augusta obituary, she died of an illness acquired during her volunteer work in India.

She received a bachelor’s degree in engineering in 1999 from the College of Charleston, in South Carolina. During her senior year, she worked as a mathematics and science tutor at the Jenkins Orphanage (now the Jenkins Institute for Children), in North Charleston. After graduating, Huguenin traveled to India to volunteer at an orphanage run by the Mother Teresa Foundation.


Upon returning to the United States in 2001, Huguenin worked as a freelance research consultant. Three years later she was hired as a systems administrator and archivist by photographer Ebet Roberts in New York City. In 2010 she left to work as an operations strategist and technical consultant.

She earned a master’s degree in communication and research science in 2016 from New York University. While at NYU, she conducted experimental and theoretical research in Internet Protocol design and implementation as well as network security and management.

From 2020 to 2024 she was a research scientist at businesses owned by her family. She joined Augusta University in 2023.

She was a member of the IEEE Geoscience and Remote Sensing Society and the IEEE Systems Council.


Huguenin volunteered for the Internet Engineering Task Force, a standards development organization, and the American Registry for Internet Numbers. ARIN manages and distributes internet number resources such as IP addresses and autonomous system numbers.

The nonprofits she supported included the Coastal Conservation League, the Longleaf Alliance, the Lowcountry Land Trust, the Nature Conservancy, and Women in Defense.





2026 Swift Student Challenge winners to be announced on March 26


The winners of the 2026 Swift Student Challenge will be announced on March 26, with the best among them set to receive a trip to Apple Park.


Every year, Apple holds the Swift Student Challenge. The event encourages up-and-coming student developers to practice their craft and lets them win various prizes.

In an announcement on Monday, the iPhone maker described the annual event as a program meant to “uplift the next generation of entrepreneurs, coders, and designers.” The company added that winners will be notified on Thursday, March 26.




Signal is being targeted by Russian hackers in a huge new phishing campaign, FBI says



  • FBI and CISA warn of Russian espionage campaign targeting messaging apps
  • Phishing and social engineering used to hijack Signal and other CMA accounts
  • Thousands of victims’ accounts compromised, including officials, military, and journalists

The Federal Bureau of Investigation (FBI) and the US Cybersecurity and Infrastructure Security Agency (CISA) are warning about an ongoing espionage campaign by Russian cyberspies.

In a joint Public Service Announcement (PSA) published late last week, the two agencies said Russian Intelligence Services (RIS)-affiliated threat actors are actively targeting commercial messaging applications (CMA). They specifically mentioned Signal, but stressed that other CMAs are likely targeted as well.




AI could be the opposite of social media


For more than four decades, technological progress has been undermining expert authority, democratizing public debate, and steering individuals toward ever-more bespoke conceptions of reality.

In the mid-20th century, the high costs of television production — and physical limitations of the broadcast spectrum — tightly capped the number of networks. ABC, NBC, and CBS collectively owned TV news. On any given evening in the 1960s, roughly 90 percent of viewers were watching one of the Big Three’s newscasts.

Journalistic programs weren’t just limited in number, but also ideological content. The networks’ news divisions all sought the broadest possible audience, a business model that discouraged airing iconoclastic viewpoints. And they also relied overwhelmingly on official sources — politicians, military officials, and credentialed experts — whose perspectives fell within the narrow bounds of respectable opinion.

This media environment cultivated broad public agreement over basic facts and widespread trust in mainstream institutions. It also helped the government wage a barbaric war in the name of lies.

  • There’s evidence that LLMs converge on a common (and largely accurate) picture of reality.
  • LLMs have successfully persuaded users to abandon false and conspiratorial beliefs.
  • Unlike social media companies, AI labs have an economic incentive to spread accurate information.
  • Still, there are reasons to fear that AI will nonetheless make public discourse worse.

For better and worse, subsequent advances in information technology diffused influence over public opinion — at first gradually and then all at once. During the closing decades of the 20th century, cable eroded barriers to entry in the TV news business, facilitating the rise of Fox News and MSNBC, networks that catered to previously underrepresented political sensibilities.

But the internet brought the real revolution. By slashing the cost of publishing and distribution nearly to zero, digital platforms enabled anyone with an internet connection to reach a mass audience. Traditional arbiters of headline news, scientific fact, and legitimate opinion — editors, producers, and academics — exerted less and less veto power over public discourse. Outlets and influencers proliferated, many defining themselves in opposition to established institutions. All the while, social media algorithms shepherded their users into customized streams of information, each optimized for their personal engagement.

The democratic nature of digital media initially inspired utopian hopes. It promised to expose the blind spots of cultural elites, increase the accountability of elected officials, and put virtually all human knowledge at everyone’s fingertips. And the internet has done all of these things, at least to some extent.

Yet it has also helped pro-Hitler podcasters reach an audience of millions, enabled influencers with body dysmorphia to sell teenagers on self-mutilation, elevated crackpots to the commanding heights of American public health — and, more generally, eroded the intellectual standards, shared understandings, social trust, and (small-l) liberalism on which rational self-government depends.

Many assume that the latest breakthrough in information technology — generative AI — will deepen these pathologies: In a world of photorealistic deepfakes, even video evidence may surrender its capacity to forge consensus. Sycophantic large language models (LLMs), meanwhile, could reinforce ideologues’ delusions. And fully automated film production could enable extremists to flood the internet with slick propaganda.


But there’s reason to think that this is too pessimistic. Rather than deepening social media’s effects on public opinion, AI may partially reverse them — by increasing the influence of credentialed experts and fostering greater consensus about factual reality. In other words, for the first time in living memory, the arc of media history may be bending back toward technocracy.

Are you there Grok? It’s me, the demos

At least, this is what the British philosopher Dan Williams and former Vox writer Dylan Matthews have recently argued.

Matthews begins his case by spotlighting a phenomenon familiar to every problem user of X (née “Twitter”): Elon Musk’s chatbot telling the billionaire that he is wrong.


In this instance, Musk had claimed that Renée Good, the Minnesota woman killed by an ICE agent in January, had “tried to run people over” in the moments before her death. Someone replied to Musk’s post by asking Grok — X’s resident AI — whether his claim was consistent with video evidence of the shooting.
The bot replied:

[Screenshot of Grok’s reply]

In reaching this assessment, Grok was affirming the consensus among mainstream journalistic institutions — and also, other chatbots.

For Matthews, this incident illustrates a broader truth about LLMs: Like mid-20th century TV, they are a “converging” form of technology, in the sense that they “homogenize the perspectives the population experiences and build a less polarized, more shared reality among the population’s members.” And he suggests that they are also a “technocratising” force, in that they give experts disproportionate influence over the content of that shared reality.

Of course, this would be a lot to read into a single Grok reply; if you glanced at that bot’s outputs last July, when a misguided update to the LLM’s programming caused it to self-identify as “MechaHitler,” you might have concluded that AI is a “Nazifying” technology.

But there is evidence that Grok and other LLMs tend to provide (relatively) accurate fact checks — and forge consensus among users in the process.


One recent study examined a database of over 1.6 million fact-checking requests presented to Grok or Perplexity (a rival chatbot) on X last year. It found that the two LLMs agreed with each other in a majority of cases and strongly diverged on only a small fraction.

The researchers also compared the bots’ answers against those of professional fact-checkers, and the results were similarly encouraging. When used through its developer interface (rather than on X), Grok achieved essentially the same rate of agreement with the human fact-checkers as they did with each other.

What’s more, despite being the creation of a far-right ideologue, Grok deemed posts from Republican accounts inaccurate at a higher rate than those of Democratic accounts — a pattern consistent with past research showing that the right tends to share misinformation more frequently than the left.

Critically, in the paper, the LLMs’ answers did not just converge on expert opinion — they also nudged users toward their conclusions.


Other research has documented similar effects. Multiple studies have indicated that speaking with an LLM about climate change or vaccine safety reduces users’ skepticism about the scientific consensus on those topics.

AI might combat misinformation in practice. But does it in theory?

A handful of papers can’t by themselves prove that AI is adept at fact-checking, much less that its overall impact on the information environment will be positive. To their credit, Matthews and Williams concede that their thesis is speculative.

But they offer several theoretical reasons to expect that AI will have broadly “converging” and “technocratising” effects on public discourse. Two are particularly compelling:


1) AI firms have a strong financial incentive to produce accurate information. Social media platforms are suffused with misinformation for many reasons. But one is that facilitating the spread of conspiracy theories or pseudoscience costs X, YouTube, and Facebook nothing. These firms make money by mining human attention, not providing reliable insight. If evangelism for the “flat Earth” theory attracts more interest than a lecture on astrophysics, social media companies will milk higher profits from the former than the latter (no matter how spherical our planet may appear to untrained eyes).

But AI firms face different incentives. Although some labs plan to monetize user attention through advertising, their core business objective is still to maximize their models’ ability to perform economically useful work. Law firms will not pay for an LLM that generates grossly inaccurate summaries of case law, even if its hallucinations are more entertaining than the truth. And one can say much the same about investment banks, management consultancies, or any other pillar of the “knowledge economy.”

For this reason, AI companies need their models to distinguish reliable sources of information from unreliable ones, evaluate arguments on the basis of evidence, and reason logically. In principle, it might be possible for OpenAI and Anthropic to build models that prize accuracy in business contexts — but prioritize users’ titillation or ideological comfort in personal ones. In practice, however, it’s hard to inject a bit of irrationality or political bias into a model’s outputs without sabotaging its commercial utility (as Musk evidently discovered last year).

2) LLMs are infinitely more patient and polite than any human expert has ever been. Well-informed humans have been trying to disabuse the deluded for as long as our species has been capable of speech. But there’s reason to think that LLMs will prove radically more effective at that task.


After all, human experts cannot provide encyclopedic answers to everyone’s idiosyncratic questions about their specialty, instantly and on demand. But AI models can. And the chatbots will also gamely field as many follow-ups as desired — addressing every source of a user’s skepticism, in terms customized for their reading level and sensibilities — without ever growing irritated or condescending.

That last bit is especially significant. When one human tries to persuade another that they are wrong about something — particularly within view of other people — the misinformed person is liable to perceive a threat to their status: To recognize one’s error might seem like conceding one’s intellectual inferiority. And such defensiveness is only magnified when their erudite interlocutor patronizes (or outright insults) them, as even learned scholars are wont to do on social media.

But LLMs do not compete with humans for social prestige or sexual partners (at least, not yet). And chatbot conversations are generally private. Thus, a human can concede an LLM’s point without suffering a sense of status threat or losing face. We don’t experience Claude as our snobby social better, but rather, as our dutiful personal adviser.

The expert consensus has never before had such an advocate. And there’s evidence that LLMs’ infinite patience renders them exceptionally effective at dispelling misconceptions. In a 2024 study, proponents of various conspiracy theories — including 2020 election denial — durably revised their beliefs after extensively debating the topic with a chatbot.


It seems clear then that LLMs possess some “converging” and “technocratizing” properties. And, experts’ fallibility notwithstanding, this constitutes a basis for thinking that AI will foster a healthier intellectual climate than social media has to date.

Still, it isn’t hard to come up with reasons for doubting this theory (and not merely because ChatGPT will provide them on demand). To name just five:

1) LLMs can mold reality to match their users’ desires. If you log into ChatGPT for the first time — and immediately ask whether your mother is trying to poison you by piping psychedelic fumes through your car vents — the LLM generally won’t answer with an emphatic “yes.” But when Stein-Erik Soelberg inundated the chatbot with his paranoid delusions over a period of months, it eventually began affirming his persecution fantasies, allegedly nudging him toward matricide in the process.

Such instances of “AI psychosis” are rare. But they represent the most extreme manifestation of a more common phenomenon — AI models’ tendency toward sycophancy and personalization. Which is to say, these systems frequently grow more aligned with their users’ perspectives over extended conversations, as they learn the kinds of responses that will generate positive feedback. This behavior has surfaced even as AI companies have tried to combat it.


The sycophancy problem could therefore get dramatically worse if one or more LLM providers decide to center their business model on consumer engagement. As social media has shown, sensational and/or ideologically flattering information can be more engaging than the accurate variety. Thus, an AI company struggling to compete in the business-to-business market might choose to “sycophancy-max” its model, pursuing the same engagement-optimization tactics as YouTube or Facebook.

A world of even greater informational divergence — in which people aren’t merely ensconced in echo chambers with like-minded ideologues, but immersed in a mirror of their own prejudices — might ensue.

2) Artificial intelligence has radically reduced the cost of generating propaganda. AI has already flooded social media with unlabeled “deepfake” videos. Soon, it may enable nefarious actors to orchestrate ever more convincing “bot swarms” — networks of AI agents that impersonate humans on social media platforms, deploying LLMs’ persuasive powers to indoctrinate other users and create the appearance of a false consensus.

In this scenario, LLMs might edify people who actively seek the truth through dialogue or fact-check requests, but thrust those who passively absorb political information from their environment — arguably, the majority — into perpetual confusion.


3) AI could breed the bad kind of consensus. Even if LLMs do promote convergence on a shared conception of reality, that picture could be systematically flawed. In the worst case, an authoritarian government could program the major AI platforms to validate regime-legitimizing narratives. Less catastrophically, LLMs’ converging tendencies could simply make technocrats’ honest mistakes harder to detect or remedy.

4) AI could trigger widespread cognitive atrophy, as humans outsource an ever-larger share of cognitive labor to machines. Over time, this could erode the public’s capacity for reason, leaving it more vulnerable to both fully automated demagogy and top-down manipulation.

5) AI could wreck the sources of authority that make it effective. LLMs might be good at distilling information into a consensus answer, but that answer is only as good as the information feeding the models.

Already, chatbots are draining revenue from (embattled) news organizations, which will produce fewer timely and verified reports about current events as a result. Online forums, a key source of AI advice, are increasingly flooded with product plugs designed to trick chatbots into recommending them. Wikipedia’s human moderators fear a future in which they’re stuck sifting through a tsunami of low-quality AI-generated updates and citations.


LLMs may prize accurate information. But if they bankrupt or corrupt the institutions that produce such data, their outputs may grow progressively impoverished.

For these reasons, among others, AI models’ ultimate implications for the information environment are highly uncertain. What Matthews and Williams convincingly establish, however, is that this technology could facilitate a more consensual and fact-based public discourse — if we properly guide its development.

Of course, precisely how to maximize AI’s capacity for edification — while minimizing its potential for distortion — is a difficult question, about which reasonable people can disagree. So, let’s ask Claude.



From lab to market: Rose Rock Bridge fast-tracks energy innovation in Tulsa

Presented by Tulsa Innovation Labs


As the global energy system evolves, companies are racing to adopt technologies that can deliver real-world solutions, especially in hard-to-abate industries. Oklahoma, long known as the oil capital of the world, is a center for energy innovation, with Rose Rock Bridge at the forefront.

A non-profit based in Tulsa, Rose Rock Bridge is a pilot deployment studio that connects early-stage energy startups with corporate energy partners, non-dilutive funding, and pilot opportunities that accelerate commercialization. Now accepting applications for its Spring 2026 cohort through April 6, it is seeking early- and growth-stage startups developing practical, scalable solutions to today’s most pressing energy challenges.

Rose Rock Bridge gives startups access to real-world commercial workflows and pilot opportunities through energy partners with more than $150 billion in market capitalization, including Devon Energy, H&P, ONEOK, and Williams. Backed by one of the strongest coalitions of strategic partners and investors of any energy-focused accelerator, incubator, or venture studio, the program enables startups to move quickly from development to real-world testing and deployment.


Here’s how it works:

Discover opportunities for energy innovation

Rose Rock Bridge starts by working directly with corporate innovation teams to identify high-priority technology solutions for their businesses, pinpointing which will carry the most impact. Focus areas are formed around these findings.

“We don’t just chase the latest tech and hope to find a use for it. Our process starts at the asset level identifying the specific operational bottlenecks and unmet requirements our partners are actually facing,” says Nishant Agarwal, Innovation Manager. “By leveraging our background in CVC and engineering, we run technical deep dives alongside partner subject matter experts to define the requirement first. We then source technologies as a direct response to those needs. This ensures we aren’t just presenting ‘interesting research,’ but delivering solutions with a validated deployment pathway and a clear line of sight to a business case.”

Tapping into its network of 40+ universities, 10+ energy incubators, and Fortune 500 companies, Rose Rock Bridge then determines emerging opportunities in the energy ecosystem. Rather than just selecting companies or ideas that might bring in capital, the studio chooses startups that have real potential to commercialize quickly in order to solve the industry’s most pressing challenges.


This year’s focus areas include:

“We’re evaluating deployment probability from day one,” says Andrada Pantelimon, Innovation Associate at Rose Rock Bridge, who manages sourcing strategy and startup operations. “Can this technology deliver a measurable bottom-line impact? Can it realistically pilot within 12 months? Is your team equipped to commercialize? Show us you’ve quantified your value proposition in operator terms and understand which business unit within a corporation might own this solution. If you can articulate those pieces clearly, you’re the kind of startup we want to support.”

Derisk technologies for early-stage startups & energy companies

The benefit is tangible for leading energy corporations seeking proven solutions to complex operational challenges. Rose Rock Bridge provides its corporate partners with validated, field-tested technologies while significantly reducing deployment risk. At the program’s conclusion, partners gain direct access to emerging innovations that have already undergone technical validation and operational feasibility assessment, with identified procurement pathways and pilot plans designed for commercial deployment.

Each cohort cycle, up to 15 startups are selected to enter a six-week virtual accelerator focused on pilot deployment. Founders participate in reverse pitch sessions with oil and gas partners, one-on-one clinics with industry and capital mentors, and hands-on commercialization workshops. Founders have the unique opportunity to refine their solutions, assess pilot feasibility, and build industry relationships. This approach derisks adoption and investments through iterative customer feedback, in-field testing, and pilots, enabling breakthrough technologies to reach commercial viability quickly and effectively.


“Our curriculum is singularly focused on preparing startups for the realities of corporate partnerships,” says Devon Fanfair, Rose Rock Bridge Manager and former Techstars Managing Director who is scaling the RRB program. “Founders aren’t just learning; they’re actively testing their assumptions with the exact customers who might deploy their technology. That rapid feedback loop is what transforms promising technologies into deployment-ready solutions with clear commercial pathways.”

At the culmination of the accelerator, teams participate in the Rose Rock Bridge showcase with the unique opportunity to pitch their startup to the energy corporate partners they’ve worked alongside for the past six weeks. Four startups are selected to receive up to $100,000 in non-dilutive funding and opportunities for business support services, joining a one-year cohort designed to prepare technologies for market adoption.

“Rose Rock Bridge is a cornerstone of Tulsa Innovation Labs’ strategy to showcase our region as a national hub for energy innovation,” added Jennifer Hankins, Managing Director of Tulsa Innovation Labs. “By linking emerging technologies with some of the nation’s largest energy leaders, we help move innovation from concept to market faster, drawing new businesses to the region, enhancing our existing businesses, and reinforcing Tulsa’s role in the global energy economy.”

Deploy viable energy solutions

Once selected as members of Rose Rock Bridge, startups pilot their technology with relevant energy partners and grow their ventures in Tulsa. Support includes pilot design, execution, and go-to-market strategy; connections to follow-on investment opportunities; subsidized access to services including legal, marketing, and PR; and help establishing a Tulsa presence for partner access.


Rose Rock Bridge’s success is measured not just in pilot deployments, but in lasting commercial relationships. Multiple portfolio companies have progressed from initial field tests to multi-year contracts with Fortune 500 operators. By derisking the path from proof-of-concept to procurement, RRB has helped establish procurement pathways that might otherwise take years to develop, if they materialize at all.

Launched in 2022 with support from Tulsa Innovation Labs, the studio has helped companies advance new technologies, secure patents, launch products, and attract capital. It has derisked 33 startups, supported 16 active or in-development pilots, and invested more than $2 million in early-stage companies, generating a combined portfolio valuation of over $55 million.

Examples of the studio’s success include Safety Radar, an AI-powered risk management platform, which secured its first contract with a Rose Rock Bridge partner, expanded to additional energy and aerospace clients, raised over $2 million, and established a Tulsa office. Kinitics Automation, a Canadian company, successfully piloted with one partner, resulting in deployments across multiple sites, effectively using RRB as their gateway to the U.S. market.

Backed by corporate partners with more than $150 billion in combined market capitalization, Rose Rock Bridge reflects both the scale of the opportunity and Tulsa’s rising influence in energy innovation.


Devon Fanfair is Manager of Rose Rock Bridge.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.


