
AI could be the opposite of social media


For more than four decades, technological progress has been undermining expert authority, democratizing public debate, and steering individuals toward ever-more bespoke conceptions of reality.

In the mid-20th century, the high costs of television production — and physical limitations of the broadcast spectrum — tightly capped the number of networks. ABC, NBC, and CBS collectively owned TV news. On any given evening in the 1960s, roughly 90 percent of viewers were watching one of the Big Three’s newscasts.

Journalistic programs weren’t just limited in number, but also in ideological content. The networks’ news divisions all sought the broadest possible audience, a business model that discouraged airing iconoclastic viewpoints. And they also relied overwhelmingly on official sources — politicians, military officials, and credentialed experts — whose perspectives fell within the narrow bounds of respectable opinion.

This media environment cultivated broad public agreement over basic facts and widespread trust in mainstream institutions. It also helped the government wage a barbaric war in the name of lies.

  • There’s evidence that LLMs converge on a common (and largely accurate) picture of reality.
  • LLMs have successfully persuaded users to abandon false and conspiratorial beliefs.
  • Unlike social media companies, AI labs have an economic incentive to spread accurate information.
  • Still, there are reasons to fear that AI will nonetheless make public discourse worse.

For better and worse, subsequent advances in information technology diffused influence over public opinion — at first gradually and then all at once. During the closing decades of the 20th century, cable eroded barriers to entry in the TV news business, facilitating the rise of Fox News and MSNBC, networks that catered to previously underrepresented political sensibilities.

But the internet brought the real revolution. By slashing the cost of publishing and distribution nearly to zero, digital platforms enabled anyone with an internet connection to reach a mass audience. Traditional arbiters of headline news, scientific fact, and legitimate opinion — editors, producers, and academics — exerted less and less veto power over public discourse. Outlets and influencers proliferated, many defining themselves in opposition to established institutions. All the while, social media algorithms shepherded their users into customized streams of information, each optimized for their personal engagement.

The democratic nature of digital media initially inspired utopian hopes. It promised to expose the blind spots of cultural elites, increase the accountability of elected officials, and put virtually all human knowledge at everyone’s fingertips. And the internet has done all of these things, at least to some extent.

Yet it has also helped pro-Hitler podcasters reach an audience of millions, enabled influencers with body dysmorphia to sell teenagers on self-mutilation, elevated crackpots to the commanding heights of American public health — and, more generally, eroded the intellectual standards, shared understandings, social trust, and (small-l) liberalism on which rational self-government depends.

Many assume that the latest breakthrough in information technology — generative AI — will deepen these pathologies: In a world of photorealistic deepfakes, even video evidence may surrender its capacity to forge consensus. Sycophantic large language models (LLMs), meanwhile, could reinforce ideologues’ delusions. And fully automated film production could enable extremists to flood the internet with slick propaganda.


But there’s reason to think that this is too pessimistic. Rather than deepening social media’s effects on public opinion, AI may partially reverse them — by increasing the influence of credentialed experts and fostering greater consensus about factual reality. In other words, for the first time in living memory, the arc of media history may be bending back toward technocracy.

Are you there Grok? It’s me, the demos

At least, this is what the British philosopher Dan Williams and former Vox writer Dylan Matthews have recently argued.

Matthews begins his case by spotlighting a phenomenon familiar to every problem user of X (née “Twitter”): Elon Musk’s chatbot telling the billionaire that he is wrong.


In this instance, Musk had claimed that Renée Good, the Minnesota woman killed by an ICE agent in January, had “tried to run people over” in the moments before her death. Someone replied to Musk’s post by asking Grok — X’s resident AI — whether his claim was consistent with video evidence of the shooting.
The bot replied:

[Screenshot of Grok’s reply]

In reaching this assessment, Grok was affirming the consensus among mainstream journalistic institutions — and among other chatbots as well.

For Matthews, this incident illustrates a broader truth about LLMs: Like mid-20th century TV, they are a “converging” form of technology, in the sense that they “homogenize the perspectives the population experiences and build a less polarized, more shared reality among the population’s members.” And he suggests that they are also a “technocratising” force, in that they give experts disproportionate influence over the content of that shared reality.

Of course, this would be a lot to read into a single Grok reply; if you had glanced at the bot’s outputs last July, when a misguided update to the LLM’s programming caused it to self-identify as “MechaHitler,” you might have concluded that AI is a “Nazifying” technology.

But there is evidence that Grok and other LLMs tend to provide (relatively) accurate fact checks — and forge consensus among users in the process.


One recent study examined a database of over 1.6 million fact-checking requests presented to Grok or Perplexity (a rival chatbot) on X last year. It found that the two LLMs agreed with each other in a majority of cases and strongly diverged on only a small fraction.

The researchers also compared the bots’ answers against those of professional fact-checkers and the results were similarly encouraging. When used through its developer interface (rather than on X), Grok achieved essentially the same rate of agreement with the humans as they did with each other.
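The “rate of agreement” the researchers measured is a standard inter-rater statistic. As a hedged illustration (the function, labels, and data below are invented for this sketch, not drawn from the paper), raw agreement and chance-corrected agreement (Cohen’s kappa) between two fact-checkers can be computed like this:

```python
from collections import Counter

def agreement_stats(labels_a, labels_b):
    """Raw agreement and Cohen's kappa between two raters'
    verdicts (e.g. 'true' / 'false' / 'misleading')."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: probability both raters pick the same label
    # independently, given their marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / n**2
    return observed, (observed - expected) / (1 - expected)

# Hypothetical verdicts on six posts:
a = ["true", "false", "false", "true", "misleading", "false"]
b = ["true", "false", "true", "true", "misleading", "false"]
obs, kappa = agreement_stats(a, b)
```

On this toy data the raters agree on five of six verdicts (raw agreement of about 0.83), while kappa of about 0.74 discounts the agreement expected by chance alone.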

What’s more, despite being the creation of a far-right ideologue, Grok deemed posts from Republican accounts inaccurate at a higher rate than those of Democratic accounts — a pattern consistent with past research showing that the right tends to share misinformation more frequently than the left.

Critically, in the paper, the LLMs’ answers did not just converge on expert opinion — they also nudged users toward their conclusions.


Other research has documented similar effects. Multiple studies have indicated that speaking with an LLM about climate change or vaccine safety reduces users’ skepticism about the scientific consensus on those topics.

AI might combat misinformation in practice. But does it in theory?

A handful of papers can’t by themselves prove that AI is adept at fact-checking, much less that its overall impact on the information environment will be positive. To their credit, Matthews and Williams concede that their thesis is speculative.

But they offer several theoretical reasons to expect that AI will have broadly “converging” and “technocratising” effects on public discourse. Two are particularly compelling:


1) AI firms have a strong financial incentive to produce accurate information. Social media platforms are suffused with misinformation for many reasons. But one is that facilitating the spread of conspiracy theories or pseudoscience costs X, YouTube, and Facebook nothing. These firms make money by mining human attention, not providing reliable insight. If evangelism for the “flat Earth” theory attracts more interest than a lecture on astrophysics, social media companies will milk higher profits from the former than the latter (no matter how spherical our planet may appear to untrained eyes).

But AI firms face different incentives. Although some labs plan to monetize user attention through advertising, their core business objective is still to maximize their models’ ability to perform economically useful work. Law firms will not pay for an LLM that generates grossly inaccurate summaries of case law, even if its hallucinations are more entertaining than the truth. And one can say much the same about investment banks, management consultancies, or any other pillar of the “knowledge economy.”

For this reason, AI companies need their models to distinguish reliable sources of information from unreliable ones, evaluate arguments on the basis of evidence, and reason logically. In principle, it might be possible for OpenAI and Anthropic to build models that prize accuracy in business contexts — but prioritize users’ titillation or ideological comfort in personal ones. In practice, however, it’s hard to inject a bit of irrationality or political bias into a model’s outputs without sabotaging its commercial utility (as Musk evidently discovered last year).

2) LLMs are infinitely more patient and polite than any human expert has ever been. Well-informed humans have been trying to disabuse the deluded for as long as our species has been capable of speech. But there’s reason to think that LLMs will prove radically more effective at that task.


After all, human experts cannot provide encyclopedic answers to everyone’s idiosyncratic questions about their specialty, instantly and on demand. But AI models can. And the chatbots will also gamely field as many follow-ups as desired — addressing every source of a user’s skepticism, in terms customized for their reading level and sensibilities — without ever growing irritated or condescending.

That last bit is especially significant. When one human tries to persuade another that they are wrong about something — particularly within view of other people — the misinformed person is liable to perceive a threat to their status: To recognize one’s error might seem like conceding one’s intellectual inferiority. And such defensiveness is only magnified when their erudite interlocutor patronizes (or outright insults) them, as even learned scholars are wont to do on social media.

But LLMs do not compete with humans for social prestige or sexual partners (at least, not yet). And chatbot conversations are generally private. Thus, a human can concede an LLM’s point without suffering a sense of status threat or losing face. We don’t experience Claude as our snobby social better, but rather, as our dutiful personal adviser.

The expert consensus has never before had such an advocate. And there’s evidence that LLMs’ infinite patience renders them exceptionally effective at dispelling misconceptions. In a 2024 study, proponents of various conspiracy theories — including 2020 election denial — durably revised their beliefs after extensively debating the topic with a chatbot.


It seems clear, then, that LLMs possess some “converging” and “technocratising” properties. And, experts’ fallibility notwithstanding, this constitutes a basis for thinking that AI will foster a healthier intellectual climate than social media has to date.

Still, it isn’t hard to come up with reasons for doubting this theory (and not merely because ChatGPT will provide them on demand). To name just five:

1) LLMs can mold reality to match their users’ desires. If you log into ChatGPT for the first time — and immediately ask whether your mother is trying to poison you by piping psychedelic fumes through your car vents — the LLM generally won’t answer with an emphatic “yes.” But when Stein-Erik Soelberg inundated the chatbot with his paranoid delusions over a period of months, it eventually began affirming his persecution fantasies, allegedly nudging him toward matricide in the process.

Such instances of “AI psychosis” are rare. But they represent the most extreme manifestation of a more common phenomenon — AI models’ tendency toward sycophancy and personalization. Which is to say, these systems frequently grow more aligned with their users’ perspectives over extended conversations, as they learn the kinds of responses that will generate positive feedback. This behavior has kept surfacing even as AI companies have tried to combat it.


The sycophancy problem could therefore get dramatically worse if one or more LLM providers decide to center their business model on consumer engagement. As social media has shown, sensational and/or ideologically flattering information can be more engaging than the accurate variety. Thus, an AI company struggling to compete in the business-to-business market might choose to “sycophancy-max” its model, pursuing the same engagement-optimization tactics as YouTube or Facebook.

A world of even greater informational divergence — in which people aren’t merely ensconced in echo chambers with likeminded ideologues, but immersed in a mirror of their own prejudices — might ensue.

2) Artificial intelligence has radically reduced the costs of generating propaganda. AI has already flooded social media with unlabeled “deepfake” videos. Soon, it may enable nefarious actors to orchestrate ever-more convincing “bot swarms” — networks of AI agents that impersonate humans on social media platforms, deploying LLMs’ persuasive powers to indoctrinate other users and create the appearance of a false consensus.

In this scenario, LLMs might edify people who actively seek the truth through dialogue or fact-check requests, but thrust those who passively absorb political information from their environment — arguably, the majority — into perpetual confusion.


3) AI could breed the bad kind of consensus. Even if LLMs do promote convergence on a shared conception of reality, that picture could be systematically flawed. In the worst case, an authoritarian government could program the major AI platforms to validate regime-legitimizing narratives. Less catastrophically, LLMs’ converging tendencies could simply make technocrats’ honest mistakes harder to detect or remedy.

4) AI could trigger widespread cognitive atrophy, as humans outsource an ever-larger share of cognitive labor to machines. Over time, this could erode the public’s capacity for reason, leaving it more vulnerable to both fully automated demagogy and top-down manipulation.

5) AI could wreck the sources of authority that make it effective. LLMs might be good at distilling information into a consensus answer, but that answer is only as good as the information feeding the models.

Already, chatbots are draining revenue from (embattled) news organizations, which will produce fewer timely and verified reports about current events as a result. Online forums, a key source of AI advice, are increasingly flooded with product plugs designed to trick chatbots into recommending those products. Wikipedia’s human moderators fear a future in which they’re stuck sifting through a tsunami of low-quality AI-generated updates and citations.


LLMs may prize accurate information. But if they bankrupt or corrupt the institutions that produce such data, their outputs may grow progressively impoverished.

For these reasons, among others, AI models’ ultimate implications for the information environment are highly uncertain. What Matthews and Williams convincingly establish, however, is that this technology could facilitate a more consensual and fact-based public discourse — if we properly guide its development.

Of course, precisely how to maximize AI’s capacity for edification — while minimizing its potential for distortion — is a difficult question, about which reasonable people can disagree. So, let’s ask Claude.


Roblox Will Pay $12 Million to Settle Nevada Child Safety Lawsuit


Popular gaming platform Roblox agreed to pay more than $12 million and implement new safety features as part of a settlement with the state of Nevada. The settlement comes amid several lawsuits accusing the company of failing to protect children on the platform.

The agreement resolves potential litigation over allegations that Roblox failed to adequately safeguard children while they played the online game, Nevada Attorney General Aaron Ford said in a press release on Wednesday. 

As part of the deal, Roblox will spend $10 million over three years to encourage children to engage in non-digital activities, as well as institute age verification for all users. This will include “facial age estimation technology and government-issued ID for age assurance, and will use behavioral monitoring to identify users who may have been aged incorrectly,” according to the press release. 


“The injunctive relief that Roblox has agreed to will give parents the tools they need to protect their children on the platform; institute default protections to block predators from engaging with children; and ensure that messages involving minors are not encrypted,” Ford said in the press release.

Roblox also committed to spending $1 million over two years on a campaign to educate minors and adults about online safety and another $1.5 million to develop a law enforcement liaison position to work with state law enforcement agencies over concerns about the platform. 

Roblox Chief Safety Officer Matt Kaufman said it’s part of the company’s “work to establish a new standard for digital safety.”

“This resolution creates a blueprint for how industry and regulators can work together to protect the next generation of digital citizens,” Kaufman said Thursday. “We have no finish line when it comes to safety.”


Roblox is under significant legal pressure amid more than 140 lawsuits, according to Reuters. The suits, filed in 2025, allege the company knowingly created a gaming platform that allowed child predators to target minors. 

The company also faces lawsuits from state attorneys general in Texas, Kentucky, Louisiana, Iowa, Nebraska, Tennessee and Florida over similar accusations.

Age-based accounts coming soon

Two days before the settlement announcement, Roblox CEO and founder David Baszucki revealed new accounts for younger Roblox users.


Roblox Kids will be available for children between the ages of 5 and 8, and Roblox Select is for those ages 9 to 15. Roblox is reportedly used by nearly half of US children under 16. Users aged 16 and older will be in their own age group, simply called “Roblox.”

Kids and Select accounts will be assigned to those age groups as determined by Roblox’s age-check technology or by a verified parent.

Unmonitored chat in the game has been a point of criticism for the platform, as it allows predators to chat with children. Kids’ accounts will have chat turned off by default, with limited access to Minimal or Mild games as determined by the platform. Select accounts will have chat with safeguards and access to games with Moderate content, which is described by the platform as having “moderate violence, light realistic blood, moderate crude humor, unplayable gambling content, and/or moderate fear.”

These new age-based accounts will roll out sometime in early June. 


Dark Matter May Be Made of Black Holes From Another Universe


A recent cosmological model combines two of the most eccentric ideas in contemporary physics to explain the nature of dark matter, the invisible substance that makes up about 85 percent of all matter in the universe. To understand it, it’s necessary to look beyond the Big Bang we all know and consider two concepts that rarely intersect: cyclic universes and primordial black holes.

A Different Kind of Multiverse

There are different versions of the “multiverse.” The most popular model—that of the Marvel Cinematic Universe—proposes that there are as many universes as there are possibilities and that these versions of reality are parallel. Physics proposes something more sober and mathematically consistent: the cosmic bounce.

In this model, the universe is not born from a singularity, but expands, contracts, and expands again in an endless cycle. Each “universe” is not parallel, but sequential—that is, one arises from the ashes of the previous one.

Is it possible for something to survive the end of its universe and endure into the next? According to a paper published in Physical Review D, yes. Author Enrique Gaztanaga, a research professor at the Institute of Space Sciences in Barcelona, shows that any structure larger than about 90 meters could pass through the final collapse of a universe and survive the rebound. These “relics” would not only persist, but could also seed the formation of giant, unexplained structures observed in the early stages of the present-day universe. Moreover, they could be the key to understanding dark matter.


For decades, the dominant explanation for dark matter has been that it is an unknown particle or particles. But after years of experiments without direct detections, physicists have begun to explore alternatives. One of them proposes that dark matter is not an exotic particle, but an abundant population of small black holes that we overlook.

The idea is appealing, but it has a serious problem. For these black holes to explain dark matter, they would have to exist from the earliest moments of the universe, long before the first stars could collapse. There are indications that these objects could exist, but a convincing physical mechanism to explain their origin is lacking.

A Universe Born With Black Holes

This is where Gaztanaga’s newly proposed model shines. If cosmic bouncing allows compact structures to survive the collapse of the previous universe, then the current universe would have already been born with pre-existing black holes. They would not have to have been generated by extreme fluctuations or finely tuned inflationary processes, but would simply have been there from the first instant.

The assumption has the potential to solve two riddles at once: the origin of black holes and the nature of dark matter. If this model is correct, dark matter would not be a mystery of the early universe but rather a legacy of a cosmos that predates our own.


“Much work remains to be done,” Gaztanaga, also a researcher at the Institute of Cosmology and Gravitation at the University of Portsmouth, said in an article for The Conversation. “These ideas must be tested against data—from gravitational-wave backgrounds to galaxy surveys and precision measurements of the cosmic microwave background.”

“But the possibility is profound,” he added. “The universe may not have begun once, but may have rebounded. And the dark structures shaping galaxies today could be relics from a time before the Big Bang.”

This story originally appeared in WIRED en Español and has been translated from Spanish.


Operation PowerOFF identifies 75k DDoS users, takes down 53 domains



More than 75,000 individuals using distributed denial-of-service (DDoS) platforms for disruptive attacks have been warned through emails and letters during the latest phase of the Operation PowerOFF international law enforcement action.

The ongoing operation is supported by Europol and involves authorities in 21 countries. Coordinated efforts led to the arrest of four people, taking offline 53 domains, and issuing 25 search warrants.

“Leading up to the action week, a series of operational sprints took place, gathering experts from national authorities across the globe to carry out actions against high-value target users of DDoS-for-hire platforms and raise awareness about the illegality of these activities,” Europol says.


“During these sprints, the participating countries disrupted illegal booter services, dismantling the technical infrastructure that supports illegal DDoS.”

The operation has a global span, and includes multiple European Union countries as well as Australia, Thailand, the United States, the United Kingdom, Japan, and Brazil.

[Image: Latest Operation PowerOFF reach. Source: Europol]

“Booter services” are DDoS-for-hire platforms that let users pay to rent the firepower of DDoS swarms, typically made up of compromised routers and IoT devices, and direct it at their intended targets.

Some operators of these services try to disguise their real purpose by claiming to offer legitimate stress testing, but because they do not verify target ownership, their platforms remain usable for illegal attacks.

The latest Operation PowerOFF action was built on previous phases that resulted in dismantling key infrastructure and seizing databases with more than 3 million criminal accounts.

Europol states that the operation is now entering its prevention phase, which includes launching awareness campaigns and disruption measures.

These involve placing search engine ads aimed at young people seeking DDoS tools, removing from search results more than 100 URLs that promote these illegal services, and adding on-chain warning messages tied to illicit payments.


Call of Duty movie arrives on June 30, 2028


A Call of Duty movie is still happening, but don’t hold your breath for it to hit screens any time soon. Today, the popular FPS’ social media revealed that the movie’s theatrical release date will be June 30, 2028.

A film adaptation of the game franchise was first revealed last year, and shortly after, we learned that Taylor Sheridan and Peter Berg would be serving as the producers. The duo, whose past credits include Friday Night Lights and Yellowstone, will also be co-writing the project under Berg’s direction. We still haven’t heard anything about the cast, or even what era of the long-running series will be depicted, so it seems like a safe bet that there’s still a ways to go before this wraps. But CoD is nothing if not a money-maker, so reimagining it as a summer blockbuster seems pretty expected.


New self-healing material can repair itself over 1,000 times, extend the lifespan of cars and aircraft



The breakthrough addresses a critical structural failure known as delamination, where layers in fiber reinforced polymer (FRP) materials begin to separate over time. The new composite looks similar to traditional FRPs but is designed to be tougher, making it less prone to cracking or breaking.

Roblox AI assistant gets agentic tools to plan, build, and self-test games


In short: Roblox is upgrading its built-in AI assistant with agentic capabilities including a planning mode that analyses game code before proposing action plans, procedural 3D model generation, mesh generation, and self-correcting loops that test and refine outputs. The update also adds MCP client integration with third-party tools like Claude and Cursor, with a roadmap toward multi-agent parallel workflows in the cloud.

Roblox is upgrading its built-in AI assistant with agentic capabilities that let it plan, build, and test games rather than just answer questions about how to make them. The update adds a planning mode that analyses a game’s code and data model before proposing action plans, procedural model generation that creates editable 3D objects through prompts, and a self-correcting loop that lets the assistant test its own work and incorporate the results into future iterations.

The changes turn Roblox Assistant from a code-suggestion tool into something closer to a junior development partner: one that can examine an existing project, ask clarifying questions, propose an approach, execute it, test the results, and refine its work based on what it finds. For a platform whose 380 million monthly active users include a vast number of creators with limited programming experience, the implications are significant.

What the new tools do

Planning Mode transforms the assistant into a collaborative planner. Rather than responding to individual prompts with code snippets, it analyses a game’s existing codebase and data model, asks the developer clarifying questions about what they want to achieve, and translates the conversation into an editable action plan. The developer can review, modify, and approve the plan before the assistant begins implementation. It is the difference between asking an AI to write a function and asking it to design an approach to a problem.


Procedural Models, coming soon, will let developers create 3D objects that are defined by code rather than static meshes. A developer can prompt the assistant to generate a bookcase and then adjust its attributes (the number of shelves, the height, the material) through parameters rather than manual modelling. The objects understand physical relationships: a staircase knows how its steps relate to its height, and a table knows that its legs support its surface. This is not generative art; it is parametric design driven by natural language.
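As a rough sketch of what “parametric design” means here (illustrative Python only; the class and attribute names are invented, and Roblox’s actual procedural objects live in its own engine and scripting language):

```python
from dataclasses import dataclass

@dataclass
class Bookcase:
    """A parametric bookcase: geometry is derived from a few
    editable attributes rather than stored as a static mesh."""
    height: float = 2.0   # metres
    width: float = 0.9
    shelves: int = 4
    material: str = "oak"

    def shelf_positions(self):
        # Shelves are evenly spaced; the spacing follows automatically
        # whenever height or shelf count changes.
        spacing = self.height / (self.shelves + 1)
        return [round(spacing * (i + 1), 3) for i in range(self.shelves)]

# "Add a shelf" is a parameter change, not a remodel:
case = Bookcase(shelves=5)
positions = case.shelf_positions()
```

Changing `shelves` or `height` regenerates the shelf layout automatically, which is the sense in which a parametric object “knows” how its parts relate.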

Mesh Generation adds the ability to place fully textured 3D objects directly into a game world through prompts, building on Roblox’s Cube foundation model. The company introduced 4D generation in February 2026, powered by Cube, which adds an interactivity dimension to generated objects so they behave correctly in-game rather than sitting as static props. More than 160,000 objects were generated during early access, and Roblox says players using 4D generation showed a 64% increase in play time on average.


The agentic loop

The most consequential change is the self-correcting system. The assistant can now test different aspects of a game, identify problems, surface suggested solutions, and feed those results back into its planning process. This creates what Roblox describes as agentic loops: cycles of planning, execution, testing, and refinement that the AI performs with decreasing human intervention over time.
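Stripped of product specifics, an agentic loop of this kind has a simple shape. A minimal sketch, assuming generic `plan`, `execute`, and `run_tests` callables (this is the generic pattern, not Roblox’s implementation):

```python
def agentic_loop(plan, execute, run_tests, max_iters=5):
    """Generic plan -> execute -> test -> refine cycle.
    Test failures are fed back into the next planning step."""
    feedback = []
    artifact = None
    for _ in range(max_iters):
        steps = plan(feedback)          # propose (or revise) an action plan
        artifact = execute(steps)       # apply the plan to the project
        failures = run_tests(artifact)  # self-check the result
        if not failures:
            return artifact             # all checks pass: done
        feedback = failures             # refine on the next iteration
    return artifact                     # best effort within the budget

# Toy run: the "tests" fail until the artifact reaches a target value,
# so the loop must revise its plan before converging.
target = 3
result = agentic_loop(
    plan=lambda fb: 0 if not fb else fb[0] + 1,
    execute=lambda steps: steps,
    run_tests=lambda a: [] if a >= target else [a],
)
```

The `max_iters` budget matters: without it, a loop whose tests can never pass would run forever, which is one reason real systems keep a human in the approval step.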

The roadmap extends this further. Roblox is working on enabling multiple AI agents to work together in parallel, running long and complex workflows in the cloud rather than within the constraints of a local Studio session. The company is also building integration with third-party tools, including Claude, Cursor, and Codex, and has added a built-in MCP client to Roblox Studio’s assistant, letting it connect to external AI services through the Model Context Protocol standard.
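For context, MCP messages are JSON-RPC 2.0 requests and responses; after a handshake, a client calls methods such as `tools/list` to discover what a server exposes. A minimal sketch of constructing such a request (framing, transport, and the initialize handshake are omitted; real clients should use an MCP SDK):

```python
import json

def mcp_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 message of the kind MCP uses on the wire."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Ask a server which tools it exposes:
wire = mcp_request("tools/list")
```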

The long-term vision, which Roblox has been articulating since it open-sourced the Cube foundation model in March 2025, is that a developer should be able to describe a game in natural language and have AI generate the assets, environments, code, animations, and interactive behaviour to make it real. The agentic tools announced today are incremental steps toward that goal, but they represent a meaningful shift from AI as autocomplete to AI as collaborator.

The vibe-coding parallel

Roblox’s update arrives in the middle of a broader shift in how software is made. Vibe coding, the practice of describing what you want in natural language and letting AI generate the code, drove an 84% jump in App Store submissions earlier this year and prompted Apple to crack down on low-quality AI-generated apps. The same dynamic is playing out in game creation, where the barrier to building something playable is dropping rapidly.

For Roblox, this is both an opportunity and a quality problem. More creators making more games drives engagement on the platform, but only if those games are worth playing. The planning mode and self-correcting loops are, in part, a response to this tension: they are designed to produce better outputs than a single-shot prompt, guiding creators through a structured process rather than letting them generate and publish whatever the AI produces on the first try.

Third-party AI tools for Roblox game creation have already emerged, including Lemonade, SuperbulletAI, and BloxBot. By building agentic capabilities directly into Roblox Studio, the company is trying to ensure that the primary creation experience remains on its own platform rather than fragmenting across external tools that it does not control.

The business context

Roblox’s investment in AI creation tools is backed by strong commercial momentum. The company’s daily active users reached 144 million in Q4 2025, up from 85 million a year earlier. Monthly active users grew from 280 million to 380 million through the year. Full-year 2025 revenue was $4.9 billion, a 36% increase, with 2026 guidance projecting $6 to $6.2 billion. Total Robux purchases reached $6.79 billion in 2025.

These numbers matter because they determine how much Roblox can invest in AI infrastructure and how large the creator ecosystem is that benefits from better tools. A platform with 380 million monthly users and nearly $5 billion in revenue can afford to build foundation models, train agentic systems, and absorb the compute costs of running AI-assisted game creation at scale. Smaller platforms cannot, which means AI creation tools become a competitive moat rather than just a feature.

The Roblox Developers Conference, scheduled for September in San Jose, will likely showcase the next stage of this roadmap. For now, the agentic assistant update positions Roblox as one of the first major platforms to move beyond AI-assisted coding into AI-assisted product development, where the AI does not just write code but plans, builds, tests, and improves what it creates. Whether that produces better games or just more of them is the question that the next year of Roblox development will answer.

How EverCognitive helps organizations turn AI ambition into business outcomes

Elizabeta Gjorgievska Joshevski’s career spans multiple continents and leadership roles, yet her current focus reflects a consistent theme: understanding how technology translates into business outcomes. As founder and CEO of EverCognitive, she brings that perspective into an AI landscape where many organizations are still defining their AI transformation and how to apply it in practical ways. 

Originally from North Macedonia, Elizabeta began her career in programming and technology education, where she developed both technical expertise and early leadership experience. According to her, those early roles were shaped by building relationships and expanding capabilities, which eventually connected her to global technology ecosystems. That period, she explains, formed her understanding of how organizations adopt technology beyond theory and into real operations.

Her move into Cisco marked a turning point, opening the door to a global career spanning 19 years that would take her from the Balkans to Vienna and later to Dubai to lead multicultural EMEA teams. From her perspective, progression came through exposure to different markets and business challenges rather than a predefined trajectory. She notes that working across regions required constant adaptation, particularly in aligning customer needs with evolving technology strategies.

In Vienna, she led telecom operations across Eastern Europe, managing teams that combined sales leadership with technical execution. She frames this phase as one that demanded both operational discipline and strategic clarity. Elizabeta credits servant-leadership principles with elevating her teams to connect effectively with C-suite executives, an approach she says guided her global teams to deliver exceptional results.

Her relocation to Dubai reflected a deliberate choice. Elizabeta explains that Dubai offered a setting where opportunity is closely tied to capability, creating space to continue building her leadership profile. She notes that the region’s pace of development and openness to innovation shaped how she approached large-scale initiatives and organizational transformation.

Over time, her roles expanded to include overseeing enterprise-level operations across EMEA, where she was responsible for a business portfolio. She explains that this level of responsibility requires coordination across global teams and alignment with product and strategy functions. According to Elizabeta, the experience gave her the opportunity to shape corporate-level decisions that accelerated growth across the regions while managing execution complexity at scale.

A gradual shift in perspective began during a leadership program at Harvard between 2013 and 2014. Elizabeta explains this period as a moment of reflection, when she evaluated her career and experiences both professionally and personally, which planted the seed of her next chapter. “I knew my next challenge would be to take all of my experience and the immense knowledge I gained from my time at Harvard to build my own company,” she says. 

That idea remained in the background and began to grow as artificial intelligence moved front and center in every technology conversation. She explains that this was the catalyst for her stepping away from the corporate world. The break allowed her to focus on understanding Generative AI more deeply, including completing a program at MIT in 2024 and studying how organizations could apply AI to their ongoing digital transformations. According to Elizabeta, she recognized a gap between the rapid development of AI tools and the market's readiness to implement it, and that gap is where she saw her opportunity.

She states, “The starting point should always be the business outcome, to understand the client and how their AI ambition can convert into operational reality and growth.”

EverCognitive was built around that principle. The company operates as an AI transformation firm that works with organizations to assess organizational health, perform AI readiness audits, and select and build AI solutions mapped to the client's business outcomes. Elizabeta explains that this includes executive advisory, organizational assessments, and frameworks designed to turn strategy into architecture that operators can execute.

Her approach reflects her experience working within large organizations. She notes that decisions around technology are often shaped by leadership alignment, organizational structure, and operational priorities. From her perspective, this is where many companies require guidance when approaching AI.

“I have spent years working with the companies that are now trying to leverage AI,” Elizabeta explains. “My deep understanding of the digital transformation journeys different vertical markets went through gives me the leverage to be able to accelerate their AI transformation with confidence and tangible business outcomes.”

Today, EverCognitive is engaging with organizations on AI leadership and strategy, focusing on translating technological potential into measurable outcomes. For Elizabeta, the emphasis remains on applying experience to a space that is still developing.

Elizabeta maintains that while AI will continue to evolve, the ability to guide its application will remain essential. Technology will continue to advance, but the way it is applied within organizations will determine its real impact over time. 

She states, “The question is not if but when. And in the age of the AI revolution, those who adopt quickly and wisely won’t just survive; they will win.” 

GeekWire Awards: From AI safety to robotic ultrasounds, meet the Startup of the Year finalists

The key players leading the 2026 GeekWire Awards Startup of the Year finalists, clockwise from top left: Grin Lord, CEO of mpathic; Edward Wu, Dropzone AI CEO; Loopr CEO Priyansha Bagaria; Dopl Technologies co-founders Wayne Monsky, Ryan James and Steve Seslar; and ElastixAI co-founders Saman Naderiparizi, Mohammad Rastegari, and Mahyar Najibi.

From making AI safer for kids in crisis to guiding robotic arms through remote ultrasounds, from sniffing out factory defects to slashing the cost of running large language models — the 2026 GeekWire Awards Startup of the Year finalists are building across a variety of frontiers in tech.

The finalists are: mpathic, ElastixAI, Dropzone AI, Dopl Technologies, and Loopr AI.

Now in its 18th year, the GeekWire Awards is the premier event recognizing the top leaders, companies and breakthroughs in Pacific Northwest tech, bringing together hundreds of people to celebrate innovation and the entrepreneurial spirit. It takes place May 7 at the Showbox SoDo in Seattle.

Last year’s winner was Auger, a startup whose supply chain software unifies data, targets inefficiencies, and provides real-time insights and automation.

Continue reading for information on the Startup of the Year finalists, who were chosen by a panel of independent judges from community nominations. You can help pick the winner by casting a ballot; voting runs through today.

Mpathic is a Seattle startup building safety infrastructure for AI models that interact with vulnerable users, including children and people in mental health crises. The company helps foundational model developers and LLM-powered app teams stress-test model behavior, evaluate responses, and monitor live interactions with safeguards that can flag or intervene when AI-generated advice veers into dangerous territory.

Mpathic was co-founded in 2021 by CEO Grin Lord, a board-certified psychologist and NLP researcher, in a bid to bring more empathy to corporate communication. The company raised $15 million in 2025 and says its global network of thousands of licensed clinical experts is growing by hundreds weekly to keep up with demand. Mpathic is No. 188 on the GeekWire 200, a ranked index of the Pacific Northwest’s top startups.

ElastixAI is a Seattle startup building an AI inference platform designed to make running large language models faster, cheaper, and more flexible across edge devices and cloud deployments. The platform lets customers configure their inference infrastructure for specific use cases, and the company says it could serve everyone from hyperscalers to enterprises weaving AI into daily operations.

The company was co-founded by CEO Mohammad Rastegari, CTO Saman Naderiparizi, and Mahyar Najibi — all veterans of Xnor, the Seattle edge-AI startup acquired by Apple for around $200 million in 2020. Founded in early 2025, ElastixAI raised $16 million last May.

Dropzone AI is a Seattle startup building AI security agents that work alongside human analysts in security operations centers, handling repetitive tasks and investigating alerts. The company’s pre-trained agents use large language models to mimic the thought process of expert security analysts, helping teams keep pace with a growing volume of cybersecurity threats.

Dropzone AI was founded by CEO Edward Wu, who previously spent eight years at Seattle-based security company ExtraHop. The company raised $16.8 million in Series A funding a year ago.

Dopl Technologies is a Seattle startup using telerobotics to bring diagnostic exams and interventional procedures to underserved communities, particularly rural patients who would otherwise travel long distances to reach specialists. Its robotic ultrasound system can be controlled remotely by a sonographer in a different location, with advanced haptics and visual tools designed to give the operator a sense of touch — and AI assistance to optimize workflows.

Dopl was co-founded by CEO Ryan James, COO Steve Seslar, and chief medical officer Wayne Monsky, who began researching novel care delivery methods together at the University of Washington in 2017. The company, ranked No. 193 on the GeekWire 200, raised $1.5 million in a pre-seed round last year.

Loopr is a Seattle startup selling AI-powered computer vision software that helps manufacturers detect defects and quality issues in real time. Unlike legacy vision systems that require fixed cameras and custom installs, Loopr’s software is hardware-agnostic and can run on tablets, making it accessible across aerospace, automotive, and chemical manufacturing — where it is already working with 10 Fortune 1000 companies.

Loopr was founded in 2021 by CEO Priyansha Bagaria, who drew inspiration from building defect-detection software for her family’s manufacturing business in India. The company raised $5.4 million in a funding round last August.

Astound Business Solutions is the presenting sponsor of the 2026 GeekWire Awards. Thanks also to gold sponsors Amazon Sustainability, Baird, BECU, JLL, First Tech and Wilson Sonsini, and silver sponsor Prime Team Partners.

The event will feature a VIP reception, sit-down dinner and fun entertainment mixed in. Tickets go fast. A limited number of half-table and full-table sponsorships are available. Contact events@geekwire.com to reserve a spot for your team today.

Bluesky blames DDoS attack for server outages

Bluesky is once again having a wobble. The platform said some of its systems are down and that it’s “investigating an incident with service in one of our reginos” (that’s Bluesky’s typo, not mine). The issue appears to have started at 1:42AM ET and was still persisting as of 11AM, when this story was originally published. Since then, the site has been experiencing intermittent interruptions, including at times to its status page, where users should be able to monitor outages.

At 7:47PM ET, the platform explained that it’s been attempting to mitigate “a sophisticated Distributed Denial-of-Service (DDoS) attack, which intensified throughout the day.” It said the attack had caused interruptions to users’ feeds, notifications, threads and search, all of which the Engadget team experienced first-hand at various points through the day. While DDoS attacks are frequently used as virtual smokescreens for hacks, Bluesky says it has “not seen any evidence of unauthorized access to private user data.” The social media service had another brief outage earlier this month.

The outage is ongoing, but due to its intermittent nature it’s more of a rolling blackout than a power outage. Bluesky says it will provide another update on the situation by 1PM ET on April 17.

Update, April 16, 8PM ET: This story was updated after publishing with an explanation of the outage from Bluesky.

RSD 2026 Preview: Brian Wilson On Tour 1999-2007 1LP Color Vinyl

One of the genuine bright spots in my pre-Record Store Day inbox this year was news of a 1LP retrospective spotlighting Brian Wilson’s late-1990s comeback and the transformational musical run that carried him deep into the 2000s.

The good folks at Oglio Records kindly sent me a preview copy of the album titled Brian Wilson On Tour 1999-2007.
The single-LP collection offers a tasty overview of Brian’s live work from this period, including choice late-’60s Beach Boys nuggets, primo solo cuts and special cover tunes. Given the quality of Brian’s tremendous backing group at that time, there is a remarkable consistency of performance and sound quality on these recordings across the years.

The album opens with a rousing version of “This Could Be The Night,” a particularly special tune originally written by Harry Nilsson in tribute to Brian and eventually recorded by Wilson himself on the 1995 Nilsson tribute album For The Love Of Harry: Everybody Sings Nilsson. Do look up the fascinating backstory about this song on the wiki.

Brian Wilson On Tour 1999-2007 offers a mini medley of opening songs from Wilson’s legendary 1966/2004 SMiLE album, which leads into a terrific version of “Heroes & Villains.” Three fan-favorite Beach Boys LP cuts from 1968’s Friends, including the title track, are also featured. If you saw any of the tours around this time, you know that when Brian performed “Marcella” (from the underappreciated 1972 Beach Boys LP Carl & The Passions) he took full ownership of the tune, turning it into a brilliant rocker only hinted at in the original.

The Beach Boys deep album cut “Drive-In” from 1964’s All Summer Long is a special kick to hear performed live, with its decidedly humorous and slyly racy lyrics; apparently this song was one of Brian’s transformational early productions, where the band’s sound first came together as he’d envisioned.


Heartstring-tugger “Melt Away” is one of my all-time faves from Wilson’s 1988 solo debut — such an incredible song, performed gorgeously. Brian delivers a genuinely rocking cover of the Chuck Berry classic “Johnny B. Goode” without sounding tired or cliche. The album ends with a curiously upbeat pop arrangement of “She’s Leaving Home” from The Beatles’ Sergeant Pepper.

Sonics-wise, Brian Wilson On Tour 1999–2007 happily sounds really good from start to finish despite its likely digital sourcing (hey, these are modern live concert recordings, folks). I was pleasantly surprised that the opaque marble vinyl is actually pretty nice: well centered and quiet. I did not notice any surface noise issues, which is not always a given with highly patterned color vinyl.

Fourteen great songs performed live by music legend Brian Wilson at the peak of his late-period renaissance make this a must-get album for Record Store Day. Only 2,000 copies of Brian Wilson On Tour 1999–2007 are being made, so get to your favorite vinyl shop early to grab your copy!


Mark Smotroff is a deep music enthusiast and collector who has also worked in entertainment-oriented marketing communications for decades, supporting the likes of DTS, Sega and many others. He reviews vinyl for Analog Planet and has written for Audiophile Review, Sound+Vision, Mix, EQ, etc. You can learn more about him at LinkedIn.

