Yes, Section 230 Should Apply Equally To Algorithmic Recommendations

from the it-won’t-do-what-you-think-if-you-remove-it dept

If you’ve spent any time in my Section 230 myth-debunking guide, you know that most bad takes on the law come from people who haven’t read it. But lately I keep running into a different kind of bad take—one that often comes from people who have read the law, understand the basics passably well, and still say: “Sure, keep 230 as is, but carve out algorithmically recommended content.”

Unlike the usual nonsense, this one is often (though not always) offered in good faith. That makes it worth engaging with seriously.

It’s still wrong.

Let’s start with the basics: as we’ve described at great length, the real benefit of Section 230 is its procedural protections, which ensure that vexatious cases get tossed out at the earliest (i.e., cheapest) stage. That makes it possible for sites to host third-party content without being sued out of existence every time someone complains about someone else’s content. This distinction gets lost in almost every 230 debate, but it matters. If the lawsuits that removing 230 would enable would still eventually fail on First Amendment grounds, then removing those protections only makes litigation impossibly expensive for individuals and smaller providers, while doing no real damage to large companies, which can survive those suits easily.

And that takes us to the key point: removing Section 230 protections for algorithmic recommendations would only enable vexatious lawsuits that would ultimately fail.

But what about [specific bad thing]?

Before diving into the legal analysis, let’s engage with the strongest version of this argument. Proponents of carving out algorithmic recommendations typically aren’t imagining ordinary defamation suits. They’re worried about something more specific: cases where an algorithm itself arguably causes harm through its recommendation patterns—radicalization pipelines, engagement-driven amplification of dangerous content, recommendation systems that push vulnerable users toward self-harm.

The theory goes something like this: maybe the underlying content is protected speech, but the act of recommending it—especially when the algorithm was designed to maximize engagement and the company knew this could cause harm—should create liability, usually as some sort of “products liability” type complaint.

It’s a more sophisticated argument than “platforms are publishers.” But it still fails, for reasons I’ll explain below. The short version: a recommendation is an opinion, opinions are protected speech, and the First Amendment doesn’t carve out “opinions expressed via algorithm” as a special category.

A short history of algorithmic feeds

To understand why removing 230 from algorithmic recommendations would be such a mistake, it helps to remember the apparently forgotten history of how we got here. In the pre-social media 2000s, “information overload” was the panic of the moment. Much of the discussion centered on the “new” technology of RSS feeds, and there were plenty of articles decrying too much information flooding into our feed readers. People weren’t worried about algorithms—they were desperate for them. Articles breathlessly anticipated magical new filtering systems that might finally surface what you actually wanted to see.

The most prominent example was Netflix, back when it was still shipping DVDs. Because there were so many movies you could rent, Netflix built one of the first truly useful recommendation algorithms—one that would take your rental history and suggest things you might like. The entire internet now looks like that, but in the mid-2000s, this was revolutionary.

Netflix’s approach was so novel that they famously offered $1 million to anyone who could improve their algorithm by 10%. We followed that contest for years as it twisted and turned until a winner was finally announced in 2009. Incredibly, Netflix never actually implemented the winning algorithm—but the broader lesson was clear: recommendation algorithms were valuable, and people wanted them.

As social media grew, the “information overload” panic of the blog+RSS era faded, precisely because platforms added recommendation algorithms to surface content users were most likely to enjoy. The algorithms weren’t imposed on users against their will—they were the answer to users’ prayers.

Public opinion only seemed to shift on “algorithms” after Donald Trump was elected in 2016. Many people wanted something to blame, and “social media algorithms” was a convenient excuse.

Algorithmic feeds: good or bad?

Many people claim they just want a chronological feed, but studies consistently show the vast majority of people prefer algorithmic recommendations, because they surface more of what users actually want, compared to chronological feeds.

That said, it’s not as simple as “algorithms good.” There’s evidence that algorithms optimized purely for engagement can push emotionally charged political content that users don’t actually want (something Elon Musk might take notice of). But there’s also evidence that chronological feeds expose users to more untrustworthy content, because algorithms often filter out garbage.

So, algorithms can be good or bad depending on what they’re optimized for and who controls them. That’s the real question: will any given regulatory approach give more power to users, to companies, or to the government?

Keep that frame in mind. Because removing 230 protections for algorithmic recommendations shifts power away from users and toward incumbents and litigants.

The First Amendment still exists

As mentioned up top, the real role of Section 230 is procedural: it gets vexatious lawsuits tossed well before, and at far lower cost than, they would be tossed anyway under the First Amendment. With Section 230, you can get a case dismissed for somewhere in the range of $50,000 to $100,000 (maybe up to $250,000 with appeals and such). If you have to rely on the First Amendment instead, the cost runs into the millions of dollars (probably $5 to $10 million).

And, the crux of this is that any online service sued over an algorithmic recommendation, even for something horrible, would almost certainly win on First Amendment grounds.

Because here’s the key point: a recommendation feed is a website’s opinion about what it thinks you want to see. And an opinion is protected speech, even one you consider bad or dangerous. If US courts have been clear about anything, it’s that opinions are protected speech.

Saying that an internet service can be held liable for giving its opinion on “what we think you’d like to see” would be earth-shatteringly problematic. As partly discussed above, the modern internet relies heavily on algorithms recommending things, which is to say, giving opinions. Every search result is exactly that: an opinion.

This is why the “algorithms are different” argument fails. Yes, there’s a computer involved. Yes, the recommendation emerges from machine learning rather than a human editor’s conscious decision. But the output is still an expression of judgment: “Based on what we know, we think you’ll want to see this.” That’s an opinion. The First Amendment doesn’t distinguish between opinions formed by editorial meetings and opinions formed by trained models.

In the earlier internet era, there were companies that sued Google because they didn’t like how their own sites appeared (or didn’t appear) in Google search results. The e-ventures Worldwide v. Google case is instructive here. Google determined that e-ventures’ SEO techniques were spammy and de-indexed all of its sites. e-ventures sued. Google (rightly) raised a 230 defense, which (surprisingly!) a court rejected.

But the case went on longer, and after lots more money on lawyers was spent, Google did prevail on First Amendment grounds.

This is exactly what we’re discussing here. Google search ranking is an algorithmic recommendation engine, and in this one case a court (initially) rejected a 230 defense, causing everyone to spend more money… to get to the same basic result in the long run. The First Amendment protects a website using algorithms to express an opinion over what it thinks you’ll want… or not want.

Who has agency?

This brings us back to the steelman argument I mentioned above: what about cases where an algorithm recommends something genuinely dangerous?

Our legal system has a clear answer, and it’s grounded in agency. A recommendation feed is not hypnotic. If an algorithm surfaces content suggesting you do something illegal or dangerous, you still have to make the choice to do the illegal or dangerous thing. The algorithm doesn’t control you. You have agency.

But there’s a stronger legal foundation here too. Courts have consistently found that recommending something dangerous is still protected by the First Amendment, particularly when the recommender lacks specific knowledge that what they’re recommending is harmful.

The Winter v. G.P. Putnam’s Sons case is instructive here. The publisher of a mushroom encyclopedia included recommendations to eat mushrooms that turned out to be poisonous—very dangerous! But the court found the publisher wasn’t liable, because it didn’t have specific knowledge that the recommendation was dangerous. And crucially, the court noted that the “gentle tug of the First Amendment” would block any “duty of care” that would require publishers to verify the safety of everything they publish:

The plaintiffs urge this court that the publisher had a duty to investigate the accuracy of The Encyclopedia of Mushrooms’ contents. We conclude that the defendants have no duty to investigate the accuracy of the contents of the books it publishes. A publisher may of course assume such a burden, but there is nothing inherent in the role of publisher or the surrounding legal doctrines to suggest that such a duty should be imposed on publishers. Indeed the cases uniformly refuse to impose such a duty. Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs.

Now, I should acknowledge that Winter was a products liability case involving a physical book, not a defamation or tortious speech case involving an algorithm. But almost all of the current cases challenging social media are styled as product liability cases precisely to try (usually without success) to get around the First Amendment, and that’s all the cases over algorithms would be as well.

The underlying principle remains the same whether you call it a products liability case or one officially about speech: the First Amendment bars requirements that publishing intermediaries must “investigate” whether everything they distribute is accurate or safe. The reason is obvious—such liability would prevent all sorts of things from getting published in the first place, putting a massive damper on speech.

Apply that principle to algorithmic recommendations, and the answer is clear. If a book publisher can’t be required to verify that every mushroom recommendation is safe, a platform can’t be required to verify that every algorithmically surfaced piece of content won’t lead someone to harm.

The end result?

So what would it mean if we somehow “removed 230 from algorithmic recommendations”?

Practically, it means that if companies have to rely on the First Amendment to win these cases, only the biggest companies can afford to do so. The Googles and Metas of the world can absorb $5-10 million in litigation costs. For smaller companies, those costs are existential. They’d either exit the market entirely or become hyper-aggressive about blocking content at the first hint of legal threat—not because the content is harmful, but because they can’t afford to find out in court.

The end result would be that the First Amendment still protects algorithmic recommendations—but only for the very biggest companies that can afford to defend that speech in court.

That means less competition. Fewer services that can recommend content at all. More consolidation of power in the hands of incumbents who already dominate the market.

Remember the frame from earlier: does this give more power to users, companies, or the government? Removing 230 from algorithmic recommendations doesn’t empower users. It doesn’t make platforms more “responsible.” It just makes it vastly harder for anyone other than the giant platforms to exist while also giving more power to governments, like the one currently run by Donald Trump, to define what things an algorithm can, and cannot, recommend.

Rather than diminishing the power of billionaires and incumbents, this would massively entrench it. The people pushing for this carve-out often think they’re fighting Big Tech. In reality, they’re fighting to build Big Tech a new moat.

Filed Under: 1st amendment, algorithmic feeds, algorithmic recommendations, algorithms, feeds, free speech, opinion, section 230

Wireless Network Turns Interference Into Computation

Picture a highway with networked autonomous cars driving along it. On a serene, cloudless day, these cars need only exchange thimblefuls of data with one another. Now picture the same stretch in a sudden snow squall: The cars rapidly need to share vast amounts of essential new data about slippery roads, emergency braking, and changing conditions.

These two very different scenarios involve vehicle networks with very different computational loads. Eavesdropping on network traffic using a ham radio, you wouldn’t hear much static on the line on a clear, calm day. On the other hand, sudden whiteout conditions on a wintry day would sound like a cacophony of sensor readings and network chatter.

Normally this cacophony would mean two simultaneous problems: congested communications and a rising demand for computing power to handle all the data. But what if the network itself could expand its processing capabilities with every rising decibel of chatter and with every sensor’s chirp?

Traditional wireless networks treat communication as separate from computation. First you move data, then you process it. However, an emerging new paradigm called over-the-air computation (OAC) could fundamentally change the game. First proposed in 2005 and recently developed and prototyped by a number of teams around the world, including ours, OAC combines communication and computation into a single framework. This means that an OAC sensor network—whether shared among autonomous vehicles, Internet-of-Things sensors, smart-home devices, or smart-city infrastructure—can carry some of the network’s computing burden as conditions demand.

The idea takes advantage of a basic physical fact of electromagnetic radiation: When multiple devices transmit simultaneously, their wireless signals naturally combine in the air. Normally, such cross talk is seen as interference, which radios are designed to suppress—especially digital radios with their error-correcting schemes and inherent resistance to low-level noise.

But if we carefully design the transmissions, cross talk can enable a wireless network to directly perform some calculations, such as a sum or an average. Some prototypes today do this with analog-style signaling on otherwise digital radios—so that the superimposed waveforms represent numbers that have been added before digital signal processing takes place.

Researchers are also beginning to explore digital, over-the-air computation schemes, which embed the same ideas into digital formats, ultimately allowing the prototype schemes to coexist with today’s digital radio protocols. These various over-the-air computation techniques can help networks scale gracefully, enabling new classes of real-time, data-intensive services while making more efficient use of wireless spectrum.

OAC, in other words, turns signal interference from a problem into a feature, one that can help wireless systems support massive growth.

For decades, engineers designed radio communications protocols with one overriding goal: to isolate each signal and recover each message cleanly. Today’s networks face a different set of pressures. They must coordinate large groups of devices on shared tasks—such as AI model training or combining disparate sensor readings, also known as sensor fusion—while exchanging as little raw data as possible, to improve both efficiency and privacy. For these reasons, a new approach to transmitting and receiving data may be worth considering, one that doesn’t rely on collecting and storing every individual device’s contributions.

By turning interference into computation, OAC transforms the wireless medium from a contested battlefield into a collaborative workspace. This paradigm shift has far-reaching consequences: Signals no longer compete for isolation; they cooperate to achieve shared outcomes. OAC cuts through layers of digital processing, reduces latency, and lowers energy consumption.

Even very simple operations, such as addition, can be the building blocks of surprisingly powerful computations. Many complex processes can be broken down into combinations of simpler pieces, much like how a rich sound can be re-created by combining a few basic tones. By carefully shaping what devices transmit and how the result is interpreted at the receiver, the wireless channel running OAC can carry out other calculations beyond addition. In practice, this means that with the right design, wireless signals can compute a number of key functions that modern algorithms rely on.
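As a concrete illustration of that idea, here is a toy sketch (all names and values are mine, not from any OAC standard): if a function can be written as a post-processing step applied to a sum of pre-processed inputs, then modeling the channel's superposition as a plain sum lets the "air" do the heavy lifting. A real system would contend with noise and synchronization issues this toy ignores.

```python
import math

# Toy model: the wireless channel's superposition is modeled as sum().
# Functions of the form psi(sum(phi(x_i))) can then be "computed by the air":
# each device transmits phi(x_i), the channel adds, the receiver applies psi.

def oac_compute(values, phi, psi):
    over_the_air_sum = sum(phi(v) for v in values)  # superposition stand-in
    return psi(over_the_air_sum)

values = [2.0, 4.0, 8.0]
n = len(values)

mean = oac_compute(values, lambda v: v / n, lambda s: s)
product = oac_compute(values, math.log, math.exp)   # sum of logs -> product
geo_mean = oac_compute(values, math.log, lambda s: math.exp(s / n))

print(mean, product, geo_mean)  # mean ~= 4.67, product ~= 64, geometric mean ~= 4
```

The design point is that only `phi(x_i)` ever leaves a device; the receiver sees the aggregate, never the individual readings.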

THE PROBLEM (TRADITIONAL APPROACH) 

[Diagram: cars at mixed speeds with complex dashed feedback loops between them]

Consider five connected vehicles traveling within sight of one another. Each car reports its speed to the network. In this example, the speeds are slow, medium, and fast. Using existing standards, all five connected cars must independently track and count all incoming signals. Even in this very simplified case, the network is already congested.

Mark Montgomery

For instance, many key tasks in modern networks don’t require logging and storing every individual transmission. Rather, the goal is to infer properties of aggregate traffic patterns: reaching agreement, or identifying what matters most about the traffic. Consensus algorithms rely on majority voting to ensure reliable decisions, even when some devices fail. Artificial intelligence systems depend on matrix reduction and simplification operations such as “max pooling” (keeping only peak values) to extract the most useful signals from noisy data.

In smart cities and smart grids, what matters most is often not individual readings but distribution. How many devices report each traffic condition? What is the range of demand across neighborhoods? These are histogram questions—summaries of the device counts per category.

With type-based multiple access (TBMA), an over-the-air computation method we use, devices reporting a given condition transmit together over a shared channel. Their signals add up, and the receiver sees only the total signal strength per category. In a single transmission, the entire histogram emerges without ever identifying individual devices. And the more devices there are, the better the estimate. The result is greater spectrum efficiency, with lower latency and scalable, privacy-friendly operations—all from letting the wireless medium do the aggregating and counting.

It’s easy to imagine how analog values transmitted over the air could be summed via superposition. The amplitudes from different signals add together, so the values those amplitudes represent also simply add together. The more challenging question concerns preserving that additive magic, but with digital signals.

Here’s how OAC does it. Consider, for instance, one TBMA approach for a network of sensors that gives each possible sensor reading its own dedicated frequency channel. Every sensor on the network that reads “4” transmits on frequency four; every sensor that reads “7” transmits on frequency seven. When multiple devices share the same reading, their amplitudes combine. The stronger the combined signal at a given frequency, the more devices there are reporting that particular value.

A receiver equipped with a bank of filters tuned to each frequency reads out a count of votes for every possible sensor value. In a single, simultaneous transmission, the whole network has reported its state.
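Here is a small simulation of that filter-bank idea; the sample rate, readings, and bin layout are illustrative choices of mine. Each sensor transmits a tone at the frequency bin matching its reading, the waveforms add as they would in the air, and an FFT at the receiver plays the role of the filter bank.

```python
import numpy as np

FS = 1000                       # sample rate in Hz (illustrative)
N = 1000                        # one-second transmission window
readings = [4, 7, 7, 2, 7]      # five sensors' quantized readings

t = np.arange(N) / FS
# All sensors transmit at once; the channel simply sums their waveforms.
received = sum(np.cos(2 * np.pi * r * t) for r in readings)

# Receiver "filter bank": DFT magnitude at each integer-frequency bin.
# Each unit-amplitude tone contributes N/2 to its bin, so dividing by N/2
# turns the spectrum into a per-bin device count.
spectrum = np.abs(np.fft.rfft(received)) / (N / 2)
histogram = np.rint(spectrum[:10]).astype(int)

print(histogram.tolist())  # bin 2 -> 1 device, bin 4 -> 1 device, bin 7 -> 3 devices
```

In a single window, the receiver recovers the whole histogram (three devices reporting “7,” one each reporting “2” and “4”) without ever learning which device sent what.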

It might seem paradoxical—digital computation riding atop what appears to be an analog physical effect. But this is also true of all “digital” radio. A Wi-Fi transmitter does not launch ones and zeroes into the air; it modulates electromagnetic waves whose amplitudes and phases encode digital data. The “digital” label ultimately refers to the information layer, not the physics. What makes OAC digital, in the same sense, is that the values being computed—each sensor reading, each frequency-bin count—are discrete and quantized from the start. And because they are discrete, the same error-correction machinery that has made digital communications robust for decades can be applied here too.

Synchronization is where OAC’s demands diverge most sharply from digital wireless conventions. Many OAC variants today require something akin to a shared clock at nanosecond precision: Every signal’s phase must be synchronized, or the superposition runs the risk of collapsing into destructive interference. While TBMA relaxes this burden a bit—devices need only share a time window—real engineering challenges lie ahead regardless, before over-the-air computation is ready for the mobile world.
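A quick numerical sketch (parameters mine) shows why that phase requirement matters: two unit-amplitude transmitters sending the same tone add to double the amplitude when in phase, but cancel almost completely when a half-cycle apart.

```python
import numpy as np

def combined_peak(phase_offset_rad, f=5.0, fs=10_000, duration=1.0):
    # Superpose two equal tones with a phase offset between them and measure
    # the peak amplitude: coherent addition gives 2, antiphase gives ~0.
    t = np.arange(int(fs * duration)) / fs
    s = np.cos(2 * np.pi * f * t) + np.cos(2 * np.pi * f * t + phase_offset_rad)
    return np.max(np.abs(s))

print(round(combined_peak(0.0), 3))    # in phase: 2.0
print(round(combined_peak(np.pi), 3))  # half-cycle offset: 0.0 (destructive)
```

Any timing error translates into a phase offset, which is why most OAC schemes either synchronize tightly or, like TBMA, are built to tolerate looser timing.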

How will over-the-air computation work in the field?

Over-the-air computation has in recent years moved from theory to initial proofs-of-concept and network test runs. Our research teams in South Carolina and Spain have built working prototypes that deliver repeatable results—with no cables and no external timing sources such as GPS-locked references. All synchronization is handled within the radios themselves.

Our team at the University of South Carolina (led by Sahin) started with off-the-shelf software-defined radios: Analog Devices’ ADALM-Pluto. We modified the field-programmable gate array inside each radio so that it could respond to a trigger signal transmitted from another radio. This simple hack enabled simultaneous transmission, a core requirement for OAC. Our setup used five radios acting as edge devices and one acting as a base station. The task involved training a neural network to perform image recognition over the air. Our system, whose results we first reported in 2022, achieved 95 percent accuracy in image recognition without ever moving raw data across the network.

THE OVER-THE-AIR COMPUTATION (OAC) APPROACH

[Illustration: cars adjusting speed, with colored dashed lines indicating traffic-signal control]

Using over-the-air computation, all five cars transmit their speeds simultaneously. Vehicles reporting the same speed share the same channel; their signals merely combine over the air.

Mark Montgomery

We also demonstrated our initial OAC setup at a March 2025 IEEE 802.11 working group meeting, where an IEEE committee was studying AI and machine learning capabilities for future Wi-Fi standards. As we showed, OAC’s road ahead doesn’t necessarily require reinventing wireless technology. Rather, it can also build on and repurpose existing protocols already in Wi-Fi and 5G.

However, before OAC can become a routine feature of commercial wireless systems, networks must provide finer-tuned coordination of timing and signal power levels. Mobility is a difficult problem, too. When mobile devices move around, phase synchronization degrades quickly, and computational accuracy can suffer. Present-day OAC tests work in controlled lab environments. But making them robust in dynamic, real-world settings—vehicles on highways, sensors scattered across cities—remains a new frontier for this emerging technology.

Both of our teams are now scaling up our prototypes and demonstrations. We are together aiming to understand how over-the-air computation performs as the number of devices increases beyond lab-bench scales. Turning prototypes and test-beds into production systems for autonomous vehicles and smart cities will require anticipating tomorrow’s mobility and synchronization problems—and no doubt a range of other challenges down the road.

Where OAC goes from here

To realize the technological ambitions of over-the-air computation, nanosecond timing and exquisite RF signal design will be crucial. Fortunately, recent engineering work has made substantial progress on both fronts.

Because OAC demands waveform superposition, it benefits from tight coordination in time, frequency, phase, and amplitude among RF transmitters. Such requirements build naturally on decades of work in wireless communication systems designed for shared access. Modern networks already synchronize large numbers of devices using high-precision timing and uplink coordination.

OAC uses the same synchronization techniques already in cellular and Wi-Fi systems. But to actually run over-the-air computations, more precision still will be needed. Power control, gain adjustment, and timing calibration are standard tools today. We expect that engineers will further refine these existing methods to begin to meet OAC’s more stringent accuracy demands.

THE OAC RESULT 

[Bar chart of the OAC result: slow 1 (blue), medium 3 (green), fast 1 (red)]

One transmission yields the full picture: One car is going slow; three are traveling at medium speed; and one vehicle is moving fast. The majority condition is immediately identified—with no individual vehicle data shared or processed.

Mark Montgomery

In some cases, in fact, imperfect timing may be good enough. Designs and emerging standards for today’s 5G and 6G wireless systems use clever encoding that tolerates imperfect synchronization. Minor timing errors, frequency drift, and signal overlap can, we anticipate, still work capably within an OAC protocol. Instead of fighting messiness, over-the-air computation may sometimes simply be able to roll with it.

Another challenge ahead concerns shifting processing to the transmitter. Instead of the receiver trying to clean up overlapping signals, a better and more efficient approach would involve each transmitter fixing its own signal before sending. Such “pre-compensation” techniques are already used in MIMO technology (multi-antenna systems in modern Wi-Fi and cellular networks). OAC would just be repurposing techniques that have already been developed for 5G and 6G technologies.

Materials science can also help OAC efforts ahead. New generations of reconfigurable intelligent surfaces shape signals via arrays of tiny adjustable elements. The surfaces catch radio signals and reshape them as they reflect. Reconfigurable surfaces can strengthen useful signals, suppress interference, and synchronize wavefront arrivals that would otherwise be out of sync. OAC stands to benefit from these and other emerging capabilities that intelligent surfaces will provide.

At the system level, OAC will represent a fundamental shift in wireless network system design. Wireless engineers have traditionally tried to avoid designing devices that transmit at the same time. But over-the-air systems will flip the old, familiar design standards on their head.

One might object that OAC stands to upend decades of existing wireless signal standards that have always presumed data pipes to be data pipes only—not microcomputers as well. Yet we do not anticipate much difficulty merging OAC with existing wireless standards. In a sense, in fact, the IEEE 802.11 and 3GPP (3rd Generation Partnership Project) standards bodies have already shown the way.

A network can set aside certain brief time windows or narrow slices of bandwidth for over‑the‑air computation, and use the rest for ordinary data. From the radio’s point of view, OAC just becomes another operating mode that is turned on when needed and left off the rest of the time.

Over the past decade, both the IEEE and 3GPP have integrated once-experimental technologies into their wireless standards—for example, millimeter-wave mobile communications, multiuser MIMO, beamforming, and network slicing—by defining each new technological advance as an optional feature. OAC, we suggest, can also operate alongside conventional wireless data traffic as an optional service. Because OAC places high demands on timing and accuracy, networks will need the ability to enable or disable over‑the‑air computation on a per‑application basis.

With continued progress, OAC will evolve from lab prototype to standardized wireless capability through the 2020s and into the decade ahead. In the process, the wireless medium will transform from a passive data carrier into an active computational partner—providing essential infrastructure for the real-time intelligent systems that future wireless technologies will demand.

So on that snowy highway sometime in the 2030s, vehicles and sensors won’t wait for permission to think together. Using the emerging over-the-air computation protocols that we’re helping to pioneer, simultaneous computation will be the new default. The networks will work as one.

Cry til you laugh: Chris Pirillo vibe codes his job-search frustrations into brutally honest apps

Chris Pirillo tears up one of his fake rejection letters. (Photo courtesy of Chris Pirillo)

At a time when finding a job in tech has turned into a frustrating cycle of rejections, ghostings or worse, Chris Pirillo‘s work speaks for itself — in that it makes a mockery of the whole process.

Pirillo, the longtime tech enthusiast and entrepreneur, has been showing off his skills by illustrating how hard it is to get anyone to pay attention to his skills. Two of his latest vibe-coded creations include a Resume Analyzer and a pre-rejection letter generator called Dear Applicant.

Each is sure to hit home with job seekers dealing with a seemingly hopeless process, and the recruiters and employers who are hopefully trying to fill roles with some shred of humanity.

The Resume Analyzer is exactly what it sounds like — and nothing like what you’d hope. Users are invited to paste in a job description and upload their CV for what’s billed as a “semantic scan” that generates a “personalized, actionable gap-analysis report.” The punchline, of course, is that no matter what you submit, the verdict is the same: “Nah, you’re f**ked, mate.”

Pirillo said the idea started as a half-joke on social media earlier this week. The response was immediate and resonant enough to convince him the joke was actually a mirror. A few hours later, the app was live.

The Dear Applicant generator arrived shortly after, born from a comment on Threads suggesting the logical next step: a rejection letter that arrives before you’ve even applied. “Imagine all the efficiency gains,” the commenter wrote. Pirillo obliged.

I tested both and left laughing both times, especially at the methodology fine print on the Resume Analyzer, which read, in part: “No resumes were analyzed in the production of this report. No data left your browser. The job market is, in fact, a burning dumpster. This tool confirms what you already suspected. Have you considered goat farming?”

“These apps are funny because they aren’t,” Pirillo told GeekWire via email.

The frustration Pirillo is spoofing is well-documented — and the timing is no coincidence. A report last month from pre-employment testing company Criteria found that more than half of job seekers had been ghosted by an employer in the past year, a three-year high. It comes at a time when tech layoffs have remained brutal: more than 178,000 tech workers were cut in 2025 alone, flooding an already strained job market with qualified candidates competing for fewer openings — and hearing nothing back.

Dear Applicant allows job seekers to pre-generate a rejection letter when applying for a job. (Image via Dear Applicant)

The tension — between the gag and the genuine grievance underneath it — is what gives both tools their edge. Pirillo, who describes himself as more than qualified for positions he applies to, said he’s given up on the traditional job search in favor of fractional and contract work, not because he wants to, but because the alternative feels like shouting into a void.

“That behavior and expectation has been normalized,” he said of ghosting by employers. “The entire process that few of us are in control of teeters on abusive.”

Building the apps, he noted, took less than an hour each — a pointed contrast to the hours job seekers routinely sink into tailoring resumes and cover letters that often vanish without acknowledgment. Pirillo has now shipped more than 300 of these “mini-products” on his Vibe Arcade website and is actively teaching others, technical and non-technical alike, to do the same.

The question he’s asking, implicitly, is whether building things is now a better use of a job-seeker’s time than applying for jobs. For him, the answer is yes — and he’s leaning into what he calls an emerging archetype: the “product developer,” someone who shows their work rather than curates a resume.

“I believe I put more thought into making these apps than any company has in considering my application,” he said. “Probably more than all of them combined — even when someone made a personal referral to the hiring manager.”

He’s even considering attaching Dear Applicant’s pre-rejection letter to his own future applications — partly as an experiment, partly because he says he has nothing to lose.

“Worst that could happen is I get ignored differently,” he said.

Pirillo isn’t pretending the apps are activism — or that they’ll change anything. He knows HR won’t find it funny. But that’s not really who he’s talking to.

“If there was an intent behind these specific apps, it’s not just to evoke a sense of ‘you’re not alone’ but to laugh at the absurdity of the situation some of us find ourselves forced into,” he said.

Previously: Vibe-coding a new reality: Chris Pirillo on the rise of AI-powered apps, features, and founders

Century City will be hosting an immersive fan experience for Apple TV shows

Apple is taking Apple TV off the screen and into the real world with a public fan event designed to showcase its growing slate of original series.

Apple TV 4K

The company will host a free “Think Apple TV” activation at Westfield Century City in Los Angeles across two weekends, running April 23 through April 26 and April 30 through May 3.

Interactive experiences tied to shows include Pluribus, Shrinking, Your Friends & Neighbors, The Morning Show, Slow Horses, Stick, and Margo’s Got Money Troubles.
Continue Reading on AppleInsider | Discuss on our Forums

Estonia is the rare EU country opposing child social media bans

As child social media bans spread across Europe and beyond, Estonia isn’t having it. On Friday, the country’s education minister said the bans won’t “actually solve problems,” while warning that the kids will find a way regardless.

Although companies like Meta would love for you to believe it’s a fairy tale, social media addiction is associated with tangible negative repercussions for children. Studies show that its harms range from depression and anxiety to sleep deprivation and obesity. (The latter is from all the targeted junk food advertising.) On the other hand, teens can find community and support from social media.

A growing list of countries looked at the negative data and concluded that the answer was to ban social media altogether for children. Although the age cutoff varies, legislation has been floated or enacted in Australia, Greece, France, Austria, Spain, Indonesia, Malaysia, the UK and Denmark — just to name a few.

Estonia’s education minister believes these countries are coming at the very real problem from the wrong angle. “The way to approach this, to me, is not to make kids responsible for that harm and start self-regulating,” Kristina Kallas said at a Politico forum in Barcelona. She added that “kids will find very quickly the ways to go around and to still use social media.”

Instead, she said the responsibility lies with governments and corporations. “Europe pretends to be weak when it comes to big American and international corporations,” she added. But she called that a “pretense,” challenging the EU to “actually take this power and start regulating the big American corporations.”

To be fair, the EU regulates the tech industry more effectively than anywhere else in the world. But the point on child social media bans stands.

Another argument against the bans is that it’s a short path from the well-meaning to a more sinister erosion of basic freedoms. In February, France suggested that the next logical step after passing an under-15 social media ban would be to go after VPNs. After all, once you pass the ban, you need to enforce it — and that can mean snuffing out the tools children could use to work around it.

Tech Moves: Syndio names 7 execs; avante and Tanium add to C-suite; Amazon leaders depart

Syndio’s new leadership, top row from left: Erik Darby, Shonna Waters, Devin Luquist. Bottom row: Erin McClintock, Elizabeth Temples, Manuj Bahl and Meredith Conroy. (LinkedIn Photos)

Syndio, a Seattle startup that helps companies analyze and address pay equity, announced seven new additions to its leadership team.

“This next phase of growth requires innovation and velocity,” said Maria Colacurcio, Syndio’s CEO, adding that the new hires bring “proven track records of transforming innovative enterprise solutions into industry-defining brands.”

Several of the new hires come from BetterUp, a professional coaching and training platform. The new leaders include:

  • Erik Darby, president, previously co-founded Motive Software, a San Francisco Bay Area startup that used AI to understand employee and customer experience and was acquired by BetterUp in 2021.
  • Devin Luquist, senior vice president of product, was previously at BetterUp after holding technology and leadership roles at multiple companies.
  • Erin McClintock, SVP of marketing, joined from Workhuman and was formerly at BetterUp.
  • Elizabeth Temples, SVP of revenue, joined from the revenue platform Clari and was previously at BetterUp.
  • Shonna Waters, SVP of executive engagement and insights, is an organizational psychologist and adjunct professor at Georgetown University, and was at BetterUp.
  • Manuj Bahl, VP and head of architecture, joined from Talent.com and previously held roles at Seattle-area companies Microsoft, OfferUp, Apptivate.IO and RealNetworks.
  • Meredith Conroy, VP of account management, joined from Clari.
Rajeev Rajan. (LinkedIn Photo)

Rajeev Rajan joined Stripe to serve as business lead for the payment company’s Revenue and Financial Automation (RFA) division, in which he will oversee product and engineering.

“I’ve long admired the company’s focus on engineering excellence, developer experience, and its ambition to increase the GDP of the internet,” Rajan said on LinkedIn.

Rajan began his career at Microsoft, where he spent 22 years — starting as a software design engineer in 1994 and rising to corporate VP for Office 365. He went on to serve as VP of engineering at Meta, leading the company’s Pacific Northwest engineering hub, before joining Atlassian as chief technology officer. He stepped down from that role last month amid company layoffs.

Nick Cecil. (LinkedIn Photo)

Nick Cecil was promoted to chief technology officer for avante, a Seattle startup whose software helps companies reduce HR administrative workload and provides employees with an AI assistant for benefits guidance.

“What makes Nick unusual as an engineering leader is that he is just as obsessed with the customer as he is with the code. He spends real time understanding the pain our customers feel,” said Rohan D’Souza, avante’s CEO, on LinkedIn.

Cecil joined avante in 2023 as founding head of engineering. Previous roles include leadership positions at Salesforce, Tableau Software and Yapta.

Levent Besik has joined SailPoint, an Austin-based cybersecurity company, as chief product officer. He comes from Microsoft, where he spent four years as VP of product management for the company’s identity authentication platform, covering both human and agentic AI users.

“With the rapid rise of AI, enterprises are urgently seeking solutions to secure, govern, and protect agents end to end,” Besik said in a statement. “The world demands an identity solution that provides AI security and governance across all clouds and platforms.”

Based in the San Francisco Bay Area, Besik previously held roles at Okta and Google. He also has an earlier chapter at Microsoft, having joined in 2002 as a software engineer working on Internet Explorer — a tenure that lasted nearly a decade.

Hélène Bouffard. (LinkedIn Photo)

Hélène Bouffard has left Amazon after more than 17 years, most recently serving as HR director for the Seattle company’s new Artificial General Intelligence division.

She said on LinkedIn that it was “the privilege of a lifetime” to work for the tech giant.

Bouffard is now chief people officer at Circana, a Chicago-based market research and data analytics company. Her role will include aligning employees’ jobs and skills with Circana’s AI-driven efforts.

Chris Atkins. (LinkedIn Photo)

Chris Atkins, director of Amazon Worldwide Operations Sustainability, has resigned after 15 years with the company. In his most recent role, Atkins helped align Amazon’s fulfillment and transportation operations with its climate goals.

“For me, my time at Amazon was truly transformative,” Atkins said on LinkedIn. “I started as a night shift frontline manager working on the ship dock and finished leading ops sustainability for the world’s largest logistics organization.”

Before joining Amazon, Atkins served as an operations manager in the U.S. Army following his graduation from West Point. He is taking a few weeks off before starting a new role, which he plans to announce at a later date.

Jake Oster. (LinkedIn Photo)

Jake Oster, Amazon director of energy, environment and sustainability policy, has resigned after nearly a decade. Oster joined the company in 2017, with his tenure including a stint in Belgium leading policy for AWS.

“It was never dull and constantly rewarding. Every project, document, or accomplishment was the result of collaborative work with some of the smartest people that I have known,” he said on LinkedIn. Oster didn’t indicate his next career move, saying he was taking a “quick respite.”

Prior to Amazon, Oster worked at Seattle’s EnergySavvy, a startup that helped utilities manage their relationships with customers and was acquired in 2019.

Carol MacKinlay. (LinkedIn Photo)

Carol MacKinlay is the new chief people officer for Tanium, a Kirkland, Wash.-based cybersecurity company. MacKinlay, who is based in Carmel, Calif., joins from Pebl, where she served for two years.

MacKinlay has worked as a CPO and in other human resource leadership roles for more than 20 years with previous jobs at Binance, UserTesting and Matterport.

Eric Emans is now chief financial officer for Insurity, a Connecticut startup providing software for insurance carriers and brokers. Emans joins Insurity from the Bellevue, Wash.-based workflow automation company Nintex, where he was CFO for four years. He was previously CFO for Lighthouse and A Place for Mom.

Seattle’s Remitly appointed Adam Messinger, the former CTO of Twitter (now X), to its board of directors. The global remittance company disclosed the news in an SEC filing. Messinger left Twitter a decade ago.

Stephanie Rogers. (LinkedIn Photo)

Paper Crane Factory, a Seattle-based creative branding agency that works exclusively with startups, has named Stephanie Rogers as head of communications and public relations. Rogers will lead the agency’s new East Coast expansion and run its operations there. She joins from DataRobot and brings more than 15 years of experience in communications and PR.

“As we continue to grow, bringing in leadership across disciplines allows us to better support founders at the earliest and most critical stages — from defining their story to scaling it in market,” said Cal McAllister, Paper Crane Factory founder and creative director, in a statement.

Barbara Schmid is leaving Starbucks after nearly 22 years, resigning from the role of Global Coffee and Cocoa Sustainability program manager. Schmid expressed gratitude to the company and colleagues in a LinkedIn post, adding she is grateful “most of all to the coffee and cocoa producers, without whom none of this would have been possible, and who remain the reason behind it all.”

Meta to pay CoreWeave $21bn for additional cloud capacity

AI rival Anthropic has also agreed to rent data centre capacity from CoreWeave.

Meta has agreed to pay CoreWeave around $21bn to access the company’s AI cloud capacity until December 2032.

The new agreement comes after Meta inked a $14.2bn deal with the company in September, taking the total CoreWeave has in Meta contracts to $35bn. Meta is one of CoreWeave’s largest customers, the company said.

CoreWeave stocks jumped to around $97 a share yesterday (9 April). Prices have since settled at around $92.

“This is another example that leading companies are choosing CoreWeave’s AI cloud to run their most demanding workloads,” said Michael Intrator, the co-founder, CEO and chairperson of CoreWeave.

The US cloud compute provider is one of the prime beneficiaries of the AI race, having previously inked an expanded $22.4bn deal with OpenAI last year.

Today (10 April), Anthropic has also agreed to rent data centre capacity from CoreWeave to help train its Claude model. The multi-year agreement will bring compute online starting later this year.

CoreWeave also promised up to £2.5bn in data centre investments in the UK alongside promises from other Big Tech leaders during US president Donald Trump’s visit to the country last September.

Meanwhile, Meta, much like its AI rivals, has been ramping up spending to bolster its position in the race. Earlier this year, the company announced a planned spending of up to $135bn in 2026.

Earlier this week, the company’s Superintelligence Labs launched its debut product Muse Spark, a multimodal, “purpose-built” model for Meta’s own products. The model will be rolled out to several countries via Meta’s Instagram, Facebook and WhatsApp platforms, as well as the company’s AI glasses.

The company has spent billions in major AI acquisitions, including $2bn for the Chinese-founded AI start-up Manus, as well as picking up the viral Moltbook platform and hiring its founders Matt Schlicht and Ben Parr.

The company has also hired the team behind the personal AI agent builder Dreamer, co-founder of Safe Superintelligence Daniel Gross and Apple’s former AI lead Ruoming Pang.

Epic is reportedly building an extraction shooter for Disney

Besides a wealth of Fortnite skins based on Disney IP, it hasn’t really been clear what the entertainment company has gotten in return for its $1.5 billion investment in Epic from 2024. That could change this November, Bloomberg reports, when Epic releases a Disney-themed extraction shooter. The game is one of three Disney projects the publisher is currently working on, and is reportedly expected to be Epic’s comeback after the company laid off 1,000 employees in March due to a “downturn in Fortnite engagement.”

The game is reportedly similar to Arc Raiders, a multiplayer shooter where players fight for resources before escaping through an extraction point, but with Disney characters fighting enemies instead of post-apocalyptic survivors. Bloomberg writes that internal reviewers have worried that the game’s mechanics are “not very original,” but the project is the most promising of the three Epic is developing. The second title received middling internal reviews, according to Bloomberg, and Epic moved resources off the third project “after reports that Disney was disappointed by Epic’s release timeline.”

While details of Epic’s work for Disney are coming into focus, it’s still unclear whether this new extraction shooter will be a standalone game or incorporated as a mode in Fortnite. In its efforts to sell the title as a “multiverse” and a competitor to Roblox, Epic has introduced multiple games inside Fortnite over the last few years with distinct mechanics. The developer announced that it would shut down three of those titles — Rocket Racing, Ballistic and Fortnite Festival Battle Stage — as part of its recent round of layoffs. According to current and former Epic employees Bloomberg spoke to, several affected employees were also working on these unannounced Disney games.

When it invested in Epic in 2024, Disney suggested it would build an “entertainment universe” with the developer, where players could “play, watch, shop and engage with content, characters and stories from Disney, Pixar, Marvel, Star Wars, Avatar, and more.” Epic’s current plans sound far less ambitious than that, but if they manage to increase engagement with Fortnite and Disney’s brand, that might not matter.

Amazon Leo targets mid-2026 commercial launch as enterprise beta goes live

In short: Amazon’s satellite internet service, rebranded from Project Kuiper to Amazon Leo in November 2025, entered enterprise beta on April 8, 2026, with commercial availability targeted for mid-2026 per Andy Jassy’s annual shareholder letter. The service offers three terminal tiers delivering up to 1 Gbps for enterprise users, with Verizon, AT&T, Vodafone, JetBlue, and NASA among the beta partners. Amazon has approximately 210 to 241 satellites in orbit against a Federal Communications Commission requirement of 1,618 by July 30, 2026, has applied for a two-year deadline extension, and has contracted 22 additional launches to close the gap.

From Project Kuiper to Amazon Leo, the rebrand and the beta

Amazon received Federal Communications Commission approval for a 3,236-satellite low-earth-orbit constellation in 2020, then spent five years building the hardware, regulatory infrastructure, and carrier partnerships needed to turn that approval into a commercially viable service. The first production satellites launched in April 2025 aboard an Atlas V rocket operated by United Launch Alliance, and by November 2025 Amazon had enough operational hardware in orbit to retire the Project Kuiper name in favour of Amazon Leo, a rebrand that signals a deliberate shift from development programme to commercial product.

A business preview programme opened to select enterprise partners shortly after the rebrand. The full enterprise beta launched on April 8, 2026. The following day, Jassy’s annual letter to shareholders confirmed mid-2026 as the commercial launch window, placing Leo alongside Amazon’s $50 billion Trainium chip investment as one of the defining bets in the company’s current capital allocation cycle.

Beta customers span Verizon and AT&T in North America, Vodafone and Vodacom across Europe and Africa, JetBlue for in-flight connectivity, NBN Co in Australia, Vrio in Latin America, and NASA, along with enterprise logistics clients Hunt Energy and Crane Worldwide Logistics.

Three terminals, three speed tiers

Amazon has engineered three terminal models to address distinct market segments without forcing a single hardware compromise across all of them. The Leo Nano is the consumer and light-enterprise unit: seven inches square, 2.2 pounds, and rated to 100 Mbps download. The Leo Pro is aimed at small businesses, rural operators, and mobile backhaul deployments: eleven inches square, 5.3 pounds, priced at under $400, and rated to 400 Mbps. The Leo Ultra is the enterprise flagship, a 20-by-30-inch installation weighing 43 pounds and capable of 1 Gbps download with 400 Mbps upload, designed for maritime vessels, commercial aircraft, and large-campus enterprise deployments. Jassy claimed in his shareholder letter that Leo terminals deliver six to eight times better uplink performance and twice the downlink performance compared with the satellite internet alternatives currently available to enterprise customers, a claim that will be scrutinised closely once commercial service begins and independent benchmarks are possible.
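
The three tiers are easiest to compare side by side. The figures below are the ones quoted above; the selection helper is purely illustrative, not anything Amazon ships:

```python
# Published spec figures from the article; selection logic is illustrative.
LEO_TERMINALS = [
    {"name": "Leo Nano",  "down_mbps": 100,  "weight_lb": 2.2},
    {"name": "Leo Pro",   "down_mbps": 400,  "weight_lb": 5.3},
    {"name": "Leo Ultra", "down_mbps": 1000, "weight_lb": 43.0},
]

def smallest_terminal_for(required_mbps: int) -> str:
    """Pick the lightest terminal that meets a downlink requirement."""
    for t in LEO_TERMINALS:  # list is ordered lightest to heaviest
        if t["down_mbps"] >= required_mbps:
            return t["name"]
    raise ValueError("No Leo terminal meets that requirement")

print(smallest_terminal_for(250))  # → Leo Pro
```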

The FCC deadline and the launch shortfall

Amazon’s FCC licence for its Generation 1 constellation requires exactly half the planned 3,236 satellites, or 1,618, to be in orbit and operational by July 30, 2026. As of early April 2026, Amazon has between 210 and 241 satellites in orbit, a figure that makes the original deadline effectively unreachable. The company filed a formal request with the FCC in January 2026 for a two-year extension, citing a shortage of available launch vehicles.

Alongside the extension filing, Amazon disclosed ten additional Falcon 9 launch contracts with SpaceX and twelve additional New Glenn contracts with Blue Origin. Bezos is betting heavily on orbital infrastructure beyond Leo itself: Blue Origin separately filed with the FCC for a 51,600-satellite Project Sunrise constellation and a 5,408-satellite TeraWave optical backhaul network, making the New Glenn launch pipeline central to multiple overlapping ambitions simultaneously.

The FCC separately approved Amazon’s Generation 2 constellation in February 2026, clearing the path to a potential 7,727-satellite network once the current launch bottleneck is resolved. The contracted vehicle fleet now spans Atlas V and Vulcan Centaur (United Launch Alliance), Falcon 9 (SpaceX), Ariane 6 (Arianespace), and New Glenn (Blue Origin).
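
The scale of the shortfall is easy to check with back-of-envelope arithmetic, using the article's upper figure of 241 satellites in orbit and the 22 newly disclosed launch contracts:

```python
required = 3236 // 2          # FCC milestone: half the Gen 1 constellation
in_orbit = 241                # upper end of the article's current count
launches_contracted = 22      # 10 Falcon 9 + 12 New Glenn

gap = required - in_orbit
per_launch = gap / launches_contracted

print(f"Milestone: {required} satellites")     # 1618
print(f"Shortfall: {gap} satellites")          # 1377
print(f"Needed per launch: {per_launch:.0f}")  # roughly 63 per flight
```

Clearing the gap on those 22 flights alone would mean deploying roughly 63 satellites per launch before the July deadline, which underscores why Amazon is seeking the extension.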

Taking on Starlink, and the Globalstar play

Starlink is not a vulnerable incumbent. SpaceX’s satellite internet service generated $10.6 billion in revenue in 2025 at a 54 per cent EBITDA margin and serves more than 10 million paying subscribers across more than 100 countries, operating a constellation of 7,600 to 8,000-plus satellites. SpaceX has filed for the largest IPO in history, seeking to raise $75 billion at a valuation of up to $1.75 trillion, potentially as early as June 2026, which would cement Starlink’s position as a capital-markets-validated infrastructure business before Amazon Leo has completed its initial rollout.

Amazon’s response involves two distinct moves. The first is distribution: Leo is being sold primarily through carrier partners and enterprise integrators in its launch phase, using Verizon’s, AT&T’s, and Delta’s existing customer relationships rather than competing for consumers directly. Delta has contracted Leo for in-flight Wi-Fi across 500 aircraft starting in 2028, with free access available to SkyMiles members.

The second move is spectrum acquisition. Amazon is reportedly in talks to acquire Globalstar for approximately $9 billion, a deal that would give Leo access to L-band spectrum currently anchoring Globalstar’s existing satellite network and Apple’s emergency satellite connectivity service. Apple holds a 20 per cent stake in Globalstar through a $1.5 billion investment, adding complexity to any acquisition. If the deal closes, Amazon would arrive at commercial launch with not just a new constellation but a second frequency band and an established spectrum position. The year 2025 established satellite internet as a serious enterprise infrastructure market rather than a connectivity experiment, and Amazon Leo’s mid-2026 commercial launch arrives precisely as that market enters its most contested phase.

You won’t believe this $599 Android tablet includes a built-in projector, infrared night vision, and extreme durability features


  • 8849 TANK Pad Ultra 1080p projector accurately projects clear images from 0.5 to 4 meters
  • Night vision camera captures usable images even in near-total darkness conditions
  • Rugged chassis resists drops, dust, and water for harsh environments

The 8849 TANK Pad Ultra is a rugged Android tablet which combines a 10.95 inch FHD 1200 x 1920 display with a built-in 1080p DLP projector rated at 260 lumens.

The projector can auto-focus and project images from 0.5 to 4 meters, supported by a micro-ranging laser which helps fine-tune the focal distance.

Game development diary: TestFlight, trial by fire, and a trophy

The in-development word game “Character Limit” went before testers over the last two months, and as the TestFlight beta got underway, an unexpected game convention opportunity turned out especially well.

A tale of two tests: TestFlight and a gaming convention.

Back in early February, Character Limit had reached a good stopping point to get some testing done with real players. Much of the work was complete, so it was time for bug fixing, polishing, and some real feedback.

This previously came in the form of visits to meet other game developers in Cardiff for brief sessions. But you can only go so far with feedback from a kind audience.