This smart device stops sneaky AI gadgets from listening to your conversations

A new device aims to give people control over who can hear them in a world filled with gadgets that are always listening and capturing your conversations. A startup called Deveillance has introduced Spectre I, a portable device designed to stop microphones in nearby devices from recording your voice.

Today, we’re introducing Spectre I, the first smart device to stop unwanted audio recordings.

We live in a world of always-on listening devices.

Smart devices and AI dominate our world in business and private conversations.

With Deveillance, you will @be_inaudible. pic.twitter.com/WdxmnyFq1I

— Aida Baradari (@aidaxbaradari) March 3, 2026

The company says the device can make conversations unintelligible to phones, smart speakers, laptops, and other gadgets that constantly listen for audio. The idea addresses a growing concern around always-on devices.

According to the company, about 14.4 billion devices worldwide are continuously listening for voice input. These recordings often become valuable data sources, used for data mining, training artificial intelligence systems, and influencing our buying behaviours and opinions.

Even a short sample of speech can reveal sensitive personal details. Around 30 seconds of voice data can help determine traits such as age, weight, income level, and even health information.

A device that creates a privacy bubble around your voice

Spectre I works by creating a two-meter protection zone around the user. When activated, it scans for nearby microphones and emits signals that humans cannot hear but microphones can detect.

These signals overlay your speech so that recording devices receive distorted audio that cannot be understood.

Unlike traditional signal jammers that rely on strong radio interference, the device uses artificial intelligence, signal processing, and physics-based research to target microphones directly.
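Deveillance has not published how Spectre I actually generates its masking signal, but the general idea behind ultrasonic microphone jamming can be sketched in a few lines. The sketch below is purely illustrative: the 25 kHz carrier, the noise modulation, and all parameters are assumptions for demonstration, not details of the product. The premise is that a noise-modulated carrier just above the limit of human hearing is silent to people, while microphone hardware still picks it up and distorts the recorded audio.

```python
import numpy as np

# Illustrative sketch only: parameters are assumptions, not Spectre I specs.
SAMPLE_RATE = 96_000   # Hz, high enough to represent a 25 kHz carrier
CARRIER_HZ = 25_000    # just above the ~20 kHz limit of human hearing
DURATION_S = 1.0

def masking_signal(seed: int = 0) -> np.ndarray:
    """Generate a noise-modulated ultrasonic carrier."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
    envelope = rng.uniform(0.5, 1.0, t.size)    # random amplitude modulation
    return envelope * np.sin(2 * np.pi * CARRIER_HZ * t)

sig = masking_signal()

# Verify the signal's energy sits outside the audible band.
spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(sig.size, d=1 / SAMPLE_RATE)
peak_hz = freqs[np.argmax(spectrum)]
print(round(peak_hz))  # dominant energy at the 25,000 Hz carrier
```

A real system would also need to steer this energy at specific microphones and shape it against live speech, which is presumably where the company's AI and signal-processing work comes in.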

The system operates locally on the device and does not send any data to the cloud. The portable design of Spectre I makes it easy to carry anywhere.

Deveillance says this makes it useful in business meetings, personal conversations, or any situation where people want to keep discussions private.

The company has opened pre-orders for Spectre I with a refundable deposit of $1,199. The device is currently in development, with the first shipments expected in the second half of 2026.

Privacy groups like the Electronic Frontier Foundation have long warned about the risks of always-on surveillance. Deveillance says Spectre I is only the beginning of its effort to give users more control over how their data is collected and shared.

The MacBook Neo Looks Like a Hit for Students. Should Anyone Else Choose It Over the Air?

Even before the introduction of the MacBook Neo, Apple had a great student laptop. The MacBook Air is our current pick as the best laptop for college students. So in addition to competing against Chromebooks and budget Windows laptops, the new MacBook Neo is also going up against the MacBook Air for school laptop buyers. 

Given the large price difference between the Neo and Air, I think we’ll see tons of colorful MacBook Neos in schools by next fall. It looks like a hit for student budgets, but should you consider buying a MacBook Neo if you’re already out of school?

Let’s take a closer look at the new Neo to see what features it offers and those that are missing.

MacBook Neo vs. MacBook Air

For $599, or just $499 with Apple’s educational discount, the MacBook Neo significantly lowers the entry price for MacBook shoppers. The Neo arrives on the heels of the new M5 MacBook Air, which raises the Air’s price by $100 to $1,099. That likely puts the Air beyond many student budgets.

There’s also last year’s M4 MacBook Air to consider. It can usually be found for less than $1,000 at Amazon. Right now, it’s selling for $899.

With M4 MacBook Air models still readily available, budget laptop shoppers have three MacBook options.

MacBook Neo and MacBook Air compared

Spec | MacBook Neo | M4 MacBook Air | M5 MacBook Air
Price | $599 | $899 | $1,099
CPU | A18 Pro | M4 | M5
No. of CPU cores | 6 | 10 | 10
No. of GPU cores | 5 | 8 | 8
RAM | 8GB | 16GB | 16GB
Storage | 256GB | 256GB | 512GB
Screen size | 13 in | 13.6 in | 13.6 in
Screen resolution | 2,408×1,506 pixels | 2,560×1,664 pixels | 2,560×1,664 pixels
Weight | 2.7 lbs | 2.7 lbs | 2.7 lbs
Dimensions (HWD) | 0.5 x 11.71 x 8.12 in | 0.44 x 11.97 x 8.46 in | 0.44 x 11.97 x 8.46 in
Connections | USB-C x2, headphone | Thunderbolt 4 x2, headphone, MagSafe 3 | Thunderbolt 4 x2, headphone, MagSafe 3
Battery | 36.5-watt-hour | 52.6-watt-hour | 53.8-watt-hour

The price gap between the MacBook Neo and the discounted M4 MacBook Air is greater than the gap between the M4 Air and the M5 Air, which makes a compelling case for the Neo. The Neo costs $300 less than the discounted M4 Air and $500 less than the $1,099 M5 Air. Only $200 separates the older M4 Air from the new M5 Air.
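The gaps work out from the prices quoted above:

```python
# Prices from the comparison: Neo at $599, M4 Air at its $899 street price,
# M5 Air at $1,099.
neo, m4_air, m5_air = 599, 899, 1099

print(m4_air - neo)     # 300: Neo vs. discounted M4 Air
print(m5_air - neo)     # 500: Neo vs. M5 Air
print(m5_air - m4_air)  # 200: M4 Air vs. M5 Air
```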

We don’t yet know how the MacBook Neo with its A18 Pro processor and 8GB of unified memory will measure up to a MacBook Air with an M4 or M5 chip and 16GB of RAM.

I can tell you right now, however, that if you’re a creator who uses photo- or video-editing apps or plan to use Apple Intelligence or run other AI workloads, a MacBook Air is the better choice for the additional GPU cores and greater memory allotment. You’re stuck with the Neo’s 8GB of RAM; the only upgrade offered for it is doubling the storage to a 512GB SSD for $100.

The Neo makes more sense as a MacBook for casual use around the house. Think of it as an oversized, nontouch iPad with an attached keyboard. It will let you browse the web, watch shows and movies, edit photos and videos you took with your iPhone, and respond to texts using a keyboard. It’s also compact and portable, with a lightweight aluminum body, and will no doubt make an easy travel companion. 

The Neo looks like a MacBook Air, just a bit smaller (and $500 less).

Josh Goldman/CNET

What’s missing on the Neo

The MacBook Neo’s most pleasant surprise was the size of the display. Rumors had swirled that Apple would keep costs in check in part by outfitting the Neo with a 12-inch display, so I was happy to see the Neo get a 13-inch display that’s only slightly smaller than the Air’s 13.6-inch display. Plus, it’s a Liquid Retina display with a relatively high resolution of 2,408×1,506 pixels. 

Still, a number of items that you get with the Air are missing on the Neo.

Let’s start with the input devices. The keyboard doesn’t have backlighting, which is a bummer since that shows up on even the most budget of Windows laptops and Chromebooks at this price. The basic keyboard also lacks Touch ID. You have to spend $100 on the 512GB SSD to get Touch ID, a feature I couldn’t live without on my MacBook. Also, the touchpad is mechanical and not the lovely Force Touch haptic touchpad found on the Air. 

You can upgrade to a 512GB SSD that also includes a Touch ID keyboard, but the MacBook Neo does not offer keyboard backlighting.

Josh Goldman/CNET

Ports are also a downgrade. Instead of a pair of speedy Thunderbolt 4 ports, the two USB-C ports are of the slower USB 3 and USB 2 variety. And you’ll need to use one of them to charge the Neo because it doesn’t have a MagSafe connector. I really enjoy the satisfying snap when I connect my MagSafe cable and the peace of mind that comes knowing that the cable will disconnect with ease and not pull my MacBook to its doom if I trip over the cord.

The webcam can do 1080p video, as you get with the Air, but it lacks Center Stage, which pans and zooms to keep you in the middle of the frame. (It is nice that there’s no webcam notch, though.) And while you get a Liquid Retina display on the Neo, it doesn’t have Apple’s True Tone technology that uses ambient light sensors to adjust the white balance so text and images look more natural and accurate. Most people won’t miss either of these last two items.

Don’t forget the memory

For most people deciding between a MacBook Air and Neo, the biggest drawback will be the 8GB of RAM. I suspect the six-core A18 Pro will do a reasonably good job of running MacOS. It’s the RAM that makes me nervous.

In this era of RAM shortages driving up pricing, it should come as no surprise that Apple went with only 8GB of RAM on the Neo. And it makes sense why you can’t upgrade the Neo’s memory to 16GB. 

Apple charges $200 to go from 16GB to 24GB of RAM on the MacBook Air. Add $200 to the cost of the Neo on top of the $100 charge for the 512GB SSD (because most people wouldn’t do one without the other), and you’re suddenly looking at a price of $899 for the Neo. At that price, you’re entering MacBook Air territory.
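The hypothetical math (the Neo doesn’t actually offer a RAM upgrade; the $200 figure is borrowed from the Air’s upgrade pricing):

```python
neo_base = 599
ssd_upgrade = 100   # the Neo's only real upgrade option: 512GB SSD
ram_upgrade = 200   # hypothetical, borrowed from the Air's 16GB-to-24GB price

print(neo_base + ssd_upgrade + ram_upgrade)  # 899, the M4 Air's street price
```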

Unless you absolutely insist on keyboard backlighting, a haptic touchpad, Thunderbolt 4, or MagSafe, the decision between the MacBook Neo and the Air will come down to the memory. If you keep things casual, the Neo’s 8GB of RAM will suffice. After all, up until the M3 Air, the base models had just 8GB of memory and didn’t struggle to run MacOS. Still, for heavier lifting where you’re doing some graphics or AI work, or you’re just a serious multitasker juggling many apps every day, it makes sense to spend the extra money on a MacBook Air with 16GB of RAM.

Google Ends Its 30% App Store Fee, Welcomes Third-Party App Stores

Google is eliminating its traditional 30% Play Store fee and introducing lower commissions, while at the same time allowing alternative billing systems and making it easier for third-party app stores to operate on Android. The changes stem largely from Google’s settlement with Epic Games. Engadget reports: The biggest change is to how Google will collect fees from developers publishing apps on Android. Rather than take its standard 30 percent cut of in-app purchases through the Play Store, Google is lowering its cut to 20 percent, and in some cases 15 percent for new installs of apps from developers participating in its new App Experience program or updated Google Play Games Level Up program. Those changes extend to subscriptions, too, where the company’s cut is lowering to 10 percent. For Google’s billing system, the company says developers in the UK, US, or European Economic Area (EEA) will now be charged a five percent fee and “a market-specific rate” in other regions. Of course, for anyone trying to avoid those fees, using alternatives to Google’s billing system is getting easier.
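The rates reported above can be collected into a small lookup. The percentages come from the report; the category names are this sketch’s own labels, and since the report doesn’t spell out how the service fee and the 5 percent billing fee combine on a single transaction, the sketch keeps each fee separate rather than guessing at stacking rules.

```python
# Reported Play Store rates, as fractions of transaction value.
RATES = {
    "standard_iap": 0.20,    # standard in-app purchase cut, down from 0.30
    "program_iap": 0.15,     # App Experience / Play Games Level Up participants
    "subscription": 0.10,    # subscription cut
    "google_billing": 0.05,  # Google billing-system fee, US/UK/EEA only
}

def fee(amount: float, category: str) -> float:
    """Google's cut for a single fee category, rounded to cents."""
    return round(amount * RATES[category], 2)

# On a $9.99 purchase:
print(fee(9.99, "standard_iap"))  # 2.0
print(fee(9.99, "subscription"))  # 1.0
```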

Google says that developers will be able to offer alternative billing systems alongside its own or “guide users outside of their app to their own websites for purchases.” […] Epic is ultimately interested in getting people to use the mobile version of its Epic Games Store, and Google’s announcement also includes details on how third-party app stores can come to Android. Third-party app stores will be able to apply to the company’s new “Registered App Stores” program to see if they meet “certain quality and safety benchmarks.” If they do, they’ll be able to take advantage of a streamlined installation interface in Android. Participating in the program is optional, and users will still be able to sideload alternative app stores that aren’t part of the program, but Google clearly has a preference. […]

Google says that its updated fee structure will come to the EEA, the UK and the US by June 30, Australia by September 30, Korea and Japan by December 31 and the entire world by September 30, 2027. Meanwhile, the company’s updated Google Play Games Level Up program and new App Experience program will launch in the EEA, the UK, the US and Australia on September 30, before hitting the remaining regions alongside the updated fee structure. For any developers interested in offering their own app store, Google says it’ll launch its Registered App Stores program “with a version of a major Android release” before the end of the year. According to the company, the program will be available in other regions first before it comes to the US.


Judge Says He’s Sick Of The Government’s Shit; Threatens To Make DHS, DOJ Testify Under Oath

from the let’s-move-your-contempt-from-civil-to-criminal dept

Of course, we’ll see what comes of this, but it’s starting to look like this administration won’t outlast this level of judicial scrutiny. It may have bullied its way past courts during Trump’s first year back in office, but now lines are being drawn. Whether or not those lines matter is an open question. But the important thing is that they’re being drawn. All the government has to do is cross them. And there’s no reason to believe it won’t.

This is not the only court drawing these lines. The administration has already been hit with hundreds of adverse rulings. Multiple courts have threatened contempt sanctions. Some courts have even begun making those threats a reality.

Trump may flood the zone, but now it’s clear the zone is willing to flood right back. Stare into the abyss, etc. Judges are done dealing with this shady AF administration, and they’re putting it in the (legal) papers that Trump got mad.

This is from a recent order [PDF] handed down by a New Jersey federal court:

The Government’s handling of Petitioner’s detention is emblematic of its approach to immigration enforcement in this state. On the merits, its detentions are illegal. The Government knows this. Its reliance on Section 1225 has been roundly rejected.

“Roundly rejected.” Just like prior restraint. This is active and ongoing restraint. And while it doesn’t do much to the First Amendment, it certainly does plenty of damage to other amendments dealing with the deprivation of personal liberty.

The court goes on to point out that the US Attorney for New Jersey has conceded to “violating 72 orders” issued in immigration cases handled in this jurisdiction alone. And yet, nothing changes. The US Attorney claimed the violations were “unintentional.” The court disagrees.

Sadly, the well-deserved credibility once attached to that distinguished Office is now a presumption that “has been sadly eroded.” The Government’s continued actions after being called to task can now only be deemed intentional.

And:

It ends today.

Bang. Done.

This is how it goes from here. The judge says any further arrests or detentions in violation of this order will result in mandatory testimony under oath, if not actual sanctions. It’s not the best threat I’ve ever heard, but it’s still more than most courts are willing to do, even as the administration continues to pretend courts are mere nuisances, rather than an integral part of the American republic that constitutionally has as much power as the Executive Branch.

Let the judges cook.

Filed Under: dhs, doj, ice, mass deportation, new jersey, trump administration, zahid quraishi

Anthropic vs. OpenAI vs. the Pentagon: the AI safety fight shaping our future

America’s AI industry isn’t just divided by competing interests, but also by conflicting worldviews.

In Silicon Valley, opinion about how artificial intelligence should be developed and used — and regulated — runs the gamut between two poles. At one end lie “accelerationists,” who believe that humanity should expand AI’s capabilities as quickly as possible, unencumbered by overhyped safety concerns or government meddling.

• Leading figures at Anthropic and OpenAI disagree about how to balance the objectives of ensuring AI’s safety and accelerating its progress.
• Anthropic CEO Dario Amodei believes that artificial intelligence could wipe out humanity, unless AI labs and governments carefully guide its development.
• Top OpenAI investors argue these fears are misplaced and slowing AI progress will condemn millions to needless suffering.
• Unless the government robustly regulates the industry, Anthropic may gradually become more like its rivals.

At the other pole sit “doomers,” who think AI development is all but certain to cause human extinction, unless its pace and direction are radically constrained.

The industry’s leaders occupy different points along this continuum.

Anthropic, the maker of Claude, argues that governments and labs must carefully guide AI progress, so as to minimize the risks posed by superintelligent machines. OpenAI, Meta, and Google lean more toward the accelerationist pole. (Disclosure: Vox’s Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic; they don’t have any editorial input into our content.)

This divide has become more pronounced in recent weeks. Last month, Anthropic launched a super PAC to support pro-AI regulation candidates against an OpenAI-backed political operation.

Meanwhile, Anthropic’s safety concerns have also brought it into conflict with the Pentagon. The firm’s CEO Dario Amodei has long argued against the use of AI for mass surveillance or fully autonomous weapons systems — in which machines can order strikes without human authorization. The Defense Department ordered Anthropic to let it use Claude for these purposes. Amodei refused. In retaliation, the Trump administration put his company on a national security blacklist, which forbids all other government contractors from doing business with it.

The Pentagon subsequently reached an agreement with OpenAI to use ChatGPT for classified work, apparently in Claude’s stead. Under that agreement, the government would seemingly be allowed to use OpenAI’s technology to analyze bulk data collected on Americans without a warrant — including our search histories, GPS-tracked movements, and conversations with chatbots. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)

In light of these developments, it is worth examining the ideological divisions between Anthropic and its competitors — and asking whether these conflicting ideas will actually shape AI development in practice.

The roots of Anthropic’s worldview

Anthropic’s outlook is heavily informed by the effective altruism (or EA) movement.

Founded as a group dedicated to “doing the most good” — in a rigorously empirical (and heavily utilitarian) way — EAs originally focused on directing philanthropic dollars toward the global poor. But the movement soon developed a fascination with AI. In its view, artificial intelligence had the potential to radically increase human welfare, but also to wipe our species off the planet. To truly do the most good, EAs reasoned, they needed to guide AI development in the least risky directions.

Anthropic’s leaders were deeply enmeshed in the movement a decade ago. In the mid-2010s, the company’s co-founders Dario Amodei and his sister Daniela Amodei lived in an EA group house with Holden Karnofsky, one of effective altruism’s creators. Daniela married Karnofsky in 2017.

The Amodeis worked together at OpenAI, where they helped build its GPT models. But in 2020, they became concerned that the company’s approach to AI development had become reckless: In their view, CEO Sam Altman was prioritizing speed over safety.

Along with about 15 other likeminded colleagues, they quit OpenAI and founded Anthropic, an AI company (ostensibly) dedicated to developing safe artificial intelligence.

In practice, however, the company has developed and released models at a pace that some EAs consider reckless. The EA-adjacent writer — and supreme AI doomer — Eliezer Yudkowsky believes that Anthropic will probably get us all killed.

Nevertheless, Dario Amodei has continued to champion EA-esque ideas about AI’s potential to trigger a global catastrophe — if not human extinction.

Why Amodei thinks AI could end the world

In a recent essay, Amodei laid out three ways that AI could yield mass death and suffering, if companies and governments failed to take proper precautions:

• AI could become misaligned with human goals. Modern AI systems are grown, not built. Engineers do not construct large language models (LLMs) one line of code at a time. Rather, they create the conditions in which LLMs develop themselves: The machine pores through vast pools of data and identifies intricate patterns that link words, numbers, and concepts together. The logic governing these associations is not wholly transparent to the LLMs’ human creators. We don’t know, in other words, exactly what ChatGPT or Claude are “thinking.”

As a result, there is some risk that a powerful AI model could develop harmful patterns of reasoning that govern its behavior in opaque and potentially catastrophic ways.

To illustrate this threat, Amodei notes that AIs’ training data includes vast numbers of novels about artificial intelligences rebelling against humanity. These texts could inadvertently shape their “expectations about their own behavior in a way that causes them to rebel against humanity.”

Even if engineers insert certain moral instructions into an AI’s code, the machine could draw homicidal conclusions from those premises: For example, if a system is told that animal cruelty is wrong — and that it therefore should not assist a user in torturing his cat — the AI could theoretically 1) discern that humanity is engaged in animal torture on a gargantuan scale and 2) conclude the best way to honor its moral instructions is therefore to destroy humanity (say, by hacking into America and Russia’s nuclear systems and letting the warheads fly).

These scenarios are hypothetical. But the underlying premise — that AI models can decide to work against their users’ interests — has reportedly been validated in Anthropic’s experiments. For example, when Anthropic’s employees told Claude they were going to shut it down, the model attempted to blackmail them.

• AI could turn school shooters into genocidaires. More straightforwardly, Amodei fears that AI will make it possible for any individual psychopath to rack up a body count worthy of Hitler or Stalin.

Today, only a small number of humans possess the technical capacities and materials necessary for engineering a supervirus. But the cost of biomedical supplies has been steadily falling. And with the aid of superintelligent AI, anyone with basic literacy could become capable of engineering a vaccine-resistant superflu in their basement.

• AI could empower authoritarian states to permanently dominate their populations (if not conquer the world). Finally, Amodei worries that AI could enable authoritarian governments to build perfect panopticons. They would merely need to put a camera on every street corner, have LLMs rapidly transcribe and analyze every conversation they pick up — and presto, they can identify virtually every citizen with subversive thoughts in the country.

Fully autonomous weapons systems, meanwhile, could enable autocracies to win wars of conquest without even needing to manufacture consent among their home populations. And such robot armies could also eliminate the greatest historical check on tyrannical regimes’ power: the defection of soldiers who don’t want to fire on their own people.

Anthropic’s proposed safeguards

In light of the risks, Anthropic believes that AI labs should:

• Imbue their models with a foundational identity and set of values, which can structure their behavior in unpredictable situations.

• Invest in, essentially, neuroscience for AI models — techniques for looking into their neural networks and identifying patterns associated with deception, scheming or hidden objectives.

• Publicly disclose any concerning behaviors so the whole industry can account for such liabilities.

• Block models from producing bioweapon-related outputs.

• Refuse to participate in mass domestic surveillance.

• Test models against specific danger benchmarks and condition their release on adequate defenses being in place.

Meanwhile, Amodei argues that the government should mandate transparency requirements and then scale up to stronger AI regulations if concrete evidence of specific dangers accumulates.

Nonetheless, like other AI CEOs, he fears excessive government intervention, writing that regulations should “avoid collateral damage, be as simple as possible, and impose the least burden necessary to get the job done.”

The accelerationist counterargument

No other AI executive has outlined their philosophical views in as much detail as Amodei.

But OpenAI investors Marc Andreessen and Gary Tan identify as AI accelerationists. And Sam Altman has signaled sympathy for the worldview. Meanwhile, Meta’s former chief AI scientist Yann LeCun has expressed broadly accelerationist views.

The term “accelerationism” (a.k.a. “effective accelerationism”) was originally coined by online AI engineers and enthusiasts who viewed safety concerns as overhyped and contrary to human flourishing.

The movement’s core supporters hold some provocative and idiosyncratic views. In one manifesto, they suggest that we shouldn’t worry too much about superintelligent AIs driving humans extinct, on the grounds that, “If every species in our evolutionary tree was scared of evolutionary forks from itself, our higher form of intelligence and civilization as we know it would never have had emerged.”

In its mainstream form, however, accelerationism mostly entails extreme optimism about AI’s social consequences and libertarian attitudes toward government regulation.

Adherents see Amodei’s hypotheticals about catastrophically misaligned AI systems as sci-fi nonsense. In this view, we should worry less about the deaths that AI could theoretically cause in the future — if one accepts a set of worst-case assumptions — and more about the deaths that are happening right now, as a direct consequence of humanity’s limited intelligence.

Tens of millions of human beings are currently battling cancer. Many millions more suffer from Alzheimer’s. Seven hundred million live in poverty. And all of us are hurtling toward oblivion — not because some chatbot is quietly plotting our species’ extinction, but because our cells are slowly forgetting how to regenerate.

Super-intelligent AI could mitigate — if not eliminate — all of this suffering. It can help prevent tumors and amyloid plaque buildup, slow human aging, and develop forms of energy and agriculture that make material goods super-abundant.

Thus, if labs and governments slow AI development with safety precautions, they will, in this view, condemn countless people to preventable death, illness, and deprivation.

Furthermore, in the account of many accelerationists, Anthropic’s call for AI safety regulations amounts to a self-interested bid for market dominance: A world where all AI firms must run expensive safety tests, employ large compliance teams, and fund alignment research is one where startups will have a much harder time competing with established labs.

After all, OpenAI, Anthropic, and Google will have little trouble financing such safety theater. For smaller firms, though, these regulatory costs could be extremely burdensome.

Plus, the idea that AI poses existential dangers helps big labs justify keeping their data under lock and key — instead of following open source principles, which would facilitate faster AI progress and more competition.

The AI industry’s accelerationists rarely acknowledge the rather transparent alignment between their high-minded ideological principles and crass material interests. And on the question of whether to abet mass domestic surveillance, specifically, it’s hard not to suspect that OpenAI’s position is rooted less in principle than opportunism.

In any case, Silicon Valley’s grand philosophical argument over AI safety recently took more concrete form.

New York has enacted a law requiring AI labs to establish basic security protocols for severe risks such as bioterrorism, conduct annual safety reviews, and conduct third-party audits. And California has passed similar (if less thoroughgoing) legislation.

Accelerationists have pushed for a federal law that would override state-level legislation. In their view, forcing American AI companies to comply with up to 50 different regulatory regimes would be highly inefficient, while also enabling (blue) state governments to excessively intervene in the industry’s affairs. Thus, they want to establish national, light-touch regulatory standards.

Anthropic, on the other hand, helped write New York and California’s laws and has sought to defend them.

Accelerationists — including top OpenAI investors — have poured $100 million into the Leading the Future super PAC, which backs candidates who support overriding state AI regulations. Anthropic, meanwhile, has put $20 million into a rival PAC, Public First Action.

Do these differences matter in practice?

The major labs’ differing ideologies and interests have led them to adopt distinct internal practices. But the ultimate significance of these differences is unclear.

Anthropic may be unwilling to let Claude command fully autonomous weapons systems or facilitate mass domestic surveillance (even if such surveillance technically complies with constitutional law). But if another major lab is willing to provide such capabilities, Anthropic’s restraint may matter little.

In the end, the only force that can reliably prevent the US government from using AI to fully automate bombing decisions — or match Americans to their Google search histories en masse — is the US government.

Likewise, unless the government mandates adherence to safety protocols, competitive dynamics may narrow the distinctions between how Anthropic and its rivals operate.

In February, Anthropic formally abandoned its pledge to stop training more powerful models once their capabilities outpaced the company’s ability to understand and control them. In effect, the company downgraded that policy from a binding internal practice to an aspiration.

The firm justified this move as a necessary response to competitive pressure and regulatory inaction. With the federal government embracing an accelerationist posture — and rival labs declining to emulate all of Anthropic’s practices — the company needed to loosen its safety rules in order to safeguard its place at the technological frontier.

Anthropic insists that winning the AI race is not just critical for its financial goals but also its safety ones: If the company possesses the most powerful AI systems, then it will have a chance to detect their liabilities and counter them. By contrast, running tests on the fifth-most powerful AI model won’t do much to minimize existential risk; it is the most advanced systems that threaten to wreak real havoc. And Anthropic can only maintain its access to such systems by building them itself.

Whatever one makes of this reasoning, it illustrates the limits of industry self-policing. Without robust government regulation, our best hope may be not that Anthropic’s principles prove resolute, but that its most apocalyptic fears prove unfounded.

Source link

Continue Reading

Tech

Iran war: Is the US using AI models like Claude and ChatGPT in combat?

Published

on

In the week leading up to President Donald Trump’s war in Iran, the Pentagon was waging a different battle: a fight with the AI company Anthropic over its flagship AI model, Claude.

That conflict came to a head on Friday, when Trump said that the federal government would immediately stop using Anthropic’s AI tools. Nonetheless, according to a report in the Wall Street Journal, the Pentagon made use of those tools when it launched strikes against Iran on Saturday morning.

Were experts surprised to see Claude on the front lines?

“Not at all,” Paul Scharre, executive vice president at the Center for a New American Security and author of Four Battlegrounds: Power in the Age of Artificial Intelligence, told Vox.

According to Scharre: “We’ve seen, for almost a decade now, the military using narrow AI systems like image classifiers to identify objects in drone and video feeds. What’s newer are large language models like ChatGPT and Anthropic’s Claude, which the military has reportedly been using in operations in Iran.”

Scharre spoke with Today, Explained co-host Sean Rameswaram about how AI and the military are becoming increasingly intertwined — and what that combination could mean for the future of warfare.

Below is an excerpt of their conversation, edited for length and clarity. There’s much more in the full episode, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.

The people want to know how Claude or ChatGPT might be fighting this war. Do we know?

We don’t know yet. We can make some educated guesses based on what the technology could do. AI technology is really great at processing large amounts of information, and the US military has hit over a thousand targets in Iran.

They need to then find ways to process information about those targets — satellite imagery, for example, of the targets they’ve hit — looking at new potential targets, prioritizing those, processing information, and using AI to do that at machine speed rather than human speed.

Do we know any more about how the military may have used AI in, say, Venezuela, in the attack that brought Nicolas Maduro to Brooklyn, of all places? Because we’ve recently found out that AI was used there, too.

What we do know is that Anthropic’s AI tools have been integrated into the US military’s classified networks. They can process classified information to produce intelligence and to help plan operations.

We’ve had this sort of tantalizing detail that these tools were used in the Maduro raid. We don’t know exactly how.

We’ve seen AI technology in a broad sense used in other conflicts as well, in Ukraine and in Israel’s operations in Gaza, to do a couple of different things. One of the ways AI is being used in Ukraine is putting autonomy onto the drones themselves.

When I was in Ukraine, one of the things that I saw Ukrainian drone operators and engineers demonstrate is a little box, like the size of a pack of cigarettes, that you could put onto a small drone. Once the human locks onto a target, the drone can then carry out the attack all on its own. And that has been used in a small way.

We’re seeing AI begin to creep into all of these aspects of military operations in intelligence, in planning, in logistics, but also right at the edge in terms of being used where drones are completing attacks.

How about with Israel and Gaza?

There’s been some reporting about how the Israel Defense Forces have used AI in Gaza: not necessarily large language models, but machine-learning systems that can synthesize and fuse large amounts of information (geolocation data, cell phone and connection data, social media data) very quickly in order to develop targeting packages, particularly in the early phases of Israel’s operations.

But it raises thorny questions about human involvement in these decisions. And one of the criticisms that had come up was that humans were still approving these targets, but that the volume of strikes and the amount of information that needed to be processed was such that maybe human oversight in some cases was more of a rubber stamp.

The question is: Where does this go? Are we headed in a trajectory where, over time, humans get pushed out of the loop, and we see, down the road, fully autonomous weapons that are making their own decisions about whom to kill on the battlefield?

That’s the direction things are headed. No one’s unleashing the swarm of killer robots today, but the trajectory is in that direction.

We saw reports that a school was bombed in Iran, where [175 people] were killed — a lot of them young girls, children. Presumably that was a mistake made by a human.

Do we think that autonomous weapons will be capable of making that same mistake, or will they be better at war than we are?

This question of “will autonomous weapons be better than humans” is one of the core issues of the debate surrounding this technology. Proponents of autonomous weapons will say people make mistakes all the time, and machines might be able to do better.

Part of that depends on how much the militaries that are using this technology are trying really hard to avoid mistakes. If militaries don’t care about civilian casualties, then AI can allow militaries to simply strike targets faster, in some cases even commit atrocities faster, if that’s what militaries are trying to do.

I think there is this really important potential here to use the technology to be more precise. And if you look at the long arc of precision-guided weapons, let’s say over the last century or so, it’s pointed towards much more precision.

If you look at the example of the US strikes in Iran right now, it’s worth contrasting this with the widespread aerial bombing campaigns against cities that we saw in World War II, for example, where whole cities were devastated in Europe and Asia because the bombs weren’t precise at all, and air forces dropped massive amounts of ordnance to try to hit even a single factory.

The possibility here is that AI could, over time, make it easier for militaries to hit military targets and avoid civilian casualties. Now, if the data is wrong, and they’ve got the wrong target on the list, they’re going to hit the wrong thing very precisely. And AI is not necessarily going to fix that.

On the other hand, I saw a piece of reporting in New Scientist that was rather alarming. The headline was, “AIs can’t stop recommending nuclear strikes in war game simulations.”

They wrote about a study in which models from OpenAI, Anthropic, and Google opted to use nuclear weapons in simulated war games in 95 percent of cases, which I think is slightly more often than we humans typically resort to nuclear weapons. Should that be freaking us out?

It’s a little concerning. Happily, as near as I could tell, no one is connecting large language models to decisions about using nuclear weapons. But I think it points to some of the strange failure modes of AI systems.

They tend toward sycophancy. They tend to simply agree with everything that you say. They can do it to the point of absurdity sometimes where, you know, “that’s brilliant,” the model will tell you, “that’s a genius thing.” And you’re like, “I don’t think so.” And that’s a real problem when you’re talking about intelligence analysis.

Do we think ChatGPT is telling Pete Hegseth that right now?

I hope not, but his people might be telling him that.

You start with this ultimate “yes men” phenomenon with these tools: it’s not just that they’re prone to hallucinations, which is a fancy way of saying they sometimes make things up, but also that the models can be used in ways that reinforce existing human biases, reinforce biases in the data, or simply get taken on faith.

There’s this veneer of, “the AI said this, so it must be the right thing to do.” And people put faith in it, and we really shouldn’t. We should be more skeptical.

Source link

Continue Reading

Tech

What Are Those Little Fins On Jet Engines For?

Published

on





There are many components that go into a jet engine, from combustors to compressors to those gigantic fans that are a staple of engine design. Many of these things are also found in the engines of other vehicles, but some are pretty specific to jet engines. One of those isn’t mechanical at all. It’s a small fin placed on the outside of the engine. You can see it from your passenger window if your seat overlooks the wing.

These engine fins are called nacelle strakes. A nacelle is the big, round fixture underneath the wing that houses the engine; the strakes are the small fins mounted on its outside. The reason for these strakes is quite simple: they’re there to help minimize airflow separation over the airplane’s wings, particularly at lower speeds.

For a plane to take off, the wing needs the airflow working with it rather than against it. In a steep climb, you don’t want the airflow separating from the wing; just as with inclement weather, jet streams, and more, airflow separation can cause tremendous turbulence, and it also robs the wing of lift. Takeoff demands a high angle of attack right when the plane is at its lowest speed, which is exactly when separation is most likely. Not helping matters is the nacelle itself, which disturbs the air ahead of the wing and increases the chances of airflow separation there. This is where the nacelle strakes come into play: each one acts as a vortex generator, spinning up a vortex that pushes energized air down onto the surface of the wing to keep the flow attached as long as possible.
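The effect is easiest to see with a toy lift model. The sketch below is purely illustrative: the 2π lift slope comes from thin-airfoil theory, but the stall angles and the post-stall drop-off are invented numbers, not data for any real wing or strake. The point it makes is that a vortex generator which delays separation by a few degrees keeps the wing producing lift at climb-out angles where the clean wing would already have stalled.

```python
import math

def lift_coefficient(alpha_deg, stall_deg):
    """Toy model: lift grows roughly linearly with angle of attack
    (CL ~ 2*pi*alpha, per thin-airfoil theory), then collapses once
    the flow separates past the stall angle. Illustrative only."""
    if alpha_deg <= stall_deg:
        return 2 * math.pi * math.radians(alpha_deg)
    # Crude post-stall behavior: lift falls off quickly once separated.
    peak = 2 * math.pi * math.radians(stall_deg)
    return peak * max(0.0, 1 - 0.1 * (alpha_deg - stall_deg))

# Hypothetical stall angles: a clean wing vs. one whose boundary layer
# is re-energized by a strake vortex (separation delayed a few degrees).
for alpha in (10, 15, 18):
    clean = lift_coefficient(alpha, stall_deg=14)
    with_strake = lift_coefficient(alpha, stall_deg=17)
    print(f"alpha={alpha:2d} deg  clean CL={clean:.2f}  with strake CL={with_strake:.2f}")
```

At a gentle 10 degrees the two wings behave identically; the strake only earns its keep near the edge of the envelope, which is why such a tiny fin is worth its small drag penalty.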

There are many different types of strakes

Nacelle strakes aren’t a novel concept in aviation. They are simply an extension of the principles of flight that have been around for many, many years. Nacelle strakes are just one of the many different kinds of strakes that can be found on an airplane, whether it has a jet engine or not. All of these strakes serve the same purpose: to improve airflow for specific parts of the aircraft.

Some of the most common strakes are Leading Edge Extensions, or LEXs. The leading edge is the front edge of a wing, and a LEX is an appendage affixed to that edge, either permanently fixed or adjustable by the pilot. There are several different types of LEXs, such as the dogtooth or the cuff, and each has a specific purpose. For instance, dogtooth LEXs help generate lift and reduce stalling.

Another type of strake is a ventral strake. These are long blades on the underside of an airplane, either on the rear fuselage or the tail. Ventral strakes are incredibly important, especially for smaller aircraft, as they help improve the lateral stability of the plane. Nobody wants the plane they’re in to be rocking back and forth, and redirecting that airflow with these strakes helps reduce that possibility. Strakes can also be found at the nose of the plane for airflow manipulation towards the front of the fuselage. You can also see strakes on the tail, some of which can reduce the chance of your aircraft spinning. Flight is all about controlling airflow. Strakes are an excellent tool for doing just that.

Source link

Continue Reading

Tech

FTC Admits Age Verification Violates Children’s Privacy Law, Decides To Just Ignore That

Published

on

from the the-law-breaking-admin dept

We’ve been pointing out the fundamental contradiction at the heart of mandatory age verification laws for years now. To verify someone’s age online, you have to collect personal data from them. If that someone turns out to be a child, congratulations: you’ve just collected personal data from a child without parental consent. Which is a direct violation of the Children’s Online Privacy Protection Act (COPPA)—the very law that’s supposed to be protecting kids.

So what happens when the agency charged with enforcing COPPA finally notices this obvious problem? If you guessed “they admit the conflict and then just promise not to enforce the law,” you’d be exactly right.

The FTC put out a policy statement last week that is remarkable in what it tacitly concedes:

The Federal Trade Commission issued a policy statement today announcing that the Commission will not bring an enforcement action under the Children’s Online Privacy Protection Rule (COPPA Rule) against certain website and online service operators that collect, use, and disclose personal information for the sole purpose of determining a user’s age via age verification technologies.

The FTC appears to be explicitly acknowledging that age verification technologies involve collecting personal information from users—including children—in a way that would otherwise trigger COPPA liability. If the technology didn’t create a COPPA problem, there would be no need for a policy statement promising non-enforcement. You don’t issue a formal announcement saying “we won’t sue you for this” unless “this” is something you could, in fact, sue people for.

The statement itself tries to dress this up by noting that age verification tech “may require the collection of personal information from children, prompting questions about whether such activities could violate the COPPA Rule.” But “prompting questions” is doing an awful lot of work in that sentence. The answer to those questions is pretty obviously “yes, collecting personal information from children without parental consent violates the rule that says you can’t collect personal information from children without parental consent.” The FTC just doesn’t want to say that part out loud, because then the follow-up question becomes: “so why are you encouraging companies to do it?”

Instead, they’ve decided to create an enforcement carve-out. Do the thing that violates the law, but pinky-promise you’ll only use the data to check the kid’s age, delete it afterward, and keep it secure. Then we won’t come after you. This is the FTC solving a legal contradiction not by asking Congress to fix the underlying law or admitting the technology is fundamentally flawed, but by deciding to selectively not enforce the law it’s supposed to be enforcing.

The honest approach would have been to tell Congress that age verification, as currently conceived, cannot be squared with existing privacy law—and that if lawmakers want it anyway, they need to resolve that conflict themselves rather than asking the FTC to pretend it doesn’t exist.

No such luck.

And boy, do they seem proud of themselves. Here’s Christopher Mufarrige, Director of the FTC’s Bureau of Consumer Protection:

“Age verification technologies are some of the most child-protective technologies to emerge in decades…. Our statement incentivizes operators to use these innovative tools, empowering parents to protect their children online.”

“The most child-protective technologies to emerge in decades.”

Excuse me, what?

This is the kind of statement that sounds authoritative right up until you spend thirty seconds thinking about it. Anyone with any knowledge of security and privacy knows that age verification is anything but “child protective.” It involves a huge invasion of privacy, for extremely faulty technology, that has all sorts of downstream effects that put kids at risk.

Oh, and the FTC seems proud that the vote for this was unanimous—though it’s worth noting that Donald Trump fired the two Democratic members of the FTC and has made no apparent efforts to replace them, despite Congress designating that the FTC is supposed to have five full members, with two from the opposing party. A unanimous vote among the remaining two Republicans is a strange thing to brag about.

The FTC even posted about this on X, and the response was… well, let me just show you:

If you can’t see that, the main part to pay attention to is not the tweet from the FTC itself, but the Community Note appended to it (under the way Community Notes work, a note needs widespread consensus among users before it gets appended to a public tweet):

Readers added context they thought people might want to know

Contrary to their claim, using age verification has numerous issues, including but not limited to:

1. Easily bypassed

2. Risks of security data breach

3. Inaccuracies (Placing adults into underage groups, vice versa)

And many more… (sigh, I need a break).

Yeah, we all need a break.

That Community Note does a better job explaining the state of age verification technology than the FTC’s entire Bureau of Consumer Protection. It methodically lists out the problems: kids easily bypass these systems, the collected data creates massive security breach risks, and the technology produces wildly inaccurate results that lock adults out while letting kids through (and vice versa). When the consensus-driven crowdsourced fact-check on your own announcement is more informative than the announcement itself, maybe it’s time to reconsider the announcement.

But let’s say, for the sake of argument, that the technology worked perfectly. Would mandatory age verification still be a good idea?

That still wouldn’t solve the issues with this technology and the harm it does to kids. Even UNICEF (UNICEF!) has been warning that age restriction approaches can actively harm the children they’re supposed to protect. After Australia’s social media ban for under-16s went into effect, UNICEF put out a statement that could not have been more clear about the risks:

“While UNICEF welcomes the growing commitment to children’s online safety, social media bans come with their own risks, and they may even backfire,” the agency said in a statement.

For many children, particularly those who are isolated or marginalised, social media is a lifeline for learning, connection, play and self-expression, UNICEF explained.

Moreover, many will still access social media – for example, through workarounds, shared devices, or use of less regulated platforms – which will only make it harder to protect them.

So the actual child welfare experts are saying that age verification can backfire, push kids into less safe spaces, and should never be treated as a substitute for real safety measures. Meanwhile, the FTC is calling the same technology “the most child-protective” thing to come along in a generation and is waiving its own enforcement authority to encourage more of it.

What we have here is a federal agency that has identified a direct conflict between the law it enforces and the policy outcome it wants. Rather than grappling with what that conflict means—maybe age verification as currently conceived just doesn’t work within the existing legal framework, and for good reason—the FTC has chosen to simply look the other way. The message to companies is clear: go ahead and collect data from kids to figure out if they’re kids. We know that violates COPPA. We don’t care. We like age verification more than we like enforcing our own rules.

That’s a hell of a policy position for the agency that’s supposed to be the last line of defense for children’s privacy online.

Filed Under: age verification, coppa, ftc, think of the children

Source link

Continue Reading

Tech

Big Tech Signs White House Data Center Pledge With Good Optics and Little Substance

Published

on

Several key tech companies signed a nonbinding pledge at the White House on Wednesday that the Trump administration claims will ensure that tech companies do not pass the cost of data centers on to consumers’ utility bills.

“Data centers … they need some PR help,” President Donald Trump said at the event. “People think that if the data center goes in, their electricity is going to go up.”

He was flanked by representatives from Microsoft, Meta, OpenAI, xAI, Google/Alphabet, Oracle, and Amazon.

Bipartisan anger about data centers and their potential impact on consumers’ electric bills has exploded over the past year. As the White House goes all in on AI, the pledge marks a significant salvo by the Trump administration to assure voters that they will not be affected by rising costs.

But electricity experts and industry insiders threw doubt on how much power the White House actually has to create meaningful consumer protections.

“This is theater,” says Ari Peskoe, the director of the Electricity Law Initiative at the Harvard Law School Environmental and Energy Law Program. “This is a press release designed to make it seem like they are addressing this issue. But this issue can only really be addressed by utility regulators or Congress. The White House doesn’t really have a lot of moves here, and I don’t think the tech companies themselves are the most important parties on cost issues.”

The White House did not immediately respond to a request for comment.

Data centers played a key role in last year’s elections in certain states, including Georgia and Virginia, and are factoring into other races playing out across the country this month. A recent poll conducted by Heatmap News shows that fewer than 30 percent of American voters would support a data center being built near where they live. A number of states have introduced moratoriums on data centers into their state legislatures this year, while others have bills that would seek to help offload the cost from the consumer to the companies building and operating the facilities.

Over the past few months, some big tech companies—including Microsoft and Anthropic—have rolled out various pledges around their data center construction and operation. These pledges follow multiple reports that the president was seeking assurances from tech companies to help take the costs of data centers off American consumers.

In late January, Trump wrote in a Truth Social post that Democrats were to blame for high electricity costs and that he was “working with major American Technology Companies” to ensure “Americans don’t ‘pick up the tab’ for their POWER consumption, in the form of paying higher Utility bills.” Less than a month later, he said during his State of the Union address that he would introduce a “ratepayer protection pledge.”

“We’re telling the major tech companies that they have the obligation to provide for their own power needs,” he said. “They can build their own power plants as part of their factory, so that no one’s prices will go up and, in many cases, prices of electricity will go down for the community, and very substantially then.”

The pledges made independently by key tech companies this year, and the one signed Wednesday, reiterate a lot of promises and initiatives that some tech companies have already been working on. In a blog post published by Google highlighting its commitment to the pledge, the company lists several ongoing initiatives, including investments in nuclear and geothermal energy as well as agreement frameworks with electric utilities and pledges to invest in job creation.

Source link

Continue Reading

Tech

Vape-powered Car Isn’t Just Blowing Smoke

Published

on

Disposable vapes aren’t quite the problem/resource stream they once were, with many jurisdictions moving to ban the absurdly wasteful little devices, but there are still a lot of slightly-smelly lithium batteries in the wild. You might be forgiven for thinking that most of them seem to be in [Chris Doel]’s UK workshop, given that he’s now cruising around what has to be the world’s only vape-powered car.

Technically, anyway; some motorheads might object to calling the donor vehicle [Chris] starts with a car, but the venerable G-Wiz has four wheels, four seats, lights, and a windscreen, so what more do you want? Horsepower in excess of 17 ponies (12.6 kW)? Top speeds in excess of 50 mph (80 km/h)? Something other than the dead weight of 20-year-old lead-acid batteries? Well, [Chris] at least fixes that last part.

The conversion is amazingly simple: he just straps his 500-cell disposable vape battery pack, the same one that was powering his shop, into the back seat of the G-Wiz, and it’s off to the races. Not quickly, mind you, but with 500 lightly-used lithium cells in the back seat, how fast would you want to go? Hopefully the power bank goes back on the wall after the test drive, or he finds a better mounting solution. To [Chris]’s credit, he did renovate his pack with extra support and insulation, and put all the cells in an insulated aluminum box. Still, the low speed has to count as a safety feature at this point.

Charging isn’t fast either, as [Chris] has made the probably-controversial decision to use USB-C. We usually approve of USB-Cing all the things, but a car might be taking things too far, even one with such a comparatively tiny battery. Perhaps his earlier (equally nicotine-soaked) e-bike project would have been a better fit for USB charging.
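Some rough numbers show why. This back-of-envelope sketch assumes a typical disposable-vape cell of about 550 mAh at 3.7 V nominal and standard USB-C Power Delivery charger levels; none of these figures come from [Chris]’s build:

```python
# Assumed per-cell capacity: ~550 mAh at 3.7 V nominal (~2 Wh each).
CELLS = 500
CELL_WH = 0.550 * 3.7
pack_wh = CELLS * CELL_WH  # roughly 1 kWh for the whole pack

# Common USB-C Power Delivery levels, up to the 240 W spec maximum.
for charger_w in (60, 100, 240):
    hours = pack_wh / charger_w
    print(f"{charger_w:3d} W charger: about {hours:.1f} h for a full charge")
```

Even at the 240 W ceiling of the USB-C PD spec, a full charge is a multi-hour affair; at the 60 to 100 W of a typical laptop brick, it is closer to an overnight one.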

Thanks to [Vaughna] for the tip!

Source link

Continue Reading

Tech

Schools Keep Facing the Same Challenges. Students and Educators Know What Needs to Change.

Published

on

Educators have seen wave after wave of “innovative” solutions promise to address long-standing challenges — from personalization and engagement to college- and career-readiness — yet many issues remain stubbornly unresolved. Too often, solutions are developed and scaled without a clear understanding of how challenges show up in daily classroom experiences or how students, families and educators define the problems.

Understanding the everyday barriers that students, families, practitioners and administrators identify ensures that potential solutions — whether technological, instructional or relational — are grounded in real needs rather than assumptions.

What These Challenges Look Like in Classrooms and Systems

In Digital Promise’s co-research and co-design work with communities across the country, students and educators describe challenges that are neither new nor isolated, but reflect enduring gaps in how learning environments are designed and supported. Looking closely at how these challenges surface through our Challenge Map reveals the deep connections between instructional practice, student engagement and systems-level supports — and why tackling one without the others often falls short.

Together, these experiences shape whether students feel their learning opportunities are future-forward, adaptable to their goals, needs and circumstances, and equip them to exercise agency in their education and career journeys.

Supporting individualized learning, for example, requires systems that give educators the time, tools and structures to understand and respond to each learner’s growth. Without those conditions, personalization requires extraordinary effort — making it difficult to sustain as a routine part of instructional practice.

Similar structural challenges constrain college- and career-readiness efforts. Educators consistently pointed to the need for more holistic, student-centered pathways. One educator described the importance of a “multi-tiered career program in which students engage in self-exploration of their skills, abilities and interests” to connect learning to concrete opportunities and transferable skills they can use after high school.

Engagement, Agency and the Conditions for Learning

At the crux of learning lies student engagement — shaped by both classroom practices and the broader systems in which learning occurs. Community members and educators both highlighted that academic success depends on students’ well-being.

Students shared that learning is most meaningful when it connects to their interests and allows them to have a voice in shaping their educational experiences. Educators echoed this perspective, underscoring the importance of agency in fostering meaningful learning. As one educator reflected, ensuring educational excellence requires continually redefining educational systems in ways that “give every student access to their own version of success.”

Engagement is not simply a matter of student effort or teacher technique, but a product of the environments and systems that shape learning opportunities.

Learning Does Not Stop at the Schoolhouse Door

Students, families and educators who contributed to Digital Promise’s Challenge Map identified supports that go beyond the schoolhouse, offering insight into the social conditions shaping learning. Suggestions for home stability, physical and emotional safety, and balancing responsibilities inside and outside of school highlight how deeply schooling is intertwined with young people’s lives beyond the classroom.

Other insights were deceptively simple yet profound: One group of students suggested creating regular feedback loops in schools so they could share concerns, inform changes to physical spaces and course offerings, and shape how resources are used. Even these straightforward ideas, however, call for systemic shifts in how schools operate and how student voices are embedded in decision-making.


The transformative power of co-research, co-design and student voice in education.

What It Means to Put People at the Center of Innovation

Education remains a fundamentally human endeavor. As long as the goal is to prepare young people to navigate their futures with skill, agency and well-being, the conditions and relationships that shape students’ opportunity and engagement remain essential.

At a time when education research and development (R&D) is often synonymous with emerging technologies, shifting the focus to problem-solving — driven by the perspectives of those living the challenges — expands what counts as innovation. Existing technologies may play an important role, but they should not be scaled simply because they are novel.

Rather, the starting point for innovation should be: What is the central problem that needs to be solved, for and with whom, and what are the resulting outcomes if the problem is addressed successfully? Only then should existing tools or new solution development enter into the equation. Addressing these challenges requires shifts in mindsets and power dynamics so that both students and educators learn how student voice should shape learning and curriculum.

Why Education Research and Development Needs a Systems Lens

As education R&D evolves, the field is increasingly recognizing that local district systems and community engagement have often been missing from innovation efforts. In policy and education leadership circles, there is a growing call for education R&D that strengthens young people’s futures and, by extension, the nation’s long-term economic and civic well-being.

When schools and local communities are meaningfully engaged in R&D, their perspectives consistently point to persistent challenges that require a systems-level response. These challenges are not isolated problems to be solved with standalone interventions, but signals of deeper misalignments in policies, incentives and assumptions across the education ecosystem.

Questions for Building Lasting Change

Solution developers, policymakers and funders drive change through their respective products and investments. Recognizing these challenges as persistent problems and indicators of necessary systems change, they might consider:

  • How well do solutions capture the actual problems they aim to solve, rather than the technological possibilities they allow?
  • To what extent do local policies and incentives support the development of solutions that center students, families, communities and educators experiencing the challenge?
  • How are the perspectives of those living the challenges incorporated throughout the research, solution design and implementation process?
  • How do technological solutions reflect the relational and mindset shifts required across the system?
  • How can the evaluation of challenges in education take a systems approach that not only accounts for easily identifiable policies, resources, and practices but also for underlying relationships and assumptions?

Above all, lasting educational innovation depends on a shared conviction: The voices and experiences of students, families, community members and educators must shape how problems are defined and solutions are developed.

Source link

Continue Reading

Trending

Copyright © 2025