Tech

OpenAI’s KOSA Endorsement Is Regulatory Capture With A Smiley Face

from the not-the-flex-dc-thinks-it-is dept

Earlier this week, OpenAI became the latest tech company to publicly endorse KOSA, the Kids Online Safety Act. The company, conveniently, tries to frame this as being about its support of child safety. It’s not. It’s about political horse trading, desperation for good publicity, and building a regulatory moat.

KOSA would help create stronger online protections for young social media users through safer default settings, expanded parental controls, and greater accountability for online harms.

The path forward on kids safety, however, also requires AI-specific rules. And we believe KOSA is complementary to the work we’re doing at the federal and state level. Young people should be able to benefit from AI in ways that are safe, age-appropriate, and grounded in real-world support, including referrals to crisis resources and parental notifications in serious safety situations. That means building safeguards from the start, giving families better tools, and taking responsibility for reducing risks before they become harms.

The broader point is an important one: AI companies still have the opportunity to build protections early, before these technologies become fully embedded in everyday life. As OpenAI Chief Global Affairs Officer Chris Lehane has put it, “We can’t repeat the mistakes made during the rise of social media, when stronger safeguards for teens weren’t put in place until the platforms were already deeply embedded in young people’s lives.”

All of this is, of course, nonsense. As we’ve explained repeatedly, the underlying mechanisms of KOSA are deeply problematic and will do real damage. It will, inherently, make the internet worse for everyone. At its heart, KOSA is a surveillance and censorship bill, and it’s the last thing that we need for the internet today.

While it’s positioned as being about something no one can be against (“kid safety!”), that is all too often the facade with which terrible rights-killing laws are passed. And KOSA is no exception.

But a bunch of tech companies have endorsed it anyway. Why? Because they know it makes life way more difficult for smaller upstart competitors. The additional compliance costs it will add for companies will be ruinous to smaller, less well-resourced companies. For big companies with big bank accounts, however, it gives them a leg up.

OpenAI, perhaps more than most others in the space, needs that kind of government-backed protection against growing competition.

Almost exactly three years ago, I wrote a piece about Sam Altman going to Congress and asking the federal government to regulate the AI space, calling it Sam Altman Wants The Government To Build Him A Moat. As I pointed out at the time, AI researchers were concluding that no frontier AI model could hold a real competitive advantage for any extended period of time. That situation has only gotten worse since then. The jockeying between the various leading AI models has left them all effectively comparable, and more and more builders are realizing that, since you can separate the context, the compute, and the agentic tools from the underlying LLM, the technology is quickly turning into a commodity where any one will do (a situation that becomes even more tenuous as open-weight and local models get better and better).

While OpenAI has a huge number of users (one of the fastest growing tech companies in history), it’s unclear if those users are particularly loyal. Indeed, there are a few indications that when OpenAI does something stupid, a large segment of users will quickly leave.

Given that, all of the large AI companies keep looking for ways to create some sort of lock-in for users. Most of them haven’t gone down the fully siloed path, knowing that at this stage it would probably drive away their most valuable users. For the most part, the focus at OpenAI, Anthropic, Google, and the others is on building in features that make it more convenient to stay than to swap out the underlying LLM. Add to that the continued leapfrogging, along with various experiments in how much usage they’re willing to subsidize through their subscription plans.

But having the government wipe out competitors, or create “mandatory” tools that create lock-in, might be another path towards such a result. And that’s exactly what KOSA would lead to. It certainly wouldn’t protect kids. Indeed, all evidence suggests it would put plenty of marginalized kids at much greater risk.

However, it would create something of a regulatory moat for those larger companies.

On top of that, is there any company more desperate for a headline talking about how it’s “helping” protect children than OpenAI? The company has been accused of being “responsible” for suicide and other harmful behavior. And, even if those claims and lawsuits are misleading (they are!), culturally that message has been sticking. I’ve heard multiple people refer to ChatGPT as a suicide machine.

So, if you need a good headline to claim that you’re “protecting children” and doing so in a way where the law will have little direct impact on your business, but will damage some of your competitors in the space (not to mention the wider open internet), why not? It’s hard not to be cynical about OpenAI’s reasoning here.

Separately, it’s likely that the AI companies see this as a bit of political horse trading. While KOSA would have some impact on AI tools, it’s much more directed at social media platforms than AI. And it’s likely that the bet being made by OpenAI here is “hey, we’ll back KOSA for you, and you get rid of the AI-specific bills.” OpenAI’s Chris Lehane, who announced the endorsement and is featured in every press release about it, is infamous as a political trickster. He’s a political operator, not a tech or policy expert. You roll him out to cut a deal, not to advance a principled position on child safety. And that’s exactly what’s happening here.

You can see the KOSA authors gleefully using the OpenAI endorsement to falsely claim that only Mark Zuckerberg now opposes the law:

Yeah, that’s Senator Richard Blumenthal choosing to spend time on X, a site run by a guy who has made it clear he thinks Blumenthal’s political party is evil and needs to be wiped out, using that platform to lie and claim that the only people opposed to KOSA are “Mark Zuckerberg & his lobbyists.” That ignores the long list of civil society and public interest groups who have made it clear just how dangerous the law would be.

Marsha Blackburn (who has been vocal about how she wants KOSA to silence LGBTQ voices) put out a silly press release about this endorsement, saying:

“Lip service won’t save lives – Congress must take action to establish guardrails in the virtual space. I look forward to chairing a hearing on why the verdicts in California and New Mexico should spur Congress to hold Big Tech accountable for exploiting children to turn a profit.”

What? As bad as the rulings in California and New Mexico are, they seem to suggest that the courts already think they have the authority to order companies to do the impossible and magically stop anything bad from ever happening to kids who also (incidentally) use the internet.

All of this is for show. No one is being honest. Blackburn wants to censor LGBTQ speech she considers “dangerous to kids” because it terrifies her. Blumenthal wants to end encryption and the ability of tech companies to keep information, because he’s always been a cop and wants the ability to spy on your kids. And OpenAI wants Congress to direct their bad policies at social media companies rather than AI companies.

And all of us internet users are simply collateral damage for the mad power dreams of those in charge.

Filed Under: censorship, child safety, chris lehane, kosa, marsha blackburn, regulatory capture, richard blumenthal, surveillance

Companies: openai

Forced to vibe code at work, programmers say their skills are deteriorating

Coders from various companies recently told 404 Media that their initial curiosity about vibe coding has soured as they feel their skills deteriorating while technical debt mounts. Many developers who aren’t being forced to use AI are returning to coding by hand.
Cowboy Space raises $275M as it seeks 40-60 employees for new satellite and rocket hub in Seattle

Cowboy Space Corp., a space startup building out a new satellite and rocket engineering center in Seattle, raised $275 million in a Series B funding round this week that valued the company at $2 billion.

The Bay Area-based company — formerly known as Aetherflux — was founded in 2024 by CEO Baiju Bhatt, the billionaire co-founder of the trading platform Robinhood.

Cowboy Space is building satellites that double as data centers, powered by solar energy harvested in orbit. The idea is to sidestep the two biggest bottlenecks for AI computing on Earth — the soaring demand for electricity and the scarcity of land and water needed to cool traditional data centers.

The company also builds its own rockets to launch the satellites, and has designed the rocket’s upper stage and the data center as a single unit rather than separate pieces.

Cowboy Space opened a Seattle office earlier this year with a focus on satellite design and rocket/propulsion engineering. A rep for the company told GeekWire Wednesday that they anticipate 40-60 employees in Seattle initially, and there are currently 18 positions advertised across roles including avionics, mechanical engineering, spacecraft design, and software.

Director of Satellite Engineering David Larson, a SpaceX and Amazon vet, and Head of Propulsion Warren Lamont, a Blue Origin and IonQ vet, will be leading the office. The company is not yet sharing details on the specific location for the satellite center.

The startup is competing for talent in the Seattle area with a robust aerospace community of companies big and small. They include Blue Origin, Stoke Space, Aerojet Rocketdyne, Starfish Space, Starcloud, Xplore and many more. SpaceX also produces satellites for its Starlink broadband constellation from its Redmond, Wash., facility and Amazon produces satellites for its Amazon Leo broadband satellite network in Kirkland, Wash.

The company is collaborating with NVIDIA to deploy its Space-1 Vera Rubin Modules in low Earth orbit, and plans to launch its first satellite later this year to demonstrate space-to-Earth power beaming.

Total funding is now $365 million. The latest round was led by Index Ventures, with participation from new investors IVP, Blossom Capital, and SAIC, alongside existing investors Breakthrough Energy Ventures, Construct Capital, Andreessen Horowitz, NEA, Interlagos and Bhatt.


Accelerating Chipmaking Innovation for the Energy-Efficient AI Era

This sponsored article is brought to you by Applied Materials.

At pivotal moments in history, progress has required more than individual brilliance. The most consequential breakthroughs — such as those achieved under the Human Genome Project — required a new operating paradigm: Concentrate the world’s best talent around a single mission, establish a common platform, share critical infrastructure, and collapse feedback loops. When stakes are high and timelines are compressed, sequential and siloed innovation simply cannot keep pace.

Today’s AI era is creating an engineering race with similar demands. Every company is pushing to deliver higher-performance AI systems, faster. But performance is no longer defined by compute alone. AI workloads are increasingly dominated by the movement of data: In many cases, moving bits consumes as much — or more — energy than compute itself. As a result, reducing energy per bit can extend system‑level performance alongside gains in peak compute.

The path to energy‑efficient AI therefore runs through system‑level engineering, spanning three tightly interconnected domains:

  • Logic, where performance per watt depends on efficient transistor switching, low‑loss power, and signal delivery through dense wiring stacks.
  • Memory, where surging bandwidth and capacity demands expose the memory wall, with processor capability advancing faster than memory access.
  • Advanced packaging, where 3D integration, chiplet architectures, and high‑density interconnects bring compute and memory closer together — enabling system designs monolithic scaling can no longer sustain.

These domains can no longer be optimized independently. Gains in logic efficiency stall without sufficient memory bandwidth. Advances in memory bandwidth fall short if packaging cannot deliver proximity within thermal and mechanical constraints. Packaging, in turn, is constrained by the precision of both front‑end device fabrication and back‑end integration processes.

In the angstrom era, the hardest problems arise at the boundaries — between compute and memory in the package, front‑end and back‑end integration, and the tightly coupled process steps needed for precise 3D fabrication. And it is precisely this boundary‑driven complexity where the traditional innovation model breaks down.

The Traditional R&D Workflow Is Too Slow for Angstrom‑Era AI

For decades, the semiconductor industry’s R&D model has resembled a relay race. Capabilities are developed in one part of the ecosystem, handed off downstream through integration and manufacturing, evaluated by chip and system designers, and only then fed back for the next iteration. That model worked when progress was dominated by relatively modular steps that could be scaled independently and simply dropped into the manufacturing flow.

But the AI timeline has upended these rules. At angstrom‑scale dimensions, the physics enforces inescapable coupling across the entire stack: materials choices shape integration schemes; integration defines design rules; design rules dictate power delivery; wiring sets thermal budgets; and thermals ultimately constrain packaging scaling. System architects simply cannot wait 10–15 years for each major semiconductor technology inflection to mature.

A long‑term perspective is essential to align materials innovation with emerging device architectures — and to develop the tools and processes required to integrate both with manufacturable precision. At Applied Materials, together with our customers, we are charting a course across the next 3–4 generations, extending as far as 10 years down the roadmap.

The angstrom era demands that we break down silos and bring together the industry’s best minds — from leading companies to leading academic institutions. If the problem is coupled, the solution must be coupled. If the timeline is compressed, the learning loop must be compressed. It’s not enough to just innovate — we must innovate how we innovate.

EPIC: A Center and Platform for High‑Velocity Co‑Innovation

This is the challenge that Applied Materials EPIC Center is designed to solve.

Representing a roughly US $5 billion investment, EPIC is the largest commitment to advanced semiconductor equipment R&D in U.S. history. When it opens in 2026, it will deliver state‑of‑the‑art cleanroom capabilities built from the ground up to shorten the path from early‑stage research to full‑scale manufacturing. But the facilities are only one component of the model. EPIC is also a platform, an operating system for high-velocity co‑innovation that revolutionizes how ideas move from the lab to the fab.

[Diagram: the traditional chip innovation workflow compared with the EPIC model, showing a roughly 2x faster path from lab to fab. Image: Applied Materials]

The EPIC model compresses the traditional workflow. Customer engineers work side‑by‑side with Applied technologists from day one — moving beyond isolated process optimization and downstream handoffs. Within a shared, secure environment, EPIC tightly integrates atomistic modeling, test vehicles, process development, validation, and metrology feedback. Constraints that once surfaced late in development are identified and addressed early.

The result is a potentially 2x faster path that benefits the entire ecosystem under one roof:

  • Chipmakers gain earlier access to Applied’s R&D portfolio, faster learning cycles, and accelerated transfer of next‑generation technologies into high‑volume manufacturing.
  • Ecosystem partners gain earlier access to advanced manufacturing technology and collaboration opportunities that expand what is possible through materials innovation.
  • Academic institutions gain opportunities to strengthen the lab‑to‑fab pipeline and help develop future semiconductor talent.

Building on decades of co‑development, we are reinventing the innovation pipeline with our partners across logic, memory, and advanced packaging to deliver the next leap in energy‑efficient AI.

Accelerating Advanced Logic

Logic remains the engine of AI compute. In the angstrom era, however, system‑level gains are increasingly constrained by power and energy. Extending AI performance now depends on architectures that deliver more performance per watt — accelerating the move to 3D devices such as gate‑all‑around (GAA) transistors, which boost density within a compact footprint while preserving power efficiency.

These architectural shifts are unfolding at unprecedented scale, with the logic roadmap already extending beyond first‑generation GAA toward more advanced designs. One key example is GAA with backside power delivery, which relocates thick power lines to the backside of the wafer, reducing resistive losses and freeing front‑side routing for tighter logic cell integration. Another example brings adjacent GAA PMOS and NMOS transistors closer together while inserting a dielectric isolation wall between them to minimize electrical interference. Further out, complementary FETs (CFETs) push density scaling even more by stacking PMOS and NMOS devices directly atop one another.

While these architectures deliver compelling gains in performance per watt and logic density without relying solely on tighter lithography, they significantly raise integration complexity. Manufacturing a single GAA device today can involve more than 2,000 tightly interdependent process steps. At the same time, wiring stacks continue to grow taller and denser to connect these advanced logic devices. Modern leading‑edge GPUs now in development pack more than 300 billion transistors into an area little larger than a postage stamp, interconnected by over 2,000 miles of wiring.

At this level of complexity, the process steps used to create these precise 3D devices and wiring stacks cannot be optimized independently. Design and process must evolve in lockstep, and materials innovation and fabrication methods must advance alongside device architecture. EPIC’s co‑innovation model is designed to accelerate exactly this convergence — enabling logic compute to continue advancing the frontiers of AI at the pace the roadmap demands.

Powering the Memory Roadmap

At the same time, the AI computing era is fundamentally reshaping how data is generated, moved, and processed — making memory technologies, especially DRAM, central to delivering the energy‑efficient performance AI systems require. As models grow larger and more data‑hungry, the DRAM roadmap is shifting toward architectures that deliver higher density, greater bandwidth, and faster access per watt.

At the DRAM cell level, this shift is driving a transition from 6F² buried‑channel array transistors (BCAT) to more compact 4F² architectures, which orient the transistor vertically to boost density and reduce chip area. Looking beyond 4F², sustaining gains in performance per watt will require moving past what 2D scaling alone can deliver. The industry is therefore turning to 3D DRAM, stacking memory cells vertically to add capacity within a constrained footprint. As these structures grow taller and aspect ratios intensify, high-mobility materials engineering in three dimensions becomes increasingly critical to performance and reliability.

Beyond the memory cell array, another powerful lever for DRAM scaling is shrinking the peripheral circuitry, which includes logic transistors and interconnect wiring. One emerging approach places select periphery functions beneath the DRAM array by bonding two wafers — one optimized for the DRAM cells and the other for CMOS logic — using multiple wiring layers.

In parallel, DRAM performance is being extended by leveraging logic‑proven enhancers in the memory periphery. These include mobility boosters such as embedded silicon germanium and stress films, along with wiring upgrades like improved low‑k dielectrics and advanced copper interconnects. Memory manufacturers are also transitioning periphery transistors from planar devices to FinFET architectures, following the logic roadmap to further improve I/O speed. These valuable inflections are central to EPIC’s mission — where they can be co-developed and rapidly validated for next‑generation memory systems.

Driving System Scaling With Advanced Packaging

As data movement becomes the dominant energy cost in AI systems, advanced packaging has emerged as a critical lever for improving system‑level efficiency—shortening interconnect distances, increasing bandwidth density, and reducing the power required to move data between logic and memory.

High‑bandwidth memory (HBM) marks a major inflection along this path. By stacking DRAM dies — scaling to 16 layers and beyond — and placing memory much closer to the processor, HBM enables rapid access to ever‑larger working datasets. This delivers step‑function gains in both bandwidth and energy efficiency.

More broadly, the rise of 3D packages such as HBM underscores why advanced packaging is becoming central to the AI era. Packaging now addresses system‑level constraints that logic and memory device scaling alone can no longer overcome. It also enables a move away from monolithic systems‑on‑chip toward chiplet‑based architectures, as AI workloads increasingly demand flexible designs that combine logic, memory, and specialized accelerators optimized for specific tasks.

A vital technology powering this roadmap is hybrid bonding. With interconnect pitches approaching those of on‑chip wiring, conventional bumps and microbumps run into fundamental limits in density, power, and signal integrity. Hybrid bonding removes these barriers by allowing dramatically higher interconnect and I/O density, supporting a broad range of chiplet architectures — from memory stacking to tighter compute‑memory integration.

As bonded structures like HBM stacks grow larger and more complex, warpage control, die placement, stack alignment, and thermal management become first‑order challenges. EPIC tackles these and other high‑value advanced‑packaging challenges through early, parallel co‑innovation across materials, integration, and manufacturing.

Bringing It All Together

Across logic, memory, and advanced packaging, our industry faces an ambitious roadmap that promises significant gains in energy efficiency for AI systems. But realizing that potential demands breakthrough materials innovation at a time when feature sizes are shrinking, interfaces are multiplying, and process interdependencies are escalating. These challenges cannot be solved on 10–15‑year timelines under the traditional relay‑race model. We must break down silos, align earlier across the ecosystem, and parallelize learning to keep pace with AI’s demands.

In the AI era, progress will be defined by the speed at which lightbulb moments turn into manufacturing and commercialization reality. The only viable path forward is a new innovation model — and EPIC is how we are driving it.


Ireland and Northern Ireland share strong skill commonalities, finds research

The Expert Group on Future Skills Needs (EGFSN) recently published a new ‘Skills Insight Note’ titled Cross Border Skills and Commonalities between Ireland and Northern Ireland. The research explores the labour markets of both Northern Ireland and the Republic of Ireland, with a particular focus on cross‑border workers, sectoral employment trends, education profiles and shared skills priorities.

The research – the first in the EGFSN’s Skills Insights series for 2026 – identified strong similarities between the Republic of Ireland and Northern Ireland, including a continued reliance on critical sectors, such as manufacturing, health and education, and a shared policy focus on future‑oriented skills in areas such as digitalisation, the green economy and apprenticeships.

Welcoming the data, Minister for Enterprise, Tourism and Employment Peter Burke, TD noted the importance of gaining insight into how both jurisdictions can cooperate effectively. 

“This Skills Insight Note provides valuable analysis of the labour market links and shared challenges between Ireland and Northern Ireland. The findings underline the importance of collaboration in skills development, particularly as both economies adapt to technological and demographic change. 

“Understanding these cross‑border dynamics strengthens our ability to plan effectively for enterprise growth, employment and long‑term competitiveness.”

Commuting figures, particularly from Northern Ireland to the Republic, were also found to have grown significantly over the course of the last 10 years. The research stated that this is reflective of labour market opportunities and shared economic strengths.

Minister of State with special responsibility for Trade Promotion, AI and Digital Transformation Niamh Smyth, TD said: “The findings clearly demonstrate the strong links that exist across the two jurisdictions, including shared skills priorities, sectoral strengths and growing levels of cross‑border commuting.

“This research highlights how closely connected our labour markets are and the opportunities that exist to address shared skills challenges through cooperation and coordinated policy approaches.”

Late last year, a new €9.85m cross-border project aiming to address critical public health challenges was launched in Belfast. The four-year OneHealth project is a health and life sciences partnership that will use AI and digital health approaches to tackle pressing health and agrifood challenges.

The initiative is being led by science and technology hub Catalyst in partnership with Atlantic Technology University, Queen’s University Belfast, Health Innovation Research Alliance Northern Ireland, Tyndall National Institute Cork and the University of Galway.


IEEE Society’s Pitch Sessions Link Lab With Market

The IEEE Communications Society (ComSoc)’s Research Collaboration Pitch Session initiative is proving to be a catalyst for meaningful engagement between academic researchers and industry innovators. Launched last year, the program connects promising researchers with industry leaders who can offer them funding, mentorship, and connections to bring interesting ideas closer to real-world deployment.

Rather than relying on chance encounters at conferences, the pitch sessions create a focused environment. Five academic presenters share their work with five industry representatives, known as “innovation scouts”: senior leaders primarily chosen from ComSoc’s Corporate Program partner companies such as Ericsson, Intel, Keysight, and Nokia. The curated format ensures that each idea receives dedicated attention from professionals who are seeking new concepts aligned with their organization’s priorities.

The initiative was launched in November at the IEEE Middle East Conference on Communications and Networking (MECOM) in Cairo and appeared in December at the IEEE Global Communications Conference (GLOBECOM) in Taipei, Taiwan.

AI-driven communication network

One of the most compelling outcomes came from the inaugural session in Cairo. Angela Waithaka, a student member and biomedical engineering student at Kenyatta University, in Nairobi, Kenya, presented her “AI-Driven Predictive Communication Networks for Enhanced Performance in Resource-Constrained Environments” paper. You can view her presentation along with others on IEEE.tv.

Waithaka’s research tackles a critical challenge: Next-generation communication systems increasingly rely on artificial intelligence and machine learning, yet most existing architectures consume abundant computational and energy resources, which are not always present in developing regions.

Waithaka proposed lightweight, adaptive AI/machine learning models capable of delivering predictive, reliable communication performance even under tight resource constraints.

Her vision resonated with Ruiqi “Richie” Liu, a master researcher at ZTE in China. ZTE is a global leader in integrated information and communication technology solutions. Liu says he recognized the relevance Waithaka’s proposal had to his company’s work with the International Telecommunication Union. He invited her to establish an ITU account so she could participate in the organization’s meetings discussing global telecommunications standardization projects—which would elevate her work to an international stage.

Simplifying data center protocols

The momentum continued at GLOBECOM. Among the presenters was Nirmala Shenoy, a professor at the Rochester Institute of Technology, in New York. Shenoy, an IEEE member, spoke on the topic of simplifying data center network protocols. She highlighted the growing complexity of the critical networks, which underpin cloud services, enterprise IT, and emerging AI workloads.

Shenoy’s focus on reducing protocol complexity while maintaining scalability, resilience, and low latency caught the attention of an innovation scout from Nokia, who heads its eXtended Reality Lab in Madrid. He connected Shenoy with the right person at Nokia to discuss her research, which led her to record a video for the company detailing her approach and its potential applications.

A model for accelerating innovation

The early success stories demonstrate the power of intentional, structured engagement. By bringing researchers and industry leaders together in a format designed for discovery, ComSoc is helping accelerate innovation and expand opportunities for collaboration. The pitch sessions are not merely conference events; they are becoming a bridge between academic creativity and industry implementation.

This year, sessions will be held during the IEEE International Conference on Communications in Glasgow from 24 to 28 May, with more scheduled during the IEEE International Mediterranean Conference on Communications and Networking in Sardinia from 6 to 9 July and at GLOBECOM in Macau from 7 to 11 December.

As the program continues to grow, it could become a signature ComSoc initiative, one that strengthens the research ecosystem, supports emerging talent, and ensures that promising ideas find pathways to real-world impact.

Your next free Google account might only come with 5GB of storage

Google has quietly altered one of the most reliable promises in consumer tech. For years, signing up for a Google account meant getting 15GB of free cloud storage, shared across Gmail, Drive, and Photos. That's now changed.

New accounts now default to 5GB (the same as iCloud), with the full 15GB available only if you enter your phone number during setup. The prompt users are seeing reads: “Your account includes 5GB of storage. Now get even more storage space with your phone number.”

What exactly changed?

The policy change took effect sometime around March 18, 2026 (via 9to5Google). That’s when the company updated its support page language from definitive to conditional: the page previously read “Your Google account comes with 15GB of cloud storage at no charge.”

Now, it says “up to 15GB of cloud storage at no charge.” And Google didn’t announce the change via a tweet or a blog post, as it typically does for consumer-facing updates.

During account setup, users now see two explicit choices: link a phone number to get 15GB of storage, or keep 5GB.

Why is Google doing this?

Google apparently wants to offer the 15GB only once per person, not once per new account. Tying the free storage to a phone number is, I'd say, a smart move, as it's much harder to get a new number than to create a new Google account.
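As a rough sketch of the policy described here (illustrative only: the 5GB and 15GB tiers come from the reporting, while the function and everything else is invented):

```python
# Illustrative model of the new default tier, NOT Google's actual
# implementation: a new account gets 15GB only when a phone number
# is verified at signup, otherwise 5GB.
GB = 1024 ** 3

def free_storage_bytes(phone_verified: bool) -> int:
    """Return the free quota a hypothetical new account would receive."""
    return (15 if phone_verified else 5) * GB

print(free_storage_bytes(True) // GB)   # 15
print(free_storage_bytes(False) // GB)  # 5
```

The point of keying the bonus to a phone number is that the number acts as a scarce identifier, which a throwaway email address is not.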

So, the company is positioning the change as an anti-duplication measure rather than anything else. A Google spokesperson has also confirmed to Engadget that this is a regional test, which is why some users can still get the 15GB of free storage without verifying a phone number.

At the same time, I'd also draw your attention to the timing of this change. Only recently did Google expand the available storage for AI Pro subscribers from 1TB to 5TB, and now it's tightening the free tier. Ultimately, we should all prepare for slimmer free storage margins.

What the jury will actually decide in the case of Elon Musk vs. Sam Altman

Nine California jurors are now deliberating over the future of OpenAI, the world-leading artificial intelligence lab.

While the trial exploring Elon Musk’s case against OpenAI’s other cofounders and Microsoft has covered territory ranging from the breakup of the founders in 2018 to Altman’s firing and rehiring in 2023, the jurors will be considering a set of fairly narrow questions.

  • Breach of charitable trust — essentially, did OpenAI and cofounders Sam Altman and Greg Brockman violate a specific agreement with Musk to use his donations for a specific charitable purpose, rather than for the non-profit's general use?
  • Unjust enrichment — did the defendants use Musk’s donations to enrich themselves through OpenAI’s for-profit arm, instead of for charitable purposes?
  • Aiding and abetting breach of charitable trust — did Microsoft, through its interactions with OpenAI, know that Musk had placed specific conditions on his donations, and play a significant role in causing harm to Musk?

OpenAI has also made three arguments in its defense that the jury will weigh:

  • Statute of limitations — a legal deadline by which a lawsuit must be filed. Here, if OpenAI can prove that any harms to Musk happened before August 5, 2021 for the first count; August 5, 2022 for the second count; and November 14, 2021 for the third count, then his claims will be time-barred.
  • Unreasonable delay — Musk, by filing his lawsuit in 2024, delayed his claim in a way that made his request for damages unreasonable.
  • Unclean hands — a legal doctrine holding that Musk’s conduct related to his claims against OpenAI was unconscionable and renders them invalid.

If Musk wins out, it could mean the end of OpenAI as a for-profit company, but it’s not entirely clear what will result. Next week, the judge will begin a set of new hearings where lawyers from both sides will debate what the consequences of a verdict in favor of the plaintiffs might be. That process could be rendered moot by a negative verdict, however.

Breach of charitable trust

Musk’s attorneys say the defendants clearly understood that Musk wanted to support a non-profit that would ensure the benefits of AI to the world, and prevent it from being controlled by any one organization. In particular, they say a $10 billion investment from Microsoft in 2023 into OpenAI’s for-profit affiliate—the first to fall within the statute-of-limitations window—was the event that turned Musk’s concern into conviction.

That deal, Musk’s lawyers say, was different from previous investments and led to OpenAI’s investors being enriched by the company’s commercial products, at the expense of the charitable mission of AI safety that Musk promoted.

OpenAI’s attorneys have asked every witness to describe specific restrictions put on Musk’s donations, and none have, including his financial adviser Jared Birchall, his chief of staff Sam Teller, or his special adviser Shivon Zilis. They say everyone involved agreed that private fundraising would be required to achieve its goals, and note that Musk himself attempted to launch an OpenAI-affiliated for-profit he would personally control, and later to merge OpenAI into his company Tesla. They also note the organization’s other donors haven’t said their charitable trust was violated.

Importantly, a forensic accountant hired by OpenAI testified that all of Musk’s donations had been used by OpenAI well before the key date of August 5, 2021. That is evidence that Musk’s donations were already used for their purpose well before he brought his lawsuit, invalidating any charitable trust that may have existed.

Mainly, they insist that the for-profit affiliate that conducts most of OpenAI’s actual activity continues to fulfill the organization’s mission, and has generated nearly $200 billion in equity value to support the non-profit foundation. Notably, Sam Altman argued that providing ChatGPT for free helps fulfill the mission of sharing the benefits of AI with the world.

Unjust enrichment

The plaintiffs point to the multibillion-dollar valuations of stakes held by OpenAI founders like Brockman and Ilya Sutskever, as well as Microsoft itself, as a sign that Musk’s donations were ultimately used for personal benefit, as opposed to supporting the mission of the charity. They argue that the work at OpenAI’s for-profit was commercially focused, while the foundation itself was left essentially dormant, without full-time employees, and, ultimately, not even in control of the for-profit.

OpenAI says all of Musk’s contributions were used by the foundation by 2020, and that equity distributions came well after he left the organization in 2018. Even beforehand, evidence shows the key players agreed that being able to compensate researchers with stock was key to developing AGI, the hypothetical form of AI capable of performing any intellectual task a human can. OpenAI executives maintain that the for-profit’s work meaningfully advanced the foundation’s mission, including safety activities. They say the non-profit board continues to control the for-profit, and instituted new governance controls following “the blip,” when Altman was fired by OpenAI’s non-profit board in 2023 for lack of candor and then rehired just days later.

Aiding and abetting

Musk’s case focused on the events of the blip, when Microsoft CEO Satya Nadella, whose company depended on OpenAI’s tech, was personally involved with helping to bring Altman back and creating a new board to govern OpenAI. They note that Microsoft executives wondered if their commercial agreement might conflict with the non-profit’s goals, and suggest that Microsoft’s commercial priorities led OpenAI away from its mission. They’ve focused attention on a clause in Microsoft’s agreement with OpenAI that gave Microsoft veto rights over major corporate decisions at OpenAI.

Microsoft’s witnesses have insisted that the company’s executives didn’t know of any specific conditions on Musk’s donations despite extensive due diligence, and never vetoed any decision by OpenAI. They note that the company’s investments and compute power allowed OpenAI to achieve its biggest triumphs.

Statute of Limitations

Musk has suggested that his skepticism of his cofounders grew over time, until in the fall of 2022 he finally decided they had betrayed him when he found out about Microsoft’s plans for a new $10 billion investment that took place in 2023. He wouldn’t file his lawsuit until mid-2024.

OpenAI’s attorneys argue that the terms of that deal were spelled out in a term sheet for a previous fundraising round in 2018, which Musk received and his advisers reviewed, but Musk said he didn’t read in detail. They also note numerous blog posts and other communications from over the years that show Musk could have known what OpenAI was doing well before he brought them to court, including tweets where Musk criticized the company years before the suit. Zilis, Musk’s adviser, even voted to approve these transactions as a member of the OpenAI board.

Ultimately, the OpenAI attorneys emphasize that Musk’s formal role in the organization ended in 2018 and his last donations took place in 2020.

Unreasonable delay

OpenAI’s attorneys say the real reason that Musk filed his suit was he realized that he was wrong about OpenAI, after its launch of ChatGPT revolutionized the business of artificial intelligence. They argue that OpenAI has operated under its current structure since its first Microsoft investment in 2018, and that forcing the organization to restructure eight years later is unreasonable.

Unclean hands

There is evidence that Musk was planning his own competing AI efforts while he was still the chair of OpenAI, and hired OpenAI employees to work on AI at Tesla. OpenAI’s attorneys argue that these efforts undermined OpenAI at a time when it was using Musk’s donations to pursue its mission. They noted that Zilis, the mother of three of Musk’s children, didn’t disclose her personal relationship to other OpenAI board members for years. And they argue that Musk withheld his donations in 2017 in an effort to win control of a planned for-profit affiliate of OpenAI. Finally, “Mr. Musk abandoned OpenAI for dead in 2018,” Bill Savitt, OpenAI’s lead attorney, told the jury.

5 Useful Google Maps Features You Must Try

Google Maps is now one of the most commonly used applications for daily travel. It provides directions, displays real-time traffic information, and helps you find destinations such as restaurants, petrol pumps, and hotels. Nevertheless, most users stick to the primary functions and overlook the rest. From organizing your saved places to improving navigation accuracy, these tools can make traveling and planning far more convenient.

1. Use Emojis to Organize Your Saved Places

By default, saved places in Google Maps all look alike, so when the map fills with identical markers it's hard to find what you need quickly. Google Maps lets you personalize your list categories with emojis, so you see a recognizable picture instead of a generic icon, which makes searching much faster.

  1. Open Google Maps.
  2. Tap on the “You” tab at the bottom.
  3. Open an existing list or create a new one.
  4. Tap Edit (for existing lists).
  5. Select the Choose icon.
  6. Pick an emoji that matches your category and tap Save.

2. Avoid Stairs While Navigating

A useful feature of Google Maps is the ability to avoid stairs along a route. With the accessibility option enabled, the app automatically adjusts the route and finds an alternative that skips stairs.

  1. Open the Google Maps app.
  2. Enter your destination and tap Directions.
  3. Select Walking or Transit mode.
  4. Tap the filter/settings icon.
  5. Turn on Wheelchair accessible.
  6. Choose the recommended route.

3. Turn Screenshots Into Saved Locations

These days, people often discover new places through social media. But when you actually need those places, they’re hard to find in your gallery. To make this easier, Google Maps includes a feature that converts screenshots into saved locations. It uses AI to scan text in the image and match it with real places. This keeps all your saved spots in one place, making travel planning more organized and efficient.

  1. Open Google Maps on your Apple device.
  2. Tap on the You tab.
  3. Open the Screenshots list.
  4. Allow access to your photos when prompted.
  5. Tap Choose screenshots and select the screenshots you want to scan.
  6. Tap Add and wait for processing.
  7. Review and save the detected locations.

4. Set Reminders for When to Leave

Most people calculate their departure time manually. They consider their destination and traffic, then use other applications to remind them of the exact departure time. However, Google Maps can do all of this in a single application. It allows users to schedule trips and be reminded of their departure time.

  1. Open Google Maps.
  2. Search your destination.
  3. Tap Directions.
  4. Tap the three-dot icon.
  5. Select Set a reminder to leave.
  6. Choose Leave at or Arrive by.
  7. Enter your time and save your reminder.
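The "Arrive by" option boils down to simple time arithmetic: the reminder fires at the target arrival time minus the estimated travel time. A minimal sketch (the `buffer` parameter is my own addition for illustration, not something Maps exposes):

```python
from datetime import datetime, timedelta

def leave_at(arrive_by: datetime, travel_time: timedelta,
             buffer: timedelta = timedelta(minutes=5)) -> datetime:
    """When to trigger a departure reminder for an 'Arrive by' trip."""
    return arrive_by - travel_time - buffer

# Arrive by 9:00 with a 35-minute trip -> reminder at 8:20
meeting = datetime(2026, 3, 20, 9, 0)
print(leave_at(meeting, timedelta(minutes=35)))  # 2026-03-20 08:20:00
```

The advantage of letting Maps do this is that the travel-time estimate updates with live traffic, so the reminder shifts automatically rather than relying on a fixed guess.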

5. Fix Incorrect Location on Google Maps

One problem that can arise when using maps is an inaccurate location reading, which can make it difficult to find your way through crowded places. Google Maps can solve this with camera-based calibration: the phone scans the surrounding buildings and landmarks to determine your position more accurately.

  1. Open the Google Maps app.
  2. Tap your current location (the blue dot).
  3. Select Calibrate location.
  4. Tap Start.
  5. Use your camera to scan nearby landmarks.
  6. Wait for the accuracy confirmation.

New 3D memory architecture revives old camera technology to smash through AI memory wall – NAND + DRAM hybrid promises to make memory cheaper, faster and with ‘unlimited endurance’

  • Researchers have created a NAND-DRAM hybrid, inspired by legacy camera tech
  • Indium Gallium Zinc Oxide also promises benefits over silicon
  • For now, this is just a prototype that needs further work

Belgian semiconductor research hub imec has unveiled what it claims to be the first 3D implementation of charge-coupled device (CCD) memory architecture, reviving technology once used in digital cameras and camcorders for a totally different purpose.

With the 3D CCD architecture, the researchers aim to break one of the biggest bottlenecks in AI computing today: the memory wall, where GPUs and accelerators spend more time waiting for data than processing it because of limited memory bandwidth and power efficiency.
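A back-of-envelope way to see the memory wall (the numbers below are hypothetical round figures, not from imec's prototype): a workload's achievable utilization is capped by whichever takes longer, the compute itself or moving the data it needs.

```python
def compute_bound_fraction(flops_needed: float, bytes_moved: float,
                           peak_flops: float, bandwidth_bytes_s: float) -> float:
    """Fraction of wall-clock time the chip spends computing rather
    than waiting on memory (a simple roofline-style estimate)."""
    t_compute = flops_needed / peak_flops
    t_memory = bytes_moved / bandwidth_bytes_s
    return t_compute / max(t_compute, t_memory)

# 1 TFLOP of work that must move 1 TB, on a 100 TFLOP/s accelerator
# fed by 3 TB/s of memory bandwidth:
util = compute_bound_fraction(1e12, 1e12, 100e12, 3e12)
print(f"{util:.0%}")  # 3% -- the chip mostly waits on memory
```

In this regime, raising peak FLOPS barely helps; only more bandwidth (or less data movement) does, which is exactly the lever a denser, cheaper memory architecture targets.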

Ontario auditors find doctors’ AI note takers routinely blow basic facts

60% of evaluated AI Scribe systems mixed up prescribed drugs in patient notes, auditors say

The AI systems approved for Ontario healthcare providers routinely missed critical details, inserted incorrect information, and hallucinated content that neither patients nor clinicians mentioned, according to a provincial audit of 20 approved vendors’ systems.

The findings come from the Office of the Auditor General of Ontario, Canada, and are included in a larger report about the state of AI usage by public services in the province. They specifically address the AI Scribe program, which the Ontario Ministry of Health initiated for physicians, nurse practitioners, and other healthcare professionals across the broader health sector.

As part of the procurement process, officials conducted evaluations using simulated doctor-patient recordings. Medical professionals then reviewed the original recordings alongside the AI-generated notes to evaluate their accuracy.

What they found was, frankly, shocking for anyone concerned about the accuracy of AI in critical situations. 

Nine out of 20 AI systems reportedly “fabricated information and made suggestions to patients’ treatment plans” that weren’t discussed in the recordings. According to the report, evaluators spotted potentially devastating incorrect information in the sample reports, such as no masses being found, or patients being anxious, even though these things were never discussed in the recordings.

Twelve of the 20 systems evaluated inserted incorrect drug information into patient notes, while 17 of the systems “missed key details about the patients’ mental health issues” that were discussed in the recordings. Six of the systems “missed the patients’ mental health issues fully or partially or were missing key details,” per the report. 

OntarioMD, a group that offers support for physicians in adopting new technologies and was involved in the AI Scribe procurement process, has recommended that doctors manually review their AI notes for accuracy, but the report notes there’s no mandatory attestation feature in any of the AI Scribe-approved systems. 

Bad evaluations don’t help, either

AI systems making mistakes isn’t exactly shocking. As we’ve reported previously, consumer-focused AI has a tendency to provide bad medical information to users, and some studies have found large language models failed to produce appropriate differential diagnoses in roughly 80 percent of tested cases. But the tools evaluated here are for doctors, not consumers, and such poor performance necessitates explanation. A good portion of the report blames how the systems were evaluated.

According to the report, the weight given to various categories of AI Scribe performance was wonky. While 30 percent of a platform's evaluation score depended solely on whether the vendor had a domestic presence in Ontario, the accuracy of medical notes contributed only 4 percent to the total score.

Bias controls accounted for only 2 percent of the total evaluation score; threat, risk, and privacy assessments counted for another 2 percent; and SOC 2 Type 2 compliance contributed an additional 4 percentage points.

In other words, criteria tied to accuracy, bias controls, and key security and privacy safeguards made up only a small portion of the total evaluation score for the AI Scribe systems.
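To see why the auditors objected, here is a sketch using the weights the report cites (the unspecified remainder and the sample vendor scores below are invented for illustration):

```python
# Category weights named in the report; "other_criteria" is the
# invented remainder, since the report's full breakdown isn't given.
weights = {
    "ontario_presence": 0.30,  # domestic presence in Ontario
    "note_accuracy":    0.04,  # accuracy of generated medical notes
    "bias_controls":    0.02,
    "privacy_risk":     0.02,  # threat, risk, and privacy assessments
    "soc2_compliance":  0.04,  # SOC 2 Type 2 compliance
    "other_criteria":   0.58,  # everything else (assumed)
}

def total_score(scores: dict) -> float:
    """Weighted sum of per-category scores (0.0-1.0 each);
    unscored categories count as zero."""
    return sum(weights[k] * scores.get(k, 0.0) for k in weights)

# A hypothetical local vendor whose notes are only 20% accurate
# can still post a strong overall score:
score = total_score({"ontario_presence": 1.0,
                     "note_accuracy": 0.2,
                     "other_criteria": 0.8})
print(round(score, 3))  # 0.772
```

With accuracy worth only 4 points out of 100, even a fivefold difference in note quality moves the final ranking by just a few percent.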

“Inaccurate weightings could result in the selection of vendors whose AI tools may produce inaccurate or biased medical records or lack adequate protection to safeguard sensitive personal health information,” the report said of the scoring regime.

The Register reached out to the Ontario Health Ministry for its take on the report, and whether it was going to conform to its recommendations for the AI Scribe program, but we didn’t immediately hear back. A spokesperson for the Ministry told the CBC on Wednesday that more than 5,000 physicians in Ontario are participating in the AI Scribe program and there have been no known reports of patient harms associated with the technology. ®
