The tech giant’s cloud division, Amazon Web Services, issued an unusually pointed public rebuttal Friday afternoon to a widely cited Financial Times report asserting that Amazon’s own AI coding tools have caused at least two AWS outages in recent months.
The story was picked up by numerous media outlets and widely followed tech news aggregators as an example of the risks of deploying agentic AI tools, and of the underlying question of who — or what — is responsible when something goes wrong.
In a blog post titled “Correcting the Financial Times report about AWS, Kiro, and AI,” Amazon acknowledged a limited disruption to a single service in one region last December but attributed it to a user error in configuring access controls, not a flaw in the AI tool itself.
“The issue stemmed from a misconfigured role—the same issue that could occur with any developer tool (AI powered or not) or manual action,” Amazon said, noting that it received no customer inquiries about the disruption.
In addition, the company wrote, “The Financial Times’ claim that a second event impacted AWS is entirely false.”
This is where it gets into semantics, the key phrase being “impacted AWS.” In fact, the FT reported that Amazon itself acknowledged a second incident but said it did not affect a “customer-facing AWS service.”
In other words, if an incident doesn’t impact a service used by customers, does it count as an outage? The FT called it one. Amazon clearly thinks not. And this is ultimately the crux of the dispute.
As for the undisputed outage impacting AWS, the FT’s report cited four people familiar with the matter in describing a 13-hour interruption to an AWS system in mid-December.
The sources said engineers had allowed Amazon’s Kiro AI coding tool — an agentic assistant capable of taking autonomous actions — to make changes, and that the tool determined the best course of action was to “delete and recreate the environment.”
Multiple Amazon employees told the publication that it was the second time in recent months that AI tools had been involved in a service disruption. According to the FT report, a senior AWS employee said the outages were “small but entirely foreseeable,” adding that engineers had let the AI agent resolve issues without human intervention.
AWS is Amazon’s most profitable division. It generated $35.6 billion in revenue last quarter, up 24%, and $12.5 billion in operating income. The cloud unit is a significant focus of the company’s planned $200-billion capital spending spree this year, much of it directed toward AI infrastructure.
In addition to using agentic tools in its own operations, Amazon is selling them to AWS customers, making any narrative about AI-caused outages particularly unwelcome.
Amazon’s core defense — that the December incident was “user error, not AI error” — was already included in the FT’s original story. The blog post largely restates that position in a more prominent and pointed way.
“We did not receive any customer inquiries regarding the interruption,” Amazon wrote in its response. “We implemented numerous safeguards to prevent this from happening again—not because the event had a big impact (it didn’t), but because we insist on learning from our operational experience to improve our security and resilience.”
Amazon said the disruption was limited to AWS Cost Explorer, a tool that lets customers track their cloud spending, in one of its 39 geographic regions. Reuters and The Verge reported that the affected region was in mainland China, citing an Amazon spokesperson. It did not affect core services such as compute, storage, or databases, the company said.
The company added that it has since implemented new safeguards, including mandatory peer review for production access.
Posting on X, New York Times reporter Mike Isaac called the Amazon response “the most prickly” he’d seen from Amazon in years, comparing it to the past era when former White House press secretary Jay Carney, who led public policy for the company, spoke out strongly in its defense.
On the list of cars widely regarded as the most reliable vehicles ever built, up there with the Toyota Land Cruiser, the Honda Civic, and the Mercedes W123 diesels, is the unassuming Toyota Prius. Although it adds a bit of complexity with its hybrid drivetrain, its design eliminates a number of common wear items and tunes the car for extreme efficiency, lengthening its life and minimizing mechanical stress. The Prius has a number of other tricks up its sleeve as well, which is why parts of its hybrid systems are often used in EV conversions like [Jeremy]’s electric CJ-5 Jeep.
Inside the Prius inverter is a buck/boost converter used for stepping up the battery voltage to power the inverter and supply power to the electric motor. [Jeremy]’s battery is much higher voltage than the stock Prius battery pack, though, which means he can bypass the converter and supply energy from his battery directly to the inverter. Since the buck/boost converter isn’t being used, he can put it to work doing other things. In this case, he’s using it as a charger. Sending the AC from a standard EV charging cord through a rectifier and then to this converter allows the Prius hardware to charge the Jeep’s battery, without adding much in the way of extra expensive electronics.
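For background on why one switching circuit can serve both roles, the idealized steady-state converter relations below (standard textbook formulas, not figures from [Jeremy]’s build) show how the duty cycle D of the switching transistor sets the voltage ratio in each direction:

```latex
% Idealized conversion ratios, assuming continuous conduction
% and lossless components (textbook approximations):
\begin{aligned}
\text{boost mode:} \quad V_{\mathrm{out}} &= \frac{V_{\mathrm{in}}}{1 - D} \\
\text{buck mode:}  \quad V_{\mathrm{out}} &= D \, V_{\mathrm{in}}
\end{aligned}
```

The same inductor and switches realize either ratio depending on how the stage is driven, which is what makes the otherwise-idle Prius converter reusable as charging hardware.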
There are some other modifications to the Prius equipment in this Jeep, though, namely that [Jeremy] is using an open-source controller as the brain of this conversion. Although this video only goes into detail on some of the quirks of the Prius hardware, he has a number of other videos documenting his journey to convert this antique Jeep into a useful electric farm vehicle, which are worth checking out as well. Equipment from hybrid and electric vehicles can do plenty of other useful things beyond EV conversions, too, like being used for DIY powerwalls.
In 1997, as the term “dotcom” came to describe a wave of internet start-ups, web portal king Yahoo! acquired a free web email service called RocketMail and launched Yahoo! Mail, a service to compete with Microsoft’s recently acquired Hotmail.
Seven years later, Google would launch Gmail with a gigabyte of free storage, a bold move at the time. Not to be outdone, Yahoo! responded by upping the free storage of its service to a terabyte.
Last year, though, without much explanation and just a few months of notice, Yahoo! changed the free storage limit of its mail service to 20 GB. Yahoo! had long offered Yahoo! Mail Plus.
This premium tier included perks like ad removal and the ability to send email to your own address, lifting a rare and onerous limitation (particularly since Yahoo! Mail includes a long-forgotten notebook feature for a similar purpose, albeit one not available through its mobile app).
However, that didn’t include any extra storage since the free service already offered more than many folks might need in a lifetime.
The new 20 GB limit is still relatively generous compared to the 15 GB of a new Google account shared across Gmail, Drive, Google Photos and other services. However, it’s only two percent of the previous allotment.
Users over the 20 GB quota must delete enough email to comply or pay Yahoo! – $2/month for 100 GB or $10/month to get the terabyte back. And the Plus offering now includes 200 GB of storage in addition to the other benefits for $5/month. Fail to either slim down or pay up, and you can’t send or receive any new emails.
As someone who had been using Yahoo! Mail since its early days as a secondary address for things like list subscriptions, I had accumulated just under 100 GB of email when I was first notified of the new limit.
But the volume and size of emails had increased greatly year over year, especially after HTML emails with rich graphics became standard. And while Yahoo!’s storage rates align with, say, Google’s or Microsoft’s, those companies offer a richer range of online services across which to use that cloud storage for your subscription price.
Mail pattern boldness
I could have simply downloaded the entire mailbox to a local mail client and deleted the messages from the server. But in addition to encountering challenges using Yahoo!’s IMAP server, doing so would have sacrificed the location independence that’s one of the benefits of webmail.
Yahoo! Mail offers filters to search for and delete emails. However, they can’t, for example, search for attachments over a certain size, and sometimes the interface limits you to deleting one screen of emails at a time, which is impractical for deleting tens of thousands of emails.
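Notably, the underlying IMAP protocol can do one thing the web filters can’t: its SEARCH command accepts a LARGER criterion that matches messages over a given total size in bytes (RFC 3501). Below is a minimal Python sketch of that approach, assuming the account has IMAP access enabled with an app password; the size threshold and credentials are illustrative.

```python
# Minimal sketch: list Yahoo! Mail messages over ~5 MB via IMAP.
# Assumes IMAP access with an app password; imap.mail.yahoo.com is
# Yahoo!'s documented IMAP server, but verify it for your account.
import imaplib

with imaplib.IMAP4_SSL("imap.mail.yahoo.com") as imap:
    imap.login("you@yahoo.com", "app-password-here")
    imap.select("Inbox")
    # IMAP SEARCH with LARGER takes a message size in bytes (RFC 3501)
    status, data = imap.search(None, "LARGER", str(5 * 1024 * 1024))
    ids = data[0].split()
    print(f"{len(ids)} messages over 5 MB")
    # Deleting them would be: imap.store(<message set>, '+FLAGS', '\\Deleted')
    # followed by imap.expunge() -- destructive, so left as a comment.
```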
After all, for 20 years, Yahoo! Mail users were given a license to shovel everything into their account without a second thought. Now, it was full steam ahead in the other direction for an ill-equipped ship. Sometimes, I would find that deleted email simply wasn’t deleted, possibly just due to a delay in processing.
My escape route came in the form of a Windows app and a web app. First, to create an archive of the entire account, I used MailStore Home, the free version of an email archiving tool for Windows from OpenText, which also offers a server version for businesses and another flavor for service providers.
It can download tens of gigabytes of web email messages and save them as an Outlook .PST file or in other formats such as mbox. For Mac users, MacSonik Yahoo Mail Backup Tool ($49/year for the least expensive service tier) offers similar functionality.
Next, I used Clean Email to analyze the mailbox and filter older messages and those with large attachments. Keeping just the last three years’ worth of emails got me under the 20 GB mark, but I was ultimately able to get the mix of access and headroom I wanted at under 11 GB.
Clean Email offers a useful free tier for getting started. Upgrading within the website presents only its annual subscription plans, which start at about $30 per year. However, it offers monthly pricing starting at $10 per month on its plans page. You can also sign up for plans that cover multiple email accounts and access the service via mobile app.
A clean break
Clean Email costs a bit more per year than just paying Yahoo! for the expanded storage, but the storage fee would likely repeat indefinitely whereas I will likely cancel Clean Email once I’ve sorted things out.
That said, the service includes worthwhile and well-implemented filtering, filing, forwarding, and automatic unsubscribing tools that tempt a longer subscription.
It’s worth noting that Yahoo! puts up a small barrier to using these tools by requiring you to register a one-time password for them to access your account. The process is counterintuitive and requires a trip to the account settings.
Other restrictions may be coming. The Clean Email website notes that Yahoo! has implemented new restrictions limiting access to 100,000 emails at a time for Yahoo! or AOL email (AOL Mail being now just another brand of Yahoo! Mail, and one you can use to send emails to “yourself” for free).
The world – particularly Yahoo!’s world – has changed a lot since Yahoo! Mail helped extend email access to millions of users, and Yahoo! is free to bring its mail storage pricing in line with competitors.
But doing so without reasonably convenient ways to comply with the quota makes the decision to upgrade at the threat of a cut-off feel less like an upsell and more like a shakedown.
The University of Mississippi Medical Center (UMMC) closed all its clinic locations statewide on Thursday following a ransomware attack.
UMMC has over 10,000 employees and, as one of the largest employers in Mississippi, operates seven hospitals, 35 clinics, and more than 200 telehealth sites statewide. The medical center includes the state’s only children’s hospital, only Level I trauma center, only organ and bone marrow transplant program, and the only Telehealth Center of Excellence, one of two across the United States.
As revealed on Thursday afternoon, the cyberattack took down many of the medical center’s IT systems and blocked access to its Epic electronic medical records system. While UMMC cancelled outpatient and ambulatory surgeries, procedures, and imaging appointments, officials said hospital services continue via downtime procedures.
UMMC is now investigating the incident with assistance from the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the FBI.
“We have activated our Emergency Operations Plan and are working with authorities including the FBI and Homeland Security, who are helping us to evaluate this situation and determine next steps,” the UMMC said.
When this article was published, UMMC’s website was still down, and officials said that the hospital had shut down all IT systems while they assessed the attack’s impact.
“We are still evaluating the extent of systems impacted. As a precaution, we have shut down all our network systems and will conduct risk assessments before bringing anything back online. In-person class schedules remain normal,” they said.
Officials confirm ransomware attack
Hospital officials also revealed during a press conference on Thursday afternoon that they are communicating with the ransomware operation behind the attack and working with authorities on next steps, according to The Daily Mississippian.
“The attackers have communicated to us and we are working with the authorities and specialists on next steps. We do not know how long this situation may last,” said LouAnn Woodward, the dean of the school of medicine at UMMC.
“Patients in our hospital and our emergency department are being cared for. Clinical equipment and operations remain functional. We are using our downtime procedures. For our students, in-person classes will continue as scheduled.”
“All of our equipment works. All of our patients are being taken care of safely. There will be no patient impact as a result of this downtime,” Dr. Alan Jones, associate vice chancellor for health affairs at UMMC, told reporters.
No ransomware group has claimed responsibility for this attack, as they’re likely still negotiating with the UMMC and want to pressure it into paying an extortion demand.
However, with ransomware involved, data may also have been stolen and could be used as additional leverage to pressure the hospital into paying.
IEEE has enhanced its standing as a trusted, neutral authority on the role of technology in climate change mitigation and adaptation. Last year it became the first technical association to be invited to a U.N. Conference of the Parties on Climate Change.
IEEE representatives participated in several sessions at COP30, held from 11 to 20 November in Belém, Brazil. More than 56,000 delegates attended, including policymakers, technologists, and representatives from industry, finance, and development agencies.
Following the conference, IEEE helped host the selective International Symposium on Achieving a Sustainable Climate. The International Telecommunication Union and IEEE hosted ISASC on 16 and 17 December at ITU’s headquarters in Geneva. Among the more than 100 people who attended were U.N. agency representatives, diplomats, senior leaders from academia, and experts from government, industry, nongovernment organizations, and standards development bodies.
“Over successive COPs, IEEE’s role has evolved from contributing individual technical sessions to being recognized as a trusted partner in climate action,” Saifur Rahman, 2023 IEEE president, noted in a summary of COP30. “There is [a] growing demand for engineering insight, not just to discuss technologies but [also] to help design pathways for deployment, capacity-building, and long-term resilience.”
The three IEEE representatives, Rahman, Canizares, and Tôrres, also joined several panels organized by the IYNC that addressed climate resilience, career pathways in sustainability, and a mentoring program.
The IYNC hosted the Voices of Transition: Including Pathways to a Clean Energy Future session, for which Tôrres and Rahman were panelists. They discussed the need to include underrepresented and marginalized groups, which often get overlooked in projects that convert communities to renewable energy.
Rahman, Canizares, and Tôrres visited the COP Village, where they met several of the 5,000 Indigenous leaders participating in the conference and discussed potential partnerships and collaborations. Climate change has made the land where the Indigenous people live more susceptible to severe droughts and wildfires, particularly in the Amazon region.
Tôrres, who says representing IEEE at COP30 was transformative, wrote a detailed report about the event.
“The experience reaffirmed my belief that engineering and technology, when combined with respect for cultural diversity, can play a critical role in shaping a more sustainable and equitable world,” he wrote. “It highlighted the importance of combining cutting-edge technological solutions with Indigenous wisdom and cultural knowledge to address the climate crisis.”
Rahman and Canizares give an overview of their COP30 experiences in an IEEE webinar.
“IEEE has a place at the table,” Rahman says in the video. “We want to showcase outside our comfort zone what IEEE can do. We go to all these global events so that our name becomes a familiar term. We are the first technical association organization ever to go to COP and talk about engineering.”
Canizares added that IEEE is now collaborating closely with the United Nations.
“This is an important interaction. And I think, moving forward, IEEE will become more relevant, particularly in the context of technology deployment,” he said. “As governments start technology deployments, they will see IEEE as a provider of solutions.”
Sessions were organized around six themes: energy transition, information and communication technology, financing, case studies, technical standards, and public-private collaborations. A detailed report includes the discussions, insights, and opportunities identified throughout ISASC.
Here are some key takeaways.
Although the technology exists to transition to renewable energy, most power grid systems are not ready. Deployment is increasingly constrained by transmission bottlenecks, interconnection delays, permitting challenges, and system flexibility. There’s also a skills shortage.
Energy transition pathways must be region-specific and should consider local resources, social conditions, funding opportunities, and development priorities.
Information and communication technologies are central to climate mitigation solutions, despite growing concerns about their environmental impact. Even though the technologies are used in beneficial ways, such as early-warning systems for natural disasters and smart water management, they also are driving the rapid growth of data centers for artificial intelligence applications—which has increased energy prices and driven up water demand.
Technical standards are a means of accelerating adoption, interoperability, and trust in green technology. There needs to be greater coordination among standards development organizations, particularly at the convergence of energy systems, information technologies, and AI. Fragmented standards hinder interoperability. The lack of technical standards is a major constraint on project financing, limiting investors’ confidence and slowing technology deployment.
Training and outreach efforts are important for successfully implementing standards, especially in developing regions. IEEE’s global membership and regional sections can be critical channels to address the needs.
A technology assessment tool
As part of ISASC, IEEE presented a technology assessment tool prototype. The web-based platform is designed to help policymakers, practitioners, and investors compare technology options against climate goals.
The tool can run a comparative analysis of sustainable climate technologies and integrate publicly available, expert-validated data.
IEEE can help the world meet its goals
The ISASC report concluded that by connecting engineering expertise with real-world deployment challenges, IEEE is working to translate global climate goals into measurable actions.
The discussions highlighted that the path forward lies less in inventing new technologies and more in aligning systems to deliver ones that already exist.
Podcast listeners seem to prefer YouTube and Spotify over Apple. Apple wants to change that.
Apple is finally introducing videos to podcasts as it takes aim at the market’s reigning leaders YouTube and Spotify.
But instead of using RSS for the videos, Apple said it is bringing in HTTP Live Streaming (HLS) technology, which lets consumers’ devices adapt to changing network conditions by automatically raising or lowering the quality of the stream.
Plus, users will be able to toggle between watching podcasts with videos turned on or off.
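For a sense of how HLS handles that adaptation, here is a minimal, generic master playlist of the kind an HLS player consumes; it is purely illustrative, not an actual Apple Podcasts feed. Each EXT-X-STREAM-INF entry advertises a variant stream at a given bitrate, and the player switches among them as network conditions change.

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8
```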
The company is also introducing dynamic advertising, which will allow podcasters to swap ads in or out. YouTube and Spotify have both said they would launch the tool, but have yet to do so.
Apple does not charge hosting providers or creators to distribute podcasts on Apple Podcasts – though it will begin charging participating ad networks an impression-based fee to deliver dynamic ads in HLS video podcasts later this year, it said.
“20 years ago, Apple helped take podcasting mainstream by adding podcasts to iTunes, and more than a decade ago, we introduced the dedicated Apple Podcasts app,” said Eddy Cue, Apple’s senior vice-president of services. The new step is a “defining milestone”, he added.
The consumer tech giant was one of the first to popularise the concept of podcasts all the way back in 2005. But in recent years, its leadership in the area has dwindled.
Recent data showed that 39pc of podcast listeners said they prefer YouTube as their go-to platform, 21pc chose Spotify, and only 8pc chose Apple. Last year, YouTube said that it had reached 1bn monthly podcast listeners.
There are a few reasons Apple Podcasts lost steam. For one, for most of its existence, the service didn’t generate revenue from podcasts. Apple only launched podcast subscriptions in 2021, and at that point hadn’t even considered introducing ads.
Meanwhile, not everyone uses Apple products, with Google’s Android remaining the most popular mobile operating system worldwide. That meant that, for a long time, only Apple users could tune into podcasts on Apple’s platforms.
However, that changed in 2024 with the introduction of an Apple Podcasts web app, available regardless of whether users are on a Mac or not.
Updated, 11.43am, 20 February 2026: This article was amended to include correct figures in relation to podcast listener preferences.
Despite growing chatter about a future in which much human work is automated by AI, one of the ironies of the current tech boom is how stubbornly reliant on human beings it remains, specifically in the process of training AI models using reinforcement learning from human feedback (RLHF).
At its simplest, RLHF is a tutoring system: after an AI is trained on curated data, it still makes mistakes or sounds robotic. Human contractors are then hired en masse by AI labs to rate and rank a new model’s outputs while it trains, and the model learns from their ratings, adjusting its behavior to offer higher-rated outputs. This process is all the more important as AI expands to produce multimedia outputs like video, audio, and imagery, which may have more nuanced and subjective measures of quality.
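To make that loop concrete, here is a toy Python sketch of the idea, not any lab’s actual training code: the “model” is reduced to one preference weight per canned response, a human rating stands in for the reward, and a REINFORCE-style update shifts probability toward higher-rated outputs.

```python
# Toy illustration of the RLHF loop described above. Real RLHF tunes
# billions of neural-network weights (e.g. with PPO); this sketch
# replaces the model with a lookup table of logits to show the
# feedback signal itself.
import math
import random

responses = ["helpful answer", "robotic answer", "wrong answer"]
human_ratings = {"helpful answer": 1.0, "robotic answer": 0.3, "wrong answer": 0.0}

logits = [0.0, 0.0, 0.0]  # the model's current preferences
LR = 0.5                  # learning rate
BASELINE = 0.5            # rough average rating, centers the signal

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

for step in range(500):
    probs = softmax(logits)
    i = random.choices(range(len(responses)), weights=probs)[0]
    reward = human_ratings[responses[i]]  # the human "tutor" rates the sample
    # gradient of log prob(i) w.r.t. logit j is (1 if j == i else 0) - probs[j]
    for j in range(len(logits)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += LR * (reward - BASELINE) * grad

best = max(zip(logits, responses))
print("model now prefers:", best[1])
```

After a few hundred rated samples, the highest-rated response dominates the model’s choices, which is RLHF in miniature.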
Historically, this tutoring process has been a massive logistical headache and PR nightmare for AI companies, relying on fragmented networks of foreign contractors and static labeling pools in specific, low-income geographic hubs, cast by the media as low-wage, even exploitative, work. It’s also inefficient, requiring AI labs to wait weeks or months for a single batch of feedback and delaying model progress.
Now a new startup has emerged to make the process far more efficient: Rapidata’s platform effectively “gamifies” RLHF by pushing review tasks out to nearly 20 million users of popular apps such as Duolingo and Candy Crush, in the form of short, opt-in tasks they can choose to complete in place of watching mobile ads, with the resulting data sent back to the commissioning AI lab instantly.
As shared with VentureBeat in a press release, this platform allows AI labs to “iterate on models in near-real-time,” significantly shortening development timelines compared to traditional methods.
CEO and founder Jason Corkill stated in the same release that Rapidata makes “human judgment available at a global scale and near real time, unlocking a future where AI teams can run constant feedback loops and build systems that evolve every day instead of every release cycle.”
Rapidata treats RLHF as high-speed infrastructure rather than a manual labor problem. Today, the company exclusively announced to us at VentureBeat its emergence with an $8.5 million seed round co-led by Canaan Partners and IA Ventures, with participation from Acequia Capital and BlueYard, to scale its unique approach to on-demand human data.
The pub conversation that built a human cloud
Rapidata was born not in a boardroom, but at a table over a few beers. Corkill was a student at ETH Zurich, working in robotics and computer vision, when he hit the wall that every AI engineer eventually faces: the data annotation bottleneck.
“Specifically, I’ve been working in robotics, AI and computer vision for quite a few years now, studied at ETH here in Zurich, and just always was frustrated with data annotation,” Corkill recalled in a recent interview. “Always when you needed humans or human data annotation, that’s kind of when your project was stopped in its tracks, because up until then, you could move it forward by just pushing longer nights. But when you needed the large scale human annotation, you had to go to someone and then wait for a few weeks”.
Frustrated by this delay, Corkill and his co-founders realized that the existing labor model for AI was fundamentally broken for a world moving at the speed of modern compute. While compute scales exponentially, the traditional human workforce—bound by manual onboarding, regional hiring, and slow payment cycles—does not. Rapidata was born from the idea that human judgment could be delivered as a globally distributed, near-instantaneous service.
Technology: Turning digital footprints into training data
The core innovation of Rapidata lies in its distribution method. Rather than hiring full-time annotators in specific regions, Rapidata leverages the existing attention economy of the mobile app world. By partnering with third-party apps like Candy Crush or Duolingo, Rapidata offers users a choice: watch a traditional ad or spend a few seconds providing feedback for an AI model.
“The users are asked, ‘Hey, would you rather instead of watching ads and having, you know, companies buy your eyeballs like that, would you rather like annotate some data, give feedback?’” Corkill explained. According to Corkill, between 50% and 60% of users opt for the feedback task over a traditional video advertisement.
This “crowd intelligence” approach allows AI teams to tap into a diverse, global demographic at an unprecedented scale.
The global network: Rapidata currently reaches between 15 and 20 million people.
Massive parallelism: The platform can process 1.5 million human annotations in a single hour.
Speed: Feedback cycles that previously took weeks or months are reduced to hours or even minutes.
Quality control: The platform builds trust and expertise profiles for respondents over time, ensuring that complex questions are matched with the most relevant human judges.
Anonymity: While users are tracked via anonymized IDs to ensure consistency and reliability, Rapidata does not collect personal identities, maintaining privacy while optimizing for data quality.
Online RLHF: Moving into the GPU
The most significant technological leap Rapidata is enabling is what Corkill describes as “online RLHF”. Traditionally, AI is trained in disconnected batches: you train the model, stop, send data to humans, wait weeks for labels, and then resume. This creates a “circle” of information that often lacks fresh human input.
Rapidata is moving this judgment directly into the training loop. Because their network is so fast, they can integrate via API directly with the GPUs running the model.
“We’ve always had this idea of reinforcement learning for human feedback… so far, you always had to do it like in batches,” Corkill said. “Now, if you go all the way down, we have a few clients now where, because we’re so fast, we can be directly, basically in the process, like in in the processor on the GPU right, and the GPU calculate some output, and it can immediately request from us in a distributed fashion. ‘Oh, I need, I need, I need a human to look at this.’ I get the answer and then apply that loss, which has not been possible so far”.
Currently, the platform supports roughly 5,500 humans per minute providing live feedback to models running on thousands of GPUs. This prevents “reward model hacking,” where two AI models trick each other in a feedback loop, by grounding the training in actual human nuance.
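A rough sketch of what that in-the-loop arrangement might look like from the training side follows. Rapidata’s actual interface is not documented in this article, so request_human_preference below is a hypothetical stand-in, stubbed out to simulate the round trip.

```python
# Hedged sketch of the "online RLHF" loop Corkill describes: human
# judgment requested mid-training rather than in weeks-long batches.
# request_human_preference() is a HYPOTHETICAL stand-in -- the article
# does not document Rapidata's API -- so a stub simulates it here.
import random
import time

def request_human_preference(output_a: str, output_b: str) -> str:
    """HYPOTHETICAL stub: send two candidate outputs to the crowd and
    block until a preference returns (simulated with a coin flip)."""
    time.sleep(0.01)  # stand-in for a network round trip
    return random.choice([output_a, output_b])

def training_step(state: dict) -> dict:
    """Generate two candidates, ask humans which is better, and fold
    the preference into the next update instead of pausing training."""
    a, b = "candidate A", "candidate B"
    preferred = request_human_preference(a, b)
    state["wins"][preferred] = state["wins"].get(preferred, 0) + 1
    return state

state = {"wins": {}}
for _ in range(100):
    state = training_step(state)
print(state["wins"])  # tallies a real loop would turn into a loss signal
```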
Product: Solving for taste and global context
As AI moves beyond simple object recognition into generative media, the requirements for data labeling have evolved from objective tagging to subjective “taste-based” curation. It is no longer just about “is this a cat?” but rather “is this voice synthesis convincing?” or “which of these two summaries feels more professional?”.
Lily Clifford, CEO of the voice AI startup Rime, notes that Rapidata has been transformative for testing models in real-world contexts. “Previously, gathering meaningful feedback meant cobbling together vendors and surveys, segment by segment, or country by country, which didn’t scale,” Clifford said. Using Rapidata, Rime can reach the right audiences—whether in Sweden, Serbia, or the United States—and see how models perform in real customer workflows in days, not months.
“Most models are factually correct, but I’m sure you’re you have received emails that feel, you know, not authentic, right?” Corkill noted. “You can smell an AI email, you can smell an AI image or a video, it’s immediately clear to you… these models still don’t feel human, and you need human feedback to do that”.
The economic and operational shift
From an operational standpoint, Rapidata positions itself as an infrastructure layer that eliminates the need for companies to manage their own custom annotation operations. By providing a scalable network, the company is lowering the barrier to entry for AI teams that previously struggled with the cost and complexity of traditional feedback loops.
Jared Newman of Canaan Partners, who led the investment, suggests that this infrastructure is essential for the next generation of AI. “Every serious AI deployment depends on human judgment somewhere in the lifecycle,” Newman said. “As models move from expertise-based tasks to taste-based curation, the demand for scalable human feedback will grow dramatically”.
A future of human use
While the current focus is on the model labs of the Bay Area, Corkill sees a future where the AI models themselves become the primary customers of human judgment. He calls this “human use”.
In this vision, a car designer AI wouldn’t just generate a generic vehicle; it could programmatically call Rapidata to ask 25,000 people in the French market what they think of a specific aesthetic, iterate on that feedback, and refine its design within hours.
“Society is in constant flux,” Corkill noted, addressing the trend of using AI to simulate human behavior. “If they simulate a society now, the simulation will be stable for and maybe mirror ours for a few months, but then it completely changes, because society has changed and has developed completely differently”.
By creating a distributed, programmatic way to access human brain capacity worldwide, Rapidata is positioning itself as the vital interconnect between silicon and society. With $8.5 million in new funding, the company plans to move aggressively to ensure that as AI scales, the human element is no longer a bottleneck, but a real-time feature.
The colors that the upcoming budget MacBook will be sold in were reportedly first considered for the 2022 MacBook Air before Apple chose Silver, Starlight, and Space Gray.
Rumors currently have the low-cost MacBook shipping in blue, green, and yellow. Now, in a post on the Chinese social network Weibo, leaker Instant Digital says this isn’t the first time these colors have been in the works. The post didn’t elaborate on why Apple decided against using the more colorful hues for the M2 MacBook Air, but the leaker added that Apple’s new color range “looks fresh.”
According to Vijaya Kaza, Google’s VP of app and ecosystem trust, the company rejected more than 1.75 million potentially harmful apps during the review process and blocked over 80,000 developer accounts for various policy violations. Both figures are significantly lower than in 2024, when 2.36 million apps were rejected and 158,000 developer accounts were blocked.
The company’s customised AI chips aim to achieve cheaper and faster results than traditional AI hardware.
Toronto-based start-up Taalas has raised $169m for its specialised AI hardware models.
Total investment in the company stands at $219m, with funding from Quiet Capital and Fidelity, among others, according to Reuters.
In a blogpost from CEO Ljubisa Bajic announcing the release of its first models, the company said it wants to mitigate the “high latency and astronomical cost” of AI, and that its specialised method is faster and cheaper than traditional AI chip approaches.
The company said it took a team of 24 and a spend of $30m since its founding less than three years ago to bring to market its first product, a hard-wired Llama 3.1 8B, which is available as both a chatbot demo and an inference API service.
The company’s aim is to mitigate the need for vast and expensive data centres through the principles of specialisation, merging storage with computation, and simplification.
Taalas said its “platform for transforming any AI model into custom silicon” means that “from the moment a previously unseen model is received, it can be realised in hardware in only two months”.
It claimed its hardware output is “an order of magnitude faster, cheaper and lower power than software-based implementations”, achieved through physically customising chips depending on the bespoke needs of the AI model in question.
Taalas claimed its silicon Llama chip, for example, is nearly 10 times faster than the current state of the art, costs 20 times less to build and consumes 10 times less power.
Taalas aims to release two further models in 2026.
AI chipmaking giant Nvidia this week announced a huge deal with Meta to provide millions of chips for Meta’s AI infrastructure in exchange for billions of dollars.
The Nothing Phone (4a) series is shaping up to be more expensive than its predecessor, according to fresh leaks detailing pricing, specs and release dates ahead of the company’s March 5 launch event.
A new report from Dealabs suggests the Nothing Phone (4a) will start at around €400. This marks roughly a €50 increase over the Phone (3a), with pricing said to vary slightly by region. For instance, Germany and Spain will reportedly see a €389 starting price. Meanwhile, France, Belgium and Italy could see it land at €409. A 12GB RAM variant is expected to cost between €429 and €449.
The Nothing Phone (4a) Pro could see an even steeper jump. Dealabs claims pricing will begin at €479 in Germany and Spain. This rises to €499 in France, Belgium and Italy — around €90 more than the previous Pro model. A higher-tier 12GB version could reach as much as €569 depending on the market.
As for availability, the base Phone (4a) is tipped to release on March 12. The Pro model could potentially follow on March 26.
Specs are where things get slightly less clear.
In a separate leak, Android Headlines claims the base model will include an 8MP ultrawide alongside a 50MP main sensor and a 3.5x telephoto lens. It also reports that the Phone (4a) will run on the Snapdragon 7s Gen 4, paired with a 5,400mAh battery and 50W charging, and will include IP65 dust and water resistance.
Meanwhile, the Pro model is tipped to include a 50MP Sony main sensor, improved optical zoom, an aluminium chassis, a larger 6.83-inch 144Hz display, and a new “Glyph Matrix” lighting system on the rear. The standard model is expected to retain the familiar Glyph Bar and a 6.78-inch display.
Colour options are said to include black, white, pink and blue for the base model. The Pro may arrive in black, silver and pink.
Nothing founder Carl Pei previously hinted that price increases were on the way. These latest leaks appear to confirm that shift. We’ll have full details once Nothing makes it official next month.