True or false? Your green laser pointer is more powerful than your red one. The answer is almost certainly false. They are, most likely, the same power, but your eye is far more sensitive to green, so it seems stronger. [Brandon Li] was thinking about how to best represent colors on computer screens and fell down the rabbit hole of what colors look like when arranged in a spectrum. Spoiler alert: almost all the images you see of the spectrum are incorrect in some way. The problem isn’t our understanding of the physics, but our understanding of how humans perceive color.
Perception may start with physics, but it also extends to the biology of your eye and the psychology of your brain. What follows is a lot of math that finally winds up with the CIE 1931 color space diagram and the CIE 2012 system.
Some people obsess about fonts, and some about colors. If you are in the latter camp, this is probably old hat for you. However, if you want a glimpse into just how complicated it is to accurately represent colors, this is a fascinating read. You can learn about the Bezold-Brücke shift, the Helmholtz-Kohlrausch effect, and the Abney effect. Maybe that’ll help you win a bar bet one day.
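To make the gamut problem concrete, here’s a minimal Python sketch of the pipeline the post wrestles with: wavelength to CIE 1931 XYZ (using the piecewise-Gaussian analytic fit from Wyman, Sloan, and Shirley, 2013, so the constants are approximate) and then the standard XYZ-to-linear-sRGB matrix. Pure spectral colors frequently come out with negative sRGB components, which means your monitor literally can’t display them; every spectrum image has to clip or compress somehow.

```python
import numpy as np

# Piecewise-Gaussian fit to the CIE 1931 color matching functions
# (after Wyman, Sloan & Shirley, 2013). Constants are approximate.
def _g(x, mu, s1, s2):
    s = np.where(x < mu, s1, s2)  # different widths left/right of the peak
    return np.exp(-0.5 * ((x - mu) / s) ** 2)

def wavelength_to_xyz(nm):
    x = (1.056 * _g(nm, 599.8, 37.9, 31.0) + 0.362 * _g(nm, 442.0, 16.0, 26.7)
         - 0.065 * _g(nm, 501.1, 20.4, 26.2))
    y = 0.821 * _g(nm, 568.8, 46.9, 40.5) + 0.286 * _g(nm, 530.9, 16.3, 31.1)
    z = 1.217 * _g(nm, 437.0, 11.8, 36.0) + 0.681 * _g(nm, 459.0, 26.0, 13.8)
    return np.array([x, y, z])

# Standard XYZ -> linear sRGB conversion matrix (D65 white point).
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

for nm in (450, 530, 630):
    rgb = XYZ_TO_SRGB @ wavelength_to_xyz(nm)
    # A negative component means the pure spectral color lies outside the
    # sRGB gamut: the display cannot show it, so spectrum images must clip.
    status = "out of gamut" if (rgb < 0).any() else "in gamut"
    print(f"{nm} nm -> {np.round(rgb, 3)} ({status})")
```

Run it and spectral green around 530 nm comes out with a strongly negative red channel, which is exactly why no honest sRGB rendering of the spectrum can show that color as it really looks.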
The post winds up in the strangest place: spectroscopy. So if you want to see how color representation applies to analyzing blue sky, neon tubes, and a MacBook display, you’ll want to skip to the end.
Despite growing chatter about a future when much human work is automated by AI, one of the ironies of this current tech boom is how stubbornly reliant on human beings it remains, specifically the process of training AI models using reinforcement learning from human feedback (RLHF).
At its simplest, RLHF is a tutoring system: after an AI is trained on curated data, it still makes mistakes or sounds robotic. Human contractors are then hired en masse by AI labs to rate and rank a new model’s outputs while it trains, and the model learns from their ratings, adjusting its behavior to offer higher-rated outputs. This process is all the more important as AI expands to produce multimedia outputs like video, audio, and imagery which may have more nuanced and subjective measures of quality.
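What “learns from their ratings” usually means in practice is fitting a reward model to pairwise human preferences and then steering the policy with its scores. The snippet below is a minimal, generic sketch of that preference-fitting step using a Bradley-Terry loss; it is not Rapidata’s or any particular lab’s code, and the embedding size and toy tensors are placeholders.

```python
import torch
import torch.nn.functional as F

# A reward model that scores an output embedding; 768 is an arbitrary
# placeholder dimension, not any specific model's.
reward_model = torch.nn.Linear(768, 1)
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

def preference_step(chosen_emb, rejected_emb):
    """One update: push the human-preferred output's score above the other's."""
    r_chosen = reward_model(chosen_emb)
    r_rejected = reward_model(rejected_emb)
    # Bradley-Terry: P(chosen beats rejected) = sigmoid(r_chosen - r_rejected);
    # maximize its log-likelihood over the batch of human judgments.
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Each human rating of "output A beats output B" becomes one training pair.
chosen, rejected = torch.randn(8, 768), torch.randn(8, 768)
print(preference_step(chosen, rejected))
```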
Historically, this tutoring process has been a massive logistical headache and PR nightmare for AI companies, relying on fragmented networks of foreign contractors and static labeling pools in specific, low-income geographic hubs, cast by the media as low-wage, even exploitative. It’s also inefficient: AI labs must wait weeks or months for a single batch of feedback, delaying model progress.
Now a new startup has emerged to make the process far more efficient: Rapidata’s platform effectively “gamifies” RLHF by pushing review tasks around the globe to nearly 20 million users of popular apps such as Duolingo and Candy Crush, in the form of short, opt-in tasks users can choose to complete in place of watching mobile ads, with the data sent back to the commissioning AI lab instantly.
As shared with VentureBeat in a press release, this platform allows AI labs to “iterate on models in near-real-time,” significantly shortening development timelines compared to traditional methods.
CEO and founder Jason Corkill stated in the same release that Rapidata makes “human judgment available at a global scale and near real time, unlocking a future where AI teams can run constant feedback loops and build systems that evolve every day instead of every release cycle.”
Rapidata founder and CEO Jason Corkill. Credit: Rapidata
Rapidata treats RLHF as high-speed infrastructure rather than a manual labor problem. Today, the company exclusively announced its emergence to us at VentureBeat: an $8.5 million seed round co-led by Canaan Partners and IA Ventures, with participation from Acequia Capital and BlueYard, to scale its unique approach to on-demand human data.
The pub conversation that built a human cloud
Rapidata was born not in a boardroom, but at a pub table over a few beers. Corkill was a student at ETH Zurich, working in robotics and computer vision, when he hit the wall that every AI engineer eventually faces: the data annotation bottleneck.
“Specifically, I’ve been working in robotics, AI and computer vision for quite a few years now, studied at ETH here in Zurich, and just always was frustrated with data annotation,” Corkill recalled in a recent interview. “Always when you needed humans or human data annotation, that’s kind of when your project was stopped in its tracks, because up until then, you could move it forward by just pushing longer nights. But when you needed the large scale human annotation, you had to go to someone and then wait for a few weeks”.
Frustrated by this delay, Corkill and his co-founders realized that the existing labor model for AI was fundamentally broken for a world moving at the speed of modern compute. While compute scales exponentially, the traditional human workforce—bound by manual onboarding, regional hiring, and slow payment cycles—does not. Rapidata was born from the idea that human judgment could be delivered as a globally distributed, near-instantaneous service.
Technology: Turning digital footprints into training data
The core innovation of Rapidata lies in its distribution method. Rather than hiring full-time annotators in specific regions, Rapidata leverages the existing attention economy of the mobile app world. By partnering with third-party apps like Candy Crush or Duolingo, Rapidata offers users a choice: watch a traditional ad or spend a few seconds providing feedback for an AI model.
“The users are asked, ‘Hey, would you rather instead of watching ads and having, you know, companies buy your eyeballs like that, would you rather like annotate some data, give feedback?’” Corkill explained. According to Corkill, between 50% and 60% of users opt for the feedback task over a traditional video advertisement.
This “crowd intelligence” approach allows AI teams to tap into a diverse, global demographic at an unprecedented scale.
The global network: Rapidata currently reaches between 15 and 20 million people.
Massive parallelism: The platform can process 1.5 million human annotations in a single hour.
Speed: Feedback cycles that previously took weeks or months are reduced to hours or even minutes.
Quality control: The platform builds trust and expertise profiles for respondents over time, ensuring that complex questions are matched with the most relevant human judges.
Anonymity: While users are tracked via anonymized IDs to ensure consistency and reliability, Rapidata does not collect personal identities, maintaining privacy while optimizing for data quality.
Online RLHF: Moving into the GPU
The most significant technological leap Rapidata is enabling is what Corkill describes as “online RLHF”. Traditionally, AI is trained in disconnected batches: you train the model, stop, send data to humans, wait weeks for labels, and then resume. This creates a “circle” of information that often lacks fresh human input.
Rapidata is moving this judgment directly into the training loop. Because their network is so fast, they can integrate via API directly with the GPUs running the model.
“We’ve always had this idea of reinforcement learning from human feedback… so far, you always had to do it like in batches,” Corkill said. “Now, if you go all the way down, we have a few clients now where, because we’re so fast, we can be directly, basically in the process, like in the processor on the GPU right, and the GPU calculates some output, and it can immediately request from us in a distributed fashion: ‘Oh, I need a human to look at this.’ I get the answer and then apply that loss, which has not been possible so far”.
Currently, the platform supports roughly 5,500 humans per minute providing live feedback to models running on thousands of GPUs. This prevents “reward model hacking,” where two AI models trick each other in a feedback loop, by grounding the training in actual human nuance.
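In outline, an in-the-loop integration like the one Corkill describes might look like the sketch below: instead of pausing training for a batch of labels, each step samples outputs, requests live human scores, and applies them as rewards before the next step. The endpoint, payload shape, and the `generate`/`reinforce` hooks are all illustrative assumptions, not Rapidata’s documented API.

```python
import requests  # generic HTTP client; the real Rapidata API surface may differ

RATINGS_URL = "https://api.example.com/v1/ratings"  # hypothetical placeholder endpoint

def request_human_ratings(outputs: list[str], timeout_s: int = 60) -> list[float]:
    """Fan a batch of model outputs out to the crowd and block until
    ratings arrive. The payload shape is an illustrative assumption."""
    resp = requests.post(RATINGS_URL, json={"items": outputs}, timeout=timeout_s)
    resp.raise_for_status()
    return resp.json()["scores"]  # e.g. one aggregated rating per output

def online_rlhf_step(generate, reinforce, prompts):
    """One in-loop step. `generate` and `reinforce` stand in for whatever
    your training framework provides (sampling and, e.g., a policy-gradient
    update); they are placeholders, not a specific library's API."""
    outputs = generate(prompts)
    rewards = request_human_ratings(outputs)  # humans in the loop, minutes not weeks
    reinforce(outputs, rewards)
    return rewards
```

The design constraint that makes this viable is latency: the human round-trip has to resolve within a training step’s budget, which is exactly what a pool of thousands of concurrent raters buys you.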
Product: Solving for taste and global context
As AI moves beyond simple object recognition into generative media, the requirements for data labeling have evolved from objective tagging to subjective “taste-based” curation. It is no longer just about “is this a cat?” but rather “is this voice synthesis convincing?” or “which of these two summaries feels more professional?”
Lily Clifford, CEO of the voice AI startup Rime, notes that Rapidata has been transformative for testing models in real-world contexts. “Previously, gathering meaningful feedback meant cobbling together vendors and surveys, segment by segment, or country by country, which didn’t scale,” Clifford said. Using Rapidata, Rime can reach the right audiences—whether in Sweden, Serbia, or the United States—and see how models perform in real customer workflows in days, not months.
“Most models are factually correct, but I’m sure you have received emails that feel, you know, not authentic, right?” Corkill noted. “You can smell an AI email, you can smell an AI image or a video, it’s immediately clear to you… these models still don’t feel human, and you need human feedback to do that”.
The economic and operational shift
From an operational standpoint, Rapidata positions itself as an infrastructure layer that eliminates the need for companies to manage their own custom annotation operations. By providing a scalable network, the company is lowering the barrier to entry for AI teams that previously struggled with the cost and complexity of traditional feedback loops.
Jared Newman of Canaan Partners, who led the investment, suggests that this infrastructure is essential for the next generation of AI. “Every serious AI deployment depends on human judgment somewhere in the lifecycle,” Newman said. “As models move from expertise-based tasks to taste-based curation, the demand for scalable human feedback will grow dramatically”.
A future of human use
While the current focus is on the model labs of the Bay Area, Corkill sees a future where the AI models themselves become the primary customers of human judgment. He calls this “human use”.
In this vision, a car designer AI wouldn’t just generate a generic vehicle; it could programmatically call Rapidata to ask 25,000 people in the French market what they think of a specific aesthetic, iterate on that feedback, and refine its design within hours.
“Society is in constant flux,” Corkill noted, addressing the trend of using AI to simulate human behavior. “If they simulate a society now, the simulation will be stable and maybe mirror ours for a few months, but then it completely changes, because society has changed and has developed completely differently”.
By creating a distributed, programmatic way to access human brain capacity worldwide, Rapidata is positioning itself as the vital interconnect between silicon and society. With $8.5 million in new funding, the company plans to move aggressively to ensure that as AI scales, the human element is no longer a bottleneck, but a real-time feature.
The colors that the upcoming budget MacBook will be sold in were reportedly first considered for the 2022 MacBook Air before Apple chose Silver, Starlight, and Space Gray.
Rumors currently have the low-cost MacBook shipping in blue, green, and yellow. Now, in a post on the Weibo Chinese social network, leaker Instant Digital says this isn’t the first time these colors have been in the works. The post didn’t elaborate on why Apple decided against using the more colorful hues for the M2 MacBook Air. But they did add that Apple’s new color range “looks fresh.”
According to Vijaya Kaza, Google’s VP of app and ecosystem trust, the company rejected more than 1.75 million potentially harmful apps during the review process and blocked over 80,000 developer accounts for various policy violations. Both figures are significantly lower than in 2024, when 2.36 million apps were rejected and 158,000 developer accounts were blocked.
The company’s customised AI chips aim to achieve cheaper and faster results than traditional AI hardware.
Toronto-based start-up Taalas has raised $169m for its specialised AI hardware models.
Total investment in the company stands at $219m, with funding from Quiet Capital and Fidelity, among others, according to Reuters.
In a blogpost from CEO Ljubisa Bajic announcing the release of its first models, the company said it wants to mitigate the “high latency and astronomical cost” of AI, and that its specialised method is faster and cheaper than traditional AI chip approaches.
The company said it took a team of 24 and a spend of $30m since its founding less than three years ago to bring to market its first product, a hard-wired Llama 3.1 8B, which is available as both a chatbot demo and an inference API service.
The company’s aim is to mitigate the need for vast and expensive data centres through the principles of specialisation, merging storage with computation, and simplification.
Taalas said its “platform for transforming any AI model into custom silicon” means that “from the moment a previously unseen model is received, it can be realised in hardware in only two months”.
It claimed its hardware output is “an order of magnitude faster, cheaper and lower power than software-based implementations”, achieved through physically customising chips depending on the bespoke needs of the AI model in question.
Taalas claimed its silicon Llama chip, for example, is nearly 10 times faster than the current state of the art, costs 20 times less to build and consumes 10 times less power.
Taalas aims to release two further models in 2026.
AI chipmaking giant Nvidia this week announced a huge deal with Meta to provide millions of chips for Meta’s AI infrastructure in exchange for billions of dollars.
The Nothing Phone (4a) series is shaping up to be more expensive than its predecessor, according to fresh leaks detailing pricing, specs and release dates ahead of the company’s March 5 launch event.
A new report from Dealabs suggests the Nothing Phone (4a) will start at around €400. This marks roughly a €50 increase over the Phone (3a), with pricing said to vary slightly by region. For instance, Germany and Spain will reportedly see a €389 starting price. Meanwhile, France, Belgium and Italy could see it land at €409. A 12GB RAM variant is expected to cost between €429 and €449.
The Nothing Phone (4a) Pro could see an even steeper jump. Dealabs claims pricing will begin at €479 in Germany and Spain. This rises to €499 in France, Belgium and Italy — around €90 more than the previous Pro model. A higher-tier 12GB version could reach as much as €569 depending on the market.
As for availability, the base Phone (4a) is tipped to release on March 12. The Pro model could potentially follow on March 26.
Specs are where things get slightly less clear.
Android Headlines, in a separate leak, claims the base model will include an 8MP ultrawide alongside a 50MP main sensor and a 3.5x telephoto lens. The report also says the Phone (4a) will run on the Snapdragon 7s Gen 4, paired with a 5,400mAh battery and 50W charging, and will carry an IP65 dust and water resistance rating.
Meanwhile, the Pro model is tipped to include a 50MP Sony main sensor, improved optical zoom, an aluminium chassis, a larger 6.83-inch 144Hz display, and a new “Glyph Matrix” lighting system on the rear. The standard model is expected to retain the familiar Glyph Bar and a 6.78-inch display.
Colour options are said to include black, white, pink and blue for the base model. The Pro may arrive in black, silver and pink.
Nothing founder Carl Pei previously hinted that price increases were on the way. These latest leaks appear to confirm that shift. We’ll have full details once Nothing makes it official next month.
It has never been more expensive to buy a new car. The average transaction price last month for buyers in the United States was $48,576, up nearly a third from 2019, according to Edmunds. The “affordable” car—$20,000 or less—is dead.
The high prices have been pinned on plenty of economic dynamics: lingering pandemic-era supply-chain issues, the introduction of expensive technology into everyday cars, higher labor and raw materials costs, and new tariffs by the Trump administration affecting imported steel, aluminum, and cars themselves.
Now, despite a US Supreme Court ruling that will nix some of those Trump tariffs, car buyers will likely get no respite.
“The core cost structure facing the auto industry hasn’t fundamentally changed overnight,” writes Jessica Caldwell, Edmunds’ head of insights, in an emailed statement. Put more simply: Cheaper cars aren’t coming, at least not because of this ruling.
The Supreme Court’s decision curtails the president’s power to use the International Emergency Economic Powers Act to levy tariffs in response to emergencies. Trump used this power to apply tariffs to countries around the globe, the emergency being “large and persistent” trade deficits. The administration applied other new duties on Canada, China, and Mexico because of what it called emergencies related to the flow of migrants and drugs into the United States.
But most of the tariffs that affect the auto industry come from another law, section 232 of the Trade Expansion Act. That provision can apply to imports that “threaten to impair” the country’s national security. Tariffs on steel, aluminum, copper—key raw materials for cars—and imported auto parts and vehicles themselves came under this provision and are still in effect. This includes 15 percent tariffs on cars built in Europe, Japan, and South Korea.
Automakers have actually done an OK job shielding consumers from the effects of tariffs, Caldwell says. Even as retailers have blamed tariffs for steadily rising prices of consumer goods like electronics and appliances, car prices are up just 1 percent since this time last year, the firm’s data shows. But as the tariff regime drags on, that could change in ways that make new-car buyers even less happy.
“If cost pressures continue to build, automakers may have less room to shield shoppers from higher prices,” Caldwell says, “but for now, the broader market impact is still playing out.”
Reportedly, the organisation intends to hire up to 100 additional employees within its Seed artificial intelligence division.
According to Bloomberg, Chinese technology giant ByteDance has announced plans to employ up to 100 new people in its artificial intelligence (AI) division as a means of competing with some of the world’s leading US-based AI companies.
The positions open to US-based professionals will be in Seed, which is ByteDance’s AI division with laboratories in the US, Singapore and China.
Vacancies will span various responsibilities, including producing international data for ByteDance’s large language models, advancing its popular text, image and video generation tools, conducting research to develop ‘human-like’ AI, and building science models that enable the organisation to pursue drug discovery and design.
ByteDance’s announcement comes despite significant concerns from US lawmakers and regulators around national security. Those in the regulatory space in the US have long argued that there is potential for ByteDance to use TikTok in the collection of its citizens’ private, valuable data or in the spreading of a narrative in favour of Beijing’s leadership via the app’s algorithm.
Last month, after a long period of deliberation, stalling and false starts, ByteDance and representatives in the US reached a deal wherein the organisation would create an entity for US TikTok operations – with non-Chinese majority owners – as a means of addressing some of the pressing security concerns.
As noted by Bloomberg, many know ByteDance solely in the context of the popular social media platform TikTok; however, it also operates as a well-known AI company in China, offering the chatbot app Doubao, the AI video generation model Seedance 2.0 and the image generation model Seedream 5.0.
Earlier this week (16 February), ByteDance promised to “strengthen current safeguards” against intellectual property theft after Disney threatened legal action regarding videos generated by Seedance 2.0.
Via a cease and desist letter, Disney claimed that Seedance 2.0 operates a “pirated library” of Disney assets, taken from its biggest franchises. The company accused ByteDance of using its proprietary content assets as if they were in the public domain.
In a year defined by sovereign AI and data, one truth has become unavoidable: humans will matter more than ever. Even the most ambitious AI strategies will stall if organizations fail to invest in their people.
More than 95% of enterprises worldwide now say they want to operate as their own AI and data platforms within the next 850 working days. It’s a stunning recognition from C-suite leaders across 13 countries representing a combined GDP of $48 trillion – and a signal of how rapidly the world is shifting. IDC estimates this transition could generate $17 trillion in GDP growth, effectively creating the world’s third-largest economy if counted as a country.
Yet despite this massive ambition, only 13% of more than 134,000 major enterprises are getting it right.
These early leaders have made AI and data sovereignty a mission-critical priority. Their infrastructure allows intelligence to be accessed securely – anywhere, anytime, and in any form. The results speak for themselves: they see 5x higher ROI than the rest, with 2x more GenAI and agentic systems deployed in mainstream production. They are also 250% more confident in their ability to thrive long term.
Enterprises such as Abbott, AIA Singapore, Aviva India, Boston Scientific, Danske Bank, ENOC, JP Morgan Chase, Mastercard, Singtel, Wells Fargo, Toyota, and others are already proving what scaled success looks like.
But this transformation is not a push-button upgrade. Digital transformation took nearly a decade. The AI and data revolution may peak in just three to four years, and its impact could far surpass anything seen before.
That is why the defining question of the next era is not purely technological. Sovereign AI will rise or fall on human readiness. Organizations that fail to reskill, align, and carry their workforce into this transformation will find their ambitions constrained before they ever scale.
There are three key reasons why.
The Intelligent Systems Economy Will Require Hundreds of Millions of Skilled People
This new AI-driven economy brings greater complexity than the cloud migration wave. According to the World Economic Forum’s 2025 Future of Jobs report, AI is expected to displace 92 million jobs, but also create 170 million new ones—a net gain of 78 million. In some countries, up to 70% of these new roles risk going unfilled due to skill shortages.
“We cannot realize the potential of this new intelligent systems economy unless we invest significant time and energy reskilling and enabling employees in new ways,” says Einav Lavi, CHRO at EDB. “Demand for qualified people will far exceed supply, emphasizing how central humans are to this revolution.”
Enterprise-Wide Agentic Success Requires Everyone – Not Just Specialists
The top 13% of enterprises treat AI and data sovereignty as a company-wide standard. Their 2x density of AI initiatives and 5x ROI stem from building a sovereign foundation that reaches everyone—from HR and frontline staff to product design, engineering, and finance.
They rolled out GenAI and agentic systems in a coordinated, enterprise-wide sequence that embedded AI into organizational DNA. Skill levels varied, but reskilling at scale produced transformation at scale.
As enterprises evolve into AI “factories,” every employee becomes part of the production line, sharing common standards, practices, and a unified vision.
The Future Workforce Will Demand Continuous Reinvention
For most of the past century, people held 1.5 careers across 5–10 employers. That era is ending. By 2050, 60–80% of today’s jobs will be automated, and individuals may have 20–30 roles across a dozen organizations.
“In this environment, continuous reskilling becomes one of the most valuable currencies of success,” Lavi notes. “The enterprises that thrive will invest as much in their people as they do in their AI.”
AI itself will accelerate this reinvention – surfacing internal opportunities faster, matching people to roles or stretch assignments, and building personalized development pathways. Growth will be driven increasingly by skills and contribution, not proximity or bias.
For HR and managers, AI-powered “people copilots” will reshape workforce planning by identifying early signals of burnout, workload imbalance, sentiment shifts, and retention risks – augmenting, not replacing, human judgment.
The goal is not the automation of humanity, but the elevation of what makes us human – freeing people to focus on creativity, judgment, empathy, and innovation, the very things machines cannot replicate.
Digital Trends partners with external contributors. All contributor content is reviewed by the Digital Trends editorial staff.
An anonymous reader quotes a report from NPR: NASA could launch four astronauts on a mission to fly around the moon as soon as March 6th. That’s the launch date (PDF) that the space agency is now working towards following a successful test fueling of its big, 322-foot-tall moon rocket, which is standing on a launch pad at the Kennedy Space Center in Florida.
“This is really getting real,” says Lori Glaze, acting associate administrator of NASA’s exploration systems development mission directorate. “It’s time to get serious and start getting excited.” But she cautioned that there’s still some pending work that remains to be done out at the launch pad, and officials will have to conduct a multi-day flight readiness review late next week to make sure that every aspect of the mission is truly ready to go. “We need to successfully navigate all of those, but assuming that happens, it puts us in a very good position to target March 6th,” she says, noting that the flight readiness review will be “extensive and detailed.” […]
When NASA workers first tested out fueling the rocket earlier this month, they encountered problems like a liquid hydrogen leak. Swapping out some seals and other work seems to have fixed these issues, according to officials who say that the latest countdown dress rehearsal went smoothly, despite glitches such as a loss of ground communications in the Launch Control Center that forced workers to temporarily use backups.
The tech giant’s cloud division, Amazon Web Services, issued an unusually pointed public rebuttal Friday afternoon to a widely cited Financial Times report asserting that Amazon’s own AI coding tools have caused at least two AWS outages in recent months.
The story was picked up by numerous media outlets and a widely followed tech news aggregator as an example of the risks of deploying agentic AI tools, and of the underlying question of who, or what, is responsible when something goes wrong.
In a blog post titled “Correcting the Financial Times report about AWS, Kiro, and AI,” Amazon acknowledged a limited disruption to a single service in one region last December but attributed it to a user error in configuring access controls, not a flaw in the AI tool itself.
“The issue stemmed from a misconfigured role—the same issue that could occur with any developer tool (AI powered or not) or manual action,” Amazon said, noting that it received no customer inquiries about the disruption.
In addition, the company wrote, “The Financial Times’ claim that a second event impacted AWS is entirely false.”
This is where it gets into semantics, the key phrase being “impacted AWS.” In fact, the FT reported that Amazon itself acknowledged a second incident but said it did not affect a “customer-facing AWS service.”
In other words, if an incident doesn’t impact a service used by customers, does it count as an outage? The FT called it one. Amazon clearly thinks not. And this is ultimately the crux of the dispute.
As for the undisputed outage impacting AWS, the FT’s report cited four people familiar with the matter in describing a 13-hour interruption to an AWS system in mid-December.
The sources said engineers had allowed Amazon’s Kiro AI coding tool — an agentic assistant capable of taking autonomous actions — to make changes, and that the tool determined the best course of action was to “delete and recreate the environment.”
Multiple Amazon employees told the publication that it was the second time in recent months that AI tools had been involved in a service disruption. According to the FT report, a senior AWS employee said the outages were “small but entirely foreseeable,” adding that engineers had let the AI agent resolve issues without human intervention.
AWS is Amazon’s most profitable division. It generated $35.6 billion in revenue last quarter, up 24%, and $12.5 billion in operating income. The cloud unit is a significant focus of the company’s planned $200-billion capital spending spree this year, much of it directed toward AI infrastructure.
In addition to using agentic tools in its own operations, Amazon is selling them to AWS customers, making any narrative about AI-caused outages particularly unwelcome.
Amazon’s core defense — that the December incident was “user error, not AI error” — was already included in the FT’s original story. The blog post largely restates that position in a more prominent and pointed way.
“We did not receive any customer inquiries regarding the interruption,” Amazon wrote in its response. “We implemented numerous safeguards to prevent this from happening again—not because the event had a big impact (it didn’t), but because we insist on learning from our operational experience to improve our security and resilience.”
Amazon said the disruption was limited to AWS Cost Explorer, a tool that lets customers track their cloud spending, in one of its 39 geographic regions. Reuters and The Verge reported that the affected region was in mainland China, citing an Amazon spokesperson. It did not affect core services such as compute, storage, or databases, the company said.
The company added that it has since implemented new safeguards, including mandatory peer review for production access.
Posting on X, New York Times reporter Mike Isaac called the Amazon response “the most prickly” he’d seen from Amazon in years, comparing it to the past era when former White House press secretary Jay Carney, who led public policy for the company, spoke out strongly in its defense.