Tech
Razer’s Iskur V2 X Shows What Happens When a Company Keeps the Essentials and Cuts the Extras

Hours spent staring at a screen stack up, and eventually parts of your body begin to revolt. Razer took all of this into account when designing the Iskur V2 X, focusing on day-to-day comfort across long sessions and pricing it at $194.99 after clipping the on-page coupon (was $300).
Setting this chair up takes about 20 minutes, thanks to the simple-to-follow directions and an included tool. Everything is neatly wrapped, and the metal base, gas lift, and fabric-covered pieces are all individually protected to prevent damage during shipping. Screws fit together perfectly, and the plastic covers simply snap into the joints to keep everything hidden. Once completed, the steel frame and aluminum wheelbase provide a surprisingly sturdy feel from the outset, allowing it to roll effortlessly across carpet or hard floors.
The seat is made of thick, high-density foam, so it retains its shape even after weeks of sitting for up to 8 hours at a time. Testers reported no sinking or bottoming out, and the edges remained firm enough to keep your legs from sliding forward during prolonged gaming sessions. The base is rather wide, allowing you to shift positions without feeling confined, yet the overall profile is compact enough to fit under a typical desk.
The backrest features a fixed curvature that gently nudges the lower back toward a more neutral position. For people who are 5’8″ to 6’2″, this curvature lines up quite naturally, reducing the pain caused by slouching. The fabric on the seat and back is also cool and breathable, so your skin won’t stick to it on long evenings. The stitching is neat and crisp, and the material appears to hold up far better than the thin vinyl garbage used on cheaper seats.

You can recline it up to 152 degrees using a simple lever on the right side, letting you lean back for a quick break or stretch out for an extended cutscene. Recline tension is easily adjustable, allowing the chair to sway gently without feeling too loose. The armrests move up and down and rotate inward or outward, which should be enough for both keyboard work and controller grips, even if they lack the extra adjustment directions found on some of the more costly models.

The Iskur V2 X costs $194.99, significantly less than the $649 flagship Iskur V2, yet it retains the same solid design and core support system that earned its bigger sibling accolades. Razer simply removed a few of the more ostentatious features, such as the adjustable lumbar knobs, four-way armrests, and head pillow, to hit the lower price without sacrificing what was most important. The result is a chair that feels robust and built to last, rather than a cheap, fragile item that will fall apart within a few months.
Tech
Meta launches Incognito Chat on WhatsApp, the first AI mode it says even Meta cannot read
The new mode runs Meta AI on WhatsApp inside the company’s Private Processing enclave, with conversations deleted by default and no server-side record retained.
Meta has launched an Incognito Chat mode for Meta AI on WhatsApp and the Meta AI app, an effort to address the awkward fact that its assistant, like every other major AI chatbot, has until now been able to read the conversations users have with it.
The new mode, the company announced on Tuesday, processes user messages inside what Meta describes as a secure environment that even Meta cannot see, with conversations deleted by default once the session ends.
The technical foundation is WhatsApp’s Private Processing system, the architecture the company published in April 2025 to let AI features run on encrypted data inside Trusted Execution Environments on Meta’s servers.
Inside that enclave, the model can read and respond to a query, but the contents are not accessible to Meta’s engineers, its logging systems, or any of its commercial pipelines.
Other apps offer what they call incognito modes for AI conversations, but Meta’s framing in the announcement is pointed: “they can still see the questions coming in and the answers going out.”
The launch responds directly to a category-wide privacy concern. AI chatbots have become a default tool for the sort of question users would once have asked a doctor, a lawyer, or a partner, with all the data exposure that implies.
OpenAI, Google, and Anthropic each store conversation histories by default, with varying user controls. Apple Intelligence routes some queries through Apple’s Private Cloud Compute, an enclave architecture that is the clearest existing analogue to what Meta is now shipping inside WhatsApp.
Two product details follow from the design. First, the conversations are not saved server-side at all; users cannot pull up Incognito Chat history later because there is nothing to pull up.
Second, the disappearing-by-default behaviour means even a compromised device leaks less, since the chat residue clears between sessions.
Meta has published a technical whitepaper describing the cryptographic architecture for outside review.
A second feature is on the way. Sidechat with Meta AI, also protected by Private Processing, will let users get AI help inside an existing WhatsApp conversation, with the assistant aware of the chat’s context but its responses kept invisible to the other participants.
Meta said Sidechat will arrive on WhatsApp “in the coming months,” without a firmer date.
The launch’s commercial logic is straightforward. WhatsApp has been built for a decade around end-to-end encryption as a selling point, and Meta’s pitch for AI on the platform has had to find a way around the central tension that a conversational AI assistant needs to read your messages to be useful.
Private Processing is the company’s attempt at squaring that circle. The Incognito Chat product is the first time the architecture has been put behind a user-facing feature on this scale.
Whether the implementation holds under scrutiny is a separate question. Trusted Execution Environment-based AI systems have been audited and criticised across the industry, with researchers periodically demonstrating side-channel attacks against similar architectures from Apple, Google, and the hyperscalers.
Meta has invited external review of its Private Processing design, and the new whitepaper extends that posture, but the model’s resistance to subpoena, in particular, has not yet been tested in court.
Incognito Chat with Meta AI begins rolling out on WhatsApp and the Meta AI app this week, with broader availability over the coming months.
The launch lands at the end of a difficult fortnight for Meta on the privacy front, with US employees protesting the company’s new mouse-tracking software on Monday and the company a week out from layoffs of roughly 8,000 staff.
Inside Meta, the bet appears to be that consumer-facing privacy moves like this one will outweigh the internal-surveillance optics.
Tech
How to Use Amazon Seller Central Reports to Scale Your Brand
Amazon Seller Central offers a layered reporting suite — with access determined by selling plan, fulfillment method, and Brand Registry status. In most organizations, these reports serve a single purpose — confirming what has already occurred: sales reconciled, fees reviewed, inventory checked. That operational function is necessary, but it represents only a fraction of what these reports are built to deliver. The same reports hold intelligence that directly determines how a brand scales:
➔ Conversion signals that reveal listing degradation before it erodes rank
➔ Acquisition quality data that separates genuine brand growth from retargeting spend
➔ Product feedback loops that surface quality and listing gaps before they compound in reviews
➔ SKU-level margin intelligence that identifies which products can sustain paid investment and which cannot
The gap is not access — it is how the data is leveraged.
This blog provides a structured approach to leveraging five core Seller Central report categories — Business, Advertising, Fulfillment, Return, and Payments — for measurable brand growth. It covers best practices for leveraging reports effectively, the structural limitations every brand team needs to account for before acting on the data, and how Amazon account management helps.
The Core Categories: How Amazon Seller Central Reports Are Structured
| Report Category | Key Reports | Role |
| --- | --- | --- |
| Business Reports | Sales Dashboard, Detail Page Sales and Traffic | Conversion health, traffic trends |
| Advertising Reports | Search Term Report, Placement Report | Search demand, new-to-brand acquisition |
| Fulfillment Reports | Inventory Ledger, Stranded Inventory, Inbound Performance | Inventory health, stockout prevention |
| Return Reports | FBA Customer Returns Report, Returns Trend | Product quality feedback, brand equity |
| Payments Reports | Transaction View, Fee Preview Report | SKU-level margin clarity for reinvestment |
How to Use Amazon Seller Central Reports for Brand Growth
Step 1: Audit Business Reports for Brand Performance Signals
1. Detail Page Sales and Traffic by Child ASIN: Read session count, page views, Unit Session Percentage (conversion rate), and Featured Offer percentage at the variation level. Track these metrics weekly per ASIN — a sustained downward trend in Unit Session Percentage is your leading indicator of listing degradation before it erodes rank.
2. Brand Performance Report: Review Average Customer Review, Number of Customer Reviews, Sales Rank, and Featured Offer percentage together for each ASIN. These four metrics form a direct snapshot of brand health at the listing level. Flag metric combinations that signal brand risk rather than reading each metric in isolation.
3. Sales Dashboard: Review trend lines across weekly and monthly windows to gauge whether brand momentum is accelerating or declining. Use the Compare Sales feature to layer on year-over-year context, which helps separate genuine trajectory shifts from recurring seasonal patterns.
Cross-reference sessions against Unit Session Percentage: falling sessions signal a visibility problem; falling conversion with stable sessions signals a listing or pricing issue.
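That cross-referencing step can be sketched as a simple week-over-week heuristic. This is an illustrative example, not anything Amazon ships: the record fields (`sessions`, `unit_session_pct`) and the 10% decline threshold are assumptions you would map onto your own weekly Business Report exports.

```python
# Hypothetical diagnosis heuristic run against weekly Business Report exports.
# Field names and the threshold are assumptions; adapt them to your export.

def diagnose(prev: dict, curr: dict, drop_threshold: float = 0.10) -> str:
    """Classify a week-over-week change for one ASIN.

    prev/curr: {"sessions": int, "unit_session_pct": float}
    drop_threshold: relative decline that counts as "falling".
    """
    def falling(before, after):
        return after < before * (1 - drop_threshold)

    if falling(prev["sessions"], curr["sessions"]):
        return "visibility problem"        # traffic itself is drying up
    if falling(prev["unit_session_pct"], curr["unit_session_pct"]):
        return "listing or pricing issue"  # traffic stable, conversion down
    return "healthy"

# Sessions roughly stable, conversion down 20%:
print(diagnose({"sessions": 1000, "unit_session_pct": 12.0},
               {"sessions": 990, "unit_session_pct": 9.6}))
# -> listing or pricing issue
```

Running this per ASIN each week turns the prose rule into a repeatable triage pass instead of an eyeball check.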
Note: Business Reports are available only to sellers on a Professional selling plan, and historical data is retained for up to two years.
Step 2: Extract Brand Acquisition Insights from Advertising Reports
Advertising Reports address two brand growth questions that Business Reports cannot answer. The first is which channels and keywords drive new demand into the brand. The second is what proportion of that demand represents genuinely new-to-brand customers versus returning buyers.
1. Search Term Reports: Review the actual queries customers typed before clicking an ad.
– Negate: Any term that spends money with zero conversions over 30 days is added as a negative keyword. This is the single fastest way to improve ACoS without changing bids.
– Harvest: Any search term that converts at or below your target ACoS is added as an exact match keyword.
For brands managing large campaign portfolios, use the Bulk Operations feature in Campaign Manager to download a custom spreadsheet with Sponsored Products and Sponsored Brands Search Term data. Edit keyword additions and negations directly in the file and upload to update campaigns in a single operation.
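The negate and harvest rules above reduce to a few lines once the Search Term Report is parsed. A hedged sketch: the column names, the 30-day window, and the target ACoS are all assumptions about your own export, not Amazon's actual file headers.

```python
# Minimal sketch of the negate/harvest rules applied to parsed Search Term
# Report rows. Column names and the target ACoS are illustrative assumptions.

TARGET_ACOS = 0.30  # placeholder target; set your own

def classify_terms(rows, target_acos=TARGET_ACOS):
    """rows: dicts with 30-day spend, sales, and orders per search term."""
    negatives, harvest = [], []
    for r in rows:
        if r["orders"] == 0 and r["spend"] > 0:
            negatives.append(r["search_term"])   # spends, never converts: negate
        elif r["sales"] > 0 and r["spend"] / r["sales"] <= target_acos:
            harvest.append(r["search_term"])     # at/below target ACoS: exact match
    return negatives, harvest

rows = [
    {"search_term": "ergonomic chair mat", "spend": 14.2, "sales": 0.0, "orders": 0},
    {"search_term": "mesh office chair",   "spend": 9.5,  "sales": 48.0, "orders": 2},
]
neg, harv = classify_terms(rows)
print(neg)   # ['ergonomic chair mat']
print(harv)  # ['mesh office chair']
```

The output lists map directly onto a Bulk Operations upload: negatives become negative keywords, harvested terms become exact-match additions.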
2. Sponsored Brands and Sponsored Display Reports: Isolate the New-to-Brand (NTB) metric inside these campaign types to separate new customer acquisition from repeat buyers. Monitor NTB percentage, NTB order cost, and NTB sales separately from overall ROAS to measure true brand expansion, not branded retargeting.
3. Placement Reports: Compare conversion and spend distribution across top-of-search, product pages, and rest-of-search. Redirect budget toward placements with the strongest NTB and conversion performance. Top-of-search placements carry disproportionate brand visibility and deserve priority investment when NTB indicators support it.
Step 3: Protect Brand Momentum with Fulfillment Reports
1. Inventory Ledger Report: Consolidate inventory movement across Amazon warehouses — adjustments, receipts, and shipments — in a single view. Monitor inventory accuracy and act on discrepancies before stockouts hit high-velocity ASINs.
2. Stranded Inventory Report: Identify stock held in FBA warehouses but unsellable due to listing issues. Each stranded ASIN represents a direct revenue leak. Recover these listings weekly, before the associated search rank decays.
3. Inbound Performance Report: Track the efficiency of FBA shipments, including missing units, incorrect labeling, and receiving delays. Address recurring inbound issues at the source before they escalate into repeat offenses, as persistent issues extend reimbursement cycles and delay restock.
4. Inventory Performance Index (IPI): Monitor IPI as a brand growth prerequisite, not a warehouse KPI. Calculated from fulfillment data, the score directly affects FBA storage limits. A low IPI restricts scalability and caps paid acquisition ceilings.
Step 4: Read Return Reports as Product Quality Feedback
1. FBA Customer Returns Report: Mine return reasons, order IDs, and SKU-level detail for recurring patterns. Aggregate return reasons by ASIN to reveal product issues that would otherwise appear only in individual customer reviews.
2. Return Trend Monitoring: Flag ASINs with rising return rates as a signal of either a product quality issue, a listing accuracy issue, or both. Each failure mode damages brand equity and search rank. Address the root cause visible in return reasons, rather than treating the symptom through returns management.
For example, when the most frequent return reason on an ASIN is “not as described,” the listing content itself is driving the returns. An updated, accurate listing reduces future returns, improves conversion rate, and reinforces brand trust — three outcomes from a single fix.
3. Schedule Report Generation: Set up daily schedules for All Returns and Prime returns, instead of pulling them manually. Three operational constraints to note:
– One active schedule per report type
– Maximum of 30 reports in the Scheduled Reports section
– Schedule changes require deletion and recreation
Reports can be scheduled by return date for both FBA and seller-fulfilled orders to track return reasons and item condition across fulfillment channels.
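The return-reason aggregation described in this step can be approximated offline once the FBA Customer Returns Report is parsed. A sketch under stated assumptions: the `asin` and `reason` field names are illustrative, and the recurrence threshold is a placeholder.

```python
# Sketch of grouping return reasons by ASIN to surface recurring patterns.
# Field names are assumptions about a parsed FBA Customer Returns Report.

from collections import Counter, defaultdict

def top_return_reasons(rows, min_count=2):
    """Return recurring (reason, count) pairs per ASIN, most frequent first."""
    by_asin = defaultdict(Counter)
    for r in rows:
        by_asin[r["asin"]][r["reason"]] += 1
    return {
        asin: [(reason, n) for reason, n in reasons.most_common() if n >= min_count]
        for asin, reasons in by_asin.items()
    }

rows = [
    {"asin": "B0EXAMPLE1", "reason": "NOT_AS_DESCRIBED"},
    {"asin": "B0EXAMPLE1", "reason": "NOT_AS_DESCRIBED"},
    {"asin": "B0EXAMPLE1", "reason": "DEFECTIVE"},
    {"asin": "B0EXAMPLE2", "reason": "ORDERED_WRONG_ITEM"},
]
print(top_return_reasons(rows))
# {'B0EXAMPLE1': [('NOT_AS_DESCRIBED', 2)], 'B0EXAMPLE2': []}
```

An ASIN whose top recurring reason is "not as described" points at listing content, exactly the root-cause distinction the step calls for.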
Step 5: Use Payments Reports to Inform Brand Reinvestment
1. Transaction View: Break down every order into referral fees, FBA fees, promotional rebates, and net proceeds. Surface ASIN-level margin visibility to identify which products can sustain paid acquisition pressure and which cannot.
2. Fee Preview Report: Project FBA fulfillment, storage, and referral fees across existing FBA inventory. Review the report to identify ASINs where upcoming fee changes or aged-inventory surcharges will compress margin. Adjust pricing or inventory planning before the fees hit the bottom line.
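The SKU-level margin view from Transaction View data can be computed with a small aggregation. A minimal sketch, assuming per-order rows parsed from an export; the field names here are illustrative, not Amazon's actual column headers.

```python
# Hedged sketch: per-ASIN net margin from parsed Transaction View rows, used
# to decide which products can sustain paid acquisition. Field names are
# illustrative assumptions.

from collections import defaultdict

def margin_by_asin(transactions):
    """Return net margin (net proceeds / revenue) per ASIN."""
    rev = defaultdict(float)
    net = defaultdict(float)
    for t in transactions:
        rev[t["asin"]] += t["price"]
        net[t["asin"]] += (t["price"] - t["referral_fee"]
                           - t["fba_fee"] - t.get("promo_rebate", 0.0))
    return {asin: net[asin] / rev[asin] for asin in rev}

tx = [
    {"asin": "B0EXAMPLE1", "price": 50.0, "referral_fee": 7.5, "fba_fee": 7.5},
    {"asin": "B0EXAMPLE1", "price": 50.0, "referral_fee": 7.5, "fba_fee": 7.5,
     "promo_rebate": 5.0},
]
print(margin_by_asin(tx)["B0EXAMPLE1"])  # 0.65
```

ASINs whose margin cannot absorb your target ACoS are the ones that cannot sustain paid investment, which is the reinvestment decision this step feeds.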
Limitations & Challenges of Amazon Seller Central Reports
#1 No Built-in Competitive Benchmarks
Seller Central Reports show only your own performance data. There is no native view of how your brand performs against category peers or direct competitors. External benchmarking requires third-party data or Brand Registry-gated reports.
#2 Data Latency Varies Across Reports
Business Reports refresh daily, while Fulfillment and Payments reports often run on weekly or delayed cycles. This inconsistency complicates cross-report analysis when precise attribution windows matter, particularly for reconciling paid performance against organic results within the same reporting period.
#3 Limited Brand-Level Insights Without Brand Registry
Deeper brand-growth tools sit outside the standard Reports tab. These include Brand Analytics dashboards (Search Query Performance, Market Basket Analysis, Customer Loyalty Analytics), the Brand Dashboard, and Voice of the Customer. Brand Registry enrollment unlocks these additional layers.
#4 Attribution Gaps Between Advertising and Organic
Advertising Reports attribute sales to campaigns, while Business Reports track total sales. Reconciliation between the two requires careful segmentation, especially when paid campaigns and organic traffic overlap on the same keywords.
#5 Report Siloing Across Tabs
Seller Central Reports live across multiple tabs — Reports, Advertising, Returns, Payments — with inconsistent naming and export formats. Cross-report analysis almost always requires careful manual reconciliation.
Best Practices for Using Seller Central Reports Effectively
1. Standardize Date Ranges Across Reports
Different reports operate on different default time windows. Business Reports commonly default to 30 days, while granular Advertising Reports, including Search Term and Purchased Product Reports, are subject to a hard 90-day lookback limit rather than a display default. Manually aligning date ranges across reports before cross-referencing ensures comparisons reflect the same performance window and eliminates attribution mismatches.
2. Benchmark Week-Over-Week, Not Day-Over-Day
Single-day metrics are statistically volatile, particularly on low-velocity SKUs where marginal order volume can produce significant conversion rate variance. Weekly benchmarking normalizes daily fluctuations while keeping the reporting window tight enough to surface trends before they compound.
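The weekly-over-daily principle can be made concrete with a small aggregation. A sketch under stated assumptions: daily `(date, units, sessions)` records exported from Business Reports, with conversion computed per ISO week so single-day noise averages out.

```python
# Illustrative weekly benchmarking: roll daily (date, units, sessions)
# records into ISO weeks before computing conversion rate.

from datetime import date

def weekly_conversion(daily):
    """Aggregate daily records into (year, week) buckets, then compute conversion."""
    weeks = {}
    for day, units, sessions in daily:
        key = tuple(day.isocalendar()[:2])   # (ISO year, ISO week)
        u, s = weeks.get(key, (0, 0))
        weeks[key] = (u + units, s + sessions)
    return {k: u / s for k, (u, s) in sorted(weeks.items())}

daily = [
    (date(2026, 1, 5), 3, 40), (date(2026, 1, 6), 0, 35), (date(2026, 1, 7), 5, 45),
    (date(2026, 1, 12), 2, 38), (date(2026, 1, 13), 4, 42), (date(2026, 1, 14), 3, 40),
]
for week, cr in weekly_conversion(daily).items():
    print(week, round(cr, 3))
```

Note how the zero-conversion day on January 6 disappears into its weekly average instead of triggering a false alarm.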
3. Cross-Reference Reports for Root-Cause Analysis
A conversion decline in Business Reports frequently correlates with a Buy Box shift, a pricing change, or a stranded listing in Fulfillment Reports. Isolating a single metric without cross-report validation increases the risk of misdiagnosis and misdirected corrective action.
4. Export and Archive Reports Externally
Business Reports retain data for up to two years. Granular Advertising Reports, including those referenced above, are capped at a 90-day lookback window with no native recovery option beyond that threshold. Once the window closes, that data is permanently removed from Seller Central — brands that need historical context must export it on a defined schedule.
5. Align Reporting Depth with Organizational Role
Operational teams require weekly tactical reviews covering stockouts, suppressed listings, and Buy Box performance. Brand leadership requires monthly and quarterly trend analysis focused on category share and customer retention. Calibrating reporting depth to the decision-making level of each function reduces analysis fatigue and maintains actionable review cycles across the organization.
The Business Imperative: Seller Central Reports provide the data. Translating that data into consistent brand decisions — across listings, advertising, inventory, returns, and margins — requires operational discipline that compounds over time.
For brands managing catalog depth, multi-channel fulfillment, and active advertising simultaneously, cross-report analysis, weekly metric reviews, search term management, inventory reconciliation, and fee audits each demand specialization and bandwidth that most in-house teams cannot sustain at the required cadence.
Amazon account management services bring the field-level expertise and technical infrastructure to close that gap — identifying signals early, connecting them across report categories, and converting them into decisions before they compound into performance issues.
As catalog scale increases, inconsistent report review compounds directly into rank loss, wasted ad spend, stranded inventory, and missed reinvestment signals — each one a measurable cost to brand performance. The question is not whether these gaps exist. The question is how long your brand can afford to leave them unaddressed.
Tech
Trump Already Has His ‘Get Out Of Jail Free’ Card. Now He Wants A ‘Get Out Of IRS Audits’ Card
from the nice-work-if-you-can-get-it dept
Two years ago, in a ruling that will clearly be remembered as one of the worst in the history of the Supreme Court, the court gave Donald Trump a get out of jail free card, which he appears to be trying to take full advantage of with all the criming in his second term. But, as always with this guy, it’s never enough.
We’ve already covered in detail the ridiculous situation in which Donald Trump acting in his supposed personal capacity, while still being the president, sued his own IRS for $10 billion, because a contractor leaked his tax returns a while back (that contractor is currently in prison for doing so). Again, there is zero indication of any actual harm. Every president — and nearly all major candidates — for the past 50 years released their tax returns to the public. Except Trump.
A decade ago he claimed that it was because he was being audited, and promised to release them once the audit was over. But he never did. And, as many people have noted, when President Richard Nixon started this tradition of releasing the president’s tax returns, he was actually being audited by the IRS, and was able to release his returns without a problem.
Either way, a contractor (not an IRS employee) leaked some of Trump’s returns to ProPublica and the NY Times, which resulted in a few stories before the news cycle moved on within days. It certainly didn’t stop Trump from being elected in 2024. And even though the returns were leaked in 2019 and 2020, Trump waited until he was back in the White House (and, in charge of the IRS and the DOJ) to file this $10 billion lawsuit.
We’ve covered the ridiculous claim that the “two sides” (there aren’t two sides) were “negotiating a settlement,” and how the judge in the case has tried to call timeout, noticing that since Trump is effectively negotiating with himself there’s no case or controversy, and thus there may be no jurisdiction for the court to hear the case. There’s still briefing going on over that, but the NY Times reports that the supposed (not really) “negotiations” have continued, with Trump apparently proposing that the settlement include the IRS dropping audits of Trump, his businesses, and his family. That would be a shocking level of corruption from an administration that has spent its first year and a half in office trying to be as blatantly corrupt as possible.
One of the settlement options the Justice Department and White House officials are reviewing is the possibility of the I.R.S. dropping any audits of Mr. Trump, his family members or businesses, according to two of the people.
Again, even though the news cycle moved on quickly, perhaps it should return to exactly what those leaked tax returns showed: which is that at a time when Trump was publicly claiming to be rolling in cash, he basically paid effectively no income taxes and was racking up massive losses — figures that raise serious questions about his financial entanglements and what he stood to gain from his first term in office.
To have the audits of what happened during those years completely dropped — and not just for him, but for his entire family and related businesses — is another form of a get out of jail free card. Call it a “tax cheat for life” card.
To do this at a time when the public is struggling, due almost entirely to Donald Trump’s ridiculous policies — tariffs driving up inflation massively, an illegal war quagmire in Iran driving up energy prices — is even more insulting to the public that Donald Trump is supposed to be working for. The same day this story came out, Trump was asked whether he was thinking about the impact of his out-of-control war on Americans’ financial situation, and he responded “not even a little bit” and that “I don’t think about Americans’ financial situation. I don’t think about anybody.”
Well, except himself, apparently.
Filed Under: audits, corruption, donald trump, irs, tax returns
Tech
KDE Receives $1.4 Million Investment From Sovereign Tech Fund
The German Sovereign Tech Fund has invested 1.2 million euros ($1.4 million USD) in KDE Plasma technologies to help strengthen the structural reliability and security of the desktop environment’s core infrastructure, including Plasma, KDE Linux, and the frameworks underlying its communication services. Longtime Slashdot reader jrepin shares an excerpt from the announcement: For 30 years, KDE has been providing the free and open-source software essential for digital sovereignty in personal, corporate, and public infrastructures: operating systems, desktop environments, document viewers, image and video editors, software development libraries, and much more.
KDE’s software is competitive, publicly auditable, and freely available. It can be maintained, adapted, and improved in-house or by local software companies. And modifications (along with their source code) can be freely distributed to all users and departments within an organization.
KDE will use Sovereign Tech Fund’s investment to push its essential software products to the next level, providing every individual, business, and public administration with the opportunity to regain their privacy, security, and control over their digital sovereignty. Slashdot reader Elektroschock also shared a statement from Fiona Krakenburger, Technical Director at the Sovereign Tech Agency.
“We have long invested in desktop technologies for a reason: they are the primary way people access and use digital services in everyday life,” says Krakenburger. “The desktop holds personal data and mediates nearly every service we depend on, from booking the next medical appointment, to education, to the way we work. We are investing in KDE because it is one of the two major desktop environments used across Linux and plays a key role in how millions of people experience open technology. Strengthening KDE’s testing infrastructure, security architecture, and communication frameworks is how we invest in the resilience and reliability of the core digital infrastructure that modern society depends on.”
Tech
The Steam Controller Wilhelm Scream Easter Egg Is Incredible
Thanks to Reddit, one of the best little secrets of the Valve Steam Controller has been discovered: it can play a Wilhelm scream when you drop it. I tested it, and can confirm; now I can’t stop dropping it. You don’t even need the controller paired to anything to make it happen.
Throughout my several-week review of Valve’s new game controller, I never knew that it made the infamous Wilhelm scream, a stock sound effect that has been used in hundreds of movies, when dropped. How would I know it did that? I don’t drop controllers. Or I don’t intend to.
But that’s exactly what the controller does when dropped even lightly on any surface. I picked up and dropped the controller a bunch of times onto my sofa, from about 3 feet, and that iconic scream that my kids love happened. Check out the video below.
The scream is randomized: It’s not about how hard you drop it, so don’t do that. A harder accidental fall onto the floor produced no scream. Two straight drops made screams. Then none for a bunch after that. That’s the fun of it.
Apparently, the scream comes from the motor haptics in the controller, which act as a speaker. Or is it an actual speaker? Either way, it sounds really good; it’s stunning.
The effect occurs even if the Steam Controller isn’t paired to anything. I just turned the controller on, and while it was cycling for Bluetooth pairing, it still made the drop screams, no Steam Deck or PC on or nearby.
I don’t generally recommend dropping $99 game controllers, but this Easter egg is so amazing that I want all game controllers to make little noises now. What if Joy-Cons made Mario sounds? PlayStation DualSense made AstroBot chirps?
I already loved the Steam Controller. I love it even more now.
Tech
Best Desks of 2026: I’ve Spent Nearly 4,000 Hours Testing Desks. These Are the Ones You Want
Testing desks is something of a subjective game. Much like office chairs, the tests are based on comfort, reliability and ease of setup rather than things you can test in electronics such as wattage and battery usage. I still tested each one rigorously and will continue to test them for longevity in the coming months.
I tested these desks by asking three people to try each one. Each of them used the desk for at least 16 hours and then gave me their impressions. The three people were 6 feet, 1 inch tall; 5 feet, 8 inches tall; and 5 feet, 4 inches tall respectively, to give me a good cross-section of average user height.
Setup time and package quality
Building desks can often be difficult and time-consuming. For each desk, I timed how long it took to unpack and assemble, and I noted whether the manual was easy to follow. I followed the instructions as closely as possible so that each build was performed as if I had never built one before. I also thoroughly checked the packaging to make sure it wasn’t damaged and that it was secure enough to protect the desk inside. Any damage was noted, and images were sent to the manufacturers for review.
Structural integrity
Modern desks need to be able to hold a good amount of weight. If you’re at a writing desk you might only have a small laptop, but if you’re using a gaming desk, it likely has two monitors and a giant gaming PC as well. For each desk, I checked the maximum load specification, and I tried to match that with the materials we actually use on our desks.
I used:
- A heavy gaming PC tower
- Two 27-inch gaming monitors on a dual monitor arm
- A MacBook Pro
- Two different keyboards and assorted mice and trackpads
- My Oculus Quest 2
- My phone stand and USB hub
- A podcasting mic and headphones
Depending on the length and weight capacity of the desk, I mix and match these items, then check for any bowing of the top or inconsistencies in how the desk felt as I worked.
The wibble-wobbles
This is a bit of a throwback from when my dad used to make furniture. Anything my dad built would be critiqued by my mum, and if it didn’t pass muster, she would say, “It’s a bit wibbly-wobbly, isn’t it, dear?” Once I’d built each desk and loaded it for normal use, I would check it for the wibble-wobbles: rocking it from side to side and forward and backward to check that all the screws, bolts and fixtures kept everything rigid.
Tech
Origin Lab raises $8M to help video game companies sell data to world-model builders
As AI begins to interact with the physical world, new types of labs are working to build world models that could be used to operate physical robotics or model objects in physical space. Unlike large language models, there isn’t an easy source of data for those models, which has left many labs scrambling to assemble the necessary training sets.
Now, one startup is emerging with an unlikely data source: the video game industry.
That’s the premise of Origin Lab, which just announced an $8 million seed funding round led by Lightspeed Ventures. SV Angel, Eniac, Seven Stars, and FPV also participated, with angel funding from Twitch co-founder Kevin Lin and Cruise founder Kyle Vogt.
“The AI systems that are being built now need to understand how the physical world works and how things move,” co-CEO and co-founder Anne-Margot Rodde told TechCrunch. “That data essentially lives in video games.”
In simple terms, Origin Lab will serve as a marketplace where world-model-focused labs such as Yann LeCun’s AMI Labs or Fei-Fei Li’s World Labs can buy high-quality licensed data. On the other side of the trade, video game companies can squeeze additional revenue out of the digital assets they’ve already created. In the middle, Origin Lab will convert the video game assets into a form that works as training data — something that could be as simple as a rendering run or as complex as automating hours of walkthrough footage.
“It became clear that the video game industry was sitting on some incredibly valuable data, but there was no real way or infrastructure to basically connect AI labs and the video game industry,” says Rodde. “So essentially, we built that bridge.”
Labs have long been interested in video game footage as a data source, but licensing and data quality issues have often gotten in the way. In December 2024, OpenAI caused a minor scandal when the first version of its Sora video-generation model seemed to regurgitate footage of popular video games and streamers — presumably because it had been trained on Twitch streams. Amazon has been open about its interest in using Twitch footage to train models.
Origin’s success in fundraising is a sign of a growing market — not just for training data, but for startups that can serve as essential suppliers to major AI labs. Faraz Fatemi, a partner at Lightspeed who led the Origin investment, says the success of companies like Scale.AI has made the opportunity impossible to ignore.
“We’ve seen how sharp the revenue scaling can be for data vendors that are serving the major labs,” Fatemi told TechCrunch. “These are very well-capitalized businesses, and the bottleneck for all of them is data.”
When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.
Tech
A big price cut makes the Kindle Colorsoft much easier to recommend
Colour displays on e-readers have always come at a premium steep enough to make most readers pause and quietly settle for black and white instead.
The Kindle Colorsoft is Amazon’s answer to that hesitation, and it is now down from £239.99 to £174, with £65.99 off its usual retail price.
This is not simply a standard Kindle with a colour filter dropped over the top; the Colorsoft uses a purpose-built 7-inch Colorsoft display optimised specifically for colour reading, delivering 300ppi in black and white and 150ppi in colour, with a paper-like quality that makes book covers and illustrated content genuinely worth looking at.
The adjustable warm light shifts the display from white to amber, which means comfortable reading holds up whether you are outside in direct sunlight at midday or winding down under a lamp late at night.
Battery life reaches up to eight weeks on a single charge, based on half an hour of reading per day with wireless off and the light at a moderate setting, so the colour display does not come at the cost of the long battery life Kindle readers expect.


Storage sits at 16GB, which is enough to hold thousands of books locally, and free cloud storage covers the rest of your Amazon content library without taking up any space on the device itself.
Waterproofing is rated to IPX8, meaning the Kindle Colorsoft can handle submersion in two metres of fresh water for up to 60 minutes, making it genuinely bath-safe and pool-safe rather than just splash-resistant in name.
Highlighting works across four colours (yellow, orange, blue, and pink), which makes it a more active reading tool for anyone who annotates regularly and wants to distinguish between different types of notes across a single book.
This deal makes strong sense for readers who have been watching the colour Kindle category and waiting for the price to become more reasonable, with the Colorsoft now sitting at its most accessible since launch.
Still want to explore the full Kindle lineup before deciding? Our best Kindle 2026 guide has every current model tested and ranked to help you find the right fit.
Tech
Turning AI cost spikes into strategic growth opportunities
Presented by Apptio, an IBM company
AI spending is surging, but the full impact often remains an open question. Closing the gap requires clear answers to how AI is governed, measured, and tied to business outcomes.
ROI uncertainty isn’t unique to AI: In the Apptio 2026 Technology Investment Management Report, 90% of technology leaders surveyed said that ROI uncertainty has a moderate or major impact on overall tech investment decisions, a 5-percentage-point year-over-year increase. In other words, tech leaders are increasing their reliance on ROI – even if they don’t fully know how to measure it. And AI economics involves new and unpredictable costs, further complicating ROI calculations. Faced with increasing uncertainty and increasing budgets, technology leaders need a clear, reliable framework for evaluating AI ROI.
Organizations increasingly expect scaled AI to pay its own way, at least partially. According to Apptio’s technology investment management report, 45% of organizations surveyed intend to fund innovation by reinvesting savings from AI-driven efficiencies. That model assumes that such savings are both achievable and quantifiable. Meanwhile, the two-thirds of organizations planning to reallocate existing budget capital to AI will need clarity on the trade-offs involved.
Much like the early days of public cloud, AI costs and returns are difficult to predict. Pricing varies widely across providers and continues to evolve, while consumption is unpredictable. The pressure to adopt quickly is also formidable as organizations navigate the threat of disruption by more agile competitors.
The new math of AI ROI
Considering the many variables, tech leaders should view AI ROI as a matter of optimization. At a high level, the implementation of AI initiatives is inevitable. The question is how to achieve the greatest possible returns — both financial and organizational.
Start with the business problem. There are many ways AI can deliver positive impact, but organizational resources and focus may be limited. Make sure you’re prioritizing the right initiatives by basing your AI investment strategy on quantifiable goals tied to real business outcomes. Are you trying to improve decision-making speed? Increase throughput or capacity? Or are you chasing edge cases with high potential returns but minimal strategic relevance?
Determine what success looks like. AI can introduce a new capability or augment an existing one. For new capabilities, articulate the possibilities you’d like to unlock, such as new revenue opportunities, workflows, or decision-making processes. For augmentations, establish baseline performance and the expected lift you aim to achieve with AI.
Consider how finances will influence your evaluation. Some use cases may show minimal results in the near-term but drive significant value in the long-term. What’s your timeframe for return? On the other hand, more successful rollouts with rapid adoption can generate unexpectedly high inference bills. Would that mean pulling the plug — or leaning in further? What should your cost and return curve look like over the years? As you map your timeline, establish clear thresholds to determine whether you’ll proceed, pause, stop, or accelerate your investment.
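The proceed/pause/stop/accelerate thresholds described above can be sketched as a simple decision rule. This is an illustrative example only, not an Apptio or TBM API; the function name, inputs, and specific cut-off numbers are all invented for demonstration.

```python
# Hypothetical sketch of mapping measured results to a go/no-go action.
# Thresholds would come from the cost-and-return timeline each organization
# defines; the numbers below are placeholders, not recommendations.

def investment_decision(quarterly_roi: float, cost_vs_budget: float) -> str:
    """Map a measured ROI and spend ratio to an investment action.

    quarterly_roi  -- return divided by cost for the quarter (1.0 = break even)
    cost_vs_budget -- actual spend divided by budgeted spend for the quarter
    """
    if quarterly_roi >= 1.5 and cost_vs_budget <= 1.2:
        return "accelerate"   # strong returns, spend under control
    if quarterly_roi >= 1.0:
        return "proceed"      # at least breaking even
    if cost_vs_budget > 1.5:
        return "stop"         # below break-even and badly over budget
    return "pause"            # below break-even; re-evaluate before more spend

print(investment_decision(1.8, 1.1))  # -> accelerate
print(investment_decision(0.8, 1.6))  # -> stop
```

The point of codifying the rule, per the article, is that a surprisingly high inference bill triggers a pre-agreed decision rather than an ad hoc debate.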
Identify the right KPIs. The returns on an AI investment can be even more difficult to evaluate than the costs. Usage, efficiency, and financial impact all matter. But AI success metrics won’t always be straightforward. There may be new usage patterns you don’t yet have a way to measure. Your technology environment may experience follow-on shifts that call for further evaluation. Will you be able to lessen your reliance on other tools, such as reducing seats in your data analytics platform? How will you factor in cross-tool pricing comparisons for multiple AI providers with shifting rates?
To gain full context and insight, you must also take into account the alignment of the initiative with your broader strategy and consider the opportunity cost of the investments you might otherwise have made. Remember that you’re not evaluating AI business value in isolation; you’re deciding whether it’s the best use of finite capital across all your investments.
These decisions will call for a level of insight far exceeding what was needed to justify traditional purchases like network infrastructure or enterprise software. Tech leaders navigating the complexities of AI economics should consider a new framework for data-driven decision-making.
Making AI investment sustainable with TBM
Technology business management (TBM) helps make ROI more concrete and measurable, so it can be relevant to the business. By bringing together IT Financial Management (ITFM), AI FinOps (cloud financial management for AI workloads), and Strategic Portfolio Management (SPM), a TBM framework connects financial, operational, and business data across the enterprise. This makes it possible to account for AI value and cost across a wide array of dimensions — and translate hypothetical innovation into board presentations and budget justifications that hold up under scrutiny.
TBM can help leaders build a trustworthy cost foundation that captures AI spend across labor, infrastructure, inference, storage, and applications. As AI workloads shift dynamically, TBM provides visibility into how that spend is distributed across on-premises systems and cloud environments — both of which require different capacity planning and specialized skill sets. The framework also connects investments to business outcomes, aligning AI initiatives with strategic priorities and measurable results.

With increased visibility, you’re able to identify issues and make decisions fast, such as catching cost spikes early. Early detection can help determine whether a usage shift merits shifting funding. This unified view of financial and operational data helps leaders scale what’s working and reassess what isn’t as adoption increases.

TBM provides essential visibility and context across the entire AI spend management conversation. Even as pricing evolves, tooling changes, and workflows shift, you can apply the same analytical approach to understand what’s actually working and demonstrate ROI. Leaders who operationalize AI within a TBM framework can:
- Evaluate ROI at both project and portfolio levels
- Spot unexpected cost spikes
- Compare multiple AI tools
- Understand ripple effects across run-the-business systems
- Defend investment decisions with confidence
- Understand and manage total costs and usage across the AI investment lifecycle
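The cost-foundation idea above — tagging every AI cost item with a dimension, rolling spend up, and flagging spikes early — can be sketched in a few lines. This is a hypothetical illustration, not a TBM or Apptio API; the category names and the 25% spike threshold are invented for the example.

```python
# Illustrative sketch: roll up tagged AI spend and flag period-over-period
# spikes. Dimensions and threshold are placeholders, not a real TBM schema.
from collections import defaultdict

def roll_up(cost_items):
    """Sum spend per dimension from (dimension, amount) records."""
    totals = defaultdict(float)
    for dimension, amount in cost_items:
        totals[dimension] += amount
    return dict(totals)

def flag_spikes(previous, current, threshold=0.25):
    """Return dimensions whose spend grew more than `threshold` vs last period."""
    return [d for d, spend in current.items()
            if d in previous and spend > previous[d] * (1 + threshold)]

last_month = roll_up([("inference", 40_000), ("storage", 8_000), ("labor", 60_000)])
this_month = roll_up([("inference", 70_000), ("storage", 8_500), ("labor", 61_000)])
print(flag_spikes(last_month, this_month))  # -> ['inference']
```

A real implementation would pull these records from billing and ITFM systems; the structure — tag, aggregate, compare against a threshold — is the part the framework standardizes.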
From theory to practice
Organizations are moving beyond AI experiments, and we’re past the point where these investments can be funded on optimism alone. Amid heightened uncertainty and cost sensitivity, boards are asking more strategic questions and finance wants trustworthy data.
Enterprise leaders who treat AI as a managed investment, rather than a bet on innovation, are those who will scale it successfully. To fund AI responsibly, leaders must establish clarity around scope, outcomes, cost drivers, and readiness. A TBM-driven approach provides the data foundation, visibility, and accountability to make those decisions.
Learn more here about how Apptio TBM transforms IT spend management in the AI era.
Ajay Patel is General Manager at Apptio, an IBM Company.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
Tech
Supreme Court Breaks Another Election To Make Sure Black Voters Are Disenfranchised
from the voter-certainty-be-damned dept
Again, I feel like I’m going crazy here, but the obviously extremely partisan Supreme Court has struck again. I will repeat some of the basics, because it’s hard to believe how blatant all of this is. In November, a (Trump-appointed) judge threw out Texas’s new congressional maps, noting that the Texas state government had made it quite clear the redistricting was done for racial reasons, making it a violation of the Voting Rights Act. The judge wrote a detailed 160-page ruling showing how the Trump administration itself had essentially locked in the Texas legislature’s need to draw maps based on race, by threatening them with a civil rights complaint if they didn’t.
The Supreme Court, however, blocked that ruling in December, saying that because of the upcoming midterm elections (still months away in December), Texas had to use the new maps (which had only been created in August) because (according to Samuel Alito) Texas voters needed “certainty.” Of course, they could have gone right back to the maps Texas had been using up until August — but somehow that would have shaken things up too much.
Then, a few weeks ago, the Supreme Court issued its Callais decision, effectively wiping out the remaining bits of the Voting Rights Act. Louisiana immediately declared a state of emergency and sought to throw out the map it had already started using for primary season — to redraw it in a much more racist way. And Samuel “the voters need certainty” Alito helped them along by rushing the certification of the Callais decision.
Now, just a few days later, the conservative majority on the Supreme Court has also vacated an even more detailed ruling rejecting maps in Alabama for being racist. The conservative majority claims that this is in light of the ruling in Callais:
The judgment of the United States District Court for the Northern District of Alabama in that case is vacated, and the case is remanded to the United States Court of Appeals for the Eleventh Circuit with instructions to remand to the District Court for further consideration in light of Louisiana v. Callais
Now, that’s already odd for the same reason I raised earlier about the Supreme Court (led by Justice Alito) claiming back in December that they couldn’t overturn Texas’ new map (which had only been announced, and never actually used, months earlier) for the sake of “voter certainty.” Yet here they are issuing a ruling EIGHT DAYS before the Alabama primary.
What the fuck?
It’s bizarre for multiple other reasons as well, including that the Supreme Court already heard a related case regarding the map in Alabama and ruled that it violated the Voting Rights Act (Alito, naturally, dissented). The state went to redraw its map based on that, but the lower court rejected the new maps almost exactly a year ago in an astounding 571-page ruling.
Notably, while that ruling does find that the new maps violate the Voting Rights Act (in multiple ways), it also found that the maps directly violate the Fourteenth Amendment (this discussion is towards the end of that 571-page ruling, so perhaps Alito and the other conservative Justices didn’t read that far?). And, as much as the Court believes it can invalidate the Voting Rights Act, it cannot invalidate the Constitution.
So we have a ridiculously thorough 571-page district court ruling — finding that the maps violate not just the VRA but also the Fourteenth Amendment — and the conservative majority just waves it away. Yet the conservatives on the Supreme Court — the same group who said no last-minute map changes for “voter certainty” — just ordered that clearly discriminatory, unconstitutional map into use, because of how they changed their interpretation of the Voting Rights Act.
But, as Justice Sotomayor points out in her dissent, that would totally ignore the Fourteenth Amendment part!
At the end of that trial, the District Court concluded “with great reluctance and dismay and even greater restraint” that Alabama had not merely spurned the opportunity to remedy past discrimination, but in fact had intentionally violated the Fourteenth Amendment.
Given that, the ruling in Callais could only possibly impact the VRA part of the lower court decision. Not the Fourteenth Amendment bit. But the majority on the Supreme Court just ignores that.
Nothing in the District Court’s Fourteenth Amendment analysis is affected by this Court’s opinion in Callais. Most obviously, Callais changed the legal standard for vote-dilution claims under §2. See 608 U. S., at ___ (slip op., at 19) (“[W]e must understand exactly what §2 of the Voting Rights Act demands”). It said not a word about the standard for Fourteenth Amendment intentional-discrimination claims like the one that the District Court decided on remand in round two.
Even worse, Sotomayor points out that in Callais itself, the majority had claimed that the earlier 2022 ruling regarding the Alabama maps (where they said it violated the VRA) remains good law. But this new ruling clearly contradicts that claim.
Callais also insisted that this Court’s prior decision in Allen remains good law. See id., at ___ (slip op., at 36) (“[W]e have not overruled Allen”). These cases are, of course, Allen. So if Allen is good law anywhere, then it must be good law here. This Court’s finding of racially discriminatory vote dilution is an inextricable, permanent feature of this case, and Alabama’s willful decision to respond by entrenching rather than remedying that dilution is, as the District Court correctly recognized, evidence of discriminatory intent
So, was Alito lying a week and a half ago when he said that Allen was still good law? Or did he just change his mind now, because he’s decided that he needs to proactively strip Black voters of their franchise for the sake of helping Republicans get a few more seats in the House?
And John Roberts wonders why people claim the Supreme Court is “partisan.”
Sotomayor also points out the ridiculousness of doing this a week before the election:
Even if Callais had something to say about the evidence necessary to establish discriminatory intent, it still would not be appropriate to vacate the decision below at this time. That is because Alabama’s congressional primary election is next week, and vacating the District Court’s injunction will immediately replace the current map with Alabama’s 2023 Redistricting Plan until the District Court acts, even though voting has already begun. Vacatur is an equitable remedy, and the Court should not lightly wield it to unleash chaos and to confuse voters.
Honestly, I’m a bit disappointed that she didn’t point to Alito’s “voters need certainty” claim for refusing to block Texas’ new maps back in December.
There is no good-faith reading of these events. Alito said Allen was still good law — then acted as if it wasn’t, twelve days later and eight days before an election. He said voters need “certainty” — then vacated a 571-page ruling finding unconstitutional discrimination with a week to go before Alabama’s primary. And the majority just waved away the Fourteenth Amendment finding entirely, as if they simply didn’t notice it was there.
John Roberts keeps insisting the Court isn’t partisan. At some point, the gap between that claim and what the Court actually does becomes its own kind of answer.
Filed Under: 14th amendment, alabama, john roberts, racism, redistricting, samuel alito, sonia sotomayor, supreme court, texas, voting rights act