
Tech

6 Best Duffel Bags We Tested While Traveling (2026)

This is not a true duffel bag so much as “the world’s first true wide-mouth packing system,” as Rux calls it, but it is nevertheless an impressive piece of equipment from a company known for its modular gear-toting systems. Not unlike a foldable version of Rux’s popular 70L storage container, the Duffel Box starts completely flat, but the sides pop up and the patent-pending top rolls down to form a box that stays open on its own. There are no zippers involved in its construction, but there are multiple straps, panels, and pockets, and you will most likely need to watch an instructional YouTube video to make full use of all the features. The beauty of this bag, however, is that it can be just about anything you want it to be: long-term storage, luggage, a gear box, even a backpack. All are possible with the included straps and dividers in the right places.

Over the past four months, my family has used it as a traditional duffel bag, a storage box, and, currently, a portable equipment organizer for my son’s club soccer team. It’s been stepped on, rained on, and thrown in wagons and vehicle trunks, with nary a scratch on the 105D nylon gridstop fabric. (Though it did get stuck in a downpour once, and I’m not sure I’d characterize the fabric as fully waterproof; it’s closer to water-resistant.) Lash points along the inside walls allow it to integrate with Rux’s line of accessories and packing bags (sold separately), in which we’re currently keeping pinnies and goalkeeper gear.

The Duffel Box will be officially for sale on March 16 in two sizes, 55L and 75L; pictured is the 55L. Note that a “Plus” version will include a removable universal shoulder strap, which connects to lash points on the outside, for an extra $25. —Kat Merck

Capacity 55L, 75L
Color Options 2
Dimensions 14.2″ x 18.1″ x 12.6″
Materials Nylon gridstop with waterproof coating and PFAS-free DWR. 3-mm EVA foam.
Additional Features Zipper-free. Water-resistant. Compatible with various accessories and packing bags.
Warranty Lifetime


iPhone users can test encrypted RCS texts to Android in iOS 26.4 beta 2

After the first beta was iPhone to iPhone only, the second iOS 26.4 developer beta lets iPhones and Androids trade fully encrypted RCS messages for the first time.

RCS support was added to the iPhone in 2024
RCS support is now being extended to include end-to-end encryption

You may recall that in the first iOS 26.4 developer beta, Apple introduced end-to-end encryption (E2EE) for RCS messaging. You may also remember that it was extremely limited and only worked between iPhones with iMessage disabled.

Yes, Section 230 Should Apply Equally To Algorithmic Recommendations

from the it-won’t-do-what-you-think-if-you-remove-it dept

If you’ve spent any time in my Section 230 myth-debunking guide, you know that most bad takes on the law come from people who haven’t read it. But lately I keep running into a different kind of bad take—one that often comes from people who have read the law, understand the basics passably well, and still say: “Sure, keep 230 as is, but carve out algorithmically recommended content.”

Unlike the usual nonsense, this one is often (though not always) offered in good faith. That makes it worth engaging with seriously.

It’s still wrong.

Let’s start with the basics: as we’ve described at great length, the real benefit of Section 230 is its procedural protections, which get vexatious cases tossed out at the earliest (i.e., cheapest) stage. That makes it possible for sites that host third-party content to do so without getting sued out of existence any time anyone has a complaint about someone else’s content being on the site. This distinction gets lost in almost every 230 debate, but it’s important: if the lawsuits that removing 230 protections would enable would still eventually fail on First Amendment grounds, then the only thing removing those protections accomplishes is making litigation impossibly expensive for individuals and smaller providers, without doing any real damage to the large companies that can easily survive those lawsuits.


And that takes us to the key point: removing Section 230 for algorithmic recommendations would only lead to vexatious lawsuits that will fail.

But what about [specific bad thing]?

Before diving into the legal analysis, let’s engage with the strongest version of this argument. Proponents of carving out algorithmic recommendations typically aren’t imagining ordinary defamation suits. They’re worried about something more specific: cases where an algorithm itself arguably causes harm through its recommendation patterns—radicalization pipelines, engagement-driven amplification of dangerous content, recommendation systems that push vulnerable users toward self-harm.

The theory goes something like this: maybe the underlying content is protected speech, but the act of recommending it—especially when the algorithm was designed to maximize engagement and the company knew this could cause harm—should create liability, usually as some sort of “products liability” type complaint.


It’s a more sophisticated argument than “platforms are publishers.” But it still fails, for reasons I’ll explain below. The short version: a recommendation is an opinion, opinions are protected speech, and the First Amendment doesn’t carve out “opinions expressed via algorithm” as a special category.

A short history of algorithmic feeds

To understand why removing 230 from algorithmic recommendations would be such a mistake, it helps to remember the apparently forgotten history of how we got here. In the pre-social media 2000s, “information overload” was the panic of the moment. Much of the discussion centered on the “new” technology of RSS feeds, and there were plenty of articles decrying too much information flooding into our feed readers. People weren’t worried about algorithms—they were desperate for them. Articles breathlessly anticipated magical new filtering systems that might finally surface what you actually wanted to see.

The most prominent example was Netflix, back when it was still shipping DVDs. Because there were so many movies you could rent, Netflix built one of the first truly useful recommendation algorithms—one that would take your rental history and suggest things you might like. The entire internet now looks like that, but in the mid-2000s, this was revolutionary.


Netflix’s approach was so novel that they famously offered $1 million to anyone who could improve their algorithm by 10%. We followed that contest for years as it twisted and turned until a winner was finally announced in 2009. Incredibly, Netflix never actually implemented the winning algorithm—but the broader lesson was clear: recommendation algorithms were valuable, and people wanted them.

As social media grew, the “information overload” panic of the blog+RSS era faded, precisely because platforms added recommendation algorithms to surface content users were most likely to enjoy. The algorithms weren’t imposed on users against their will—they were the answer to users’ prayers.

Public opinion only seemed to shift on “algorithms” after Donald Trump was elected in 2016. Many people wanted something to blame, and “social media algorithms” was a convenient excuse.

Algorithmic feeds: good or bad?


Many people claim they just want a chronological feed, but studies consistently show the vast majority of people prefer algorithmic recommendations, because they surface more of what users actually want, compared to chronological feeds.

That said, it’s not as simple as “algorithms good.” There’s evidence that algorithms optimized purely for engagement can push emotionally charged political content that users don’t actually want (something Elon Musk might take notice of). But there’s also evidence that chronological feeds expose users to more untrustworthy content, because algorithms often filter out garbage.

So, algorithms can be good or bad depending on what they’re optimized for and who controls them. That’s the real question: will any given regulatory approach give more power to users, to companies, or to the government?

Keep that frame in mind. Because removing 230 protections for algorithmic recommendations shifts power away from users and toward incumbents and litigants.


The First Amendment still exists

As mentioned up top, the real role of Section 230 is procedural: it gets vexatious lawsuits tossed well before (and at a much lower cost than) they would be tossed anyway under the First Amendment. With Section 230, you can get a case dismissed for somewhere in the range of $50k to $100k (maybe up to $250k with appeals and such). If you have to rely on the First Amendment, the cost runs into the millions of dollars (probably $5 to $10 million).

And the crux of it is this: any online service sued over an algorithmic recommendation, even a recommendation of something horrible, would almost certainly win on First Amendment grounds.

Because here’s the key point: a recommendation feed is a website’s opinion about what it thinks you want to see. And an opinion is protected speech, even if you think it’s a bad or dangerous opinion. That is one thing US courts have been quite consistent about.


Saying that an internet service can be held liable for giving its opinion on “what we think you’d like to see” would be earth-shatteringly problematic. As discussed above, the modern internet relies heavily on algorithms recommending things, which is to say, giving opinions. Every search result is exactly that: an opinion.

This is why the “algorithms are different” argument fails. Yes, there’s a computer involved. Yes, the recommendation emerges from machine learning rather than a human editor’s conscious decision. But the output is still an expression of judgment: “Based on what we know, we think you’ll want to see this.” That’s an opinion. The First Amendment doesn’t distinguish between opinions formed by editorial meetings and opinions formed by trained models.

In an earlier internet era, companies sued Google because they didn’t like how their own sites appeared (or didn’t appear) in Google’s search results. The E-Ventures v. Google case is instructive here. Google determined that E-Ventures’ “SEO” techniques were spammy and de-indexed all of its sites. E-Ventures sued. Google (rightly) raised a 230 defense, which (surprisingly!) a court rejected.

But the case went on longer, and after lots more money on lawyers was spent, Google did prevail on First Amendment grounds.


This is exactly what we’re discussing here. Google search ranking is an algorithmic recommendation engine, and in this one case a court (initially) rejected a 230 defense, causing everyone to spend more money… to get to the same basic result in the long run. The First Amendment protects a website using algorithms to express an opinion over what it thinks you’ll want… or not want.

Who has agency?

This brings us back to the steelman argument I mentioned above: what about cases where an algorithm recommends something genuinely dangerous?

Our legal system has a clear answer, and it’s grounded in agency. A recommendation feed is not hypnotic. If an algorithm surfaces content suggesting you do something illegal or dangerous, you still have to make the choice to do the illegal or dangerous thing. The algorithm doesn’t control you. You have agency.


But there’s a stronger legal foundation here too. Courts have consistently found that recommending something dangerous is still protected by the First Amendment, particularly when the recommender lacks specific knowledge that what they’re recommending is harmful.

The Winter v. G.P. Putnam’s Sons case is instructive here. The publisher of a mushroom encyclopedia included recommendations to eat mushrooms that turned out to be poisonous, which is about as dangerous as a bad recommendation gets. But the court found the publisher wasn’t liable because it had no specific knowledge that the recommendation was dangerous. And crucially, the court noted that the “gentle tug of the First Amendment” would block any “duty of care” that would require publishers to verify the safety of everything they publish:

The plaintiffs urge this court that the publisher had a duty to investigate the accuracy of The Encyclopedia of Mushrooms’ contents. We conclude that the defendants have no duty to investigate the accuracy of the contents of the books it publishes. A publisher may of course assume such a burden, but there is nothing inherent in the role of publisher or the surrounding legal doctrines to suggest that such a duty should be imposed on publishers. Indeed the cases uniformly refuse to impose such a duty. Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs.

Now, I should acknowledge that Winter was a products liability case involving a physical book, not a defamation or tortious-speech case involving an algorithm. But almost all of the current cases challenging social media are self-styled as products liability cases precisely in order to try (usually without success) to avoid the First Amendment, and that is all the algorithm cases would be as well.

The underlying principle remains the same whether you call it a products liability case or one officially about speech: the First Amendment bars requirements that publishing intermediaries must “investigate” whether everything they distribute is accurate or safe. The reason is obvious—such liability would prevent all sorts of things from getting published in the first place, putting a massive damper on speech.


Apply that principle to algorithmic recommendations, and the answer is clear. If a book publisher can’t be required to verify that every mushroom recommendation is safe, a platform can’t be required to verify that every algorithmically surfaced piece of content won’t lead someone to harm.

The end result?

So what would it mean if we somehow “removed 230 from algorithmic recommendations”?

Practically, it means that if companies have to rely on the First Amendment to win these cases, only the biggest companies can afford to do so. The Googles and Metas of the world can absorb $5-10 million in litigation costs. For smaller companies, those costs are existential. They’d either exit the market entirely or become hyper-aggressive about blocking content at the first hint of legal threat—not because the content is harmful, but because they can’t afford to find out in court.


The end result would be that the First Amendment still protects algorithmic recommendations—but only for the very biggest companies that can afford to defend that speech in court.

That means less competition. Fewer services that can recommend content at all. More consolidation of power in the hands of incumbents who already dominate the market.

Remember the frame from earlier: does this give more power to users, companies, or the government? Removing 230 from algorithmic recommendations doesn’t empower users. It doesn’t make platforms more “responsible.” It just makes it vastly harder for anyone other than the giant platforms to exist while also giving more power to governments, like the one currently run by Donald Trump, to define what things an algorithm can, and cannot, recommend.

Rather than diminishing the power of billionaires and incumbents, this would massively entrench it. The people pushing for this carve-out often think they’re fighting Big Tech. In reality, they’re fighting to build Big Tech a new moat.


Filed Under: 1st amendment, algorithmic feeds, algorithmic recommendations, algorithms, feeds, free speech, opinion, section 230


What is the release date for The Pitt season 2 episode 8 on HBO Max?

Things are slowly getting worse during the Fourth of July hospital shift in The Pitt season 2. Not only has Dana (Katherine LaNasa) been trying her best to shield a sexual assault victim from the chaos of the emergency room, but a new type of disaster has entered the chat.

At the end of episode 7, the entire hospital system has been shut down to prevent being targeted by a cyberattack. What this means practically remains to be seen, but as it stands, nobody has access to patient records, or even knows where patients are in the building.


iBoot to mBoot — Apple's iPhone bootloader has a mysterious new name

The iOS bootloader just got its first name change, from ‘iBoot’ to ‘mBoot.’ As to why, nobody outside of Apple Park knows yet.

An iPhone on a table, with code on-screen.
The second iOS 26.4 developer beta renames the iOS bootloader.

While the second iOS 26.4 developer beta makes it possible to test end-to-end encrypted RCS texting with Android devices, the software contains another, more mysterious change.
Apple has altered the long-standing name of the iOS bootloader, the first such change since the operating system debuted nearly two decades ago.

ViewSonic LX60HD Smart LED Projector Delivers 1080p Big Screen Streaming for $299

The lifestyle projector category is no longer a niche sideshow in home theater. It is one of the fastest growing segments in the display market, driven by mobility, improving image quality, lower prices, and the simple reality that a 100-inch picture is more fun than a 55-inch TV when friends come over. Consumers want something they can move from the living room to the bedroom, take outside for movie night, or toss in a bag for a weekend away without hiring an installer.

The ViewSonic LX60HD lands squarely in that conversation. Known primarily for its PC monitors and its business and home theater projectors, ViewSonic is leaning into the lifestyle trend with a portable Smart LED model that focuses on flexible placement, easy setup, and built-in content access right out of the box. It is designed to make big-screen viewing less intimidating, less permanent, and far more accessible, at a price that does not require a second mortgage.

ViewSonic LX60HD Features & Specifications:

Product Design: The LX60HD uses the familiar cube-style chassis that’s become the default look for lifestyle projectors—compact, portable, and designed to sit just about anywhere without looking like “serious home theater equipment.”

Imaging Chip and Light Output: Inside, the LX60HD uses a single TFT LCD imaging chip paired with an LED light source rated at 630 ANSI lumens. ViewSonic also uses a sealed light engine to reduce the impact of dust and moisture over time.


Resolution: Native 1080p (Full HD).

Optical Engine: The sealed optical engine is designed to help keep dust and moisture from entering the light path—important for a projector that’s likely to be moved around, used in different rooms, or taken on the road.

[Image: ViewSonic LX60HD rear inputs]

Connectivity: The LX60HD covers both wireless and wired use cases. Wireless support includes built-in Wi-Fi and Bluetooth. For physical connections, it offers HDMI, USB-C, AV-in, and an audio out port for external speakers or headphones.

[Image: ViewSonic LX60HD setup]

Easy Setup: The LX60HD includes a suite of automated setup tools designed to simplify placement and alignment. These features include auto four-corner adjustment, automatic horizontal and vertical keystone correction, auto screen fit, instant autofocus, and obstacle avoidance to help maintain a properly sized and aligned image with minimal manual intervention.

[Image: ViewSonic LX60HD screen sizes]

Image Size Options: ViewSonic states that the LX60HD can project images up to 140 inches. In practical terms, it can produce an approximately 50 inch image from about 5 feet away, or scale up to around 100 inches from roughly 9 feet. As with any projector rated at 630 ANSI lumens, overall picture quality will vary depending on ambient light conditions, with best results achieved in dim or darkened rooms.
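To put those numbers in perspective, here is a hedged back-of-the-envelope sketch of the size-versus-distance relationship, anchored on the spec table's 100″-at-2.28 m reference point. The linear-scaling assumption (valid for a fixed-lens projector) and the helper names are ours, not ViewSonic's:

```python
# Rough helper illustrating the spec sheet's size/distance relationship, using
# the listed reference point (100" diagonal at 2.28 m) and assuming image size
# scales linearly with throw distance, as it does for a fixed-lens projector.

REF_DIAGONAL_IN = 100.0  # inches, from the spec table
REF_DISTANCE_M = 2.28    # meters, from the spec table

def diagonal_at(distance_m: float) -> float:
    """Approximate projected diagonal (inches) at a given throw distance (meters)."""
    return REF_DIAGONAL_IN * (distance_m / REF_DISTANCE_M)

def distance_for(diagonal_in: float) -> float:
    """Approximate throw distance (meters) needed for a given diagonal (inches)."""
    return REF_DISTANCE_M * (diagonal_in / REF_DIAGONAL_IN)
```

By this estimate, the minimum listed throw of 1.42 m yields roughly a 62-inch image, and the maximum 140-inch image needs a bit over 3 m of distance, broadly consistent with the figures ViewSonic quotes.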

[Image: ViewSonic LX60HD Google TV]

Google TV: The LX60HD runs on the built-in Google TV platform, providing direct access to a wide range of streaming services, including Netflix, Amazon Prime Video, YouTube, Disney+, Max, and others. This allows users to stream content without needing an external media device, keeping setup simple and self-contained.

[Image: ViewSonic PJ-WPD-700 dongle]

Wireless Screen Casting Dongle (Optional): ViewSonic also offers the optional PJ-WPD-700 plug and play dongle, which enables wireless screen casting from compatible smartphones and laptops directly to the LX60HD. It is a practical add on for classrooms, meetings, or quick presentations where running cables is not ideal.


ViewSonic LX60HD Projector Specifications

ViewSonic Model LX60HD
Projector Type Compact LED Video Projector
Price $299.99
Display Type TFT LCD x 1
Light Source Type LED
Light Source Life, Normal  20,000 Hours
Color Depth 16.7 Million Colors
Display Resolution Full HD (1920×1080)
Brightness (ANSI Lumens) 630
Dynamic Contrast Ratio 4,200:1
Screen Size 50″-140″
Aspect Ratio 16:9
Throw Distance 1.42m-3.8m (100″@2.28m)
Throw Ratio 1.2
Keystone Correction Vertical (+/- 40º) 
Horizontal (+/- 40º)
Horizontal Scan Rate 15K-135KHz
Vertical Scan Rate 23-85Hz
PC Resolution (max) VGA (640 x 480) to
Full HD (1920 x 1080)
Mac® Resolution (max) 480i, 480p, 576i, 576p, 720p, 1080i, 1080p
Wired Inputs USB 2.0 Type A: 1
HDMI 1.4 (with HDCP 1.4): 1
AV In: 1
Wired Outputs 3.5mm Audio Out: 1
WiFi 5Gn
Bluetooth Version 5.0
Bluetooth Audio-In 1 (BT 5.0) – Direct streaming from compatible smartphones, PCs, etc.
Bluetooth Audio-Out 1 (BT 5.0) – Compatible with Bluetooth headphones or speakers
Power Supply 100-240V+/- 10%, 50/60Hz AC
Stand-by <0.5W
Physical Control Keypad, Power key
On-Screen Display Display Image, Power Management, Basic and Advanced System Information (see user guide for full OSD functionality)
Operating Temperature 32-104º F (0 – 40 °C)
Kensington Lock Slot 1
Dimensions 9.0 x 8.9 x 6.3 inches (228 x 227 x 159 mm)
Net Weight 6.8 lbs
Package Contents Projector
Power Cable
Remote Control
Quick Start Guide
Warranty: One-year limited warranty on parts and labor

The Bottom Line 

The lifestyle projector space is crowded with inexpensive models that promise the world and deliver a dim flashlight. ViewSonic is at least playing this one straight. The LX60HD’s 630 ANSI lumens puts it ahead of portable competitors like the Xgimi MoGo 4 (450 lumens) and Samsung Freestyle (550 lumens), while still landing under the $300 mark. That matters.

You’re getting native 1080p, solid auto-setup tools, built-in Bluetooth, and Google TV in one compact cube. For a bedroom, dorm, office, or casual movie night, it makes a lot of sense. Setup is simple. Streaming is built in. Portability is the point.

But let’s keep expectations grounded. 630 lumens is not enough for a large screen home theater in a bright living room. This projector needs dim or near dark conditions to look its best, especially at 100 inches or larger. If you want a daylight TV replacement, this is not it.


The design is clean and easy to move, although a built-in carry handle or an optional floor stand would have made it even more flexible.

For under $300, the LX60HD offers a portable, affordable lifestyle projector that delivers usable brightness, smart features, and convenience without pretending it can replace a dedicated home theater setup.


Bungie says ‘no second chances’ if you’re caught cheating in Marathon

Bungie isn’t taking any prisoners when it comes to cheating on its upcoming extraction shooter, Marathon. In a detailed blog post explaining its anti-cheat measures, Bungie took a very declarative position against those caught trying to gain an unfair advantage.

“We are taking a strong stance against cheating and anyone found to be cheating or developing cheats will be permanently banned from playing Marathon forever, no second chances,” the blog post read, adding that there will be an appeals system in place.

However, Bungie’s anti-cheat measures go beyond punishment. In the blog post, Bungie explained that Marathon‘s dedicated servers have full authority over movement, shooting, actions, and inventory. Because these key actions are validated server-side, players should see smoother gunplay as well as prevention of cheats involving teleportation, unlimited ammo, or damage manipulation. Bungie is also incorporating a “Fog of War” system that limits an individual player’s client to seeing only certain regions of a map, which should prevent wall hacks, ESP cheats, and loot revealers.
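The "Fog of War" idea can be sketched in a few lines. This is a conceptual illustration only, not Bungie's implementation; the function name, entity format, and circular visibility radius are all our assumptions. The key property is that hidden positions never reach the client, so there is nothing for a wall hack to reveal:

```python
# Conceptual sketch of server-side "fog of war" filtering (not Bungie's actual
# code): the server replicates only the entities inside a player's visible
# region, so a compromised client has no hidden positions to reveal.

def visible_entities(player_pos, entities, view_radius):
    """Return only the entities the server should send to this player's client."""
    px, py = player_pos
    r2 = view_radius ** 2
    return [e for e in entities
            if (e["x"] - px) ** 2 + (e["y"] - py) ** 2 <= r2]
```

A real engine would use map regions and line-of-sight rather than a simple radius, but the security argument is the same: filtering happens on the trusted server, before data is ever sent.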

On top of these measures, Bungie is using BattlEye, a kernel-level anti-cheat also found in other popular multiplayer shooters like Fortnite, Rainbow Six Siege, and Destiny 2. Bungie added that if you disconnect, you’ll be able to reconnect to your run without any hitches. If players can’t reconnect due to an issue with the servers, Bungie said it will “attempt to return the starting gear to all impacted players.”


Marathon isn’t out until March 5, but Bungie is doing a preview weekend with the Server Slam event starting February 26. Still, it’s obvious that Bungie already wants to get ahead of the competition, since Arc Raiders, another recently released extraction shooter, has been dealing with its own cheating problem. To address the rise in cheating, the game’s developer, Embark Studios, implemented a three-strike system, which some players have criticized as too lenient.


Spain arrests suspected hacktivists for DDoSing govt sites


Spanish authorities have arrested four alleged members of a hacktivist group believed to have carried out cyberattacks targeting government ministries, political parties, and various public institutions.

The group, which called itself “Anonymous Fénix” and claimed they were affiliated with the Anonymous hacker collective, conducted distributed denial-of-service (DDoS) attacks against targets in Spain and several South American countries, according to the Spanish Civil Guard.

The first attacks occurred in April 2023 and peaked after the flash floods that struck Valencia in late October 2024, when the group’s members attacked multiple government websites, claiming Spanish authorities were responsible for the deaths and destruction caused by the storm.

Anonymous Fénix also used X and Telegram to spread anti-government messaging and recruit volunteers for its campaigns.

“From September 2024 they increased their activity and initiated a campaign of recruitment of volunteers with the aim of perpetrating cyberattacks against relevant domains,” the Spanish Civil Guard said over the weekend.


“They reached their peak after the DANA of Valencia, when they managed to successfully attack various Public Administration websites, claiming that the authorities were ‘responsible for the tragedy.’”

The Civil Guard arrested the group’s administrator and moderator in May 2025, in Alcalá de Henares, near Madrid, and Oviedo, in the northern region of Asturias. After analyzing the evidence collected following those arrests, investigators identified two additional members of the group as its most active operatives, who were arrested earlier this month in Ibiza and Móstoles, near Madrid.

Following the arrests, Spanish courts also ordered the seizure of the group’s accounts on X and YouTube and ordered the closure of its Telegram channel. No details on specific charges or potential penalties were provided in the Civil Guard’s announcement.

In recent months, Spanish authorities also detained a 19-year-old suspect in Barcelona for allegedly breaching nine companies and dismantled the “GXC Team” crime-as-a-service (CaaS) platform that pushed AI-powered phishing kits, Android malware, and voice-scam tools.


More recently, in January, the Spanish National Police arrested 34 suspects linked to a criminal network involved in cyber fraud and believed to be connected to the Black Axe crime ring.


QUOD Is A Quake-Like In Only 64kB

The demoscene is still alive and well, and the proof is in this truly awe-inspiring game demo by [daivuk]: a Quake-like “boomer shooter” squeezed into a Windows executable of only 64 kB, which he calls “QUOD”. We’ve included the full explanation video below, but before you check out all the technical details, consider playing the game. It’ll make his explanations even more impressive.

OK, what’s so impressive? Well, aside from the fact that this is a playable 3D shooter in 64kB, with multiple enemies, multiple levels, oodles of textures, running, jumping et cetera–it’s so Quake-like he’s using TrenchBroom to make the levels. Of course he’s reprocessing them into a more space-efficient, optimized format. Yeah, unlike the famous .kkrieger and a lot of other demos in the 64kB space, this isn’t all procedurally generated. [daivuk] did make his own image editing program for procedurally generated textures, though. Which makes sense: as a PNG, the QUOD logo is probably half the size of the (compressed) executable.

The low-poly models are created in Blender, and all are designed to be symmetric: having the engine mirror the meshes saves 50% of the vertex data, since Blender only exports half of each low-poly mesh. Just as he wrote his own image editor, he has his own bespoke model tool. This allows tiling model elements, as well as handling bones and poses to keyframe the models’ animations.
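The half-mesh trick is simple to illustrate. This sketch is ours, not [daivuk]'s actual data format: store only the vertices with x ≥ 0 and reconstruct the other side at load time by mirroring across the symmetry plane at x = 0:

```python
# Illustrative sketch of the half-mesh trick (the demo's real format differs):
# store only vertices with x >= 0 and rebuild the full mesh by mirroring.

def mirror_mesh(half_vertices):
    """half_vertices: list of (x, y, z) with x >= 0. Returns the full mesh."""
    full = list(half_vertices)
    for x, y, z in half_vertices:
        if x > 0:  # vertices exactly on the plane are shared, not duplicated
            full.append((-x, y, z))
    return full
```

Vertices sitting exactly on the symmetry plane are shared between both halves, which is why they are not duplicated and why the saving approaches, but does not quite reach, 50%.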

Audio is treated similarly to textures and meshes: built up at runtime from stored data and a layered series of effects. When you realize all the sounds were assembled in his sound tool from square and sine waves, it’s all the more impressive. He’s also got an old-style tracker for the music. All of these tools output byte arrays that get embedded directly in the game code.
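Generating those primitive waveforms at runtime looks something like the following toy sketch. It is in the spirit of the demo's approach only; the sample rate, function name, and parameters are our invention, and a real synth would layer envelopes and effects on top:

```python
import math

# Toy illustration of runtime sound synthesis from primitive waveforms.
SAMPLE_RATE = 8000  # assumed sample rate for this sketch

def tone(freq_hz, seconds, shape="sine"):
    """Generate a list of samples in [-1, 1] for a sine or square wave."""
    samples = []
    for i in range(int(SAMPLE_RATE * seconds)):
        s = math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
        if shape == "square":
            s = 1.0 if s >= 0 else -1.0  # hard-clip the sine into a square
        samples.append(s)
    return samples
```

The space saving comes from storing only the recipe (frequency, duration, shape, effect parameters) as a few bytes, rather than the rendered samples.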


The video also gets into some of his optimization techniques; we like his use of a linker map file, analyzed with a Python tool, to find the exact size of game elements and thereby test his optimizations. One thing he notes is that his optimizations are all for space, not for speed. Except, perhaps, for one thing: [daivuk] created a new language and virtual machine for the game, which seems downright extravagant. It actually makes sense, though, as the virtual machine can be optimized for the limits of the game, as he explains starting at about 20 minutes into the video. Apparently it saved a whole 2 kB, which seems like nothing these days but actually let [daivuk] fit an extra level into his 64 kB limit. Sure, it’s still bigger than Quake13k (and how did we never cover that?), but you get a lot more game, too.
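The core of that kind of map-file analysis is easy to sketch. We have not seen [daivuk]'s tool; this assumes the (symbol, size, object-file) records have already been parsed out of the linker map (real map formats vary by toolchain), and simply ranks object files by how many bytes they contribute:

```python
from collections import defaultdict

# Sketch of linker-map size analysis: given (symbol, size_in_bytes, object_file)
# records parsed from a map file, rank object files by total contributed size
# to see where the bytes in the executable are actually going.

def rank_by_size(records):
    """records: iterable of (symbol, size, obj). Returns [(obj, total), ...] largest first."""
    totals = defaultdict(int)
    for _symbol, size, obj in records:
        totals[obj] += size
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

Running this before and after a change gives a precise per-component delta, which is exactly the feedback loop you need when every optimization is measured in bytes.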

So, to recap: [daivuk] didn’t just make a game with an impressively tiny size on disk, he made the entire toolchain, and a language for it to boot. If you think this is over-optimized, check out Wolfenstein in 600 lines of AWK. Of course, in spite of the 1980s file size, this needs modern hardware to run. You can get surprising graphics performance from a fraction of that, like this ATtiny sprite engine.

Thanks to [Keith Olson] for the tip, which probably took up more than 64kB on our tips line.


MAHA People Are Mad At RFK Jr. And For Good Reason As He Reverses Stance On Glyphosate

from the about-face dept

One of the more perplexing questions in all of the coverage I've done on RFK Jr. has been whether Kennedy is some misguided true believer or whether this is all some grift for power, influence, and/or money. Most people who have watched how RFK Jr. has operated on the topic of vaccines, for instance, both before and after he entered government, assume he's a real, if stupid, crusader. They will tell you the same when it comes to processed foods and pesticides, two topics on which Kennedy has also crusaded for years, and two topics on which his positions have been noticeably absent or reversed now that he's in government.

The pesticide topic was recently thrust back into the news. Trump signed an executive order that essentially demanded that two chemicals be produced in higher quantities: phosphorus and glyphosate. Kennedy then came out to cheerlead the executive order as well, which was odd when you consider what glyphosate is chiefly used for.

Trump on Wednesday night signed an executive order invoking the Defense Production Act to compel the domestic production of elemental phosphorus and glyphosate-based herbicides. Glyphosate is the chemical in Bayer-Monsanto’s Roundup and is the most commonly used herbicide for a slew of U.S. crops. Trump, in the order, said shortages of both phosphorus and glyphosate would pose a risk to national security.

Kennedy backed the president in a statement to CNBC Thursday morning.

“Donald Trump’s Executive Order puts America first where it matters most — our defense readiness and our food supply,” he said. “We must safeguard America’s national security first, because all of our priorities depend on it. When hostile actors control critical inputs, they weaken our security. By expanding domestic production, we close that gap and protect American families.”

Bayer-Monsanto has been the defendant in a number of lawsuits over its Roundup product. Specifically, those suits have been powered by claims that glyphosate causes non-Hodgkin's lymphoma, a form of cancer primarily impacting blood cells. Whatever you or I might think of those claims, Kennedy certainly said he believed them, given that he acted as counsel in some of these suits.

Kennedy, a former environmental attorney, notably once won a nearly $290 million case against Monsanto for a man who claimed his cancer was caused by Roundup. The executive order came down one day after Bayer proposed paying $7.25 billion to settle a series of lawsuits claiming Roundup causes cancer.

The MAHA crowd is understandably pissed. Building a career on these very concrete health stances, only to reverse course while in government to appease Dear Leader, is a fairly horrible look. And it's actually a worst-of-both-worlds situation: his MAHA crowd is pointing to his failed promises and hypocrisy, while his usual opponents are pointing out that this might have been one stance on which he was actually acting rationally before pulling a U-turn.

“This was one of the few issues where Secretary Kennedy actually embraced credible science,” said Kayla Hancock, Director of Public Health Watch, a project of Protect Our Care. “But RFK Jr. tossed out his years of anti-pesticide advocacy and conviction like a used tissue to stay in the good graces of Donald Trump, who cares more about making his chemical company donors happy than protecting the public’s health. This makes it clear, Secretary Kennedy has no problem selling out his supposed value if there’s a quick buck to be made for special interest donors, or political points to be scored.” 

This seems as close to a solid answer to the question I posed at the start of this post as we’re likely to get. Kennedy, whatever else he might be, is not a true-believing crusader willing to hold firm to his beliefs. He simply does and says whatever will propel his influence and revenue. That’s it.

You’ve been lied to, MAHA people. Lied to and used to put in office the very people who have betrayed you. Let that sink in.

Filed Under: donald trump, executive order, glyphosate, health, health and human services, maga, maha, pesticides, rfk jr., roundup

Companies: monsanto


New Microsoft Gaming CEO Has ‘No Tolerance For Bad AI’

In her first major interview as Microsoft’s new gaming chief, Asha Sharma said that “great games” must deliver emotional resonance and a distinct creative voice, while making clear that she has “no tolerance for bad AI.” Stepping in after Phil Spencer’s retirement, she’s pledging consistency, community trust, and a human-first approach to storytelling as Xbox enters a new era. Variety reports: Sharma was quick in laying out her top priorities for Microsoft Gaming in an internal memo announcing her promotion, noting “great games,” “the return of Xbox” and the “future of play” as her three main commitments to the gaming community. So first, what makes a great game for Sharma, whose roles prior to CoreAI include top positions at Instacart and Meta? The new Microsoft Gaming CEO tells Variety it’s all about games with “deep emotional resonance” and “a distinct point of view.” She wants to develop stories that make players “feel something,” like the kind of feelings Campo Santo’s 2016 first-person mystery “Firewatch” elicited in her.

Sharma takes on the mantle as head of the leading competitor to Sony's PlayStation and Nintendo knowing full well she's entering the role as an outsider to the larger gaming community and still has "a lot to learn." But Sharma says she's committed to "being grounded in what the community is telling us." "I'm coming into gaming as a platform builder," Sharma said, adding that her goal is to "earn the right to be trusted by players and developers" and to show the fanbase "consistency" over time. In her interview with Variety, Sharma acknowledged the tumultuous state of the gaming industry, referencing Matthew Ball's recent State of Video Gaming in 2026 report, and said the larger "transformation" of the sector is about "protecting what we believe in while remaining open-minded about the future."

Due to her strong background in AI, initial reactions to Sharma’s appointment have raised concerns about what her specific views are on the use of generative AI in game development. Sharma says her stance is simple: she has “no tolerance for bad AI.” “AI has long been part of gaming and will continue to be,” Sharma said, noting that gaming needs new “growth engines,” but that “great stories are created by humans.”


Copyright © 2025