Minutes after Donald Trump announced that the US and Israeli governments had launched a “major combat operation” against Iran in the early hours of Saturday morning, disinformation about the attack and Tehran’s response flooded X.
WIRED has reviewed hundreds of posts on X, some of which have racked up millions of views, that promote misleading claims about the locations and scale of the attack.
Elon Musk’s social media platform is a verifiable mess: In some cases, alleged video footage of the attack shared in posts on X is actually months or years old. In several posts, video footage of apparent attacks has been attributed to incorrect locations. A number of images shared on X appear to be altered or generated with AI. Other posts attempt to pass off video game footage as scenes from the conflict.
Almost all of the most viral posts reviewed by WIRED on Saturday came from accounts with blue check marks, meaning they pay X for its premium service and could be eligible to earn money based on how much engagement their posts generate, even if the content is false. While some posts with disinformation have a community note appended beneath them to correct the record, they remain up on the site, and it’s unclear how many people viewed them before the notes appeared.
One video posted by a blue check mark account claimed to show ballistic missiles over Dubai; the clip actually showed Iranian ballistic missiles fired at Tel Aviv in October 2024. The post has been viewed over 4.4 million times.
One of the most viral clips shared on X in the hours after the attack claims to show an Israeli fighter jet being shot down by Iranian air defense systems. The video has been shared by dozens of accounts, including one post which has been viewed more than 3.5 million times. The provenance of the video is unclear, but there have been no credible reports of any Israeli jets being shot down over Iran on Saturday.
Another account that claims to be an expert in open source intelligence posted a video showing explosions, alongside the caption: “6 Iranian Hypersonic Missiles hit the Indian-invested Israeli Haifa port. Massive damages reported.” The video has been viewed 64,000 times, but the footage was actually captured last July and shows an Israeli attack on the defense ministry in Damascus, Syria.
In a number of cases, pro-Iranian accounts have been using images and footage from Saturday’s attacks to falsely claim successful strikes against Israel. “IRANIAN MISSILE IMPACT IN TEL AVIV RIGHT NOW,” the Iran Observer account wrote in a post featuring an image of Dubai. The post had been viewed over 200,000 times before it was deleted, but dozens of other posts sharing the same image and making the same claims remain on X.
Tehran Times, a news outlet aligned with the Iranian government, posted what appears to be an AI-generated image on X which claims to show that “an American radar in Qatar was completely destroyed today in an Iranian drone strike.” The use of AI-generated images was flagged on X by Tal Hagin, a senior analyst with open source intelligence company Golden Owl. While there are reports that drone and missile attacks targeted the US Navy’s 5th Fleet headquarters in Bahrain, there are no reports yet of similar successful attacks in Qatar.
A pro-Trump account, which also features a blue check mark, posted images claiming to show the before and after pictures of the palace of Iranian Supreme Leader Ali Khamenei, which was targeted during Saturday’s missile attacks. (In a post on Truth Social, Trump claimed Khamenei was killed in an attack.) While the after picture appears to accurately show the palace after the attack, the before picture shows the Mausoleum of Ruhollah Khomeini, which is located on the other side of Tehran. The post has been viewed 365,000 times.
The move comes as part of Google’s broader push into the physical AI space.
Intrinsic, an Alphabet-owned software and AI company, is joining Google. The company, which was established in 2021 as one of Alphabet’s ‘other bets’ under the ‘moonshots’ research and development segment X Development, builds AI models and software designed to make industrial robots more accessible.
In joining Google, Intrinsic will continue to operate as a distinct entity; however, it will work closely with Google DeepMind and tap into Google’s Gemini AI models and cloud services. Thus far, Alphabet has declined to share information regarding funding or the purchase price.
Commenting on the news, Wendy Tan White, the CEO of Intrinsic, said: “The Intrinsic team has been working for years to enable access to intelligent robotics through a democratised platform, so more people can build and benefit from robotics applications.
“Combined with Google’s incredible AI and infrastructure, we’re going to unlock the promise of physical AI for a much broader set of manufacturing businesses and developers. This will fundamentally shift production, from its economics to operations and enable truly advanced manufacturing.”
Hiroshi Lockheimer, the chief product officer of Other Bets, added: “At Google, we see the immense opportunity in bridging the gap between the digital and physical world, that is also true for intelligent robotics in industries like manufacturing and logistics. We’re excited to welcome the Intrinsic team to Google, so we can bring breakthrough AI to more businesses and industries, at scale.”
In other Alphabet news, Alphabet and Google were in hot water earlier this month as both were at the centre of a new antitrust complaint filed by the European Publishers Council with the European Commission on 10 February.
The complaint alleged that Google and Alphabet are abusing their dominant position in general search services via the use of AI overviews and AI mode embedded within Google Search.
Picture this: you have an irregular opening you need to fabricate a piece to fill. Maybe it’s the stonework of a fireplace; maybe it’s the curved bulkhead of a ship. How do you get that shape? The most “Hackaday” answer would be to 3D scan the area, create a CAD model based on the point cloud, and route the shape with CNC. Of course, none of those were options for the entirety of human history. So how do you do it if you don’t have such high-tech toys? With a stick, as [Essential Craftsman] takes great pains to show us in the video below.
It’s not just any stick, of course. Call it a “tick stick”, a “speil stick”, or a “joggle stick” — whatever you call it, it’s just an irregularly shaped piece of wood. The irregular shape is key to the whole process. How you use it is simple: get some kind of storyboard — cardboard, MDF, whatever — that fits inside your irregular void. Thanks to the magic of the stick, it need not fit flush to the edges of the hole. You put the tick stick on the storyboard, press the pointy end against a reference point on the side of the hole, and trace the stick. The irregular shape means you’re going to be able to get that reference point back exactly later. Number the outline you just made, and rinse and repeat until you’ve got a single-plane “point cloud” made of tick stick outlines.
Your storyboard is probably going to look mighty confusing, but that’s what the numbers are for. Bring your storyboard and your tick stick onto the workbench, along with whatever you want to cut out (plywood, cardboard, 1/4″ steel armor plate, you name it), and simply repeat the process. Put the tick stick inside outline #1 and mark where the pointy end lands on the material. Then do it again for the other outlines, reproducing the points you measured on the original piece. After that, it’s just a game of ‘connect the dots’ and cutting with whatever method works for your substrate. A sharp knife will work for cardboard, but you’ll probably want something more substantial for steel plate.
It’s not often you’re going to need the tick stick; [Essential Craftsman] reports only needing it a few times over the course of a decades-long career, but when you need it, there’s not much else that will do the job. Well, unless you have a 3D scanner handy, that is.
At first glance, it may look like [Rybitski]’s 7-segment RGB LED clock is something that’s been done before, but look past the beautiful mounting. It’s not just stylishly framed; the back end is just as attentively executed. It has a built-in web UI, MQTT automation that makes Home Assistant integration a snap, and remote OTA updates, so software changes don’t require taking the thing down and plugging in a cable.
A slick web interface allows configuring which LEDs belong to which segments without code changes.
Pixel Clock is code for the Wemos D1 Mini microcontroller board and WS2812/WS2812B RGB LED strips, but it’s made to be flexible enough to support different implementations. For example, altering which LEDs in the strip belong to which segments on which digits can be configured entirely from the web interface. Naturally, one could build an LED strip clock using the same layout [Rybitski] did and require no changes at all — but it’s very nice to see that different wiring layouts are supported without needing to edit any code. There’s even automatic brightness adjustment if one adds an LDR (light-dependent resistor), which is a nice touch.
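The mapping idea is simple enough to sketch. The actual project is microcontroller firmware configured through its web UI; the Python below is purely an illustrative model (the names, the three-LEDs-per-segment layout, and the segment ordering are assumptions, not the project’s real configuration). The point is that each digit is just a lookup from segments to strip indices, so rewiring only means editing a table:

```python
# Hypothetical sketch of a configurable 7-segment -> LED-strip mapping.
# Segments are labeled a-g in the conventional order (a = top bar, etc.).
DIGIT_SEGMENTS = {
    0: "abcdef", 1: "bc", 2: "abged", 3: "abgcd", 4: "fgbc",
    5: "afgcd", 6: "afgedc", 7: "abc", 8: "abcdefg", 9: "abfgcd",
}

def build_segment_map(leds_per_segment=3, segment_order="abcdefg", offset=0):
    """Assign consecutive strip indices to each segment of one digit.

    Changing `segment_order` or `offset` models rewiring the strip
    without touching any display logic."""
    seg_map = {}
    i = offset
    for seg in segment_order:
        seg_map[seg] = list(range(i, i + leds_per_segment))
        i += leds_per_segment
    return seg_map

def leds_for_digit(value, seg_map):
    """Return the strip indices that should light to show `value`."""
    return sorted(i for seg in DIGIT_SEGMENTS[value] for i in seg_map[seg])

seg_map = build_segment_map()
print(leds_for_digit(1, seg_map))  # segments b and c -> [3, 4, 5, 6, 7, 8]
```

A different physical wiring order becomes a different `segment_order` string, which is the same trick the project exposes through its web interface instead of code edits.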
[Rybitski]’s enclosure is CNC-routed MDF, framed and given a marble finish. The number segments are capped with laser-cut frosted white acrylic, which serves as both a diffuser for the LEDs and an attractive match for the marble finish at the front. MDF is dense and opaque enough that no additional baffles or louvers are needed between segments.
With this code and an RGB LED strip, you can implement your own 7-segment clock any way you like, focusing on an artful presentation instead of re-inventing the wheel in software. Of course, there’s nothing that says one must use 7-segment numerals; some say your LED clock need not display numbers at all.
JapanNext 31.5-inch 6K panel increases pixel density for sharper interface elements
60Hz refresh and 8ms response focus on productivity usage
500 nit brightness and 1500:1 contrast suit standard office lighting
JapanNext has released the JN-IPS326K-HSPC9, a 31.5-inch IPS monitor with a 6016 x 3384 resolution aimed primarily at home and office users.
This resolution exceeds the 3840 x 2160 pixel count commonly associated with 4K displays, resulting in a pixel pitch of 0.1159mm on this panel size.
In practical terms, that density means text and interface elements can appear finer, although the benefit depends largely on scaling settings and viewing distance.
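The density figure follows directly from the panel geometry. Here’s a quick back-of-the-envelope check in Python, assuming a flat 16:9 panel with square pixels (which 6016 x 3384 is):

```python
import math

def pixel_pitch_mm(diagonal_in, h_px, v_px):
    """Pixel pitch in mm for a flat panel with square pixels."""
    diag_px = math.hypot(h_px, v_px)          # diagonal length in pixels
    width_in = diagonal_in * h_px / diag_px   # physical width in inches
    return width_in * 25.4 / h_px             # mm per pixel

pitch = pixel_pitch_mm(31.5, 6016, 3384)
ppi = 25.4 / pitch
print(f"{pitch:.4f} mm/pixel, {ppi:.0f} PPI")  # ~0.1159 mm, ~219 PPI
```

That works out to roughly 219 PPI, consistent with the 0.1159mm pitch quoted above, and comfortably above the ~140 PPI of a 31.5-inch 4K panel.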
Designed for office environments
This business monitor operates at 60Hz with an 8ms response time, specifications that indicate a focus on routine productivity rather than competitive gaming performance.
Brightness is rated at 500 nits and contrast at 1500:1, figures that align with upper mid-range IPS office displays currently available, and the panel covers 100% of the sRGB color space and 96% of DCI-P3, with support for HDR10 content.
While those numbers suggest suitability for photo or video editing, HDR performance on edge-lit IPS panels can vary depending on content and environmental lighting conditions.
This device supports a 178-degree horizontal and vertical viewing angle, which is consistent with IPS technology and helps maintain stable colors across wider seating positions.
The monitor includes Picture-in-Picture and Picture-by-Picture modes, allowing two input sources to be displayed simultaneously within a single-screen workspace.
Flicker-Free and Low Blue Light settings are available for extended sessions, and HDCP support is provided across HDMI, DisplayPort, and USB-C connections.
Connectivity options include two HDMI 2.1 ports, one DisplayPort 1.4 connection, and a USB-C 3.1 interface capable of delivering up to 90W of power.
A built-in KVM switch allows users to control multiple connected systems with one keyboard and mouse, which may simplify desk arrangements involving separate devices.
Additional features include an audio output, integrated 2W speakers, compatibility with AMD FreeSync and Nvidia G-Sync, and support for 75 x 75mm VESA mounting.
The stand supports tilt, swivel, height adjustment, and pivot functions, offering flexibility for different seating positions and workspace layouts.
At €899, about $1,061, on JapanNext’s website, the JN-IPS326K-HSPC9 sits among the more affordable 6K monitors currently available.
For comparison, Asus offers the ProArt Display 6K PA32QCV at $1,289.99 through Best Buy, with specifications that include dual Thunderbolt 4 ports and broader stated color coverage.
At a higher price point, Dell lists the UltraSharp 32 6K U3224KB for $2,499.99 on its United States online store.
Given JapanNext sells its own 27-inch 5K monitor for under €700, it’s not surprising that its latest monitor seriously undercuts the competition.
I’ve been reviewing robot vacuums professionally for a couple of years now, and as a result I’ve been drawn into conversations about these handy home helpers on a regular basis. Everyone I’ve met outside of a work context seems intrigued by the idea of a robot vacuum, but there are some misconceptions about what they can and can’t do. In many cases, people are underestimating modern robot vacuums’ capabilities.
So let’s set the record straight. Here are eight common robot vacuum misunderstandings, and some information on what you can actually expect…
1. They’re just for vacuuming
Newsflash: modern robot vacuums can mop, too. In fact, I’d go as far as to say that these days, you’d be hard-pressed to find a robovac that doesn’t have mopping functionality built in.
The level of mopping varies quite considerably, however. Cheap, basic machines such as the Dreame D9 Max Gen 2 will have a large, flat water tank with a mop pad mounted to the bottom. You’ll need to fill it up and attach it to the machine every time you want to mop your floors. In some cases, water starts dispensing automatically as soon as the tank is attached, so you’ll need to carry the robovac into the target room yourself unless you want your carpets mopped too.
Pricier robovacs have really quite advanced mop setups. You’ll almost always be able to set no-mop zones, many robot vacuums can lift their mop pads when traversing carpet, and some will even drop their mop pads off in the dock when they’re not needed. Some premium robot vacuums have docks that will refill water tanks, dispense detergent, and wash and dry the mop pads for you.
2. They can’t be used on multiple floors
Autonomous stair-climbing is off the cards (for the moment, at least… more on that in a sec) but that doesn’t mean your robovac is confined to one floor only. You’ll just need to carry it up and down the stairs yourself.
The vast majority of robot vacuum apps can store multiple floorplans, so you can map each floor, then place the robot on the floor that needs cleaning. It won’t be able to return to its dock mid-clean to charge or empty its bin, but otherwise, it will just operate as usual. Cliff sensors mean it won’t take a tumble down any stairs, either.
3. Roomba is still the best brand to buy
Roomba is still kicking around (although for a while, it was touch-and-go for parent brand iRobot) – but it hasn’t been top of the bot charts for some time now. Brands such as Roborock, Dreame, Ecovacs and Eufy have leapfrogged Roomba in terms of features, and in my experience the latter bots are generally more reliable, capable, and offer better value for money too. I’m not writing off Roomba completely just yet, but it isn’t currently troubling my best robot vacuum roundup.
4. They’re not for pet hair
Pet hair is notoriously “sticky”, so pulling it up from carpet is a challenge for any vacuum — let alone one of the robo-variety. However, robot vacuums can still be very useful for owners of shedding pets, simply because they can clean as regularly as you want them to, without you even needing to be awake, or in the house.
These regular, light cleans can help stop hair from building up, so when you do go in for a deep clean with a manual vacuum, you aren’t dealing with enough hair to stuff a king-sized duvet. Robot vacuums are also great at cleaning in hard-to-access places — under the bed, for example — where flurries of fur can easily collect.
There are some key things to look for if you’re seeking the best robot vacuum for pet hair. Decent suction specs (around 6,000Pa or more) are a must, as is a self-empty bin, unless you constantly want to be pulling hair out of the small onboard dust cup. On that latter point, it’s worth spending more for a higher-end dock, since cheaper units can become jammed with fur during the self-empty process.
5. They’re super technical
If you want to understand how robot vacuums work, you’ll need to get quite technical. However, if you pick a good one, using it will be pretty straightforward. Any decent, modern robot vacuum will walk you through the set-up process, which is typically no more involved than downloading the correct companion app and connecting the robot to the internet (I’ve never had issues with this, but here are some things to try if your robot vacuum is losing internet connection).
Most will then prompt you to do a quick mapping run, where the bot will wander into each room and build a basic map for you to edit. You could tidy up, lift chairs and so on for this bit; but even if you don’t, your bot will likely discover any previously inaccessible areas on a later run.
Generally, with robot vacuums there’s plenty to dig into if you are tech-savvy — precisely editing your maps, setting up complex schedules, tweaking settings and so on. However, if you don’t want to get into all that, most will have a big Go button that you can press and the vacuum will make a good fist of cleaning your home with no more information required than that.
6. They can’t cope with clutter
Modern robot vacuums arrive with navigation tech that means they’ll be able to skirt around any obstacles. The most advanced options can also accurately identify the exact type of clutter, and figure out what needs a wide berth and what doesn’t. In short, a little bit of clutter will generally not be a problem.
That said, there are some limits. In particular, shallow obstacles often get missed — I’ve never met a robot vacuum that wasn’t desperate to chow down on charge cables like spaghetti. And I’d never trust a robovac’s object avoidance enough that I’d let it loose in a home with a non-house trained pet, either.
7. They can replace a manual vacuum
Robot vacuums can be great, but they are unlikely to replace a manual vac. There are some things that even the priciest, most advanced robot vacuums can’t do. An obvious one is vacuuming the stairs (although there are various prototypes in the works from Eufy and Dreame, and most recently Roborock, that look to change that). Bungalow-dwellers aren’t in the clear, either — a robovac can’t vacuum your sofa, your mattress, or be used to dust away the cobwebs on your room coving.
In addition, I’ll make it clear that robot vacuums still can’t really rival the best manual vacuums in terms of suction. They’re excellent at taking care of regular, light cleans, but for a proper deep dust-busting session, you’ll need to roll up your sleeves.
8. They cost a fortune
This depends on your definition of “a fortune”. You’re unlikely to find the top-end flagship robot vacuums for less than four figures, and for features such as automatic mop cleaning and water dispensing, you’ll need to shell out over $600 / £600. However, there are plenty of capable, basic models under the $400 / £400 mark — my best cheap robot vacuum guide has more information. That’s still an investment, but perhaps not as ruinous as you might expect.
Discounts aren’t hard to come by, either. Robot vacuums almost always feature in shopping events such as the Black Friday sales, and when you consider the rate at which the market is moving, it’s common to see relatively new models discounted to make space for an even-newer range-mate.
Saturday afternoon Sam Altman announced he’d start answering questions on X.com about OpenAI’s work with America’s Department of War — and all the developments over the past few days. (After that department’s negotiations had failed with Anthropic, they announced they’d stop using Anthropic’s technology and threatened to designate it a “Supply-Chain Risk to National Security“. Then they’d reached a deal for OpenAI’s technology — though Altman says it includes OpenAI’s own similar prohibitions against using their products for domestic mass surveillance and requiring “human responsibility” for the use of force in autonomous weapon systems.)
Altman said Saturday that enforcing that “Supply-Chain Risk” designation on Anthropic “would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation…. We should all care very much about the precedent… To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it.”
Altman also said that for a long time, OpenAI was planning to do “non-classified work only,” but this week found the Department of War “flexible on what we needed…”
Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.
I know what it’s like to feel backed into a corner, and I think it’s worth some empathy to the Department of War. They are… a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them “The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind.” And then we say “But we won’t help you, and we think you are kind of evil.” I don’t think I’d react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.
Question: Are you worried at all about the potential for things to go really south during a possible dispute over what’s legal or not later on and be deemed a supply chain risk…?
Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that…
Question: Why the rush to sign the deal? Obviously the optics don’t look great.
Sam Altman: It was definitely rushed, and the optics don’t look good. We really wanted to de-escalate things, and we thought the deal on offer was good.
If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don’t know where it’s going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years…
Question: What was the core difference why you think the Department of War accepted OpenAI but not Anthropic?
Sam Altman: […] We believe in a layered approach to safety–building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it’s very important to build safe systems, and although documents are also important, I’d clearly rather rely on technical safeguards if I only had to pick one…
I think Anthropic may have wanted more operational control than we did…
Question: Were the terms that you accepted the same ones Anthropic rejected?
Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.
Question: Will you turn off the tool if they violate the rules?
Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won’t do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.
Questions were also answered by OpenAI’s head of National Security Partnerships (who at one point posted that they’d managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years). And they stressed that under OpenAI’s deal with the Department of War, “We control how we train the models and what types of requests the models refuse.”
Question: Are employees allowed to opt out of working on Department of War-related projects?
Answer: We won’t ask employees to support Department of War-related projects if they don’t want to.
Question: How much is the deal worth?
Answer: It’s a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We’re doing it because it’s the right thing to do for the country, at great cost to ourselves, not because of revenue impact…
Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a ‘threat to democratic values’?
Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract.

They also detailed OpenAI’s position on LinkedIn:
Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware…
Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and we have the ability to iterate on safety safeguards over time. If our team sees that our models aren’t refusing queries they should, or there’s more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.
U.S. law already constrains the worst outcomes. We accepted the “all lawful uses” language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can’t anticipate.
Unihertz plans to debut the new Titan 2 Elite on Kickstarter early next month, and by all accounts, it will be a natural evolution of the company’s previous Titan handsets, albeit one that shrinks the whole thing down into a much more pocket-friendly package. You’ll still get that characteristic QWERTY keyboard that reminds you of the good old days of BlackBerry phones; after all, some people still prefer the experience of pressing actual buttons to swiping at a glass screen.
On the front, we have a 4.03-inch AMOLED display that runs at a smooth 120 Hertz. That’s far clearer and smoother than the LCD screens in earlier Titans. For the time being, we only have two color options: a standard black finish and a more eye-catching orange variety. The overall design is fairly elegant, with impressively low bezels and a small punch hole cutout for the front-facing camera.
Physical keyboards, like those seen in the Titan line, are what set Unihertz apart, and the Titan 2 Elite maintains the same four-row QWERTY layout, which is ideal for typing out emails, chats, or notes without relying on on-screen input. Touch-sensitive keys on the keyboard will allow you to use a variety of useful gestures and custom shortcuts, which is really cool.
Under the hood, it’s very comparable to the existing Titan 2, with a MediaTek Dimensity 7300 processor that can perform typical tasks with ease, as well as 5G connectivity, so you’re ready for almost anything. When combined with 12GB of RAM and 512GB of storage, the device should handle multitasking, apps, and all of your media storage without breaking a sweat. Battery capacity is also expected to be healthy, presumably around 5,000 mAh, though specific figures will have to wait until the actual launch.
The Elite’s camera setup is straightforward: a basic dual-lens system that pairs a 50MP main sensor for primary images with an auxiliary lens for wide-angle or depth effects. Not exactly cutting-edge technology, but it should be more than adequate for the occasional quick snap or video call. It will ship with Android 16 out of the box, and Unihertz has committed to keeping it updated to Android 20, as well as security patches until 2031. That’s some fairly excellent long-term support, which is all too rare in devices that typically slip off the radar after a few years at most.
The Titan 2 Elite will be launched on Kickstarter first, similar to previous Titan efforts that were favorably received by keyboard fans. As for pricing, we’ll have to wait and see. The basic Titan 2 is around $400, so the Elite may be roughly the same, or somewhat more for all the enhancements.
Shopping for a MicroSD card can be a little daunting. There are a ton of numbers to consider, a huge number of brands producing cards with similar-sounding features and names, and words like Pro, Extreme, and Express getting thrown around everywhere.
To make a long story short, unless you’re shooting a ton of photos and videos semiprofessionally, where losing those shots might damage your professional reputation, you’re fine buying a MicroSD card from any company whose name you’ve heard before. I prefer cards from PNY, SanDisk, and Lexar. Keep an eye out for the “U” symbol with a 3 inside, or a “V30” on the card, for the best balance of speed and price. There are two exceptions to that suggestion:
If you’re shooting on a high-end camera, you should consider a V60 MicroSD card, if you can find one for a reasonable price. Some cameras have extra video features you can enable with a faster MicroSD card, so check your manual for more info on whether you need to upgrade.
If you’re buying for a Nintendo Switch 2, you’ll need a MicroSD Express card, which is unfortunately more expensive. While you can transfer images and videos from your Switch 2 with most regular MicroSD cards, you’ll need an Express card to actually run games from it.
Capacity
How much storage to buy largely depends on what you’re shooting or playing, but there are a few things to consider when debating between 128 GB and 1 TB. The first is that MicroSD cards are tiny, and having to swap them out on the road can be a risky proposition. Costs climb steeply for 1 TB and 2 TB cards, but the price gap between 256 GB and 512 GB isn’t that large, so I recommend sizing up a bit.
The other factor is that storage sizes are separated into different standards, so you’ll want to make sure your device actually supports that larger card. Cards that are 64 GB or higher are technically “SDXC,” for Extended Capacity; they’re currently the most common type, and you should be able to use them in most modern devices.
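Those capacity standards follow fixed thresholds set by the SD Association, so the mapping from card size to standard can be sketched in a few lines. This is an illustrative helper, not anything from a real SD library:

```python
def sd_standard(capacity_gb: float) -> str:
    """Return the SD capacity standard for a given card size.

    Thresholds follow the SD Association's definitions:
    SDSC up to 2 GB, SDHC up to 32 GB, SDXC up to 2 TB,
    and SDUC up to 128 TB.
    """
    if capacity_gb <= 2:
        return "SDSC"
    if capacity_gb <= 32:
        return "SDHC"
    if capacity_gb <= 2048:  # 2 TB
        return "SDXC"
    return "SDUC"

# Both a 64 GB and a 1 TB card fall under the same SDXC standard,
# which is why an SDXC-capable device handles either.
print(sd_standard(64), sd_standard(1024))
```

The practical takeaway: if your device lists SDXC support, the 64 GB to 2 TB range is fair game.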
Speed
If you’re interested in learning more about MicroSD speeds, we have a write-up with a full explanation of the different speeds and how they interact, but I’ll give you the quick rundown here too.
Each MicroSD card has its minimum sequential write speed indicated by a letter and number printed on the card, with the letter representing the rating system’s generation. For Speed Class (C) and Video Speed Class (V) markings, the number is the minimum write speed in MB/s; for UHS Speed Class (U) markings, multiply the number by 10. That’s why C10, U1, and V10 all guarantee the same speed, just written differently, and why you’re likely to see multiple symbols printed on each card. I’d recommend checking out the SD Association’s page on speeds, which has a handy chart showing the full comparison.
In practice, you have to go out of your way to find a MicroSD card that’s slower than V30/U3 at most retailers, though you may find them included with some electronics that don’t require anything more substantial.
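The decoding rule above is mechanical enough to express as a tiny function. This is just a sketch for illustration (the function name is my own, not from any SD tooling):

```python
def min_write_speed_mbs(marking: str) -> int:
    """Minimum sequential write speed in MB/s for an SD speed-class marking.

    Speed Class ("C10") and Video Speed Class ("V30") markings state the
    speed directly; UHS Speed Class ("U1", "U3") markings multiply by 10.
    """
    prefix, number = marking[0].upper(), int(marking[1:])
    if prefix in ("C", "V"):
        return number
    if prefix == "U":
        return number * 10
    raise ValueError(f"Unknown speed-class marking: {marking!r}")

# C10, U1, and V10 all guarantee the same 10 MB/s floor:
print(min_write_speed_mbs("C10"),
      min_write_speed_mbs("U1"),
      min_write_speed_mbs("V10"))
```

Running the same function on the cards recommended above, U3 and V30 both come out to a 30 MB/s floor, which is the sweet spot for price versus speed.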
Looking for the most recent Connections answers? Click here for today’s Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections: Sports Edition and Strands puzzles.
Today’s NYT Connections puzzle is kind of tough. As always, the purple category is extra tricky. Today, you’ll need to find hidden words inside of four other words to complete that group. (Or solve the other three, and let the purple group just solve itself.) Read on for clues and today’s Connections answers.
The Times has a Connections Bot, like the one for Wordle. Go there after you play to receive a numeric score and to have the program analyze your answers. Players who are registered with the Times Games section can now nerd out by following their progress, including the number of puzzles completed, win rate, number of times they nabbed a perfect score and their win streak.
Here are four hints for the groupings in today’s Connections puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group.
Yellow group hint: Yum!
Green group hint: Working hard.
Blue group hint: Taking time off.
Purple group hint: Hidden yummy words.
Answers for today’s Connections groups
Yellow group: Little bite.
Green group: Construction equipment.
Blue group: Vacation emoji.
Purple group: Things you don’t eat that end in foods.
Manna was founded in 2019 and operates mostly in Dublin.
Irish drone delivery company Manna, which operates suburban air delivery of food and other goods, is to partner with Uber as the US transportation company makes its first moves into the European drone delivery space.
The new strategic partnership will be tested in Ireland before being launched in cities elsewhere in Europe. Manna said that integrating its drones with the “vast network of restaurants, merchants and consumers on the Uber platform will unlock faster, safer and more cost-efficient last-mile logistics at scale”.
Manna founder Bobby Healy told SiliconRepublic.com: “Uber is a worldwide brand synonymous with innovation and disruption. It’s a huge win for indigenous Irish tech and I’m particularly proud for our 170-strong team in north county Dublin.
“It represents everything that is great about building ambitious start-ups right here in Ireland.”
The new service will integrate Manna’s flight-proven autonomous drone delivery system with Uber’s global platform and logistics expertise, creating a fully integrated, end-to-end experience engineered for speed, safety and reliability at scale, the Irish company said.
Manna was founded in 2019 and claims to have made over 250,000 successful deliveries to date. It already works with food delivery platforms such as JustEat and Deliveroo, primarily in areas of Dublin. Uber, founded in 2010, focuses on moving people, food and things through cities.
Sarfraz Maredia, Uber’s president of autonomous mobility and delivery, said: “Autonomous technology is shaping the future of delivery, whether it’s on the streets or in the skies. By combining Uber’s scale with Manna’s proven aerial expertise, we’re bringing fast, efficient and sustainable delivery to consumers and merchants alike.
“We’re proud to launch in Europe and excited to introduce this technology to more Uber Eats customers over time.”
Manna has faced some opposition to its services at a local level over factors such as noise pollution, but claims its delivery service is cleaner and faster than comparable local deliveries by road while being safe and sustainable, with an ideal flight radius of around 5km.
“Our focus remains simple: build the safest, fastest and most sustainable delivery infrastructure in the world,” said Eoghan Huston, Manna’s COO.
Last year, it began operating in Cork, and has raised over $60m of funding in its lifespan to date.