Sniffies’ Users Worry About a ‘Straightification’ of the Gay Hookup App

Of all the gay hookup apps Brennan Zubrick uses, Sniffies, a cruising app for men interested in discreet sex-positive casual encounters with other men, is by far his favorite. Some of the most popular kinks among members on the platform include edging, cum play, and BDSM. “I overwhelmingly prefer the experience I get and the community I can access,” he tells WIRED. But Zubrick, who is 40 and based in Washington, DC, has a bad feeling that could soon change.

Tinder and Hinge parent company Match Group announced on Monday a $100 million investment in Sniffies. The deal gives Match Group a large minority stake and the option to become the sole owner later on. The announcement has set off a firestorm of reactions from users who are second-guessing the direction of the company and the long-term sustainability of the app.

“Sniffies has long held its market position as the little guy, catering to a specific section of the gay community, and is somewhere people who might not be comfortable with Grindr—where no-face-pic, no-chat culture runs rampant—go to connect with other like-minded people in a more direct and discreet way,” Zubrick tells WIRED.

“This partnership is about supporting that, not redefining it,” Sniffies founder and CEO Blake Gallagher said in a statement, noting that the investment will help the platform focus on three key areas users want: “stronger trust and safety, expansive network growth, and continued product improvements.” According to the agreement, Match Group will offer guidance on the right roles, procedures, and tech to help Sniffies build on its trust and safety efforts.

But users aren’t buying what Gallagher is selling. The Instagram post announcing the news was inundated with negative reactions, as users expressed worry over the strategic partnership. “Please don’t let this be the straightification of sniffies,” expressed one. “You sold out. Plain and simple. Where we moving to next boys?” added Marc Sundstrom, a user in Philadelphia. “Partnering with Match feels very gentrified and straight. Highly concerned about the app being allowed to be what it is in order to court investors,” wrote another. By Tuesday afternoon, comments on the post had been shut off.

Though it remains to be seen how Gallagher will position Sniffies in the months ahead, already users are saying this marks the beginning of the end for the app. “Straight people shouldn’t even know what Sniffies is for fuck sake,” one wrote in the r/askgaybros subreddit. And despite promises, some say a major corporation like Match is not ethically aligned with the indie spirit of Sniffies. On LinkedIn, the top comment under Gallagher’s post questioned the real intent behind Match Group’s investment. “Interested to see how ties to Palantir affect Sniffies’ growth. Hopefully this doesn’t become a surveillance application.”

Spencer Rascoff, who became CEO of Match Group in 2025, previously served on the board of Palantir, the defense tech and data mining company that has become a “technological backbone” of the Trump administration.

Sniffies maintains that it will continue to own and control how its user data is stored, handled, and protected. According to the company, there are no changes planned to its data practices as part of the investment.

But the outrage underscores the significance of platforms like Sniffies and what it would mean to a community of people who already feel like they have so few quality options for seeking desire online.

“It’s a mess and obviously to be expected. It’s definitely an indicator of its fast rise, so no shade, but we saw what happened with Grindr,” says Brad Allen, a 34-year-old event producer and the creator behind Club Quarantine, who joined Sniffies in 2023. “I really am pulling for them to somehow navigate this differently since it’s essential to the cruising community now. Hopefully the pop-up Candy Crush ads don’t light up too much in the bushes.”

How to build custom reasoning agents with a fraction of the compute

Training AI reasoning models demands resources that most enterprise teams do not have. Engineering teams are often forced to choose between distilling knowledge from large, expensive models or relying on reinforcement learning techniques that provide sparse feedback.

Researchers at JD.com and several academic institutions recently introduced a new training paradigm that sidesteps this dilemma. The technique, called Reinforcement Learning with Self-Distillation (RLSD), combines the reliable performance tracking of reinforcement learning with the granular feedback of self-distillation.

Experiments indicate that models trained with RLSD outperform those built on classic distillation and reinforcement learning algorithms. For enterprise teams, this approach lowers the technical and financial barriers to building custom reasoning models tailored to specific business logic.

The problem with training reasoning models

The standard method for training reasoning models is Reinforcement Learning with Verifiable Rewards (RLVR). In this paradigm, the model learns through trial and error, guided by a final outcome from its environment. An automated verifier checks if the model’s answer is right or wrong, providing a binary reward, such as a 0 or 1.
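
As a concrete illustration, the verifier can be as simple as an exact-match check on the model's final answer. This is a minimal sketch under assumed conventions (the "Final answer:" marker and the parsing logic are illustrative, not the paper's actual verifier):

```python
# Minimal sketch of an RLVR-style automated verifier: extract the model's
# final answer from its reasoning trace, compare it to ground truth, and
# return a binary 0/1 reward. The "Final answer:" convention here is an
# illustrative assumption, not the paper's actual format.

def extract_final_answer(response: str) -> str:
    """Return the text after a 'Final answer:' marker, if present."""
    marker = "Final answer:"
    if marker in response:
        return response.split(marker)[-1].strip()
    return response.strip()

def verifiable_reward(response: str, ground_truth: str) -> float:
    """Binary reward: 1.0 on exact answer match, 0.0 otherwise."""
    return 1.0 if extract_final_answer(response) == ground_truth.strip() else 0.0

trace = "Adding 17 and 25 digit by digit gives... Final answer: 42"
print(verifiable_reward(trace, "42"))  # 1.0
print(verifiable_reward(trace, "43"))  # 0.0
```

Note that the entire multi-thousand-token trace earns this single scalar, which is the sparsity problem the researchers point to.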

Reinforcement learning with verifiable rewards (RLVR)

RLVR suffers from sparse and uniform feedback. “Standard GRPO has a signal density problem,” Chenxu Yang, co-author of the paper, told VentureBeat. “A multi-thousand-token reasoning trace gets a single binary reward, and every token inside that trace receives identical credit, whether it’s a pivotal logical step or a throwaway phrase.” Consequently, the model never learns which intermediate steps led to its success or failure.
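
That uniform-credit behavior can be sketched in a few lines. This is a simplified rendering of how a group-normalized scalar reward gets broadcast to every token, not the full GRPO algorithm:

```python
# Simplified sketch of GRPO's uniform credit assignment: each rollout's
# scalar reward is normalized against its group, and the resulting
# advantage is applied identically to every token in that rollout.

def grpo_token_advantages(group_rewards, trace_lengths):
    mean = sum(group_rewards) / len(group_rewards)
    var = sum((r - mean) ** 2 for r in group_rewards) / len(group_rewards)
    std = var ** 0.5
    if std == 0:
        std = 1.0  # avoid division by zero when all rewards tie
    per_token = []
    for reward, n_tokens in zip(group_rewards, trace_lengths):
        advantage = (reward - mean) / std
        # Every token gets the same credit, pivotal step or filler alike.
        per_token.append([advantage] * n_tokens)
    return per_token

# Two rollouts for one prompt: one verified correct, one wrong.
advs = grpo_token_advantages([1.0, 0.0], [5, 5])
print(advs[0])  # [1.0, 1.0, 1.0, 1.0, 1.0]
print(advs[1])  # [-1.0, -1.0, -1.0, -1.0, -1.0]
```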

On-Policy Distillation (OPD) takes a different approach. Instead of waiting for a final outcome, developers pair a smaller student model with a larger, more capable teacher model. For each training example, the student compares its response to that of the teacher token by token. This provides the student with granular feedback on the entire reasoning chain and response-generation process.
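
The per-token signal can be pictured as a divergence between the student's and teacher's distributions at each position. The toy distributions below stand in for real model logits; this illustrates the shape of the feedback, not an actual OPD implementation:

```python
import math

# Sketch of OPD's dense feedback: at every token position, compare the
# student's next-token distribution to the teacher's. Tiny hand-written
# distributions over a 3-token vocabulary stand in for real logits.

def per_token_kl(student_dists, teacher_dists):
    """KL(student || teacher) at each position of the generated trace."""
    kls = []
    for s, t in zip(student_dists, teacher_dists):
        kls.append(sum(p * math.log(p / q) for p, q in zip(s, t) if p > 0))
    return kls

student = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]  # student's two positions
teacher = [[0.6, 0.3, 0.1], [0.1, 0.8, 0.1]]  # teacher agrees at position 2
kls = per_token_kl(student, teacher)
print(kls[1])  # 0.0 -- no penalty where the distributions already match
```

Unlike the single scalar in RLVR, every position in the trace receives its own correction signal.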

Deploying and running a separate, massive teacher model alongside the student throughout the entire training process incurs massive computational overhead. “You have to keep a larger teacher model resident throughout training, which roughly doubles your GPU footprint,” Yang said. Furthermore, the teacher and student models must share the exact same vocabulary structure, which according to Yang, “quietly rules out most cross-architecture, cross-modality, or multilingual setups that enterprises actually run.”

On-policy distillation (OPD)

The promise and failure of self-distillation

On-Policy Self-Distillation (OPSD) emerged as a solution designed to overcome the shortcomings of the other two approaches. In OPSD, the same model plays the role of both the student and the teacher.

During training, the student receives a standard prompt while the teacher receives privileged information, such as a verified, step-by-step answer key. This well-informed teacher version of the model then evaluates the student version, providing token-by-token feedback as the student tries to solve the problem using only the standard prompt.
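
A rough sketch of that asymmetric setup, with a hypothetical prompt template (the marker text below is an assumption for illustration, not the paper's actual format):

```python
# Toy illustration of OPSD's asymmetric prompting: one model, two contexts.
# The student pass sees only the task; the teacher pass sees the task plus
# a verified answer key prepended as privileged information.

def build_opsd_prompts(question: str, answer_key: str) -> tuple[str, str]:
    student_prompt = question
    teacher_prompt = (
        "Privileged context (hidden from the student):\n"
        f"{answer_key}\n\n{question}"
    )
    return student_prompt, teacher_prompt

student_prompt, teacher_prompt = build_opsd_prompts(
    "What is 17 + 25? Show your reasoning.",
    "17 + 25 = 42",
)
# The teacher scores the student's trace token by token while conditioning
# on the answer key; the student generates from the bare question alone.
print("Privileged context" in student_prompt)  # False
```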

OPSD appears to be the perfect compromise for an enterprise budget. It delivers the granular, step-by-step guidance of OPD, and because it eliminates the need for an external teacher model, it operates with the high computational efficiency and low cost of RLVR, requiring only an extra forward pass for the teacher.

However, the researchers found that OPSD suffers from a phenomenon called “privileged information leakage.”

“The objective is structurally ill-posed,” Yang said. “There’s an irreducible mutual-information gap that the student can never close… When self-distillation is set up as distribution matching, the student is asked to imitate the teacher’s full output distribution under privileged context.”

On-policy self-distillation (OPSD)

Because the teacher evaluates the student based on a hidden answer key, the training objective forces the student model to learn the teacher’s exact phrasing or steps instead of the underlying reasoning logic. As a result, the student model starts hallucinating references to an invisible solution that it will not have access to in a real-world deployment.

In practice, OPSD models show a rapid spike in performance early in training, but their reasoning capabilities soon plateau and progressively degrade over time.

Decoupling direction from magnitude with RLSD

The researchers behind RLSD realized that the signals governing how a model updates its parameters have fundamentally asymmetric requirements. They identified that the signal dictating the direction of the update (i.e., whether to reinforce or penalize a behavior) can be sparse, but must be perfectly reliable, because pointing the model in the wrong direction damages its reasoning policy.

On the other hand, the signal dictating the magnitude of the update (i.e., how much relative credit or blame a specific step deserves) benefits from being extremely dense to enable fine-grained, step-by-step corrections.

RLSD builds on this principle by decoupling the update direction from the update magnitude. The framework lets the verifiable environmental feedback from the RLVR signal strictly determine the direction of learning. The model only receives overall reinforcement if the final answer is objectively correct.

Reinforcement learning with self-distillation (RLSD) (source: arXiv)

The self-teacher is stripped of its power to dictate what the model should generate. Instead, the teacher’s token-by-token assessment is repurposed to determine the magnitude of the update. It simply distributes the total credit or blame across the individual steps of the model’s reasoning path.

This alters how the model learns compared to the classic OPSD paradigm. In standard OPSD, the training objective acts like behavioral cloning, where the model is forced to directly copy the exact wording and phrasing of the teacher. This causes the student to hallucinate and leak references to data it does not have.

Instead of forcing the model to copy a hidden solution, RLSD provides a natural and virtually cost-free source of per-token credit information.

“The intuition: we’re not teaching the model to reason like the teacher,” Yang said. “We’re telling the model, on the path it chose, which of its own tokens were actually doing the work. The model’s exploration distribution stays its own. Only the credit allocation gets sharpened.”

If a specific deduction strongly supports the correct outcome, it receives a higher score. If it is just a useless filler word, it receives a baseline score. RLSD eliminates the need to train complex auxiliary reward networks, manually annotate step-by-step data, or maintain massive external teacher models.
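
Under the decoupling described above, the update rule can be sketched as follows. The function name and the toy per-token scores are illustrative assumptions, not the paper's implementation:

```python
# Sketch of RLSD's decoupled update: the verifiable outcome fixes the
# *direction* (reinforce everything on a correct answer, penalize on a
# wrong one), while the self-teacher's per-token agreement scores fix
# only the relative *magnitude* of each token's share.

def rlsd_token_updates(verifier_reward: float, teacher_token_scores):
    direction = 1.0 if verifier_reward > 0 else -1.0
    total = sum(teacher_token_scores)
    # Distribute the total credit or blame across tokens by teacher score.
    return [direction * (score / total) for score in teacher_token_scores]

# A correct trace: the pivotal deduction (score 0.6) earns most of the
# credit; filler tokens (0.1) get only a baseline share.
updates = rlsd_token_updates(1.0, [0.1, 0.6, 0.2, 0.1])
print(updates)  # all positive, concentrated on the pivotal token
```

The key contrast with OPSD is that the teacher scores never change what the model is pushed toward; they only divide a fixed total of credit among tokens the model itself produced.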

Putting RLSD to the test

To test RLSD, the researchers trained the open-weight Qwen3-VL-8B vision-language model and evaluated it on several visual reasoning benchmarks. These included MMMU for college-level multi-discipline questions, MathVista, MathVision, WeMath, and ZeroBench, a stress-test benchmark explicitly designed to be nearly impossible for current frontier models.

They compared the RLSD model against the base model with no post-training, standard RLVR via the GRPO algorithm, standard OPSD, and a hybrid combination of the two.

RLSD significantly outperformed every other method, achieving the highest average accuracy of 56.18% across all five benchmarks. It beat the base model by 4.69% and outperformed standard RLVR by 2.32%. The gains were most pronounced in complex mathematical reasoning tasks, where RLSD outperformed standard RLVR by 3.91% on the MathVision benchmark.

RLSD outperforms other techniques on key benchmarks (source: arXiv)

Beyond accuracy, the framework offers massive efficiency gains. “Concretely, RLSD at 200 training steps already beats GRPO trained for 400 steps, so roughly 2x convergence speedup,” Yang said. “Cost-wise, the only overhead beyond a normal GRPO pipeline is one extra forward pass per response to grab teacher logits. Compared to rollout generation… that’s basically free.”

Unlike OPSD, which saw performance spike and then completely collapse due to information leakage, RLSD maintained long-term training stability and converged on a higher performance ceiling than standard methods.

The qualitative findings highlight how the model alters its learning behavior. For example, in a complex visual counting task, standard RLVR looks at the final correct answer and gives the entire paragraph of reasoning tokens the same reward. RLSD instead surgically applies rewards to the specific mathematical subtraction steps that solved the problem, while actively down-weighting generic filler text like “Looking at the image, I see…”.

In another example, the model performed an incorrect math derivation based on a bar chart. Instead of labeling the whole response as a failure, RLSD concentrated the heaviest penalty on the exact point where the model misread a relationship from the chart. It remained neutral on the rest of the logical setup, recognizing that the initial framework was valid.

This is particularly important for messy, real-world enterprise use cases. If a model makes a mistake analyzing a 50-page quarterly earnings report, developers do not want it to unlearn its entire analytical framework. They just want it to fix the specific assumption it got wrong. RLSD allows the model to learn exactly which logical leaps are valuable and which are flawed, token by token. Because RLSD does this by repurposing the model itself, it provides models with granular reasoning capabilities while keeping the costs of training reasonable.

How enterprises can get started

For data engineers and AI orchestration teams, integrating RLSD is straightforward, but it requires the right setup. The most critical requirement is a verifiable reward signal, such as code compilers, math checkers, SQL execution, or schema validators. “Tasks without verifiable reward (open-ended dialogue, brand-voice writing) belong in preference-based pipelines,” Yang said.
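
A verifier of the math-checker variety can be only a few lines. This sketch (the tolerance and parsing choices are illustrative assumptions) shows the shape of the signal such a pipeline needs:

```python
# Sketch of a verifiable reward for numeric tasks: a binary math checker.
# Real pipelines would instead execute code, run SQL and compare result
# sets, or validate against a schema; the tolerance here is an assumption.

def math_checker(predicted: str, reference: str, tol: float = 1e-6) -> float:
    """Return 1.0 if both answers parse as numbers and agree within tol."""
    try:
        return 1.0 if abs(float(predicted) - float(reference)) <= tol else 0.0
    except ValueError:
        return 0.0  # unparseable answers earn no reward

print(math_checker("42.0", "42"))       # 1.0
print(math_checker("forty-two", "42"))  # 0.0
```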

However, RLSD is highly flexible regarding the privileged information it requires. While OPSD structurally requires full intermediate reasoning traces, forcing enterprises to either pay annotators or distill from a frontier model, RLSD does not.

“If you have full verified reasoning traces, great, RLSD will use them,” Yang said. “If all you have is the ground-truth final answer, that also works… OPSD doesn’t have this flexibility.”

Integrating the technique into existing open-source multi-modality RL frameworks like veRL or EasyR1 is incredibly lightweight. According to Yang, it requires no framework rewrite and slots right into the standard stack. The code swap involves simply changing tens of lines to adjust the GRPO objective and sync the teacher with the student.

Looking ahead, RLSD offers a powerful way for enterprises to maximize their existing internal assets.

“The proprietary data enterprises hold inside their perimeter (compliance manuals, internal documentation, historical tickets, verified code snippets) is essentially free privileged information,” Yang concluded. “RLSD lets enterprises feed this kind of data straight in as privileged context, which sharpens the learning signal on smaller models without needing an external teacher and without sending anything outside the network.”

Sony INZONE H6 Air Open-back Gaming Headphones Debut with Wired Design and Immersive Sound

Sony is expanding its INZONE lineup with the new INZONE H6 Air, a wired open-back gaming headset built for PC and PlayStation users who want a more natural, spacious presentation than closed-back designs typically deliver. It joins the existing H5 and H3 models, both closed-back, and signals a broader push by Sony to cover more listening preferences inside a gaming category that continues to grow at scale.

That push isn’t happening in a vacuum. Sony’s acquisition of Audeze came after the strong reception of the Maxwell wireless headset, and the recent Maxwell 2 only reinforces the point. Gaming audio has become a serious battleground. Between Sony and Microsoft, it’s a trench war for the same customer, and products like the INZONE H6 Air are clearly part of the strategy to tighten that grip.

The INZONE H6 Air brings an open-back acoustic design to Sony’s gaming headset lineup, aiming for a more natural and spacious sound field than the sealed approach used by its siblings. Sony pairs that structure with custom drivers and integrated back ducts to better manage airflow and low frequency response, with the goal of maintaining control while preserving spatial cues.

That matters in practice. Open-back designs tend to trade isolation for positional accuracy, and Sony is clearly leaning into that balance here. The H6 Air is tuned to support spatial audio processing so players can more easily track movement and environmental detail in game, rather than just pushing volume or bass for effect.

Construction

The INZONE H6 Air uses an aluminum construction to keep weight down to approximately 199 grams without the detachable microphone and cable, making it the lightest headset in Sony’s INZONE lineup. The design also incorporates the spring hinge headband used in the Sony INZONE H9 II, which allows for a more compact frame while maintaining fit and stability.

The low weight and flexible headband structure are intended to improve long session comfort, reducing pressure without significantly affecting durability or support.

Open-back 

With its open-back design, the INZONE H6 Air is intended to create a more natural sound field that places greater emphasis on spatial accuracy rather than isolation. By leaving the rear of the driver unobstructed, the design reduces internal reflections inside the earcup, which can help preserve detail and improve the sense of space.

The goal is more precise sound field reproduction in line with how game audio is mixed, allowing players to better perceive directionality and environmental cues without the coloration that can come from a fully enclosed housing.

Drivers

The H6 uses 40 mm drivers that draw on design elements from Sony’s MDR-MV1, adapted here for gaming use. Sony also incorporates back ducts into the driver assembly to help manage airflow and support low frequency control, while maintaining separation between bass and midrange.

The result is a presentation focused on clarity and spatial accuracy, which can help with positional cues in games where directionality and environmental detail matter.

Mic

The INZONE H6 Air includes a detachable cardioid microphone designed for focused voice capture. The boom is positioned toward the user’s mouth to reduce pickup of off-axis noise, and the flexible arm allows for adjustment while holding its position during use.

Tuning

The H6 Air has been tuned specifically for role-playing games (RPGs) and adventure titles. The tuning enhances clarity, depth, and environmental detail to better reflect how the audio is intended to be heard, and it can be accessed by connecting to Sony’s INZONE Hub via the USB-C Audio Box.

This is a compact digital-to-analog converter (DAC) that connects to PCs or consoles via USB-C and features a 3.5 mm input for the headphones. The box supports 360 Spatial Sound for Gaming, 7.1-channel virtual surround sound, and custom EQ settings via the INZONE Hub.

Comparison

| Sony INZONE Model | H6 Air (2026) | H5 (2023) | H3 (2022) |
| --- | --- | --- | --- |
| Product Type | Wired gaming headphones | Wired/wireless gaming headphones | Wired gaming headphones |
| Price | $199 | $179 | $119 |
| Wearing Style | Open-back | Closed-back | Closed-back |
| Driver Type | Dynamic | Dynamic | Dynamic |
| Driver Size | 40 mm | 40 mm | 40 mm |
| Frequency Response | 10 Hz – 20,000 Hz | 5 Hz – 20,000 Hz | 10 Hz – 20,000 Hz |
| Impedance | 28 Ω (1 kHz) | 21 Ω (1 kHz) | 35 Ω (1 kHz) |
| Sensitivity | 99 dB/mW | 89 dB/mW | 92 dB/mW |
| Volume Control | Yes | Yes | Yes |
| 360 Spatial Sound for Gaming and Personalizer | Yes | Yes | Yes |
| FPS Mode with Fnatic | Yes | Yes | Not indicated |
| INZONE Hub (PC) | Yes | Yes | Yes |
| Sony Sound Connect (mobile app) | | | |
| Mic | Yes (detachable) | Yes (not detachable) | Yes (not detachable) |
| Bluetooth | | | |
| Plug | Gold-plated L-shaped 4-pole mini plug (CTIA) | Gold-plated L-shaped 4-pole mini plug | N/A |
| Audio Cable Included | Yes | Yes | Yes (not detachable) |
| Wireless Connection Option | | Yes – USB-A 2.4 GHz wireless dongle | |
| USB-C Audio Box | Yes | USB-A wireless dongle acts as an audio box | Yes |
| Battery Life | | Max. 28 hours (wireless mode) | |
| Weight (without cable and microphone) | 199 g | 260 g | 299 g |
| Colors | Black | White, Black | White, Black |
| Included Accessories | USB-C Audio Box; headphone cable (2 m, single-sided); detachable boom microphone (with windscreen); reference guide; warranty card | USB-A wireless dongle; USB cable; USB transceiver; reference guide | USB-C Audio Box; attached headphone cable (1.2 m, single-sided); reference guide; warranty card |

The Bottom Line 

Sony’s INZONE H6 Air adds something the lineup didn’t have before: an open-back option aimed at players who care more about spatial accuracy and a less enclosed presentation than isolation. It’s also Sony’s lightest full-size gaming headset to date and one of the few in this category to pair a traditional wired design with a bundled USB-C audio interface for software control through the INZONE Hub. That combination, open-back acoustics plus a PC-friendly control box, gives it a different angle than the closed-back H5 and H9 II.

What it doesn’t offer is just as clear. There’s no wireless option, no onboard battery features, and no noise isolation—by design. The “Air” label may suggest mobility, but this is a desk-bound headset that depends on its wired connection and external audio box. If you game in a noisy environment or want a single headset for commuting and play, this isn’t built for that.

Competition is crowded. On the open-back side, options are limited but growing, while closed-back heavyweights like the Audeze Maxwell (and its newer iterations) set a high bar for wireless performance. Brands like ASUS ROG, SteelSeries, and Razer continue to push feature-rich headsets, while wireless gaming earbuds from Cleer Audio and Final Audio offer a very different form factor for the same audience.

Who should consider the H6 Air? PC and PlayStation users who play in quieter spaces and want a more open, speaker-like presentation with reliable wired performance. If positional accuracy, comfort, and long-session usability matter more than isolation or portability, this is where the H6 Air fits. If you need flexibility, travel use, or a single do-it-all headset, Sony already has other options, and so does everyone else.

Pricing & Availability

Revolut plans a physical store in Barcelona

There is no confirmed timeline for a launch date yet, Revolut said.

Revolut is piloting a physical store in Barcelona in its latest attempt to compete with traditional banks.

The UK fintech is stressing that this will not be a traditional bank branch, but rather a “new format built for how modern customers engage with brands today”. With this “high-visibility, immersive space”, Revolut hopes to make fintech more accessible to the general public.

“This is a new physical concept space where people can experience Revolut products and services in-person, receive support, discover features and engage with the brand more tangibly. At our scale, physical presence builds trust and visibility,” a company spokesperson told SiliconRepublic.com.

Spain is one of Revolut’s key strategic hubs in Europe with more than 6m customers. The pilot is still in the early stages with no confirmed timeline on when it would open.

“We chose Barcelona as the place to pilot our physical stores because it combines local density, global relevance, tourism and innovation,” a spokesperson told Euronews today (28 April). Plans for any other future physical stores will depend on the success of the pilot project.

Revolut’s plans for a physical store come at a time when traditional bank branches are closing in droves.

Around 6,000 commercial bank branches closed down in the US over the last five years. And according to a 2025 report by the American Bankers Association, only 9pc of customers name brick-and-mortar branches as their preferred way of banking.

Meanwhile, the UK’s Lloyds Banking Group closed nearly 100 branches this February, and Santander said it would shut 44 branches.

Barclays, however, which has shut nearly 80pc of its branches since 2019, is now planning to open new ones. “I truly believe that the combination of great digital and great human touch is the future of banking,” the bank’s UK CEO Vim Maru said earlier this month.

Revolut has major plans to become the “world’s first truly global banking platform”. The fintech recently secured a full banking licence in the UK, and has applied for the same in the US.

It also recently expanded operations to Mexico, opened a new global headquarters in London and secured a payments licence in India. The fintech hopes that this continued expansion can help it reach 100m customers by mid-2027.

OpenAI Really Wants Codex to Shut Up About Goblins

OpenAI has a goblin problem.

Instructions designed to guide the behavior of the company’s latest model as it writes code have been revealed to include a line, repeated several times, that specifically forbids it from randomly mentioning an assortment of mythical and real creatures.

“Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query,” read instructions in Codex CLI, a command-line tool for using AI to generate code.

It is unclear why OpenAI felt compelled to spell this out for Codex—or indeed why its models might want to discuss goblins or pigeons in the first place. The company did not immediately respond to a request for comment.

OpenAI’s newest model, GPT-5.5, was released with enhanced coding skills earlier this month. The company is in a fierce race with rivals, especially Anthropic, to deliver cutting-edge AI, and coding has emerged as a killer capability.

In response to a post on X that highlighted the lines, however, some users claimed that OpenAI’s models occasionally become obsessed with goblins and other creatures when used to power OpenClaw, a tool that lets AI take control of a computer and apps running on it in order to do useful things for users.

“I was wondering why my claw suddenly became a goblin with codex 5.5,” one user wrote on X.

“Been using it a lot lately and it actually can’t stop speaking of bugs as ‘gremlins’ and ‘goblins’ it’s hilarious,” posted another.

The discovery quickly became its own meme, inspiring AI-generated scenes of goblins in data centers, and plug-ins for Codex that put it in a playful “goblin mode.”

AI models like GPT-5.5 are trained to predict the word—or code—that should follow a given prompt. These models have become so good at doing this that they appear to exhibit genuine intelligence. But their probabilistic nature means that they can sometimes behave in surprising ways. A model might become more prone to misbehavior when used with an “agentic harness” like OpenClaw that puts lots of additional instructions into prompts, such as facts stored in long-term memory.

OpenAI acquired OpenClaw in February not long after the tool became a viral hit among AI enthusiasts. OpenClaw can use any AI model to automate useful tasks like answering emails or buying things on the web. Users can select any of various personae for their helper, which shapes its behavior and responses.

OpenAI staffers appeared to acknowledge the prohibition. In response to a post highlighting OpenClaw’s goblin tendencies, Nik Pash, who works on Codex, wrote, “This is indeed one of the reasons.”

Even Sam Altman, OpenAI’s CEO, joined in with the memes, posting a screenshot of a prompt for ChatGPT. It read: “Start training GPT-6, you can have the whole cluster. Extra goblins.”

‘The connective tissue between your data, your people, and your goals’: Google Cloud positions Gemini Enterprise as the one-stop shop for all your agentic affairs


  • The new Gemini Enterprise Agent Platform is an end-to-end building and deployment tool
  • Google’s clearly committed to interoperability with third-party model and tool support
  • Even non-technical workers should be able to build their own AI agents

Google Cloud has unveiled Gemini Enterprise, which has evolved into a single interface where users can interact with their AI agents just as they would their Workspace apps.

Core to the announcement is the brand-new Gemini Enterprise Agent Platform, described as an end-to-end development platform for building, deploying and managing agents at scale.

Webb and Chandra Spot X-Rays Escaping From a Distant Little Red Dot

New discoveries are shedding light on the early cosmos by combining two of the most powerful telescopes available: the James Webb Space Telescope (JWST) and the Chandra X-ray Observatory. Webb spotted hundreds of little red dots dispersed throughout the sky practically as soon as it began observations. These objects are so far away that their light is stretched to longer wavelengths as it travels, making them appear in images as red dots.



Many of them lie at distances of 12 billion light-years or more, and astronomers have spent years scratching their heads over what was fueling these red dots. A thick layer of gas appeared to surround them, concealing the powerful signals that typically come from a black hole devouring nearby material. The Chandra X-ray Observatory has now played a key role in solving the enigma: its detectors picked up X-rays coming from one of the red dots. The object, officially designated 3DHST-AEGIS-12014, is around 11.8 billion light-years away. It checked all the boxes of a typical little red dot except one: it emitted X-rays that Chandra had detected years before. That old X-ray data, which dates back more than a decade and comes from an eight-day survey, began to make far more sense once the new Webb data arrived.

The fact that X-rays are observable at this point indicates that the black hole is in a specific stage of its life cycle. As the black hole consumes the gas, the cloud thins out, and what was once a uniform shroud splits into patches. Chandra detected X-rays produced by material falling into the black hole, escaping through those gaps. The fact that the intensity of the X-rays varies over time supports the idea of dense patches of cloud material rotating through and traveling across our line of sight.

Advertisement

In the end, the gas simply fades away; the object ceases to appear as a little red dot and becomes a supermassive black hole that we can see sucking in material in X-rays. The discovery demonstrates that we are catching one of these objects in the midst of that transition, offering a glimpse into something that was previously unknown.

This suggests that there could be hundreds, if not thousands, of “little red dots” out there. By examining even one of them as it changes, we can gain a better understanding of how these supermassive black holes were able to grow to such vast masses so quickly after the big bang. Chandra and Webb will continue to scan the skies for new examples like this one, and each discovery will slightly improve our understanding of how black holes and galaxies formed when the universe was young.

Source link

Continue Reading

Tech

The Px8 S2 over-ears are now available in flashier colours

Published

on

Bowers & Wilkins is giving its flagship headphones a fresh coat of paint.

The Px8 S2, the brand’s top-tier wireless over-ears, are now available in two new finishes — Midnight Blue and Pearl Blue.

The update is purely aesthetic; the new finishes join Onyx Black, Warm Stone and the McLaren Edition. More broadly, the addition pushes Bowers & Wilkins’ wider headphone and earbud portfolio to a total of 21 different colour variants, the most the company has ever offered.

That growing focus on design is clearly intentional. Bowers & Wilkins has been leaning into the idea that premium headphones should feel as considered visually as they do sonically, and the Px8 S2 fits that brief. The new finishes keep the same Nappa leather trim and aluminium detailing, reworked into deeper, more expressive tones that feel a bit more eye-catching than the understated originals.

Advertisement
Px8 S2 in Midnight Blue. Image Credit: Bowers & Wilkins


It’s important to note that nothing else has changed under the hood. The Px8 S2 still delivers the same award-winning sound performance it’s known for, and is fitted with the same drivers, tuning and overall audio profile. If you were already sold on how they sound, this update is really about having more choice in how they look.

It also follows a broader refresh across the brand’s lineup. After recently introducing new finishes for the Px7 S3 and Pi8 models, Bowers & Wilkins seems to be doubling down on offering more personalisation across its premium range. This is something that’s becoming increasingly common in the high-end headphone space.

The new Px8 S2 finishes are available now, priced at £629 / €729 / $799, matching the existing models.

If nothing else, it’s a reminder that flagship headphones aren’t just about sound anymore. They are part fashion piece, part tech. And with Midnight Blue and Pearl Blue now in the mix, the Px8 S2 leans a little further into that idea.

Advertisement

Source link

Continue Reading

Tech

‘Free Speech’ President Trump, Once Again, Tries To Get Jimmy Kimmel Fired For Jokes

Published

on

from the no-laughing-matter dept

Apparently we’ve reached the stage of the second Trump presidency where we’re doing reruns of the old hits. As you’ll recall, Donald Trump has been desperate to get late-night TV host and comedian Jimmy Kimmel fired for quite some time. While Trump has long complained about any late-night comedian making fun of him, he has gone after Kimmel in particular. Things went into overdrive last fall when America’s top censor, FCC chair Brendan Carr, threatened an investigation if Disney didn’t punish Kimmel for a joke. Disney initially caved, before millions started canceling their subscriptions, prompting the company to backtrack.

But, since then, both Trump and Carr have continued to look for opportunities to get Kimmel fired for his speech.

In any normal world this would be a huge five-alarm fire as an attack on the First Amendment. The president and his minions keep trying to get a comedian fired for jokes that are critical of the president. That’s not how any of this is supposed to work. But because Trump does it so often, almost everyone seems to just shrug and move on.

And now Trump is at it again. Both Donald and Melania went on social media to whine about Kimmel mocking Trump again — and to demand he be fired again. Because he told a pretty standard joke about Donald Trump being old.

Advertisement

The White House Correspondents Dinner this past weekend was shut down after someone tried (and failed) to rush past security with a couple of guns (the kind that Trump and the Republicans have made sure anyone can easily purchase). Even before that, though, the Correspondents Association knew better than to hire the usual comedian to entertain the journalistic elite in the room, opting instead for a magician/mentalist.

Kimmel decided last week, on his show, to present an alternative — effectively what his own White House Correspondents Dinner roast would have been. It’s a pretty typical WHCD comic routine, interspersed with “audience reaction” shots spliced in from other events. You can watch it here:

One joke in it referred to Melania Trump, pretending that she was present (like she would be at the actual dinner) and saying: “Mrs. Trump, you have a glow like an expectant widow.”

Advertisement

Anyone not desperate to exploit a situation for political gain would hear that joke and recognize immediately that it’s about the fact that the president is decades older than his third wife, and that his health does not appear to be that great (in multiple ways).

But, because no big news story can go unexploited by the Trumps for personal and political gain, they’re pretending that this mid-level joke, combined with the failed security breach by a lone nut, somehow… demands the firing of Jimmy Kimmel all over again.

In his social media post Monday afternoon, Mr. Trump described the comedian’s joke as “really shocking” and “something far beyond the pale.” He ended his post: “Jimmy Kimmel should be immediately fired by Disney and ABC.”

The first lady had posted about Mr. Kimmel a few hours earlier.

“His monologue about my family isn’t comedy,” she wrote. “His words are corrosive and deepens the political sickness within America.” She called Mr. Kimmel “a coward” who “shouldn’t have the opportunity to enter our homes each evening to spread hate.” She said he “hides behind ABC because he knows the network will keep running cover to protect him.”

“Enough is enough,” she wrote. “It is time for ABC to take a stand.”

Oh come on.

This theatrical pearl-clutching over a joke is pathetic and ridiculous on almost every level. First, Kimmel was making an obvious joke about the age difference between the Trumps and the president’s obvious decline in health; it had nothing to do with political violence. Second, tying the joke to the attempted violence makes no sense chronologically: Kimmel’s joke was made two days prior to the scheduled WHCD, so unless time works backwards, it cannot be associated with the lone nut’s failed security breach.

Advertisement

Third, if we’re going to talk about “corrosive” dialogue that “deepens the political sickness within America,” the only one to talk about is President Trump, who can barely go a day without issuing corrosive attacks on anyone who criticizes him… or just anyone who is a non-white, non-male who doesn’t praise him.

Fourth, Trump has had it in for Kimmel for years, so of course he’d jump on this excuse to attack him again and demand he be fired — even though the last attempt not only failed badly, but made millions more people aware of Trump’s insecure lashing out at comedians.

Finally, Trump and his MAGA cultists keep pretending that they’re all about free speech, when he is actually (by far) the most censorial president of our lifetime. And here he is demanding someone be fired (not for the first time) over a simple joke. That is authoritarian, censorial bullshit.

Yet, we hear nothing from the folks who spent years insisting that when the Biden administration sent emails to Facebook asking how it planned to handle health misinformation, that was the greatest attack on free speech in history. Those same people are still making things up about the Biden administration… and have nothing to say about yet another actual attack on free speech. We don’t need to relitigate it all here, but some Biden officials sent weak emails asking Facebook and Twitter to improve their policies on disinformation, which were mostly ignored. As the Supreme Court made clear in the Murthy ruling, there was no evidence presented of any actual coercion by the government, which meant the plaintiffs had no standing to bring the case (there needs to be an actual case or controversy, and they could present none).

Advertisement

Meanwhile, between Trump and Carr, we see clear, detailed attempts by the administration to punish a comedian, and the company he works for, over speech that is critical of the president. It’s about as big an attack on the First Amendment as we’ve seen from a president in decades.

Kimmel, for his part, mentioned the latest verbal attacks and the renewed attempt to get him fired in his monologue Monday night, seemingly taking it in stride. But having the President of the United States repeatedly target a comedian for making jokes about him is about as far from a free-speech presidency as you can get.

Filed Under: 1st amendment, donald trump, free speech, jimmy kimmel, jokes, melania trump, whcd

Companies: disney

Advertisement

Source link

Continue Reading

Tech

Best Dating Apps in 2026, Compared by Matching Technology

Published

on

All major dating apps claim to use algorithms to find you better matches. What they don’t all tell you is how those algorithms work — or how radically different the approaches are from one platform to the next. Tinder’s newest AI system analyzes your camera roll. Hinge runs deep learning models on mutual compatibility signals. eHarmony assigns you a psychometric score based on 32 measurable dimensions. The matching technology you choose shapes not just who you see, but whether the platform can realistically serve your actual goal.

This comparison breaks down each major platform’s matching technology in plain terms, so you can make an informed choice rather than defaulting to the most advertised name.

How to Read This Comparison

Each platform was evaluated across six dimensions: matching model type, AI depth, primary data input, best-fit goal, approximate user base, and a critical limitation. “AI depth” refers to how much the platform relies on behavioral inference and machine learning versus static user-set filters. A platform with high AI depth learns and adjusts over time; one with low AI depth executes rules you set at registration and stops there. Neither is inherently superior — it depends entirely on your use case and how much behavioral data you are willing to provide.
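The low/high AI-depth distinction can be made concrete with a toy sketch. This is purely illustrative and not any platform's actual code: a static filter executes fixed registration-time rules forever, while a learned ranker nudges per-feature weights as swipe feedback arrives.

```python
def static_filter(profile, rules):
    """Low AI depth: fixed rules set at registration, never updated."""
    return (rules["min_age"] <= profile["age"] <= rules["max_age"]
            and profile["distance_km"] <= rules["max_distance_km"])


class LearnedRanker:
    """High AI depth (toy version): per-feature weights adjusted by feedback."""

    def __init__(self, features):
        self.weights = {f: 0.0 for f in features}

    def score(self, profile):
        # Higher score = predicted stronger interest; used to order the feed.
        return sum(self.weights[f] * profile[f] for f in self.weights)

    def update(self, profile, liked, lr=0.1):
        # Move weights toward features of liked profiles, away from passed ones.
        direction = 1.0 if liked else -1.0
        for f in self.weights:
            self.weights[f] += lr * direction * profile[f]
```

After a handful of `update` calls, the ranker starts ordering unseen profiles by revealed preference, which is exactly the behavior a static filter can never exhibit.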

2026 Dating App Matching Technology — At a Glance

Tinder
  • Matching model: Behavioral AI + camera roll analysis (Chemistry)
  • AI depth: High (2026)
  • Primary input: Swipe behavior, Q&A, optional photo library scan
  • Best for: Maximum reach; casual to exploratory
  • Monthly users (est.): ~75M
  • Key limitation: Camera roll access is opt-in but privacy-sensitive; still skews casual

Hinge
  • Matching model: Deep learning mutual compatibility
  • AI depth: High
  • Primary input: Interaction history, response patterns, profile engagement
  • Best for: Serious relationships
  • Monthly users (est.): ~23M
  • Key limitation: Smaller pool than Tinder; algorithm weight favors active users

Bumble
  • Matching model: Swipe + Bee AI (in rollout)
  • AI depth: Medium → High
  • Primary input: Swipe behavior, quiz-based preference data
  • Best for: Safety-first; women-controlled initiation
  • Monthly users (est.): ~50M
  • Key limitation: Bee AI not yet fully public; swipe mechanic still dominant for now

eHarmony
  • Matching model: Psychometric compatibility scoring
  • AI depth: Medium
  • Primary input: 80+ question quiz across 32 dimensions
  • Best for: Long-term commitment; 30s–50s demographic
  • Monthly users (est.): Not publicly disclosed
  • Key limitation: No independent profile browsing; expensive; slow match cadence

OKCupid
  • Matching model: Question-based value alignment
  • AI depth: Low–Medium
  • Primary input: Answered question database; stated preferences
  • Best for: Values-first matching; best free option
  • Monthly users (est.): ~7% US share
  • Key limitation: Match quality depends heavily on how many questions you answer

Coffee Meets Bagel
  • Matching model: Curated daily batch algorithm
  • AI depth: Medium
  • Primary input: Profile data, stated preferences, social graph proximity
  • Best for: Low-volume intentional daters
  • Monthly users (est.): Smaller niche
  • Key limitation: Slow cadence frustrates high-volume users; in-app currency model costly

Grindr
  • Matching model: Geolocation grid (no algorithmic ranking)
  • AI depth: Low
  • Primary input: Real-time GPS proximity
  • Best for: MSM community; immediate local connection
  • Monthly users (est.): ~7% US share
  • Key limitation: No compatibility layer; volume and directness can overwhelm new users

Tinder — Behavioral AI and the Chemistry System

Tinder has historically been synonymous with volume-based swiping, but its 2026 product direction represents a deliberate departure from that model. As reported by TechCrunch, Tinder’s new Chemistry feature addresses “swipe fatigue” — the growing burnout from endless low-signal profile browsing — by replacing the scroll stream with a single daily curated match recommendation. Chemistry gets to know users through conversational Q&A prompts and, with explicit opt-in permission, analyzes photos from a user’s camera roll to infer lifestyle, hobbies, and personality signals that profiles alone do not surface.

The practical implication is significant: Tinder is moving from a system that showed you everyone who passed your filters toward one that learns what you actually respond to. The behavioral AI principle underlying Chemistry — that revealed preferences outperform stated ones — mirrors what Hinge has been building toward for several years. The limitation to acknowledge honestly is that Chemistry is an opt-in layer on top of the existing platform; users who do not engage with it remain in the older swipe-dominant experience, and Tinder’s brand still draws a disproportionately casual-use audience regardless of matching sophistication.

Advertisement

Hinge — Deep Learning Built Around Mutual Compatibility

Hinge’s positioning as “the app designed to be deleted” is backed by a matching architecture that differs structurally from Tinder’s. According to Hinge’s official 2025 product update, the platform rolled out a rebuilt deep learning recommendation system in 2025 that “better predicts mutual compatibility” — contributing to a double-digit increase in overall matches. The key word is mutual: rather than optimizing for one-sided likes, Hinge’s model attempts to identify pairs where both people are likely to engage, drawing on interaction history, conversation depth, response patterns, and how users engage with specific profile prompt types.

A documented feature of Hinge’s algorithm is its willingness to nudge users beyond their stated filter preferences — suggesting profiles slightly outside set distance or age parameters — when behavioral signals indicate likely compatibility. As analyzed in ProfileSharp’s breakdown of the 2026 algorithm, this filter-override behavior reflects a deliberate design choice: Hinge treats stated preferences as starting points, not hard constraints. This is one of the clearest real-world implementations of behavioral AI in consumer dating, and it partly explains why Hinge has the highest engagement depth-to-user ratio despite having roughly one-third of Tinder’s user volume.

Bumble — Women-First Design Meeting AI Assistance

Bumble’s defining structural feature remains unchanged: in heterosexual matches, women must initiate conversation within 24 hours or the match expires. This design choice is not algorithmic — it is a hard platform rule that shapes the entire dynamic of who can be contacted and when. What is changing is the layer above that structure. According to PCMag’s March 2026 coverage, Bumble CEO Whitney Wolfe Herd confirmed that the platform’s Bee AI assistant is undergoing internal testing ahead of a broader rollout, and that an upcoming “Dates” feature will incorporate quiz-based preference matching — potentially eliminating the swipe mechanism entirely if the AI model performs.

Until Bee AI is publicly available, Bumble operates on behavioral swipe data filtered through the women-first rule, which structurally limits match volume but meaningfully increases the signal quality of matches that do form. The platform’s gender ratio — approximately 60:40 male-to-female, meaningfully more balanced than Tinder — is partly a product of that safety-first design. Users on Hinge versus Bumble will find the core difference comes down to initiation control versus algorithmic depth, and both matter depending on what you are optimizing for.

eHarmony — Psychometric Matching at Scale

eHarmony operates on a fundamentally different premise than every other app in this comparison. Rather than learning from your behavior on the platform, it attempts to measure your personality and relationship psychology before you ever see a single profile. New users complete an 80+ question quiz built around eHarmony’s 32 Dimensions of Compatibility — covering emotional temperament, communication style, attachment patterns, and values — and receive a compatibility score between 60 and 140 for every suggested match. Scores above 100 are considered above average; scores above 110 signal high compatibility potential.

The important structural caveat is that eHarmony does not allow users to browse the database independently. The algorithm selects all matches. If you disagree with its selections or want to explore outside its suggestions, the platform offers no mechanism to do so. This produces a more curated, lower-volume experience — intentional by design — but it represents a significant loss of agency that suits some users and frustrates others. The pricing model also reflects this commitment-tier positioning: messaging and photo access require a premium subscription, with costs ranging approximately £29.90–£59.90 per month depending on subscription length.

Advertisement

OKCupid — Value-Based Matching Through Answered Questions

OKCupid’s matching logic is built on a database of answered questions about life, values, politics, sexuality, and relationship philosophy. Users answer questions at their own pace, weight how important each issue is to them, and indicate what answers they find acceptable in a potential partner. The algorithm compares these weighted answers across users to generate a match percentage. The more questions a user answers, the more precise the match becomes — which means OKCupid rewards users who invest time in the platform with meaningfully better match quality than those who fill in only the basics.
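OKCupid has publicly described the core of this computation over the years: each user earns a fraction of the importance-weighted points the other side assigned to commonly answered questions, and the match percentage is roughly the geometric mean of the two fractions, minus a margin of error that shrinks as more questions overlap. A minimal sketch of that published approach (the data shapes, weight values, and function names here are illustrative, not OKCupid's actual code):

```python
import math


def satisfaction(answers_a, answers_b, prefs_a, weights_a):
    """Fraction of importance-weighted points user A awards user B.

    answers_*: each user's own answers; prefs_a: the answers A accepts from a
    partner; weights_a: the importance A assigned to each question. Only
    questions both users answered are counted.
    """
    earned = possible = 0
    for q, weight in weights_a.items():
        if q not in answers_a or q not in answers_b:
            continue  # skip questions either side left blank
        possible += weight
        if answers_b[q] in prefs_a[q]:
            earned += weight
    return earned / possible if possible else 0.0


def match_percent(sat_a, sat_b, n_common):
    """Geometric mean of both satisfaction fractions, less a margin of error
    that shrinks as the number of commonly answered questions grows."""
    if n_common == 0:
        return 0.0
    return max(0.0, math.sqrt(sat_a * sat_b) - 1.0 / n_common)
```

The geometric mean is the key design choice: one side being thrilled cannot compensate for the other side being unsatisfied, and the error margin is why answering more questions yields more precise matches.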

As a free option with usable core functionality, OKCupid occupies a distinct position in the market. Its AI depth is lower than Tinder or Hinge — it does not learn extensively from behavioral patterns the way those platforms do — but its values-alignment methodology arguably captures a different and complementary compatibility dimension. For users where political alignment, lifestyle philosophy, or relationship structure (including non-monogamy) are filtering criteria, OKCupid’s question layer surfaces those signals in ways that photo-first swipe apps structurally cannot.

Coffee Meets Bagel — Intentional Matching, Reduced Volume

Coffee Meets Bagel is built around a deliberate anti-scroll philosophy. Instead of an infinite swipe stream, the platform delivers a small curated batch of matches each day — historically one to a handful depending on your subscription level — drawn from an algorithm that considers your stated preferences and, where available, social graph proximity through mutual connections. The design goal is to focus attention rather than distribute it across hundreds of low-engagement profile views.

The honest limitation is that this cadence can work against users in less populated markets, where the algorithm may be forced to send matches that fall noticeably outside stated preferences simply to fill the daily batch. The platform also uses an in-app currency model (“beans”) for additional profile interactions beyond the standard batch — a structure that can become expensive for active users who want more than passive daily delivery. Coffee Meets Bagel is best suited to users who have experienced burnout on high-volume swipe apps and want to apply more deliberate attention to fewer, better-curated options.

Advertisement

Grindr — Proximity Without an Algorithm

Grindr operates differently from every other platform in this comparison: it does not rank or sort potential matches by compatibility at all. Instead, it displays a real-time grid of nearby users sorted purely by GPS distance — the closest profiles appear first. There is no learning layer, no compatibility scoring, and no behavioral inference. This design, which has been Grindr’s architecture since its 2009 launch, serves the MSM community with near-instant local visibility and has maintained its position as the dominant platform in that space for over fifteen years.
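A proximity-only grid like this reduces to computing great-circle distance and sorting, with no scoring model in the loop. A minimal sketch (illustrative only, not Grindr's actual implementation) using the standard haversine formula:

```python
import math


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))


def nearby_grid(viewer, profiles, limit=100):
    """Order profiles by distance from the viewer, nearest first.

    viewer is a (lat, lon) tuple; each profile dict carries its own
    "lat"/"lon" keys. No compatibility layer, just geometry.
    """
    return sorted(
        profiles,
        key=lambda p: haversine_km(viewer[0], viewer[1], p["lat"], p["lon"]),
    )[:limit]
```

That the entire ranking logic fits in a sort key is precisely the trade-off described above: near-instant local visibility, with no mechanism to express anything about compatibility.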

The trade-off is direct: Grindr’s model optimizes for proximity and immediacy, not compatibility. Users seeking something beyond casual connection typically note that the platform’s design actively works against that goal — the grid interface and absence of algorithmic curation create a high-volume, low-context environment. It remains unmatched for its core use case, but users with relationship or compatibility goals typically find a higher return on platforms with matching layers beyond location alone.

Which App Fits Your Goal?

Choosing a platform based on brand familiarity is the most common mistake new users make. The more useful question is: what does my goal require, and which matching model is most likely to serve it? The following framework maps goals to platform architecture:

Goal-Based Decision Framework

Advertisement
  • Maximum exposure + casual or exploratory: Tinder — largest active user base; Chemistry feature adds AI layer for users willing to opt in
  • Serious long-term relationship, 20s–30s: Hinge — best behavioral AI depth combined with serious-relationship intent signals from the user base
  • Serious long-term relationship, 30s–50s: eHarmony — psychometric depth suits users who want structured compatibility filtering and are willing to pay for it
  • Safety-first; women want control of initiation: Bumble — structural rule, not algorithm, guarantees no unsolicited contact from men
  • Values alignment is a hard filter (politics, lifestyle, relationship structure): OKCupid — question-answer matching surfaces these dimensions where swipe apps cannot
  • Intentional dating, low volume, avoid scroll fatigue: Coffee Meets Bagel — curated daily batch enforces deliberate engagement
  • MSM community, proximity-first: Grindr — dominant platform for this use case; no comparably sized alternative exists

Privacy Trade-offs of AI Matching

The more sophisticated a platform’s matching AI becomes, the more behavioral data it necessarily collects. Tinder’s Chemistry feature makes this explicit by requesting optional access to a user’s camera roll — a meaningful escalation beyond in-app behavioral tracking. Users should be clear-eyed about what they are exchanging: more accurate AI matching requires more data, and that data is held under each platform’s own privacy policy, which varies in how it handles third-party sharing, data retention, and deletion requests.

For users in the UK and EU, GDPR provides the right to request data deletion and to opt out of behavioral profiling. Exercising those rights in practice — rather than assuming they apply automatically — requires navigating each platform’s settings individually. Anyone concerned about this trade-off should review data settings before enabling opt-in AI features like Chemistry’s camera roll scan, and should familiarize themselves with how personal data exposure intersects with romance fraud risk — a risk that grows when detailed lifestyle signals become inferred from your photo library.

Key Takeaways

  • Tinder’s Chemistry (2026) marks its most significant algorithmic shift — from swipe volume toward AI-curated daily matches using behavioral data and optional camera roll analysis
  • Hinge’s deep learning system, updated in 2025, now predicts mutual compatibility and actively pushes users past their stated filter preferences when behavioral signals suggest a good match
  • Bumble’s Bee AI is in development but not yet public — the platform’s structural advantage remains its women-first initiation rule, not its algorithm
  • eHarmony’s 32-dimension psychometric model is the most structured compatibility system available, but it removes all user control over browsing — the algorithm selects everything
  • OKCupid is the strongest free option for values-based matching; its accuracy scales with the number of questions you answer
  • No platform has solved the incentive misalignment problem: retention-driven design and genuine match quality are competing objectives on every app

Frequently Asked Questions

Which dating app has the most advanced AI matching in 2026?

Hinge and Tinder are the most technically advanced in 2026. Hinge uses a rebuilt deep learning model focused on mutual compatibility prediction. Tinder’s Chemistry feature adds behavioral AI plus optional camera roll analysis, though it is an opt-in layer rather than the app’s default experience for all users. Both draw on behavioral signals rather than purely stated preferences.

Is Tinder’s Chemistry feature available everywhere?

Chemistry was initially tested in Australia and New Zealand and launched in the US and Canada in early 2026. Global rollout to additional markets was confirmed as part of the Tinder Sparks 2026 product keynote. Availability in specific regions should be confirmed within the app, as regional rollouts typically follow a staged release schedule.

Does Hinge show you everyone, or does the algorithm control what you see?

Hinge’s algorithm controls the profiles surfaced in your Discover feed, but users can also browse in Standouts (curated by the algorithm) and respond to users who have already liked them. The algorithm actively influences the feed and, notably, can suggest profiles that fall outside your set preferences when it predicts mutual compatibility — a documented design choice per Hinge’s own 2025 product update.

Is eHarmony worth the cost in 2026?

eHarmony makes most sense for users with a specific profile: 30s–50s, seeking long-term commitment, and willing to let an algorithm control match selection in exchange for psychometric compatibility depth. The cost (approximately £29.90–£59.90/month depending on plan) is high relative to competitors. Users who want to browse freely or who are earlier in the exploratory dating phase are likely to find eHarmony’s structure frustrating before they find it useful.

Advertisement
What is the difference between Hinge and Bumble in 2026?

Hinge’s primary differentiation is algorithmic depth — its deep learning model is specifically optimized for serious compatibility. Bumble’s primary differentiation is structural safety design — women control initiation in heterosexual matches, which changes the behavioral dynamic of who reaches out first. Both target relationship-oriented users, but through different mechanisms. Hinge optimizes via algorithm; Bumble optimizes via platform rules.

Does OKCupid still work in 2026?

Yes, OKCupid remains functional and relevant in 2026, particularly as the most capable free option for values-aligned matching. Its question-answer database creates a compatibility layer that swipe-first apps do not replicate. The key caveat is that match quality scales directly with engagement: users who answer fewer questions receive significantly less precise recommendations than those who invest time in the question-answering process.

Should I be concerned about giving a dating app access to my camera roll?

Tinder’s Chemistry feature is opt-in, meaning you are not required to grant camera roll access to use the app. If you choose to enable it, the permission is subject to Tinder’s privacy policy and, in the UK and EU, GDPR data rights. The practical risk is that inferred lifestyle signals (travel, fitness, social patterns visible in photos) become part of your behavioral profile held by the platform. Reviewing Tinder’s data settings and privacy policy before enabling this feature is a reasonable precaution.

Advertisement

Source link

Continue Reading

Tech

How Apple Vision Pro allows for collaborative cataract surgery

Published

on

The Apple Vision Pro has proven to be a useful tool for cataract surgery.

The Apple Vision Pro continues to prove its potential in the medical field, with the headset now seeing use for cataract surgeries in New York.

Priced at $3500, the Apple Vision Pro was never going to be a hit consumer product. Still, Apple’s spatial computing device has found limited success in the healthcare industry, a market the company had in mind from the get-go.

Surgeons have praised the Apple Vision Pro for its high-resolution images and ergonomics. The headset has been used in all sorts of medical and surgical procedures, including colonoscopies and shoulder arthroscopy, and it has now proven useful for cataract surgery as well.

Advertisement

Dr. Eric Rosenberg of SightMD was able to successfully perform cataract surgery using the Apple Vision Pro. In October 2025, he became the first surgeon in the world to conduct this sort of operation with the help of Apple’s spatial computing headset.

How the Apple Vision Pro has improved cataract surgery

Since then, the Apple Vision Pro has seen use in hundreds of additional cases, thanks to ScopeXR, a mixed reality surgical platform co-created by Dr. Rosenberg.

The software is specially designed for ophthalmic surgery, offering integration with 3D digital surgical microscopes via HDMI, USB, and wireless NDI protocols.

In short, ScopeXR lets surgeons view a live stereoscopic feed from surgical microscopes, along with diagnostic data. This feed can be forwarded to medical professionals, consultants, mentors, and students from around the world, allowing for virtual collaboration via two-way audio.

Advertisement

Dr. Eric Rosenberg described ScopeXR as a software platform that makes surgeons “safer, smarter, and more connected.”

“What we accomplished in that operating room is something that has never been done before anywhere in the world,” added Dr. Rosenberg. “This isn’t just about a new device, it’s about reimagining what the operating room of the future looks like.”

Commenting on the collaborative potential of the Apple Vision Pro and ScopeXR, Dr. Rosenberg said that it’s now possible to “bring the world’s best surgeon into any operating room, at any hour, from anywhere on the planet.”

The Apple Vision Pro has other medical applications

Though cataract surgery with the Apple Vision Pro is undoubtedly an impressive endeavor, it’s not entirely unexpected.

Advertisement

Detail from an Apple patent exploring ways to use Apple Vision Pro sensors to read brainwaves, suggesting an interest in medical applications.

The spatial computing headset has already been used for various procedures in the United States and elsewhere, and its wear time in the operating room will likely continue to increase.

In the UK, for instance, the Apple Vision Pro was used for a spinal fusion operation, and it has also helped patients visualize complex operations and procedures to better understand them.

Apple’s spatial computing headset has additional potential for patient care. An October 2025 study explored using the Apple Vision Pro to help people with spinal cord injuries or ALS communicate. Apple itself has also been researching the use of brainwave sensors for the Apple Vision Pro.

Advertisement

As the Apple Vision Pro continues to evolve, we might see additional applications across the healthcare industry. visionOS, meanwhile, is set to receive an update at WWDC 2026, which starts on June 8.

Source link

Advertisement
Continue Reading

Trending

Copyright © 2025