
Tech

Researchers Discover 14,000 Routers Wrangled Into Never-Before-Seen Botnet


An anonymous reader quotes a report from Ars Technica: Researchers say they have uncovered a takedown-resistant botnet of 14,000 routers and other network devices — primarily made by Asus — that have been conscripted into a proxy network that anonymously carries traffic used for cybercrime. The malware — dubbed KadNap — takes hold by exploiting vulnerabilities that device owners have left unpatched, Chris Formosa, a researcher at security firm Lumen’s Black Lotus Labs, told Ars. The high concentration of Asus routers is likely due to botnet operators acquiring a reliable exploit for vulnerabilities affecting those models. He said it’s unlikely that the attackers are using any zero-days in the operation.

The number of infected routers averages about 14,000 per day, up from 10,000 last August, when Black Lotus discovered the botnet. Compromised devices are overwhelmingly located in the US, with smaller populations in Taiwan, Hong Kong, and Russia. One of the most salient features of KadNap is a sophisticated peer-to-peer design based on Kademlia (PDF), a network structure that uses distributed hash tables to conceal the IP addresses of command-and-control servers. The design makes the botnet resistant to detection and takedowns through traditional methods.

[…] Despite the resistance to normal takedown methods, Black Lotus says it has devised a means to block all network traffic to or from the control infrastructure. The lab is also distributing the indicators of compromise to public feeds to help other parties block access. […] People who are concerned their devices are infected can check this page for IP addresses and a file hash found in device logs. To disinfect devices, they must be factory reset. Because KadNap stores a shell script that runs when an infected router reboots, simply restarting the device will result in it being compromised all over again. Device owners should also ensure all available firmware updates have been installed, that administrative passwords are strong, and that remote access has been disabled unless needed.
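As a rough illustration of checking device logs against published indicators, here is a minimal Python sketch. The IP address and hash below are placeholders, not actual KadNap indicators; substitute the values from Black Lotus Labs' page.

```python
# Sketch: scan router log lines for known indicators of compromise (IoCs).
# The indicator values below are placeholders, NOT real KadNap IoCs.
# Replace them with the IP addresses and file hash from the public feed.
INDICATORS = {
    "203.0.113.7",                       # placeholder C2 IP (TEST-NET range)
    "d41d8cd98f00b204e9800998ecf8427e",  # placeholder file hash
}

def scan_log(lines, indicators=INDICATORS):
    """Return the log lines that mention any known indicator."""
    return [line for line in lines if any(ioc in line for ioc in indicators)]

sample_log = [
    "Mar 09 02:11:03 dropbear: login from 198.51.100.2",
    "Mar 09 02:14:55 kernel: outbound connection to 203.0.113.7:443",
]
print(scan_log(sample_log))  # flags only the second line
```

A hit does not prove infection on its own, but it is a reasonable first filter before the recommended factory reset.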



Tech

Google’s Gemini Embedding 2 arrives with native multimodal support to cut costs and speed up your enterprise data stack


Yesterday, amid a flurry of enterprise AI product updates, Google announced arguably its most significant one for enterprise customers: the public preview availability of Gemini Embedding 2, its new embeddings model — a major evolution in how machines represent and retrieve information across different media types.

While previous embedding models were largely restricted to text, this new model natively integrates text, images, video, audio, and documents into a single numerical space — reducing latency by as much as 70% for some customers and reducing total cost for enterprises who use AI models powered by their own data to complete business tasks.

VentureBeat collaborator Sam Witteveen, co-founder of AI and ML training company Red Dragon AI, received early access to Gemini Embedding 2 and published a video of his impressions on YouTube. Watch it below:


Who needs and uses an embedding model?

For those who have encountered the term “embeddings” in AI discussions but find it abstract, a useful analogy is that of a universal library.

In a traditional library, books are organized by metadata: author, title, or genre. In the “embedding space” of an AI, information is organized by ideas.

Imagine a library where books aren’t organized by the Dewey Decimal System, but by their “vibe” or “essence”. In this library, a biography of Steve Jobs would physically fly across the room to sit next to a technical manual for a Macintosh. A poem about a sunset would drift toward a photography book of the Pacific Coast, with all thematically similar content organized in beautiful hovering “clouds” of books. This is basically what an embedding model does.

An embedding model takes complex data—like a sentence, a photo of a sunset, or a snippet of a podcast—and converts it into a long list of numbers called a vector.


These numbers represent coordinates in a high-dimensional map. If two items are “semantically” similar (e.g., a photo of a golden retriever and the text “man’s best friend”), the model places their coordinates very close to each other in this map. Today, these models are the invisible engine behind:

  • Search Engines: Finding results based on what you mean, not just the specific words you typed.

  • Recommendation Systems: Netflix or Spotify suggesting content because its “coordinates” are near things you already like.

  • Enterprise AI: Large companies use them for Retrieval-Augmented Generation (RAG), where an AI assistant “looks up” a company’s internal PDFs to answer an employee’s question accurately.
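The geometry behind all of these uses can be sketched in a few lines of Python. The vectors below are toy values for illustration, not real model outputs:

```python
import math

# Toy 4-dimensional "embeddings". Real models use hundreds or thousands
# of dimensions, but the geometry works the same way.
vectors = {
    "photo of a golden retriever": [0.9, 0.8, 0.1, 0.0],
    "text: man's best friend":     [0.8, 0.9, 0.2, 0.1],
    "text: quarterly tax filing":  [0.0, 0.1, 0.9, 0.8],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

dog_photo = vectors["photo of a golden retriever"]
dog_text = vectors["text: man's best friend"]
tax_text = vectors["text: quarterly tax filing"]

# Semantically related items sit close together (similarity near 1);
# unrelated items sit far apart (similarity near 0).
print(cosine_similarity(dog_photo, dog_text))  # high
print(cosine_similarity(dog_photo, tax_text))  # low
```

Search, recommendations, and RAG all reduce to this operation: embed everything once, then rank candidates by similarity to the query's vector.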

The concept of mapping words to vectors dates back to the 1950s with linguists like John Rupert Firth, but the modern “vector revolution” began in the early 2000s when Yoshua Bengio’s team first used the term “word embeddings”. The real breakthrough for the industry was Word2Vec, released by a team at Google led by Tomas Mikolov in 2013. Today, the market is led by a handful of major players:

  • OpenAI: Known for its widely-used text-embedding-3 series.

  • Google: With the new Gemini and previous Gecko models.

  • Anthropic and Cohere: Providing specialized models for enterprise search and developer workflows.

By moving beyond text to a natively multimodal architecture, Google is attempting to create a singular, unified map for the sum of human digital expression—text, images, video, audio, and documents—all residing in the same mathematical neighborhood.

Why Gemini Embedding 2 is such a big deal

Most leading models are still “text-first.” If you want to search a video library, the AI usually has to transcribe the video into text first, then embed that text.


Google’s Gemini Embedding 2 is natively multimodal.

As Logan Kilpatrick of Google DeepMind posted on X, the model allows developers to “bring text, images, video, audio, and docs into the same embedding space”.

It understands audio as sound waves and video as motion directly, without needing to turn them into text first. This reduces “translation” errors and captures nuances that text alone might miss.

For developers and enterprises, the “natively multimodal” nature of Gemini Embedding 2 represents a shift toward more efficient AI pipelines.


By mapping all media into a single 3,072-dimensional space, developers no longer need separate systems for image search and text search; they can perform “cross-modal” retrieval—using a text query to find a specific moment in a video or an image that matches a specific sound.

And unlike its predecessors, Gemini Embedding 2 can process requests that mix modalities. A developer can send a request containing both an image of a vintage car and the text “What is the engine type?”. The model doesn’t process them separately; it treats them as a single, nuanced concept. This allows for a much deeper understanding of real-world data where the “meaning” is often found in the intersection of what we see and what we say.
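What cross-modal retrieval looks like in practice can be sketched with toy vectors standing in for real Gemini Embedding 2 outputs; the index entries and similarity function here are illustrative, not the actual API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# One index holds every modality, because a natively multimodal model
# maps them all into the same space. These vectors are illustrative toys.
index = [
    ("video: dog catches frisbee at 00:42", [0.9, 0.7, 0.1]),
    ("image: sunset over the Pacific",      [0.1, 0.2, 0.9]),
    ("audio: birdsong field recording",     [0.2, 0.1, 0.8]),
]

def search(query_vector, index):
    """Return index entries ranked by similarity to the query."""
    return sorted(index, key=lambda item: cosine(query_vector, item[1]),
                  reverse=True)

# A *text* query vector can surface a *video* moment directly:
text_query = [0.88, 0.72, 0.12]  # pretend embedding of "playful dog"
print(search(text_query, index)[0][0])
```

With separate text-only and image-only models, the query and the video would live in different spaces and this single ranked lookup would be impossible.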

One of the model’s more technical features is Matryoshka Representation Learning. Named after Russian nesting dolls, this technique allows the model to “nest” the most important information in the first few numbers of the vector.

An enterprise can choose to use the full 3072 dimensions for maximum precision, or “truncate” them down to 768 or 1536 dimensions to save on database storage costs with minimal loss in accuracy.
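The truncation trick can be illustrated with a toy example; the 8-dimensional vectors below stand in for real 3,072-dimensional MRL embeddings, and the helper names are ours:

```python
import math

def cosine(a, b):
    """Cosine similarity; normalization makes truncated vectors comparable."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def truncate(vector, dims):
    """Keep only the first `dims` coordinates of an MRL-style embedding.

    Matryoshka-trained models pack the most important information into
    the leading dimensions, so the prefix remains a usable embedding.
    """
    return vector[:dims]

# Toy vectors whose leading coordinates carry most of the signal, MRL-style.
query = [0.85, 0.75, 0.65, 0.15, 0.01, 0.02, 0.00, 0.01]
doc_a = [0.90, 0.80, 0.70, 0.10, 0.02, 0.01, 0.01, 0.00]  # related to query
doc_b = [0.10, 0.20, 0.10, 0.90, 0.03, 0.02, 0.01, 0.00]  # unrelated

full_ranking = cosine(query, doc_a) > cosine(query, doc_b)
short_ranking = (cosine(truncate(query, 4), truncate(doc_a, 4)) >
                 cosine(truncate(query, 4), truncate(doc_b, 4)))
print(full_ranking == short_ranking)  # rankings agree after truncation
```

Halving or quartering the dimension count cuts vector-database storage proportionally, which is why this knob matters at enterprise scale.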


Benchmarking the performance gains of moving to multimodal

Gemini Embedding 2 establishes a new performance ceiling for multimodal depth, specifically outperforming previous industry leaders across text, image, and video evaluation tasks.


Google Gemini Embedding 2 benchmarks. Credit: Google

The model’s most significant lead is found in video and audio retrieval, where its native architecture allows it to bypass the performance degradation typically associated with text-based transcription pipelines.

Specifically, in video-to-text and text-to-video retrieval tasks, the model demonstrates a measurable performance gap over existing industry leaders, accurately mapping motion and temporal data into a unified semantic space.


The technical results show a distinct advantage in the following standardized categories:

  • Multimodal Retrieval: Gemini Embedding 2 consistently outperforms leading text and vision models in complex retrieval tasks that require understanding the relationship between visual elements and textual queries.

  • Speech and Audio Depth: The model introduces a new standard for native audio embeddings, achieving higher accuracy in capturing phonetic and tonal intent compared to models that rely on intermediate text-transcription.

  • Contextual Scaling: In text-based benchmarks, the model maintains high precision while utilizing its expansive 8,192 token context window, ensuring that long-form documents are embedded with the same semantic density as shorter snippets.

  • Dimension Flexibility: Testing across the Matryoshka Representation Learning (MRL) layers reveals that even when truncated to 768 dimensions, the model retains a significant majority of its 3,072-dimension performance, outperforming fixed-dimension models of similar size.

What it means for enterprise databases

For the modern enterprise, information is often a fragmented mess. A single customer issue might involve a recorded support call (audio), a screenshot of an error (image), a PDF of a contract (document), and a series of emails (text).

In previous years, searching across these formats required four different pipelines. With Gemini Embedding 2, an enterprise can create a Unified Knowledge Base. This enables a more advanced form of RAG, wherein a company’s internal AI doesn’t just look up facts, but understands the relationship between them regardless of format.

Early partners are already reporting drastic efficiency gains:

  • Sparkonomy, a creator economy platform, reported that the model’s native multimodality slashed their latency by up to 70%. By removing the need for intermediate LLM “inference” (the step where one model explains a video to another), they nearly doubled their semantic similarity scores for matching creators with brands.

  • Everlaw, a legal tech firm, is using the model to navigate the “high-stakes setting” of litigation discovery. In legal cases where millions of records must be parsed, Gemini’s ability to index images and videos alongside text allows legal professionals to find “smoking gun” evidence that traditional text-search would miss.

Understanding the limits

In its announcement, Google was upfront about some of the current limitations of Gemini Embedding 2. The model can vectorize individual inputs of as many as 8,192 text tokens, 6 images (in a single batch), 128 seconds of video (2 minutes, 8 seconds), 80 seconds of native audio (1 minute, 20 seconds), and a 6-page PDF.

It is vital to clarify that these are input limits per request, not a cap on what the system can remember or store.

Think of it like a scanner. If a scanner has a limit of “one page at a time,” it doesn’t mean you can only ever scan one page; it means you have to feed the pages in one by one.

  • Individual File Size: You cannot “embed” a 100-page PDF in a single call. You must “chunk” the document—splitting it into segments of 6 pages or fewer—and send each segment to the model individually.

  • Cumulative Knowledge: Once those chunks are converted into vectors, they can all live together in your database. You can have a database containing ten million 6-page PDFs, and the model will be able to search across all of them simultaneously.

  • Video and Audio: Similarly, if you have a 10-minute video, you would break it into 128-second segments to create a searchable “timeline” of embeddings.
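The chunking workflow described above can be sketched as follows. The limits come from Google's announcement, while the helper functions are hypothetical illustrations, not part of any official SDK:

```python
# Per-request input limits quoted in the announcement.
MAX_PDF_PAGES = 6
MAX_VIDEO_SECONDS = 128

def chunk_pages(total_pages, max_pages=MAX_PDF_PAGES):
    """Split a document into inclusive (start, end) page ranges."""
    return [(start, min(start + max_pages - 1, total_pages))
            for start in range(1, total_pages + 1, max_pages)]

def chunk_video(duration_seconds, max_seconds=MAX_VIDEO_SECONDS):
    """Split a video into (start, end) second offsets."""
    return [(start, min(start + max_seconds, duration_seconds))
            for start in range(0, duration_seconds, max_seconds)]

# A 100-page PDF becomes 17 requests of at most 6 pages each...
print(len(chunk_pages(100)))  # 17
# ...and a 10-minute (600 s) video becomes 5 segments of <= 128 s.
print(chunk_video(600))
```

Each chunk is embedded in its own request; the resulting vectors all land in the same database, so the per-request limit never caps how much the system can search.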

Licensing, pricing, and availability

As of March 10, 2026, Gemini Embedding 2 is officially in Public Preview.


For developers and enterprise leaders, this means the model is accessible for immediate testing and production integration, though it is still subject to the iterative refinements typical of “preview” software before it reaches General Availability (GA).

The model is deployed across Google’s two primary AI gateways, each catering to a different scale of operation:

  • Gemini API: Targeted at rapid prototyping and individual developers, this path offers a simplified pricing structure.

  • Vertex AI (Google Cloud): The enterprise-grade environment designed for massive scale, offering advanced security controls and integration with the broader Google Cloud ecosystem.

It’s also already integrated with the heavy hitters of AI infrastructure: LangChain, LlamaIndex, Haystack, Weaviate, Qdrant, and ChromaDB.

In the Gemini API, Google has introduced a tiered pricing model that distinguishes between “standard” data (text, images, and video) and “native” audio.


Gemini 2 Embedding pricing on Google Gemini API. Credit: Google

  • The Free Tier: Developers can experiment with the model at no cost, though this tier comes with rate limits (typically 60 requests per minute) and uses data to improve Google’s products.

  • The Paid Tier: For production-level volume, the cost is calculated per million tokens. For text, image, and video inputs, the rate is $0.25 per 1 million tokens.

  • The “Audio Premium”: Because the model natively ingests audio data without intermediate transcription—a more computationally intensive task—the rate for audio inputs is doubled to $0.50 per 1 million tokens.
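A quick back-of-the-envelope cost calculator for the paid tier, using the rates above; the function name and signature are ours, not Google's:

```python
# Paid-tier Gemini API rates quoted above (USD per 1M input tokens).
RATE_STANDARD = 0.25  # text, image, and video inputs
RATE_AUDIO = 0.50     # native audio is billed at double the standard rate

def embedding_cost(text_tokens=0, image_tokens=0, video_tokens=0,
                   audio_tokens=0):
    """Estimate the USD cost of an embedding workload."""
    standard = text_tokens + image_tokens + video_tokens
    return (standard * RATE_STANDARD + audio_tokens * RATE_AUDIO) / 1_000_000

# Example: 40M text tokens plus 10M audio tokens.
#   40M * $0.25/1M = $10.00, 10M * $0.50/1M = $5.00, total $15.00
print(f"${embedding_cost(text_tokens=40_000_000, audio_tokens=10_000_000):.2f}")
```

The audio premium means transcription-free audio search is not free: a workload that is mostly call recordings costs twice as much per token as the equivalent text pipeline.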

For large-scale deployments on Vertex AI, the pricing follows an enterprise-centric “Pay-as-you-go” (PayGo) model. This allows organizations to pay for exactly what they use across different processing modes:

  • Flex PayGo: Best for unpredictable, bursty workloads.

  • Provisioned Throughput: Designed for enterprises that require guaranteed capacity and consistent latency for high-traffic applications.

  • Batch Prediction: Ideal for re-indexing massive historical archives, where time-sensitivity is lower but volume is extremely high.

By making the model available through these diverse channels and integrating it natively with libraries like LangChain, LlamaIndex, and Weaviate, Google has ensured that the “switching cost” for businesses isn’t just a matter of price, but of operational ease. Whether a startup is building its first RAG-based assistant or a multinational is unifying decades of disparate media archives, the infrastructure is now live and globally accessible.

In addition, the official Gemini API and Vertex AI Colab notebooks, which contain the Python code necessary to implement these features, are licensed under the Apache License, Version 2.0.


The Apache 2.0 license is highly regarded in the tech community because it is “permissive.” It allows developers to take Google’s implementation code, modify it, and use it in their own commercial products without having to pay royalties or “open source” their own proprietary code in return.

How enterprises should respond: migrate to Gemini 2 Embedding or not?

For Chief Data Officers and technical leads, the decision to migrate to Gemini Embedding 2 hinges on the transition from a “text-plus” strategy to a “natively multimodal” one.

If your organization currently relies on fragmented pipelines — where images and videos are first transcribed or tagged by separate models before being indexed — the upgrade is likely a strategic necessity.

This model eliminates the “translation tax” of using intermediate LLMs to describe visual or auditory data, a move that partners like Sparkonomy found reduced latency by up to 70% while doubling semantic similarity scores. For businesses managing massive, diverse datasets, this isn’t just a performance boost; it is a structural simplification that reduces the number of points where “meaning” can be lost or distorted.


The effort to switch from a text-only foundation is lower than one might expect due to what early users describe as excellent “API continuity”.

Because the model integrates with industry-standard frameworks like LangChain, LlamaIndex, and Vector Search, it can often be “dropped into” existing workflows with minimal code changes. However, the real cost and energy investment lies in re-indexing. Moving to this model requires re-embedding your existing corpus to ensure all data points exist in the same 3,072-dimensional space.

While this is a one-time computational hurdle, it is the prerequisite for unlocking cross-modal search—where a simple text query can suddenly “see” into your video archives or “hear” specific customer sentiment in call recordings.

The primary trade-off for data leaders to weigh is the balance between high-fidelity retrieval and long-term storage economics. Gemini Embedding 2 addresses this directly through Matryoshka Representation Learning (MRL), which allows you to truncate vectors from 3072 dimensions down to 768 without a linear drop in quality.


This gives CDOs a tactical lever: you can choose maximum precision for high-stakes legal or medical discovery—as seen in Everlaw’s 20% lift in recall—while utilizing smaller, more efficient vectors for lower-priority recommendation engines to keep cloud storage costs in check.

Ultimately, the ROI is found in the “lift” of accuracy; in a landscape where an AI’s value is defined by its context, the ability to natively index a 6-page PDF or 128 seconds of video directly into a knowledge base provides a depth of insight that text-only models simply cannot replicate.



Tech

Metropolis (1927) Created The Blueprint For Modern Science Fiction Worlds


Metropolis, an iconic silent German production from 1927 directed by Fritz Lang, continues to cast a long shadow over the science fiction genre nearly a century after its release. Many people consider it to be the foundational work of the genre. Its cityscapes, people, and concepts reappear in subsequent stories, ranging from towering dystopias to gnawing conflicts between humans and robots.



Lang constructs a world split cleanly in two, with a privileged elite living lavishly in gleaming towers high above the city while the working class toil away in the gloomy depths below, keeping the machines alive at great personal cost. Into this divide steps Freder, the son of the city’s all powerful master, who ventures underground for the first time, falls for a kind and idealistic worker named Maria, and gets a brutal firsthand look at just how punishing life is down there. Things take a darker turn when Rotwang, a brilliant but dangerous inventor, builds a robot in Maria’s likeness and unleashes it on the masses to sow discord and keep the lower classes firmly under the thumb. What follows is mayhem on a grand scale, including a flood that threatens to swallow the entire underground city whole, and it takes Freder stepping forward as an unlikely peacemaker to finally pull things back from the brink.


Lang’s ambitions were quite high-tech for the time. He was inspired by his trip to New York and saw buildings as emblems of power. The sets combined Art Deco elements with Gothic shadows and a variety of futuristic gadgets. The way the workers were choreographed to move in perfect synchrony like the components of a gigantic clock was also rather impressive for a film from that era. To achieve all of the special effects, the crew used a variety of techniques such as miniatures, reflections, and creative lighting. The robot, a sleek, mechanical creature with a variety of human-like gestures, was a piece of art that grabbed viewers from the start.

Lang’s ideas are still relevant today, depicting how the privileged live in a bubble, disconnected from the people who keep the system running. The machines promise advancement, but all they accomplish is transform people into extensions of themselves. The robot poses numerous problems regarding control, dishonesty, and what truly defines someone as real. These beliefs are more than just leftovers of the industrial past; they are nonetheless crucial to our current arguments about automation and inequality.

Metropolis has inspired generations of films, as evidenced by Blade Runner’s rainy streets and towering skyscrapers, as well as the golden protocol droid in Star Wars. The Matrix picked up the entire concept of underground toil and people gradually waking up to their controlled reality. Directors and artists have borrowed Lang’s vertical cityscapes, with their elegant gardens above and depressing blackness below, in a variety of media, including movies and music videos.

What makes Metropolis feel so urgent even now is that the story it tells has never really gone out of date. A world carved up between those who have everything and those who have nothing, locked in a state of uneasy tension, is hardly a difficult concept to relate to in 2026. Set in a future that in many ways has already arrived, the film is a sharp reminder of how easily technology can widen the gap between people rather than close it. Lang saw a society where machines amplify our worst instincts rather than our best ones, and that particular warning feels more relevant than ever in an age of artificial intelligence and mass surveillance. Metropolis may not have predicted every twist the future had in store, but it shaped the way generations of people have imagined tomorrow, and that kind of influence doesn’t fade easily.



Tech

DJI Osmo Pocket 3 Creator Combo gets a 19% discount in Amazon’s sale event


Capturing smooth, cinematic footage on the move normally means carrying bulky camera gear, but compact creator tools are getting surprisingly capable, especially when a strong discount makes them easier to justify.

And with the DJI Osmo Pocket 3 Creator Combo now £405, down from £499, that growing appeal becomes even clearer for creators who want smooth footage without carrying a full camera kit.


The DJI Osmo Pocket 3 Creator Combo has dropped by 19% in Amazon’s current sale, offering a compelling chance to grab DJI’s ultra‑portable camera kit.


We awarded the Pocket 3 4.5 stars in our review, noting: “If you’re looking for a vlogging camera that makes it as easy to record smooth 4K footage for TikTok and Instagram as YouTube, the DJI Osmo Pocket 3 is ideal”.

The Creator Combo bundle expands its usefulness by including a DJI Mic 2 transmitter, letting vloggers record clearer dialogue without relying entirely on the camera’s built-in microphones.

Accessories such as a mini tripod, battery handle and protective case make the DJI Osmo Pocket 3 Creator Combo more practical straight out of the box, particularly for travel, interviews or spontaneous filming sessions.


Back to the camera itself: at its core sits a 1-inch CMOS sensor capable of recording 4K video at up to 120 frames per second, which gives creators more flexibility when capturing fast movement or slowing footage down smoothly during editing.

That sensor size matters because it gathers more light than smaller action camera sensors, helping footage retain clearer detail and colour when shooting indoors, at sunset or during unpredictable lighting conditions.


Another highlight is the integrated three-axis mechanical stabilisation system, which works to counter small shakes and walking motion so clips remain steady even when filming handheld while moving through crowded streets or uneven paths.


The rotating two-inch touchscreen plays a bigger role than you might expect, allowing users to switch instantly between horizontal and vertical framing so the same camera can capture YouTube footage one moment and social media clips the next.

Face and object tracking also simplify solo filming since the camera can automatically follow a subject as they move through the frame, keeping the focus steady without needing someone behind the lens.


For creators who want smooth 4K footage, simple subject tracking and portable gear that fits in a jacket pocket, this current discount makes the DJI Osmo Pocket 3 Creator Combo far easier to recommend.




Tech

vivo Y51 Pro 5G Launched in India With 7,200mAh Battery, Dimensity 7360 Turbo


vivo has been on a bit of a roll recently with flagships like the X300 Pro and the X200T. Now, to cater more towards the mid-range buyers, the Chinese smartphone maker has expanded its Y-series lineup in India with the launch of the vivo Y51 Pro 5G. The device comes with a massive 7,200mAh battery, a MediaTek Dimensity processor, and IP68/IP69 durability ratings. The smartphone is priced at ₹24,999 for the 8GB + 128GB variant and ₹27,999 for the 8GB + 256GB model, and it will go on sale starting March 11, 2026, via the vivo India website, Flipkart, and partner retail stores. Here’s everything you need to know about it.

7,200mAh Battery With Fast Charging


The main highlight of the Y51 is its 7,200mAh battery, which is among the largest in its segment. vivo claims the phone can last 23.7 hours of video streaming, 15.4 hours of gaming, 21 hours of social media usage, and 95 hours of music playback before needing a recharge.

Speaking of that, the Y51 supports 44W FlashCharge for faster charging and includes features such as Battery Health Algorithms, Battery Life Extender technology, and bypass charging to reduce battery wear. According to vivo, the battery is built to maintain performance for up to six years of usage. There’s also support for reverse charging.

MediaTek Dimensity 7360 Turbo Processor


Under the hood, the vivo Y51 Pro 5G is powered by the MediaTek Dimensity 7360 Turbo chipset, built on a 4nm process. vivo claims the chip delivers an AnTuTu benchmark score of over 920,000, promising smooth multitasking and gaming performance. The processor is paired with 8GB of RAM, and storage options of 128GB or 256GB.

OriginOS 6, on top of Android 16, runs the show here. The new skin comes with a myriad of AI features like AI Creation for content generation, AI Notes for organizing documents, AI Transcript Assist, and AI Captions for real-time translation. The phone also supports Circle to Search and Google Gemini, enabling smarter search and productivity features. Beyond these, there’s Private Space for secure storage of files and apps, Free Transfer for quick PC connectivity, and Face Unlock that works even when wearing certain types of helmets.

Design & Cameras


On the front, the Y51 features a 6.75-inch display with a 120Hz refresh rate and up to 1250 nits peak brightness. The screen is TÜV Rheinland Low Blue Light certified to help reduce eye strain during long usage sessions. The design follows vivo’s recent trend, with a camera island on the back and a 50MP lens capable of 4K video recording at 30 FPS. The setup also includes features such as Electronic Image Stabilization (EIS), dual-view video recording, live photo capture, and multiple AI scene modes for portraits, night shots, and professional-style photography.

Thanks to its IP68 rating, the phone also supports underwater photography at depths of up to 1.5 meters for 30 minutes, though we’d recommend avoiding it, as water damage isn’t covered under warranty.




Tech

What to Do in Dumbo If You’re Here for Business (2026)


New York City has always been a place that people flock to—to live, to work, to visit, or to play. It’s big and exciting, and there’s almost always something happening: a new play, a new exhibit, or a new restaurant opening.

According to a 2024 report by venture capital firm SignalFire, NYC experienced a tech boom in 2023, becoming the top destination for people relocating with tech jobs, with around 15 percent of them choosing the Big Apple as their destination.

This isn’t the first time the city has seen an influx of technology workers; the 1990s tech boom saw Manhattan’s Flatiron District take off as a hub for high-tech companies, even earning it the nickname “Silicon Alley.”

That area has since spread, moving its way downtown to Soho, west to Hudson Yards, and more recently over the bridge(s) and into Brooklyn—specifically Dumbo, the Brooklyn Navy Yard, and Downtown Brooklyn, forming the Brooklyn Tech Triangle.


Dumbo, which stands for “Down Under the Manhattan Bridge Overpass,” is situated between the Brooklyn and Manhattan Bridges on the East River waterfront. The popular neighborhood has great views of Manhattan and the bridges, and an ever-expanding food and drink scene to keep you fed while working and making time to play.

Jump to Section

Where to Stay


Courtesy of 1 Hotel Brooklyn Bridge

60 Furman St., (347) 696-2500


If you’re going to stay in Dumbo, you’re going to want views of the Manhattan skyline, the East River, and the iconic bridges that extend between the two, and 1 Hotel Brooklyn Bridge offers that and more. Yes, there is a gym and spa, but there’s also a rooftop pool, which comes in quite handy on those stupidly hot summer days. James Beard Award–winning restaurateur Jonathan Waxman recently brought his iconic West Village restaurant, Barbuto, to the hotel. On the 10th Floor, find Harriet’s Lounge for sushi, bao buns, and wagyu toasts. From 10 pm on Friday, Saturday, and Sundays, listen to live DJs spinning sets while you enjoy craft cocktails and the view.

Don’t forget to end the day with a sustainable drink (or two) at Harriet’s Rooftop, just one floor up from the lounge, for more iconic sunset views. The hotel is pet-friendly, and there’s a café serving espresso, fresh-pressed juices, and artisanal and locally sourced snacks. There’s also a farm stand in the lobby daily from 7 am to 4 pm; grab seasonal fruits that, while they may look “ugly,” are perfect in taste, and all part of the hotel’s sustainability mission.

85 Flatbush Ave Ext., (718) 329-9537

About a 10-minute walk to the bridges and Brooklyn waterfront, The Tillary is a slightly more affordable stay for the area, but still boasts a lobby cafe and rooftop garden bar. Featuring pet-friendly rooms and a fully-equipped gym, this hotel is a great option for still being close to the action, but saving a bit more money. The lobby café offers an affordable range of options (think $4 for an English muffin with egg and cheese and up to $14 for a vegetarian wrap), while the rooftop has a variety of sandwiches, salads, and beverages (both n/a and boozy) to keep you from needing to stray too far.


Courtesy of Ace Brooklyn

252 Schermerhorn St., (718) 313-3636

Technically in Boerum Hill, bordering Downtown Brooklyn, the Ace Hotel is a boutique hotel with trendy furnishings and warm vibes, plus a fitness center. They feature a rotating artist in residence and DJs spinning in the lobby most weekend nights. For food, there’s Lele’s Roman, featuring a rotating selection of Roman aperitivo bites daily from 5 to 7 pm, or hit them up for breakfast (lots of egg options!), lunch (panini, pizza, salad!), and dinner (pasta! pizza! classic contorni!). Don’t feel like Italian? Try Koju for an omakase experience set to a carefully curated vinyl music program.

Where to Work


Photograph: Michael Lee/Getty Images

68 Jay St., (718) 210-3650


Whether you’re looking for a fully enclosed office space (monthly or long-term), a coworking space, or a conference room, Greendesk has got you covered for a very reasonable price. The space is fully furnished with 24/7 access, high-speed internet, kitchens, and a cleaning service.

Multiple locations

From the Soho House team, Soho Works is a network of office spaces; rent a meeting room or use the shared lounge space, plus get access to Soho member events and amenities. Work at either location—10 Jay Street or 55 Water Street—by the hour or rent by the day.

295 Front St., (347) 414-8782


Located in Vinegar Hill, the Bond Collective has numerous options for you to work, whether you need a dedicated desk, private office, team suite, conference rooms, coworking, or simply a day pass. You’ll have 24/7 access, Wi-Fi, fruits, snacks, and breakfast, plus unlimited printing.

Where to Get Your Coffee


Courtesy of Jacques Torres Chocolate

66 Water St., (718) 875-1269

Located on Water Street and open daily from 10 am to 7 pm, this flagship location of the famous chocolatier is where it all began 25 years ago. Here, you’ll find handmade confections, hot chocolate, and ice cream sandwiches. Sample it all, then grab a few things to take with you to share with friends (or not—sharing is overrated).


85 Water St., (718) 797-5026

Almondine has been in Dumbo for over 20 years. Opened by French baker Hervé Poussot, this unpretentious bakery thrives on tradition, innovation, and evolution. You’ll feel as though you’ve been transported right to Paris with the fresh bread, croissants, and cakes. They even have a daily lunch special from 12 to 3 pm: choose a half sandwich and pair it with a soup, salad, cookie, and half-priced drink for only $18.

45 Washington St., (212) 924-7400

Grab a coffee here before strolling down Washington Street (it sits at one of the most iconic spots for snapping photos of the bridge, so beware of influencers posing in the middle of the street) to the waterfront for a nice break and some fresh air.


Where to Eat


Courtesy of Vinegar Hill House

72 Hudson Ave., (718) 522-1018

This is the place you go when you want a relaxed environment with incredible food in cute surroundings. Dining in the outdoor garden is cozy and comforting, while the inside is vintage-inspired and laid back. The menu, while also simple and comforting, is consistent and hits every time.

68 Jay St. #119


Open Tuesday to Friday from 10 am to 2-ish, this unassuming French-style bakery from Ayako Kurokawa is tucked away in the lobby of 68 Jay Street. The pastries, though French in style, are inspired by Kurokawa’s Japanese upbringing. Scones, cookies, cakes, and slices of pie are all served on silver platters, with handwritten labels on blue paper. The gateau basque is a popular item; go early, as they sell out daily.

1 John St., (718) 522-5356

Opened in 2017, Celestine is the kind of spot that feels chill enough to be your neighborhood go-to, while also special enough to go for a celebration. The menu includes thoughtful vegetable-heavy starters and sides, as well as whole branzino and a 14-ounce ribeye. With floor-to-ceiling windows, there’s not a bad seat in the house to enjoy your meal with a view of the East River and all its happenings.

147 Front St.


This intimate, 10-seat chef’s counter offers a tasting menu and à la carte menu, featuring oysters, crudo, and natural wines by the glass. Try the caviar Frito pie: an open bag of Fritos topped with entirely too much caviar and crème fraîche.

1 Front St., (718) 858-4300

Patsy Grimaldi and his wife, Carol, opened the original restaurant in 1990; Grimaldi sold the business to Frank Ciolli in 1998. Grimaldi is of the Patsy’s of Harlem lineage (Patsy was his uncle, from whom he learned to make pizza at age 12). In 2000, Grimaldi’s moved next door to its original spot, where it continues to sell whole pies baked in a coal-fired oven.

19 Old Fulton St., (718) 596-6700


If you like a side of gossip with your slice, then Juliana’s is the place to go. Patsy and Carol Grimaldi opened Juliana’s in 2012 in the original Grimaldi’s location at 19 Old Fulton Street, which caused a stir in the pizza community, since it sits next door to Grimaldi’s, their previous business. They even got their original coal-fired oven back. Named after Patsy’s mother, Juliana’s serves coal-fired pizza, meatballs, and salads. They also sell four flavors of par-cooked pies to “take & bake” at home. Try an egg cream, a New York City classic of milk, chocolate or vanilla syrup, and seltzer whisked vigorously until frothy; Grub Street called it the best in the city in 2017.

3 huge new Disney+ shows to stream in March 2026

Disney+ has three new TV shows that really caught my eye in March, and I’m confident there’s something to suit everyone here.

Better yet, a Hulu original has made its way onto this list, so you don’t have to flip through the best streaming services to find some great entertainment. Everything I’ve highlighted here is waiting for you on Disney+, whether you’re down for a gritty Marvel comeback or a Disney Channel classic.

CISA orders feds to patch n8n RCE flaw exploited in attacks

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) ordered government agencies on Wednesday to patch their systems against an actively exploited n8n vulnerability.

n8n is an open-source workflow automation platform widely used in AI development for automating data ingestion, with over 50,000 weekly downloads on the npm registry and over 100 million pulls on Docker Hub.

As an automation hub, n8n often stores a wide range of highly sensitive data, including API keys, database credentials, OAuth tokens, cloud storage access credentials, and CI/CD secrets, making it an extremely attractive target for threat actors.

Tracked as CVE-2025-68613, this remote code execution vulnerability allows authenticated attackers to execute arbitrary code on vulnerable servers with the privileges of the n8n process.


“n8n contains an improper control of dynamically managed code resources vulnerability in its workflow expression evaluation system that allows for remote code execution,” CISA said.

“Successful exploitation may lead to full compromise of the affected instance, including unauthorized access to sensitive data, modification of workflows, and execution of system-level operations,” the n8n team added.

The n8n team addressed CVE-2025-68613 in December with the release of n8n v1.122.0 and also advised IT administrators to apply the patch immediately. Admins who can’t immediately upgrade can limit workflow creation and editing permissions to fully trusted users only, and restrict operating system privileges and network access as temporary mitigation measures to reduce the impact of potential exploitation.
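For admins auditing several deployments at once, the patched-or-not question reduces to a version comparison against the 1.122.0 release noted above. A minimal sketch (the `is_patched` helper is illustrative, not an official n8n tool, and assumes plain `major.minor.patch` version strings):

```python
def is_patched(version: str, fixed: str = "1.122.0") -> bool:
    """Return True if an n8n version string is at or above the
    release that fixed CVE-2025-68613."""
    # Compare numerically, not lexically, so "1.122.0" > "1.99.0"
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(fixed)

# Versions below 1.122.0 remain exploitable by authenticated attackers
print(is_patched("1.121.3"))  # False -- upgrade or apply mitigations
print(is_patched("1.122.0"))  # True  -- contains the fix
```

Instances that report an older version should be upgraded or, failing that, restricted per the mitigations above.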

Internet security watchdog group Shadowserver tracks over 40,000 unpatched instances exposed online, with more than 18,000 IPs found in North America and over 14,000 in Europe.

Vulnerable n8n instances exposed online (Shadowserver)

CISA added the vulnerability to its Known Exploited Vulnerabilities (KEV) catalog on Wednesday and ordered Federal Civilian Executive Branch (FCEB) agencies to patch their n8n instances by March 25, as mandated by a binding operational directive (BOD 22-01) issued in November 2021.

“This type of vulnerability is a frequent attack vector for malicious cyber actors and poses significant risks to the federal enterprise,” CISA warned.

“Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.”

Although BOD 22-01 applies only to federal agencies, CISA has encouraged all network defenders to secure their systems against ongoing CVE-2025-68613 attacks as soon as possible.

Since the start of the year, the n8n security team has addressed several other severe vulnerabilities, including one dubbed Ni8mare that allows unprivileged remote attackers to hijack unpatched n8n servers.


Social Media and AI Want Your Attention at All Times. This New Documentary Says That’s Bad

“Do you remember the world before cellphones?”

The question comes early in Your Attention Please, a documentary premiering this week at South by Southwest in Austin, Texas. And it hit me harder than I expected. As a 27-year-old tech reporter, I realized I don’t have too many clear memories of life before smartphones. My adolescence unfolded alongside the rise of smartphones, social media, push notifications and the routine of endless scrolling. Like many people my age, I’ve spent most of my life inside the attention economy — without ever really stepping outside it.

That’s the uneasy territory the documentary explores. 


CNET was given exclusive early access to the film’s trailer, embedded below.

Exploring how tech shapes our behavior


Director Sara Robin said she originally set out to make something smaller: a documentary about people trying to reclaim their attention by breaking unhealthy phone habits. In an interview with CNET, Robin described the idea as a personal story about focus and self-control in an age of constant distraction.

As Robin interviewed researchers, technologists and families affected by social media and cyberbullying, the film’s scope widened. What started as a question about individual habits quickly became a larger investigation into how modern technology systems are designed to shape human behavior. The story stretches from the rise of social media to the emerging influence of AI. 

Along the way, Robin and her collaborators kept hearing the same observation from different corners of the digital world: Social media didn’t just change how people communicate; it quietly rewired what we value. Experiences that were once private or emotional — friendship, affection, belonging — began to acquire numerical equivalents. Followers, likes, comments, views and shares became the measure of our own self-worth. In the architecture of social platforms, those numbers function as a kind of social currency.


Trisha Prabhu, a digital-safety advocate and inventor of the anti-cyberbullying technology ReThink, argues that social platforms did more than create new online spaces. She says they fundamentally reshaped how social validation works. The metrics that define popularity often reward attention-seeking behavior and amplify conflict, while genuine connection is now harder to quantify and, therefore, easier to overlook.

Prabhu warns that the same dynamics already driving problems like cyberbullying could accelerate as automated systems become more capable. AI tools can generate abusive messages at scale, produce convincing impersonations or create deepfakes that spread rapidly online. In some cases, the technology may even blur the line between human interaction and machine-generated communication, which could deepen loneliness or encourage harmful behavior.

“There’s AI exacerbating existing harms [like automating cyberbullying], but then I also think that there’s AI creating completely new harms,” Prabhu told CNET. “There are reports of AI tools encouraging users, including minor users, to commit self-harm… Even for the everyday user who’s not experiencing the extreme outcome, I think we have to ask ourselves how much of our time and connection we want spent with an AI tool as opposed to a fellow human being.”

Bringing attention to attention

What struck Robin while filming the documentary was how universal these anxieties felt. Across conversations with families, educators and advocates around the world, the themes were remarkably consistent: overstimulated attention, declining focus in classrooms, rising anxiety among young people and a persistent sense of dread that comes from always being plugged in.


Those shared concerns have helped spark a coordinated moment around the film’s release.

On March 11, more than 25 organizations focused on digital well-being will simultaneously release the trailer for Your Attention Please as part of an initiative called Stand for Their Attention. What began as a small collaboration among five groups quickly grew as word spread through advocacy networks. The coalition now includes organizations such as Common Sense Media, Protect Young Eyes, Mothers Against Media Addiction, the Center for Humane Technology, Smartphone Free Childhood and Scrolling to Death. 

The idea behind the synchronized launch is simple: Use the attention surrounding the documentary to highlight the growing movement that’s already working to reshape digital culture. 

Many people feel overwhelmed by the scale of the problem, Robin says, but behind the scenes, a widening ecosystem of advocates is experimenting with ways to build healthier digital environments, from redesigning products to changing norms around screen use.


The campaign also arrives at a moment of growing scrutiny around the attention economy. Lawmakers in the US and abroad are increasingly debating how social platforms affect youth mental health and childhood development. Boycotts around AI use are taking off. Researchers are studying how these algorithms and chatbots influence behavior. Individuals are trying to figure out how much technology belongs in everyday life.

What can we do about it? 

Despite the weight of those conversations, Robin says the goal of the film isn’t to leave audiences feeling powerless. In fact, the rapid rise of public awareness around AI has made her more optimistic than she was during the early days of social media. The systems shaping digital life, she argues, are built by people, which means they can also be rebuilt.

“We have more power than we think,” Robin said. “And there are a lot of different ways to get involved in this, from changing individual habits to changing the culture in your own family and in your community, designing technology differently, getting engaged in these conversations, all the way to pushing for legislative change.”

The film intentionally avoids presenting a single solution.


Instead, Your Attention Please asks a broader question: What happens when attention, one of the most human parts of our lives, becomes one of the most valuable commodities in the global economy? And perhaps more importantly, what kind of digital world do we want to build next?

A microscope reveals the ghost of analog video hidden inside a LaserDisc

Jueden’s experiment began by accident. While using a low-cost digital microscope to inspect electronics, he turned it toward a LaserDisc out of curiosity. Under magnification, faint but recognizable images began to emerge – proof that LaserDisc’s analog encoding could still be decoded visually without a player, just by analyzing the…
The best Samsung Galaxy S26 and S26 Plus plans in Australia for March 2026

Samsung officially unveiled the much-anticipated Galaxy S26 and S26 Plus on February 26, 2026, with some fresh upgrades over the Galaxy S25 and S25 Plus.


The larger-screened Galaxy S26 Plus, meanwhile, retains the 6.7-inch display and 4,900mAh battery from its predecessor, and gets Samsung’s new Exynos 2600 chipset, with the Snapdragon 8 Elite Gen 5 chip reserved for the top-tier S26 Ultra. While the battery capacity is the same, the S26 Plus can be charged wirelessly at 20W, compared to the S25 Plus’ slower 15W wireless charging.


These are flagship phones, so the base model Galaxy S26 and S26 Plus won’t fit in our best cheap phones list. It also doesn’t help that both handsets are more expensive than the S25 lineup, with the base model S26 starting from AU$1,549 (up from the S25’s AU$1,399) and the S26 Plus from AU$1,849 (vs the S25 Plus’s starting price of AU$1,699). This would make paying in monthly instalments an attractive option for some.
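To put those launch prices in instalment terms, a quick sketch (assuming an interest-free plan split evenly over 24 months, the longest term Samsung financing offers; actual telco plans add a service charge on top):

```python
def monthly_instalment(price_aud: float, months: int = 24) -> float:
    """Split a handset price into equal interest-free monthly payments."""
    return round(price_aud / months, 2)

# Launch prices quoted above
print(monthly_instalment(1549))  # Galaxy S26: 64.54 (AU$/month)
print(monthly_instalment(1849))  # Galaxy S26 Plus: 77.04 (AU$/month)
```

So the AU$150 price bump over the S25 generation works out to roughly AU$6 extra per month on a 24-month plan.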

While retailers have finished their pre-order specials, Australia’s big three telcos still have some active deals of up to AU$500 off the handset price for the Samsung Galaxy S26 and S26 Plus (as well as the Galaxy S26 Ultra), with some also coinciding with existing promotions.

With so many options available to score a brand-new upgrade, finding the best plan for these new handsets may not be the most straightforward process, so we’ve done the hard work for you. Take a look at our picks for the best phone plans for the Samsung Galaxy S26 and S26 Plus below:


  • Samsung: pay in instalments of up to 24 months through Samsung financing; also save up to AU$865 when you trade in your old device
  • JB Hi-Fi: trade in your old tech for a JB Hi-Fi gift card to be used on a Galaxy S26 series handset
  • The Good Guys: Galaxy S26 and S26 Plus available in 256GB and 512GB
  • Amazon: Same-day delivery from the world’s biggest retailer

Privacy Display and the gimbal-like horizontal lock video mode are exclusive to the Galaxy S26 Ultra, so if you’re specifically looking for those features, check out the best Galaxy S26 Ultra plans.
