vivo has been on a roll recently with flagships like the X300 Pro and the X200T. Now, to cater to mid-range buyers, the Chinese smartphone maker has expanded its Y-series lineup in India with the launch of the vivo Y51 Pro 5G. The device comes with a massive 7,200mAh battery, a MediaTek Dimensity processor, and IP68/IP69 durability ratings. The smartphone is priced at ₹24,999 for the 8GB + 128GB variant and ₹27,999 for the 8GB + 256GB model, and it will go on sale starting March 11, 2026, via the vivo India website, Flipkart, and partner retail stores. Here’s everything you need to know about it.
7,200mAh Battery With Fast Charging
The main highlight of the Y51 Pro is its 7,200mAh battery, which is among the largest in its segment. vivo claims the phone can deliver 23.7 hours of video streaming, 15.4 hours of gaming, 21 hours of social media usage, or 95 hours of music playback before needing a recharge.
On the charging front, the Y51 Pro supports 44W FlashCharge and includes features such as Battery Health Algorithms, Battery Life Extender technology, and bypass charging to reduce battery wear. According to vivo, the battery is built to maintain performance for up to six years of use. There’s also support for reverse charging.
MediaTek Dimensity 7360 Turbo Processor
Under the hood, the vivo Y51 Pro 5G is powered by the MediaTek Dimensity 7360 Turbo chipset, built on a 4nm process. vivo claims the chip delivers an AnTuTu benchmark score of over 920,000, promising smooth multitasking and gaming performance. The processor is paired with 8GB of RAM, and storage options of 128GB or 256GB.
OriginOS 6, on top of Android 16, runs the show here. The new skin comes with a myriad of AI features like AI Creation for content generation, AI Notes for organizing documents, AI Transcript Assist, and AI Captions for real-time translation. The phone also supports Circle to Search and Google Gemini, enabling smarter search and productivity features. Beyond these, there’s Private Space for secure storage of files and apps, Free Transfer for quick PC connectivity, and Face Unlock that works even when wearing certain types of helmets.
Design & Cameras
On the front, the Y51 features a 6.75-inch display with a 120Hz refresh rate and up to 1,250 nits peak brightness. The screen is TÜV Rheinland Low Blue Light certified to help reduce eye strain during long usage sessions. The design follows vivo’s recent trend, with a camera island on the back and a 50MP lens capable of 4K video recording at 30 FPS. The setup also includes features such as Electronic Image Stabilization (EIS), dual-view video recording, live photo capture, and multiple AI scene modes for portraits, night shots, and professional-style photography.
Thanks to its IP68 rating, the phone also supports underwater photography at depths of up to 1.5 meters for 30 minutes, though we’d recommend avoiding it, as water damage isn’t covered under warranty.
Google Play has introduced a new feature called Game Trials, which will let you play a portion of paid games for free before you commit to buying them. It’s now rolling out to select paid games on mobile, and it’s coming soon to Google Play Games on PC. Titles that offer Game Trials will show a button marked “Try” on their profile pages. When you click it, you’ll see how long you can play the game before you have to buy it. In Google’s example, the survival and horror game Dredge will give you 60 minutes of free play time, after which you’ll get the option to either buy the game or delete it from your device.
Google has also announced that it’s releasing more paid indie games over the coming months, including Moonlight Peaks, Sledding Game and Low-Budget Repairs. It has launched a new section in the Play store, as well, to feature games optimized for Windows PCs. You can wishlist the games from that section to get a notification when they’re on sale.
Finally, the company is rolling out Play Games Sidekick, the Gemini-powered Android overlay it announced last year, to select games downloaded from Play. Sidekick can show you relevant info and tools for whatever game you’re playing without having to do a search query. But if you’d rather ask other people for gaming advice instead of an AI, you can also look at a game’s Community Posts, a feature now available in English for select titles on their Play pages.
Enterprise collaboration software giant Atlassian is laying off 63 workers in Washington, according to a WARN notice filed with state regulators.
Atlassian announced Wednesday that it will lay off about 10% of its staff, or 1,600 employees, as the 24-year-old software firm transitions to an “AI-first company.” Atlassian CEO Mike Cannon-Brookes wrote that AI is changing the mix of skills and number of roles required in certain areas.
“This is primarily about adaptation,” he said. “We are reshaping our skill mix and changing how we work to build for the future.”
Atlassian opened an office in Bellevue, Wash., in 2024. The WARN notice indicates that nearly all the employees affected by layoffs in Washington state are remote workers. About half of the affected workers are in engineering or data science roles.
The company also announced Wednesday that CTO Rajeev Rajan, who is based in the Seattle region, will step down after nearly four years with Atlassian. “Atlassian is thankful for Mr. Rajan’s many contributions in building a world-class R&D organization and congratulates the promotion of next generation AI talent in Taroon Mandhana (CTO Teamwork) and Vikram Rao (CTO Enterprise and Chief Trust Officer),” the company wrote in an SEC filing.
The recent rise of AI tools has also spooked some investors, with some software stocks taking a hit. Atlassian shares are down more than 50% this year.
The service will be rolled out under a new brand, Flexar
Singapore car-sharing company BlueSG is preparing to roll out a new service under a new brand, Flexar.
In comments to CNA, BlueSG confirmed that Flexar is currently in the “beta phase” of its shared car mobility service. It is slated to launch later this year.
The new brand will have the same operating concept, which allows users to pick up a car from a station near them and drop it off at another location in Singapore.
Flexar is recruiting early testers
In response to The Straits Times, a BlueSG spokesperson shared that the team behind Flexar is currently focused on “testing and refining a range of exciting new offerings designed to enable flexible urban mobility.”
The new service will introduce a revamped platform, a refreshed fleet featuring a different mix of vehicles, and an expanded network of pick-up and drop-off points. It is also expected to deliver “greater reliability and a smoother user experience.”
However, the spokesperson declined to share further details, such as pricing or the total number of pick-up and drop-off points, until the official launch.
Between Jan and Mar 2026, BlueSG has also been hiring for several roles, including automotive technicians, an operations manager, and customer service agents, across various job portals.
On Mar 9, the company reached out to its community to recruit early testers ahead of the official launch. The invitation, shared in a BlueSG Telegram user group, asked interested participants to complete a questionnaire. Shortlisted users will be able to try the revamped service and provide feedback.
Screengrab from the Flexar website
Flexar has also launched a website, with more details listed as “coming soon.”
“Access reliable cars when you need them, where you need them. No ownership hassles, no long-term commitments, just seamless A-to-B journeys across Singapore,” the company wrote on its homepage.
BlueSG ceased operations & laid off staff in Aug 2025
Back in Aug 2025, BlueSG announced a “pause” to its services and retrenched the majority of its employees shortly after.
At the time, the company said it planned to return with an “upgraded” service powered by “advanced technology, deep expertise, and enhanced operational capabilities.”
The overhaul was driven by the company’s observations of changes in Singapore’s car-sharing landscape and the opportunity to scale its user base.
Meanwhile, about 790 units of the purpose-built Blue Car were scrapped after the Land Transport Authority did not permit the vehicles to be transferred for uses outside the electric car-rental trial scheme. The vehicles were initially expected to be sold to Tribecar, another car-sharing operator in Singapore.
BlueSG’s pause also had ripple effects across the industry. French energy giant TotalEnergies, which previously served as BlueSG’s main charging infrastructure provider, exited Singapore’s EV charging market. By the end of 2025, it had transferred its network of more than 1,400 public charging points to other operators.
BlueSG first launched in 2017 under the Land Transport Authority’s electric car-sharing programme. Initially a subsidiary of the French Bolloré Group, the service was acquired in 2021 by Singapore-based Goldbell Group.
Yesterday, amid a flurry of enterprise AI product updates, Google announced arguably its most significant one for enterprise customers: the public preview availability of Gemini Embedding 2, its new embeddings model and a major evolution in how machines represent and retrieve information across different media types.
While previous embedding models were largely restricted to text, the new model natively integrates text, images, video, audio, and documents into a single numerical space, reducing latency by as much as 70% for some customers and cutting total costs for enterprises that use AI models powered by their own data to complete business tasks.
VentureBeat collaborator Sam Witteveen, co-founder of AI and ML training company Red Dragon AI, received early access to Gemini Embedding 2 and published a video of his impressions on YouTube. Watch it below:
Who needs and uses an embedding model?
For those who have encountered the term “embeddings” in AI discussions but find it abstract, a useful analogy is that of a universal library.
In a traditional library, books are organized by metadata: author, title, or genre. In the “embedding space” of an AI, information is organized by ideas.
Imagine a library where books aren’t organized by the Dewey Decimal System, but by their “vibe” or “essence”. In this library, a biography of Steve Jobs would physically fly across the room to sit next to a technical manual for a Macintosh. A poem about a sunset would drift toward a photography book of the Pacific Coast, with all thematically similar content organized in beautiful hovering “clouds” of books. This is basically what an embedding model does.
An embedding model takes complex data—like a sentence, a photo of a sunset, or a snippet of a podcast—and converts it into a long list of numbers called a vector.
These numbers represent coordinates in a high-dimensional map. If two items are “semantically” similar (e.g., a photo of a golden retriever and the text “man’s best friend”), the model places their coordinates very close to each other in this map; the sketch after the list below shows how that closeness is typically measured. Today, these models are the invisible engine behind:
Search Engines: Finding results based on what you mean, not just the specific words you typed.
Recommendation Systems: Netflix or Spotify suggesting content because its “coordinates” are near things you already like.
Enterprise AI: Large companies use them for Retrieval-Augmented Generation (RAG), where an AI assistant “looks up” a company’s internal PDFs to answer an employee’s question accurately.
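To make the “coordinates” idea concrete, here is a minimal sketch of the standard way closeness is measured in an embedding space: cosine similarity between two vectors. The tiny 4-dimensional vectors are invented purely for illustration; a real model outputs hundreds or thousands of dimensions.

```python
import math

# Toy 4-dimensional embeddings. Real models output hundreds or thousands
# of dimensions; these values are invented purely for illustration.
golden_retriever_photo = [0.81, 0.10, 0.52, 0.27]
mans_best_friend_text  = [0.78, 0.14, 0.49, 0.31]
sunset_poem            = [0.05, 0.92, 0.11, 0.36]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: closer to 1.0 = more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Semantically related items land close together (score near 1.0)...
print(cosine_similarity(golden_retriever_photo, mans_best_friend_text))
# ...while unrelated items score noticeably lower.
print(cosine_similarity(golden_retriever_photo, sunset_poem))
```

A vector database simply runs this kind of comparison at massive scale, returning the stored items whose coordinates sit nearest to a query’s.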
The concept of mapping words to vectors dates back to the 1950s with linguists like John Rupert Firth, but the modern “vector revolution” began in the early 2000s when Yoshua Bengio’s team first used the term “word embeddings”. The real breakthrough for the industry was Word2Vec, released by a team at Google led by Tomas Mikolov in 2013. Today, the market is led by a handful of major players:
OpenAI: Known for its widely-used text-embedding-3 series.
Google: With the new Gemini and previous Gecko models.
Anthropic and Cohere: Providing specialized models for enterprise search and developer workflows.
By moving beyond text to a natively multimodal architecture, Google is attempting to create a singular, unified map for the sum of human digital expression—text, images, video, audio, and documents—all residing in the same mathematical neighborhood.
Why Gemini Embedding 2 is such a big deal
Most leading models are still “text-first.” If you want to search a video library, the AI usually has to transcribe the video into text first, then embed that text.
Google’s Gemini Embedding 2 is natively multimodal.
As Logan Kilpatrick of Google DeepMind posted on X, the model allows developers to “bring text, images, video, audio, and docs into the same embedding space”.
It understands audio as sound waves and video as motion directly, without needing to turn them into text first. This reduces “translation” errors and captures nuances that text alone might miss.
For developers and enterprises, the “natively multimodal” nature of Gemini Embedding 2 represents a shift toward more efficient AI pipelines.
By mapping all media into a single 3,072-dimensional space, developers no longer need separate systems for image search and text search; they can perform “cross-modal” retrieval—using a text query to find a specific moment in a video or an image that matches a specific sound.
And unlike its predecessors, Gemini Embedding 2 can process requests that mix modalities. A developer can send a request containing both an image of a vintage car and the text “What is the engine type?”. The model doesn’t process them separately; it treats them as a single, nuanced concept. This allows for a much deeper understanding of real-world data where the “meaning” is often found in the intersection of what we see and what we say.
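As a rough illustration, here is a hedged sketch of what such a mixed-modality request could look like using Google’s google-genai Python SDK. The model ID "gemini-embedding-2" is a placeholder, and the assumption that embed_content accepts image Parts for this model is ours; check the official preview docs for the exact interface.

```python
# Hypothetical sketch of a mixed-modality embedding request with the
# google-genai Python SDK. The model ID "gemini-embedding-2" is a
# placeholder, and whether embed_content accepts image Parts for this
# model is an assumption on our part; consult the preview docs.
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

with open("vintage_car.jpg", "rb") as f:
    image_bytes = f.read()

# The image and the question travel in one request, so the model can
# embed them as a single combined concept rather than two separate ones.
result = client.models.embed_content(
    model="gemini-embedding-2",  # hypothetical preview model ID
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "What is the engine type?",
    ],
)
vector = result.embeddings[0].values  # a single 3,072-dimensional vector
print(len(vector))
```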
One of the model’s more technical features is Matryoshka Representation Learning. Named after Russian nesting dolls, this technique allows the model to “nest” the most important information in the first few numbers of the vector.
An enterprise can choose to use the full 3,072 dimensions for maximum precision, or “truncate” them down to 768 or 1,536 dimensions to save on database storage costs with minimal loss in accuracy.
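In practice, consuming an MRL embedding at a smaller size is just slicing the vector and re-normalizing it. A minimal sketch, with a helper name of our own (re-normalizing to unit length is standard practice so cosine similarity still behaves as expected):

```python
import math

def truncate_embedding(vector: list[float], dims: int) -> list[float]:
    """Keep the first `dims` values of an MRL embedding and re-normalize
    to unit length so cosine similarity still behaves as expected."""
    truncated = vector[:dims]
    norm = math.sqrt(sum(x * x for x in truncated))
    return [x / norm for x in truncated]

full_vector = [0.001] * 3072            # stand-in for a real API response
compact = truncate_embedding(full_vector, 768)
print(len(compact))                     # 768 -> roughly 4x less storage
```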
Benchmarking the performance gains of moving to multimodal
Gemini Embedding 2 establishes a new performance ceiling for multimodal depth, specifically outperforming previous industry leaders across text, image, and video evaluation tasks.
Google Gemini Embedding 2 benchmarks. Credit: Google
The model’s most significant lead is found in video and audio retrieval, where its native architecture allows it to bypass the performance degradation typically associated with text-based transcription pipelines.
Specifically, in video-to-text and text-to-video retrieval tasks, the model demonstrates a measurable performance gap over existing industry leaders, accurately mapping motion and temporal data into a unified semantic space.
The technical results show a distinct advantage in the following standardized categories:
Multimodal Retrieval: Gemini Embedding 2 consistently outperforms leading text and vision models in complex retrieval tasks that require understanding the relationship between visual elements and textual queries.
Speech and Audio Depth: The model introduces a new standard for native audio embeddings, achieving higher accuracy in capturing phonetic and tonal intent compared to models that rely on intermediate text-transcription.
Contextual Scaling: In text-based benchmarks, the model maintains high precision while utilizing its expansive 8,192 token context window, ensuring that long-form documents are embedded with the same semantic density as shorter snippets.
Dimension Flexibility: Testing across the Matryoshka Representation Learning (MRL) layers reveals that even when truncated to 768 dimensions, the model retains a significant majority of its 3,072-dimension performance, outperforming fixed-dimension models of similar size.
What it means for enterprise databases
For the modern enterprise, information is often a fragmented mess. A single customer issue might involve a recorded support call (audio), a screenshot of an error (image), a PDF of a contract (document), and a series of emails (text).
In previous years, searching across these formats required four different pipelines. With Gemini Embedding 2, an enterprise can create a Unified Knowledge Base. This enables a more advanced form of RAG, wherein a company’s internal AI doesn’t just look up facts, but understands the relationship between them regardless of format.
Early partners are already reporting drastic efficiency gains:
Sparkonomy, a creator economy platform, reported that the model’s native multimodality slashed their latency by up to 70%. By removing the need for intermediate LLM “inference” (the step where one model explains a video to another), they nearly doubled their semantic similarity scores for matching creators with brands.
Everlaw, a legal tech firm, is using the model to navigate the “high-stakes setting” of litigation discovery. In legal cases where millions of records must be parsed, Gemini’s ability to index images and videos alongside text allows legal professionals to find “smoking gun” evidence that traditional text-search would miss.
Understanding the limits
In its announcement, Google was upfront about some of the current limitations of Gemini Embedding 2. The new model can vectorize individual inputs of up to 8,192 text tokens, 6 images (in a single batch), 128 seconds of video (2 minutes, 8 seconds), 80 seconds of native audio (1 minute, 20 seconds), and a 6-page PDF.
It is vital to clarify that these are input limits per request, not a cap on what the system can remember or store.
Think of it like a scanner. If a scanner has a limit of “one page at a time,” it doesn’t mean you can only ever scan one page; it means you have to feed the pages in one by one.
Individual File Size: You cannot “embed” a 100-page PDF in a single call. You must “chunk” the document—splitting it into segments of 6 pages or fewer—and send each segment to the model individually.
Cumulative Knowledge: Once those chunks are converted into vectors, they can all live together in your database. You can have a database containing ten million 6-page PDFs, and the model will be able to search across all of them simultaneously.
Video and Audio: Similarly, if you have a 10-minute video, you would break it into 128-second segments to create a searchable “timeline” of embeddings (a minimal chunking sketch follows below).
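Here is the minimal chunking sketch referenced above, using the per-request limits quoted in the article. The helper and constant names are our own, not part of any SDK.

```python
# Per-request limits quoted in the article; the helper and constant
# names are our own, not part of any SDK.
PDF_PAGE_LIMIT = 6        # pages per embedding request
VIDEO_SECOND_LIMIT = 128  # seconds of video per request

def chunk(items: list, size: int) -> list[list]:
    """Split a sequence into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

pdf_pages = list(range(100))       # a 100-page PDF...
print(len(chunk(pdf_pages, PDF_PAGE_LIMIT)))          # ...takes 17 requests

video_seconds = list(range(600))   # a 10-minute video...
print(len(chunk(video_seconds, VIDEO_SECOND_LIMIT)))  # ...takes 5 segments

# Each chunk is embedded separately; the resulting vectors all live
# together in the vector database and are searched as one corpus.
```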
Licensing, pricing, and availability
As of March 10, 2026, Gemini Embedding 2 is officially in Public Preview.
For developers and enterprise leaders, this means the model is accessible for immediate testing and production integration, though it is still subject to the iterative refinements typical of “preview” software before it reaches General Availability (GA).
The model is deployed across Google’s two primary AI gateways, each catering to a different scale of operation:
Gemini API: Targeted at rapid prototyping and individual developers, this path offers a simplified pricing structure.
Vertex AI (Google Cloud): The enterprise-grade environment designed for massive scale, offering advanced security controls and integration with the broader Google Cloud ecosystem.
It’s also already integrated with the heavy hitters of AI infrastructure: LangChain, LlamaIndex, Haystack, Weaviate, Qdrant, and ChromaDB.
In the Gemini API, Google has introduced a tiered pricing model that distinguishes between “standard” data (text, images, and video) and “native” audio.
Gemini Embedding 2 pricing on the Google Gemini API. Credit: Google
The Free Tier: Developers can experiment with the model at no cost, though this tier comes with rate limits (typically 60 requests per minute) and uses data to improve Google’s products.
The Paid Tier: For production-level volume, the cost is calculated per million tokens. For text, image, and video inputs, the rate is $0.25 per 1 million tokens.
The “Audio Premium”: Because the model natively ingests audio data without intermediate transcription, a more computationally intensive task, the rate for audio inputs is doubled to $0.50 per 1 million tokens (a quick worked example follows below).
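As a quick worked example of what those rates imply, here is a back-of-the-envelope estimator using the paid-tier prices above. How tokens are counted per modality is defined in Google’s docs; the token totals below are invented.

```python
# Paid-tier rates quoted above; token totals below are invented.
RATE_STANDARD = 0.25 / 1_000_000  # USD per token (text, image, video)
RATE_AUDIO    = 0.50 / 1_000_000  # USD per token (native audio)

def embedding_cost(standard_tokens: int, audio_tokens: int = 0) -> float:
    """Estimated embedding spend in USD for a given token mix."""
    return standard_tokens * RATE_STANDARD + audio_tokens * RATE_AUDIO

# Re-indexing a 40M-token document archive plus 10M tokens of audio:
print(f"${embedding_cost(40_000_000, 10_000_000):.2f}")  # -> $15.00
```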
For large-scale deployments on Vertex AI, the pricing follows an enterprise-centric “Pay-as-you-go” (PayGo) model. This allows organizations to pay for exactly what they use across different processing modes:
Flex PayGo: Best for unpredictable, bursty workloads.
Provisioned Throughput: Designed for enterprises that require guaranteed capacity and consistent latency for high-traffic applications.
Batch Prediction: Ideal for re-indexing massive historical archives, where time-sensitivity is lower but volume is extremely high.
By making the model available through these diverse channels and integrating it natively with libraries like LangChain, LlamaIndex, and Weaviate, Google has ensured that the “switching cost” for businesses isn’t just a matter of price, but of operational ease. Whether a startup is building its first RAG-based assistant or a multinational is unifying decades of disparate media archives, the infrastructure is now live and globally accessible.
In addition, the official Gemini API and Vertex AI Colab notebooks, which contain the Python code necessary to implement these features, are licensed under the Apache License, Version 2.0.
The Apache 2.0 license is highly regarded in the tech community because it is “permissive.” It allows developers to take Google’s implementation code, modify it, and use it in their own commercial products without having to pay royalties or “open source” their own proprietary code in return.
How enterprises should respond: migrate to Gemini Embedding 2 or not?
For Chief Data Officers and technical leads, the decision to migrate to Gemini Embedding 2 hinges on the transition from a “text-plus” strategy to a “natively multimodal” one.
If your organization currently relies on fragmented pipelines — where images and videos are first transcribed or tagged by separate models before being indexed — the upgrade is likely a strategic necessity.
This model eliminates the “translation tax” of using intermediate LLMs to describe visual or auditory data, a move that partners like Sparkonomy found reduced latency by up to 70% while nearly doubling semantic similarity scores. For businesses managing massive, diverse datasets, this isn’t just a performance boost; it is a structural simplification that reduces the number of points where “meaning” can be lost or distorted.
The effort to switch from a text-only foundation is lower than one might expect due to what early users describe as excellent “API continuity”.
Because the model integrates with industry-standard frameworks like LangChain, LlamaIndex, and Vector Search, it can often be “dropped into” existing workflows with minimal code changes. However, the real cost and energy investment lies in re-indexing. Moving to this model requires re-embedding your existing corpus to ensure all data points exist in the same 3,072-dimensional space.
While this is a one-time computational hurdle, it is the prerequisite for unlocking cross-modal search—where a simple text query can suddenly “see” into your video archives or “hear” specific customer sentiment in call recordings.
The primary trade-off for data leaders to weigh is the balance between high-fidelity retrieval and long-term storage economics. Gemini Embedding 2 addresses this directly through Matryoshka Representation Learning (MRL), which allows you to truncate vectors from 3,072 dimensions down to 768 without a linear drop in quality.
This gives CDOs a tactical lever: you can choose maximum precision for high-stakes legal or medical discovery—as seen in Everlaw’s 20% lift in recall—while utilizing smaller, more efficient vectors for lower-priority recommendation engines to keep cloud storage costs in check.
Ultimately, the ROI is found in the “lift” of accuracy; in a landscape where an AI’s value is defined by its context, the ability to natively index a 6-page PDF or 128 seconds of video directly into a knowledge base provides a depth of insight that text-only models simply cannot replicate.
Metropolis, Fritz Lang’s iconic silent German production from 1927, continues to cast a long shadow over the science fiction genre nearly a century after its release. Many people consider it the foundational work of the genre. Its cityscapes, characters, and concepts reappear in subsequent stories, ranging from towering dystopias to gnawing conflicts between humans and robots.
Lang constructs a world split cleanly in two, with a privileged elite living lavishly in gleaming towers high above the city while the working class toils away in the gloomy depths below, keeping the machines alive at great personal cost. Into this divide steps Freder, the son of the city’s all-powerful master, who ventures underground for the first time, falls for a kind and idealistic worker named Maria, and gets a brutal firsthand look at just how punishing life is down there. Things take a darker turn when Rotwang, a brilliant but dangerous inventor, builds a robot in Maria’s likeness and unleashes it on the masses to sow discord and keep the lower classes firmly under the thumb. What follows is mayhem on a grand scale, including a flood that threatens to swallow the entire underground city whole, and it takes Freder stepping forward as an unlikely peacemaker to finally pull things back from the brink.
Lang’s ambitions were quite high-tech for the time. He was inspired by his trip to New York and saw buildings as emblems of power. The sets combined Art Deco elements with Gothic shadows and a variety of futuristic gadgets. The way the workers were choreographed to move in perfect synchrony like the components of a gigantic clock was also rather impressive for a film from that era. To achieve all of the special effects, the crew used a variety of techniques such as miniatures, reflections, and creative lighting. The robot, a sleek, mechanical creature with a variety of human-like gestures, was a piece of art that grabbed viewers from the start.
Lang’s ideas are still relevant today, depicting how the privileged live in a bubble, disconnected from the people who keep the system running. The machines promise advancement, but all they accomplish is to transform people into extensions of themselves. The robot raises pointed questions about control, deception, and what truly defines someone as real. These ideas are more than just leftovers of the industrial past; they remain crucial to our current arguments about automation and inequality.
Metropolis has inspired generations of films, as evidenced by Blade Runner’s rainy streets and towering skyscrapers, as well as the golden protocol droid in Star Wars. The Matrix took up the concept of underground toil and people gradually waking up to their controlled reality. Directors and artists have borrowed Lang’s vertical cityscapes, with elegant gardens above and oppressive darkness below, in a variety of media, including movies and music videos.
What makes Metropolis feel so urgent even now is that the story it tells has never really gone out of date. A world carved up between those who have everything and those who have nothing, locked in a state of uneasy tension, is hardly a difficult concept to relate to in 2026. Set in a future that in many ways has already arrived, the film is a sharp reminder of how easily technology can widen the gap between people rather than close it. Lang saw a society where machines amplify our worst instincts rather than our best ones, and that particular warning feels more relevant than ever in an age of artificial intelligence and mass surveillance. Metropolis may not have predicted every twist the future had in store, but it shaped the way generations of people have imagined tomorrow, and that kind of influence doesn’t fade easily. [Source]
Capturing smooth, cinematic footage on the move normally means carrying bulky camera gear, but compact creator tools are getting surprisingly capable, especially when a strong discount makes them easier to justify.
We awarded the Pocket 3 4.5 stars in our review, noting: “If you’re looking for a vlogging camera that makes it as easy to record smooth 4K footage for TikTok and Instagram as YouTube, the DJI Osmo Pocket 3 is ideal”.
The Creator Combo bundle expands its usefulness by including a DJI Mic 2 transmitter, letting vloggers record clearer dialogue without relying entirely on the camera’s built-in microphones.
Accessories such as a mini tripod, battery handle and protective case make the DJI Osmo Pocket 3 Creator Combo more practical straight out of the box, particularly for travel, interviews or spontaneous filming sessions.
Back to the camera: at its core sits a 1-inch CMOS sensor capable of recording 4K video at up to 120 frames per second, which gives creators more flexibility when capturing fast movement or slowing footage down smoothly during editing.
That sensor size matters because it gathers more light than smaller action camera sensors, helping footage retain clearer detail and colour when shooting indoors, at sunset or during unpredictable lighting conditions.
Another highlight is the integrated three-axis mechanical stabilisation system, which works to counter small shakes and walking motion so clips remain steady even when filming handheld while moving through crowded streets or uneven paths.
The rotating two-inch touchscreen plays a bigger role than you might expect, allowing users to switch instantly between horizontal and vertical framing so the same camera can capture YouTube footage one moment and social media clips the next.
Face and object tracking also simplify solo filming since the camera can automatically follow a subject as they move through the frame, keeping the focus steady without needing someone behind the lens.
For creators who want smooth 4K footage, simple subject tracking and portable gear that fits in a jacket pocket, this current discount makes the DJI Osmo Pocket 3 Creator Combo far easier to recommend.
New York City has always been a place that people flock to—to live, to work, to visit, or to play. It’s big and exciting, and there’s almost always something happening: a new play, a new exhibit, or a new restaurant opening.
According to a 2024 report by venture capital firm SignalFire, NYC experienced a tech boom in 2023, becoming the top destination for people relocating with tech jobs, with around 15 percent of them choosing the Big Apple as their destination.
This isn’t the first time the city has seen an influx of technology workers; during the 1990s tech boom, Manhattan’s Flatiron District took off as a hub for high-tech companies, even earning the nickname “Silicon Alley.”
That area has since spread, moving its way downtown to Soho, west to Hudson Yards, and more recently over the bridge(s) and into Brooklyn—specifically Dumbo, the Brooklyn Navy Yard, and Downtown Brooklyn, forming the Brooklyn Tech Triangle.
Dumbo, which stands for “Down under the Manhattan Bridge overpass,” is situated between the Brooklyn and Manhattan Bridges on the East River waterfront. The popular neighborhood has great views of Manhattan and the bridges, and an ever-expanding food and drink scene to keep you fed while working and making time to play.
Where to Stay
60 Furman St., (347) 696-2500
If you’re going to stay in Dumbo, you’re going to want views of the Manhattan skyline, the East River, and the iconic bridges that extend between the two, and 1 Hotel Brooklyn Bridge offers that and more. Yes, there is a gym and spa, but there’s also a rooftop pool, which comes in quite handy on those stupidly hot summer days. James Beard Award–winning restaurateur Jonathan Waxman recently brought his iconic West Village restaurant, Barbuto, to the hotel. On the 10th floor, find Harriet’s Lounge for sushi, bao buns, and wagyu toasts. From 10 pm on Fridays, Saturdays, and Sundays, listen to live DJs spinning sets while you enjoy craft cocktails and the view.
Don’t forget to end the day with a sustainable drink (or two) at Harriet’s Rooftop, just one floor up from the lounge, for more iconic sunset views. The hotel is pet-friendly, and there’s a café serving espresso, fresh-pressed juices, and artisanal and locally sourced snacks. There’s also a farm stand in the lobby daily from 7 am to 4 pm; grab seasonal fruits that, while they may look “ugly,” are perfect in taste, and all part of the hotel’s sustainability mission.
85 Flatbush Ave Ext., (718) 329-9537
About a 10-minute walk to the bridges and Brooklyn waterfront, The Tillary is a slightly more affordable stay for the area, but still boasts a lobby cafe and rooftop garden bar. Featuring pet-friendly rooms and a fully-equipped gym, this hotel is a great option for still being close to the action, but saving a bit more money. The lobby café offers an affordable range of options (think $4 for an English muffin with egg and cheese and up to $14 for a vegetarian wrap), while the rooftop has a variety of sandwiches, salads, and beverages (both n/a and boozy) to keep you from needing to stray too far.
252 Schermerhorn St., (718) 313-3636
Technically in Boerum Hill, bordering Downtown Brooklyn, the Ace Hotel is a boutique hotel with trendy furnishings and warm vibes, plus a fitness center. They feature a rotating artist in residence and DJs spinning in the lobby most weekend nights. For food, there’s Lele’s Roman, featuring a rotating selection of Roman aperitivo bites daily from 5 to 7 pm, or hit them up for breakfast (lots of egg options!), lunch (panini, pizza, salad!), and dinner (pasta! pizza! classic contorni!). Don’t feel like Italian? Try Koju for an omakase experience set to a carefully curated vinyl music program.
Where to Work
68 Jay St., (718) 210-3650
Whether you’re looking for fully enclosed office spaces monthly or long-term, a coworking space, or a conference room, Greendesk has got you covered for a very reasonable price. The space is fully furnished with 24/7 access, high-speed internet, kitchens, and a cleaning service.
Multiple locations
From the SOHO House team, SOHO Works is a network of office spaces; rent a meeting room or use the shared lounge space, plus get access to SOHO member events and amenities. Work at either location—10 Jay Street or 55 Water Street—by the hour or rent by the day.
295 Front St., (347) 414-8782
Located in Vinegar Hill, the Bond Collective has numerous options for you to work, whether you need a dedicated desk, private office, team suite, conference rooms, coworking, or simply a day pass. You’ll have 24/7 access, Wi-Fi, fruits, snacks, and breakfast, plus unlimited printing.
Where to Get Your Coffee
66 Water St., (718) 875-1269
Located on Water Street and open daily from 10 am to 7 pm, this flagship location of the famous chocolatier is where it all began 25 years ago. Here, you’ll find handmade confections, hot chocolate, and ice cream sandwiches. Sample it all, then grab a few things to take with you to share with friends (or not—sharing is overrated).
85 Water St., (718) 797-5026
Almondine has been in Dumbo for over 20 years. Opened by French baker Herve Poussot, this unpretentious bakery thrives on tradition, innovation, and evolution. You’ll feel as though you’ve been transported right to Paris with the fresh bread, croissants, and cakes. They even have a daily lunch special from 12 to 3 pm; choose a half sandwich, then pair it with a soup, salad, cookie, and half-priced drink for only $18.
45 Washington St., (212) 924-7400
Grab a coffee here before strolling down Washington Street (it’s located at one of the most iconic spots for snapping photos of the bridge, so beware of influencers posing in the middle of the street) to the waterfront for a nice break and some fresh air.
Where to Eat
72 Hudson Ave., (718) 522-1018
This is the place you go when you want a relaxed environment with incredible food in cute surroundings. Dining in the outdoor garden is cozy and comforting, while the inside is vintage-inspired and laid back. The menu, while also simple and comforting, is consistent and hits every time.
68 Jay St. #119
Open Tuesday to Friday from 10 am to 2-ish, this unassuming French-style bakery from Ayako Kurokawa is tucked away in the lobby of 68 Jay Street. The pastries, though French in style, are inspired by Kurokawa’s Japanese upbringing. Scones, cookies, cakes, and slices of pie are all served on silver platters, with handwritten labels on blue paper. The gateau basque is a popular item; go early, as they sell out daily.
1 John St., (718) 522-5356
Opened in 2017, Celestine is the kind of spot that feels chill enough to be your neighborhood go-to, while also special enough to go for a celebration. The menu includes thoughtful vegetable-heavy starters and sides, as well as whole branzino and a 14-ounce ribeye. With floor-to-ceiling windows, there’s not a bad seat in the house to enjoy your meal with a view of the East River and all its happenings.
147 Front St.
This intimate, 10-seat chef’s counter offers a tasting menu and à la carte menu, featuring oysters, crudo, and natural wines by the glass. Try the caviar Frito pie: an open bag of Fritos topped with entirely too much caviar and creme fraiche.
1 Front St., (718) 858-4300
Patsy Grimaldi opened the pizzeria in 1990 with his wife, Carol, and sold the business to Frank Ciolli in 1998. Grimaldi is of the Patsy’s of Harlem lineage (Patsy is his uncle, from whom he learned to make pizza at age 12). In 2000, Grimaldi’s moved next door to its original spot, where it continues to sell whole pies baked in a coal-fired oven.
19 Old Fulton St., (718) 596-6700
If you like a side of gossip with your slice, then Juliana’s is the place to go. Patsy and Carol Grimaldi opened Juliana’s in the original Grimaldi’s location at 19 Old Fulton Street in 2012, which caused a stir in the pizza community, since it’s located next door to Grimaldi’s, their previous business. They even got their original coal-fired oven back. Named after Patsy’s mother, Juliana’s serves coal-fired pizza, meatballs, and salads. They also sell four flavors of par-cooked pies to “take & bake” at home. Try an egg cream, a New York City classic of milk, chocolate or vanilla syrup, and seltzer, whisked vigorously until foamy. Grub Street called it the best in the city in 2017.
Disney+ has three new TV shows that really caught my eye in March, and I’m confident there’s something to suit everyone here.
Better yet, a Hulu original has made its way onto this list, so you don’t have to flip through the best streaming services to find some great entertainment. Everything I’ve highlighted here is waiting for you on Disney+, whether you’re down for a gritty Marvel comeback or a Disney Channel classic.
Here’s what you shouldn’t miss in March 2026.
Daredevil: Born Again season 2
Marvel Television’s Daredevil Born Again Season 2 | Stream March 24 on Disney+ – YouTube
Daredevil: Born Again season 2 is finally here, and Marvel fans have plenty to get excited about. I loved the original Netflix series, so it’s been amazing seeing it revamped again over on Disney+ (where you can now watch Daredevil, too).
Streaming writer Tom Power described the above trailer as “pure sensory overload”, hinting that season 2 will be leaning into the chaos. In season 2, we’ll follow Matt Murdock as he tries to fight back from the shadows to tear down Wilson Fisk’s corrupt empire for good.
Hannah Montana 20th Anniversary Special
She’s back! Hannah Montana is getting a special two decades after its Disney Channel debut. Fans of the iconic series won’t want to miss this one-off TV special, filmed in front of a live studio audience.
The Hannah Montana 20th Anniversary Special features an exclusive, in-depth interview with Miley Cyrus, hosted by podcaster Alex Cooper. The conversation will offer “an intimate look at the creation of one of pop culture’s most iconic characters and the lasting impact of the show and character”, according to its official synopsis.
If it’s nostalgia you’re after, look no further than this.
If It’s Tuesday… It’s Murder
If It’s Tuesday It’s Murder | Trailer | Hulu – YouTube
Finally, this Hulu show caught my attention, and I didn’t want it to fall under the radar. You might not have heard of If It’s Tuesday… It’s Murder, but here’s why it’s one to watch.
The new series tells the story of a group of Spanish tourists who head to Lisbon for a holiday. But when one of them is found dead, tensions rise, and the remaining four must figure out what happened before their trip ends.
Is it one of their own, or is it someone else? What shocking truths await this group of mystery fans? Something tells me they’re way out of their depth here, and I can’t wait to see more.
An anonymous reader quotes a report from Ars Technica: Researchers say they have uncovered a takedown-resistant botnet of 14,000 routers and other network devices — primarily made by Asus — that have been conscripted into a proxy network that anonymously carries traffic used for cybercrime. The malware — dubbed KadNap — takes hold by exploiting vulnerabilities that have gone unpatched by their owners, Chris Formosa, a researcher at security firm Lumen’s Black Lotus Labs, told Ars. The high concentration of Asus routers is likely due to botnet operators acquiring a reliable exploit for vulnerabilities affecting those models. He said it’s unlikely that the attackers are using any zero-days in the operation.
The number of infected routers averages about 14,000 per day, up from 10,000 last August, when Black Lotus discovered the botnet. Compromised devices are overwhelmingly located in the US, with smaller populations in Taiwan, Hong Kong, and Russia. One of the most salient features of KadNap is a sophisticated peer-to-peer design based on Kademlia (PDF), a network structure that uses distributed hash tables to conceal the IP addresses of command-and-control servers. The design makes the botnet resistant to detection and takedowns through traditional methods.
[…] Despite the resistance to normal takedown methods, Black Lotus says it has devised a means to block all network traffic to or from the control infrastructure. The lab is also distributing the indicators of compromise to public feeds to help other parties block access. […] People who are concerned their devices are infected can check this page for IP addresses and a file hash found in device logs. To disinfect devices, they must be factory reset. Because KadNap stores a shell script that runs when an infected router reboots, simply restarting the device will result in it being compromised all over again. Device owners should also ensure all available firmware updates have been installed, that administrative passwords are strong, and that remote access has been disabled unless needed.
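For readers who want to automate that check, here is a minimal, hypothetical sketch that scans an exported router log for published indicators of compromise. The IP addresses and file hash below are placeholders only; substitute the real values from Black Lotus Labs’ feed.

```python
# Hypothetical IoC scan of an exported router log. The IP addresses and
# file hash below are placeholders; substitute the real indicators
# published by Black Lotus Labs.
IOC_IPS = {"203.0.113.10", "198.51.100.7"}          # placeholder IPs
IOC_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}   # placeholder hash

def scan_log(path: str) -> list[str]:
    """Return log lines that mention any known indicator of compromise."""
    hits = []
    with open(path, errors="ignore") as fh:
        for lineno, line in enumerate(fh, 1):
            if any(ioc in line for ioc in IOC_IPS | IOC_HASHES):
                hits.append(f"line {lineno}: {line.strip()}")
    return hits

if __name__ == "__main__":
    for hit in scan_log("router_syslog.txt"):
        print(hit)
```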
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) ordered government agencies on Wednesday to patch their systems against an actively exploited n8n vulnerability.
n8n is an open-source workflow automation platform widely used in AI development for automating data ingestion, with over 50,000 weekly downloads on the npm registry and over 100 million pulls on Docker Hub.
As an automation hub, n8n often stores a wide range of highly sensitive data, including API keys, database credentials, OAuth tokens, cloud storage access credentials, and CI/CD secrets, making it an extremely attractive target for threat actors.
Tracked as CVE-2025-68613, this remote code execution vulnerability allows authenticated attackers to execute arbitrary code on vulnerable servers with the privileges of the n8n process.
“n8n contains an improper control of dynamically managed code resources vulnerability in its workflow expression evaluation system that allows for remote code execution,” CISA said.
“Successful exploitation may lead to full compromise of the affected instance, including unauthorized access to sensitive data, modification of workflows, and execution of system-level operations,” the n8n team added.
The n8n team addressed CVE-2025-68613 in December with the release of n8n v1.122.0 and also advised IT administrators to apply the patch immediately. Admins who can’t immediately upgrade can limit workflow creation and editing permissions to fully trusted users only, and restrict operating system privileges and network access as temporary mitigation measures to reduce the impact of potential exploitation.
CISA added the vulnerability to its Known Exploited Vulnerabilities (KEV) catalog on Wednesday and ordered Federal Civilian Executive Branch (FCEB) agencies to patch their n8n instances by March 25, as mandated by a binding operational directive (BOD 22-01) issued in November 2021.
“This type of vulnerability is a frequent attack vector for malicious cyber actors and poses significant risks to the federal enterprise,” CISA warned.
“Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.”
Although BOD 22-01 applies only to federal agencies, CISA has encouraged all network defenders to secure their systems against ongoing CVE-2025-68613 attacks as soon as possible.
Since the start of the year, the n8n security team has addressed several other severe vulnerabilities, including one dubbed Ni8mare that allows remote attackers without privileges to hijack unpatched n8n servers.