Tech

SQLi flaw in Elementor Ally plugin impacts 250k+ WordPress sites

An SQL injection vulnerability in Ally, a WordPress plugin from Elementor for web accessibility and usability with more than 400,000 installations, could be exploited to steal sensitive data without authentication.

The security issue, tracked as CVE-2026-2313, received a high severity score. It was discovered by Drew Webber (mcdruid), an offensive security engineer at Acquia, a software-as-a-service company that provides an enterprise-level Digital Experience Platform (DXP).

SQL injection flaws have been around for more than 25 years and continue to be a threat today, despite being well understood and technically easy to fix and avoid. This type of security issue occurs when user input is directly inserted into an SQL database query without proper sanitization or parameterization.

This allows an attacker to inject SQL commands that alter the query’s behavior to read, modify, or delete information in the database.
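As an illustration of the bug class (not the Ally plugin's actual code), here is a minimal Python sketch using the standard-library sqlite3 module; the table, data, and payload are invented for the demo:

```python
import sqlite3

# Toy in-memory database; the schema and data are invented for this demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "nobody' OR '1'='1"  # a classic injection payload

# UNSAFE: the input is concatenated straight into the SQL string, so the
# quote in the payload closes the literal and the OR clause matches all rows.
unsafe = f"SELECT secret FROM users WHERE name = '{user_input}'"
leaked = conn.execute(unsafe).fetchall()   # data exposed despite no valid name

# SAFE: a parameterized query binds the input as data, never as SQL syntax.
rows = conn.execute("SELECT secret FROM users WHERE name = ?",
                    (user_input,)).fetchall()  # no match, no injection
```

The fix for this class of flaw amounts to the second pattern: binding user input as a parameter instead of concatenating it into the query.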

CVE-2026-2313 affects all Ally versions up to 4.0.3 and lets an unauthenticated attacker inject SQL queries via the URL path, due to improper handling of a user-supplied URL parameter in a critical function.

“This is due to insufficient escaping on the user-supplied URL parameter in the `get_global_remediations()` method, where it is directly concatenated into an SQL JOIN clause without proper sanitization for SQL context,” reads a technical analysis from Wordfence.

“While `esc_url_raw()` is applied for URL safety, it does not prevent SQL metacharacters (single quotes, parentheses) from being injected.

“This makes it possible for unauthenticated attackers to append additional SQL queries into already existing queries that can be used to extract sensitive information from the database via time-based blind SQL injection techniques,” the researchers explain.

Wordfence notes that exploiting the vulnerability is possible only if the plugin is connected to an Elementor account and its Remediation module is active.

The security firm validated the flaw and disclosed it to the vendor on February 13. Elementor fixed the flaw in version 4.1.0 (latest), released on February 23, and an $800 bug bounty was awarded to the researcher.

Data from WordPress.org shows that only about 36% of websites using the Ally plugin have upgraded to version 4.1.0, leaving more than 250,000 sites vulnerable to CVE-2026-2313.

In addition to upgrading Ally to version 4.1.0, site owners and administrators are also advised to install the latest security update for WordPress, released yesterday.

WordPress 6.9.2 addresses 10 vulnerabilities, including cross-site scripting (XSS), authorization bypass, and server-side request forgery (SSRF) flaws. Users are advised to install the new version of the platform “immediately.”

A Radio Power Amplifier For Not A Lot

When building a radio transmitter, unless it’s a very small one indeed, there’s a need for an amplifier before the antenna. This is usually referred to as the power amplifier, or PA. How big your PA is depends on your idea of power, but at the lower end of the power scale a PA can be quite modest. QRP, as low-power radio is known, uses transmit powers in the milliwatts or single-figure watts. [Guido] is here with a QRP PA that delivers about a watt from 1 to 30 MHz, is made from readily available parts, and costs very little.

Inspired by a circuit from [Harry Lythall], the prototype is built on a piece of stripboard. It gets away with using cheap transistors without heatsinking because it’s a class C design. In other words, it’s in no way linear; instead it’s efficient, but creates harmonics and can’t be used for all modes of transmission. This PA will need a low-pass filter to avoid spraying the airwaves with spurious emissions, and on the bands it’s designed for, it’s suitable for CW, or Morse, only.

We like it though, as it’s proof that building radios can still be done without a large bank balance. Meanwhile if the world of QRP interests you, it’s something we have explored in the past.

Why Falling Cats Always Seem To Land On Their Feet

An anonymous reader quotes a report from the New York Times: In a paper, published last month in the journal The Anatomical Record, researchers offered a novel take on falling felines. Their evidence offers new insights into the so-called falling cat problem, particularly that cats have a very flexible segment of their spines that allows them to correct their orientation midair. […] People have been curious about falling cats perhaps as long as the animals have been living with humans, but the mechanism behind their acrobatic abilities remains enigmatic. Part of the difficulty is that the anatomy of the cat has not been studied in detail, explains Yasuo Higurashi, a physiologist at Yamaguchi University in Japan and lead author of the study. […]

Modern research has split the falling cat problem into two competing models. The first, “legs in, legs out,” suggests that cats correct their falling trajectory by first extending their hind limbs before retracting them, using a sequential twist of their upper and then lower trunk to gain the proper posture while in free fall. The second model, “tuck and turn,” suggests that cats turn their upper and lower bodies in simultaneous juxtaposed movements. […]

The researchers found that the feline spine was extremely flexible in the upper thoracic vertebrae, but stiffer and heavier in the lower lumbar vertebrae. The discovery matches video evidence showing the cats first turn their front legs, and then their lower legs. The results suggest the cat quickly spins its flexible upper torso to face the ground, allowing it to see so that it can correctly twist the rest of its body to match. “The thoracic spine of the cat can rotate like our neck,” Dr. Higurashi said.

Experiments on the spine show the upper vertebrae can twist an astounding 360 degrees, he says, which helps cats make these correcting movements with ease. The results are consistent with the “legs in, legs out” model, but definitively determining which model is correct will take more work, Dr. Higurashi says. The results also yielded another discovery: Cats, like many animals, appear to have a right-side bias. One of the dropped cats corrected itself by turning to the right eight out of eight times, while the other turned right six out of eight times.

IEEE Launches Global Virtual Career Fairs

Last year IEEE launched its first virtual career fair to help strengthen the engineering workforce and connect top talent with industry professionals. The event, which was held in the United States, attracted thousands of students and professionals. They learned about more than 500 job opportunities in high-demand fields including artificial intelligence, semiconductors, and power and energy. They also gained access to career resources.

Hosted by IEEE Industry Engagement, the event marked a milestone in the organization’s expanding workforce development efforts to bridge the gap between academic training and industry needs while bolstering the technical talent pipeline, says Jessica Bian, 2025 chair of the IEEE Industry Engagement Committee (IEC). The IEC works to strengthen the connection with industry professionals, companies, and technology sectors through global career fairs, as well as its Industry Newsletter, AI-powered career guidance tools, and World Technology Summits, where industry leaders discuss solutions to societal challenges.

“We are bringing together companies, universities, and young professionals to help meet the demand for technical talent in critical sectors,” Bian says. “It is part of our commitment to preparing the next generation of innovators.”

The virtual career fairs are expanding to more IEEE regions this year. One was held last month for Region 9 (Latin America). One is scheduled next month for Region 8 (Europe, Middle East, and Africa) and another in May for Region 7 (Canada).

A global career fair is slated for June.

Registration information for all the fairs is available at careerfair.ieee.org.

Innovative recruitment events

The fairs, which use the vFairs virtual platform, provide interactive sessions with representatives from hiring companies, direct chats with recruiters, video interviews, and access to downloadable job resources. The features help remove geographic barriers and increase visibility for employers and job seekers.

The career fair platform features interactive engagement tools including networking roundtables, a live activity feed, a leaderboard, and a virtual photobooth to encourage participants to remain active throughout the day.

Bringing together thousands of professionals

STEM students participated in the U.S. and Latin America events, along with early-career professionals and seasoned engineers—almost 8,000 participants in all. They represented diverse fields including software engineering, AI, semiconductors, and power systems.

Siemens, Burns & McDonnell, and Morgan Stanley were among the dozens of companies that participated in the U.S. event. More than 500 internships, co-op opportunities, and full-time positions were promoted.

“I found the overall process highly efficient and the platform intuitive—which made for a great sourcing experience,” said a recruiter from Burns & McDonnell, a design and construction firm. “I was able to join a session, short-list several high-potential candidates, review their résumés, and initiate contact with a couple of them.

“I am optimistic that we will be able to extend at least one offer from this pipeline.”

Participating students described the fair as impactful.

“I gained valuable hiring insights from industry leaders, like Siemens, TRC Companies, and Schweitzer Engineering Laboratories,” said Michael Dugan, an electrical and computer engineering graduate student at Rice University, in Houston.

New tools elevating the candidate experience

Attendees had access to AI-guided job-matching tools and career development programs and resources.

Prior to the fair, registrants could use the IEEE Career Guidance Counselor, an AI-powered career advisor. The ICGC tool analyzes candidates’ skills and experience to suggest aligned roles and provides tailored professional development plans.

The ICGC also makes personalized recommendations for mentors, job opportunities, training resources, and career pathways.

Pre-event workshops and mock interview sessions helped participants refine their résumés, strengthen interview strategies, and manage expectations. They also provided tips on how to engage with recruiters.

During the Future Ready Engineers: Essential Skills and Networking Strategies to Stand Out at a Career Fair workshop, Shaibu Ibrahim, a senior electrical engineer and member of IEEE Young Professionals, shared networking strategies for career fairs and industry events as well as tips on preparation, engagement, and effective follow-up.

“The workshop offered advice that shaped my approach to the fair,” Dugan said. “It truly helped manage expectations and maximize my preparation.”

Learning more about IEEE

To help participants learn about IEEE and its volunteering opportunities, its societies and councils set up roundtables and technical community booths at the fairs. They were hosted by IEEE Technical Activities, IEEE Future Networks, and the IEEE Signal Processing Society.

“While exploring volunteer opportunities, I was excited to learn about IEEE Future Networks,” Dugan said. “Connecting with dedicated IEEE members, like Craig Polk, was a definite highlight.” Polk is an IEEE senior member and a senior program manager for IEEE Future Networks.

A commitment to career development

IEEE created the career fairs as free, accessible platforms for employers and job seekers, serving as a trusted bridge between companies seeking top technical talent and members dedicated to advancing their careers. It is our responsibility to support them by connecting them with meaningful career opportunities.

In today’s unpredictable job landscape, IEEE is stepping up to help our talented members navigate change, build resilience, and connect with future employers.

Google Play will let you try a game before you buy it

Google Play has introduced a new feature called Game Trials, which will let you play a portion of paid games for free before you commit to buying them. It’s now rolling out to select paid games on mobile, and it’s coming soon to Google Play Games on PC. Titles that offer Game Trials will show a button marked “Try” on their profile pages. When you click it, you’ll see how long you can play the game before you have to buy it. In Google’s example, the survival and horror game Dredge will give you 60 minutes of free play time, after which you’ll get the option to either buy the game or delete it from your device.

Google has also announced that it’s releasing more paid indie games over the coming months, including Moonlight Peaks, Sledding Game and Low-Budget Repairs. It has launched a new section in the Play store, as well, to feature games optimized for Windows PCs. You can wishlist the games from that section to get a notification when they’re on sale.

Finally, the company is rolling out Play Games Sidekick, the Gemini-powered Android overlay it announced last year, to select games downloaded from Play. Sidekick can show you relevant info and tools for whatever game you’re playing without having to do a search query. But if you’d rather ask other people for gaming advice instead of an AI, you can also look at a game’s Community Posts, a feature now available in English for select titles on their Play pages.

Atlassian layoffs impact 63 workers in Washington as CTO steps down

Rajeev Rajan. (LinkedIn Photo)

Enterprise collaboration software giant Atlassian is laying off 63 workers in Washington, according to a WARN notice filed with state regulators.

Atlassian announced Wednesday that it will lay off about 10% of its staff, or 1,600 employees, as the 24-year-old software firm transitions to an “AI-first company.” Atlassian CEO Mike Cannon-Brookes wrote that AI is changing the mix of skills and number of roles required in certain areas.

“This is primarily about adaptation,” he said. “We are reshaping our skill mix and changing how we work to build for the future.”

Atlassian opened an office in Bellevue, Wash., in 2024. The WARN notice indicates that nearly all the employees affected by layoffs in Washington state are remote workers. About half of the affected workers are in engineering or data science roles.

The company also announced Wednesday that CTO Rajeev Rajan, who is based in the Seattle region, will step down after nearly four years with Atlassian. “Atlassian is thankful for Mr. Rajan’s many contributions in building a world-class R&D organization and congratulates the promotion of next generation AI talent in Taroon Mandhana (CTO Teamwork) and Vikram Rao (CTO Enterprise and Chief Trust Officer),” the company wrote in a SEC filing.

Rajan was previously a VP of engineering at Meta and led the company’s Pacific Northwest engineering hub. He also spent more than two decades at Microsoft in various leadership roles.

Several tech companies have cut staff in the Seattle area this year, including Amazon, Expedia, T-Mobile, and Smartsheet. Many corporations are slashing headcount to address pandemic-fueled corporate “bloat” while juggling economic uncertainty and impact from AI tools.

The recent rise of AI tools has also spooked investors, with some software stocks taking a hit. Atlassian shares are down more than 50% this year.

BlueSG to relaunch its car-sharing service as Flexar in 2026

Published

on

The services will be rolled out under a new brand, Flexar

Singapore car-sharing company BlueSG is preparing to roll out a new service under a new brand, Flexar.

In comments to CNA, BlueSG confirmed that Flexar is currently in the “beta phase” of its shared car mobility service. It is slated to launch later this year.

The move comes around seven months after the company ceased its operations on Aug 8, 2025 and let go of staff.

The new brand will have the same operating concept, which allows users to pick up a car from a station near them and drop it off at another location in Singapore.

Flexar is recruiting early testers

Image Credit: BlueSG

In response to The Straits Times, a BlueSG spokesperson shared that the team behind Flexar is currently focused on “testing and refining a range of exciting new offerings designed to enable flexible urban mobility.”

The new service will introduce a revamped platform, a refreshed fleet featuring a different mix of vehicles, and an expanded network of pick-up and drop-off points. It is also expected to deliver “greater reliability and a smoother user experience.”

However, the spokesperson declined to share further details, such as pricing or the total number of pick-up and drop-off points, until the official launch.

Between Jan and Mar 2026, BlueSG has also been hiring for several roles, including automotive technicians, an operations manager, and customer service agents, across various job portals.

On Mar 9, the company reached out to its community to recruit early testers ahead of the official launch. The invitation, shared in a BlueSG Telegram user group, asked interested participants to complete a questionnaire. Shortlisted users will be able to try the revamped service and provide feedback.

Screengrab from the Flexar website

Flexar has also launched a website, with more details listed as “coming soon.”

“Access reliable cars when you need them, where you need them. No ownership hassles, no long-term commitments, just seamless A-to-B journeys across Singapore,” the company wrote on its homepage.

BlueSG ceased operations & laid off staff in Aug 2025

Back in Aug 2025, BlueSG announced a “pause” to its services and retrenched the majority of its employees shortly after.

At the time, the company said it planned to return with an “upgraded” service powered by “advanced technology, deep expertise, and enhanced operational capabilities.”

The overhaul was driven by the company’s observations of changes in Singapore’s car-sharing landscape and the opportunity to scale its user base.

Since the pause, BlueSG’s fleet of around 190 electric Opel Corsa‑e hatchbacks has been sold to car dealers or listed on used-car marketplace Sgcarmart.

Meanwhile, about 790 units of the purpose-built Blue Car were scrapped after the Land Transport Authority did not permit the vehicles to be transferred for uses outside the electric car-rental trial scheme. The vehicles were initially expected to be sold to Tribecar, another car-sharing operator in Singapore.

BlueSG’s pause also had ripple effects across the industry. French energy giant TotalEnergies, which previously served as BlueSG’s main charging infrastructure provider, exited Singapore’s EV charging market. By the end of 2025, it had transferred its network of more than 1,400 public charging points to other operators.

BlueSG was first launched in 2017 under the EV car-sharing programme by the Land Transport Authority. It was initially a subsidiary of the French Bolloré Group, but in 2021, the service was acquired by Singapore-based Goldbell Group.

Featured Image Credit: Wirestock Creators via Shutterstock.com/ Flexar

Google’s Gemini Embedding 2 arrives with native multimodal support to cut costs and speed up your enterprise data stack

Published

on

Yesterday amid a flurry of enterprise AI product updates, Google announced arguably its most significant one for enterprise customers: the public preview availability of Gemini Embedding 2, its new embeddings model — a significant evolution in how machines represent and retrieve information across different media types.

While previous embedding models were largely restricted to text, this new model natively integrates text, images, video, audio, and documents into a single numerical space — reducing latency by as much as 70% for some customers and reducing total cost for enterprises who use AI models powered by their own data to complete business tasks.

VentureBeat collaborator Sam Witteveen, co-founder of AI and ML training company Red Dragon AI, received early access to Gemini Embedding 2 and published a video of his impressions on YouTube. Watch it below:

Who needs and uses an embedding model?

For those who have encountered the term “embeddings” in AI discussions but find it abstract, a useful analogy is that of a universal library.

In a traditional library, books are organized by metadata: author, title, or genre. In the “embedding space” of an AI, information is organized by ideas.

Imagine a library where books aren’t organized by the Dewey Decimal System, but by their “vibe” or “essence”. In this library, a biography of Steve Jobs would physically fly across the room to sit next to a technical manual for a Macintosh. A poem about a sunset would drift toward a photography book of the Pacific Coast, with all thematically similar content organized in beautiful hovering “clouds” of books. This is basically what an embedding model does.

An embedding model takes complex data—like a sentence, a photo of a sunset, or a snippet of a podcast—and converts it into a long list of numbers called a vector.

These numbers represent coordinates in a high-dimensional map. If two items are “semantically” similar (e.g., a photo of a golden retriever and the text “man’s best friend”), the model places their coordinates very close to each other in this map. Today, these models are the invisible engine behind:

  • Search Engines: Finding results based on what you mean, not just the specific words you typed.

  • Recommendation Systems: Netflix or Spotify suggesting content because its “coordinates” are near things you already like.

  • Enterprise AI: Large companies use them for Retrieval-Augmented Generation (RAG), where an AI assistant “looks up” a company’s internal PDFs to answer an employee’s question accurately.
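The “coordinates” idea can be made concrete with a toy example. The vectors below are invented and tiny (real embeddings have thousands of dimensions), but the nearest-neighbour logic is the same one search and recommendation systems rely on:

```python
import numpy as np

# Toy 3-dimensional "embeddings". Real models such as Gemini Embedding 2
# use thousands of dimensions; these values are invented for illustration.
vecs = {
    "a photo of a golden retriever": np.array([0.9, 0.1, 0.0]),
    "the text 'man's best friend'": np.array([0.8, 0.2, 0.1]),
    "a quarterly revenue spreadsheet": np.array([0.0, 0.1, 0.95]),
}

def cosine(u, v):
    """Cosine similarity: near 1.0 means 'same direction', near 0 unrelated."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Query with the dog photo's vector and rank the *other* items by closeness.
query = vecs["a photo of a golden retriever"]
best = max((k for k in vecs if "photo" not in k),
           key=lambda k: cosine(query, vecs[k]))
# `best` is the semantically nearest item: the "man's best friend" text.
```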

The concept of mapping words to vectors dates back to the 1950s with linguists like John Rupert Firth, but the modern “vector revolution” began in the early 2000s when Yoshua Bengio’s team first used the term “word embeddings”. The real breakthrough for the industry was Word2Vec, released by a team at Google led by Tomas Mikolov in 2013. Today, the market is led by a handful of major players:

  • OpenAI: Known for its widely-used text-embedding-3 series.

  • Google: With the new Gemini and previous Gecko models.

  • Anthropic and Cohere: Providing specialized models for enterprise search and developer workflows.

By moving beyond text to a natively multimodal architecture, Google is attempting to create a singular, unified map for the sum of human digital expression—text, images, video, audio, and documents—all residing in the same mathematical neighborhood.

Why Gemini Embedding 2 is such a big deal

Most leading models are still “text-first.” If you want to search a video library, the AI usually has to transcribe the video into text first, then embed that text.

Google’s Gemini Embedding 2 is natively multimodal.

As Logan Kilpatrick of Google DeepMind posted on X, the model allows developers to “bring text, images, video, audio, and docs into the same embedding space”.

It understands audio as sound waves and video as motion directly, without needing to turn them into text first. This reduces “translation” errors and captures nuances that text alone might miss.

For developers and enterprises, the “natively multimodal” nature of Gemini Embedding 2 represents a shift toward more efficient AI pipelines.

By mapping all media into a single 3,072-dimensional space, developers no longer need separate systems for image search and text search; they can perform “cross-modal” retrieval—using a text query to find a specific moment in a video or an image that matches a specific sound.

And unlike its predecessors, Gemini Embedding 2 can process requests that mix modalities. A developer can send a request containing both an image of a vintage car and the text “What is the engine type?”. The model doesn’t process them separately; it treats them as a single, nuanced concept. This allows for a much deeper understanding of real-world data where the “meaning” is often found in the intersection of what we see and what we say.

One of the model’s more technical features is Matryoshka Representation Learning. Named after Russian nesting dolls, this technique allows the model to “nest” the most important information in the first few numbers of the vector.

An enterprise can choose to use the full 3072 dimensions for maximum precision, or “truncate” them down to 768 or 1536 dimensions to save on database storage costs with minimal loss in accuracy.
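A sketch of what Matryoshka-style truncation looks like on the consumer side, assuming (as is typical for embedding models) that vectors are unit-normalized; the random vector here merely stands in for a real 3,072-dimension embedding:

```python
import numpy as np

rng = np.random.default_rng(0)
full = rng.normal(size=3072)      # stand-in for a 3,072-dimension embedding
full /= np.linalg.norm(full)      # embeddings are typically unit-length

def truncate(vec, dims):
    """Keep only the leading `dims` coordinates, then re-normalize so
    cosine similarity still behaves sensibly."""
    short = vec[:dims]
    return short / np.linalg.norm(short)

v768 = truncate(full, 768)     # 4x less storage per vector
v1536 = truncate(full, 1536)   # 2x less
```

Because the most important information is packed into the leading coordinates, the shortened vectors lose relatively little retrieval accuracy.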

Benchmarking the performance gains of moving to multimodal

Gemini Embedding 2 establishes a new performance ceiling for multimodal depth, specifically outperforming previous industry leaders across text, image, and video evaluation tasks.

Google Gemini Embedding 2 benchmarks. Credit: Google

The model’s most significant lead is found in video and audio retrieval, where its native architecture allows it to bypass the performance degradation typically associated with text-based transcription pipelines.

Specifically, in video-to-text and text-to-video retrieval tasks, the model demonstrates a measurable performance gap over existing industry leaders, accurately mapping motion and temporal data into a unified semantic space.

The technical results show a distinct advantage in the following standardized categories:

  • Multimodal Retrieval: Gemini Embedding 2 consistently outperforms leading text and vision models in complex retrieval tasks that require understanding the relationship between visual elements and textual queries.

  • Speech and Audio Depth: The model introduces a new standard for native audio embeddings, achieving higher accuracy in capturing phonetic and tonal intent compared to models that rely on intermediate text-transcription.

  • Contextual Scaling: In text-based benchmarks, the model maintains high precision while utilizing its expansive 8,192 token context window, ensuring that long-form documents are embedded with the same semantic density as shorter snippets.

  • Dimension Flexibility: Testing across the Matryoshka Representation Learning (MRL) layers reveals that even when truncated to 768 dimensions, the model retains a significant majority of its 3,072-dimension performance, outperforming fixed-dimension models of similar size.

What it means for enterprise databases

For the modern enterprise, information is often a fragmented mess. A single customer issue might involve a recorded support call (audio), a screenshot of an error (image), a PDF of a contract (document), and a series of emails (text).

In previous years, searching across these formats required four different pipelines. With Gemini Embedding 2, an enterprise can create a Unified Knowledge Base. This enables a more advanced form of RAG, wherein a company’s internal AI doesn’t just look up facts, but understands the relationship between them regardless of format.
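The retrieval half of such a unified knowledge base can be sketched in a few lines: chunks from different modalities sit in one index as (metadata, vector) pairs, and a single query vector ranks them all together. The vectors below are random placeholders; in practice each would come from the embedding model:

```python
import numpy as np

# A toy unified index: chunks from different modalities stored side by side.
rng = np.random.default_rng(1)
index = [
    ({"type": "audio", "ref": "support-call.mp3"}, rng.normal(size=8)),
    ({"type": "image", "ref": "error-screenshot.png"}, rng.normal(size=8)),
    ({"type": "pdf", "ref": "contract.pdf"}, rng.normal(size=8)),
]

def top_k(query_vec, k=2):
    """Rank every chunk, regardless of modality, by cosine similarity."""
    def score(entry):
        _, v = entry
        return float(query_vec @ v /
                     (np.linalg.norm(query_vec) * np.linalg.norm(v)))
    return [meta for meta, _ in sorted(index, key=score, reverse=True)[:k]]

hits = top_k(rng.normal(size=8))  # the most relevant chunks, any modality
```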

Early partners are already reporting drastic efficiency gains:

  • Sparkonomy, a creator economy platform, reported that the model’s native multimodality slashed their latency by up to 70%. By removing the need for intermediate LLM “inference” (the step where one model explains a video to another), they nearly doubled their semantic similarity scores for matching creators with brands.

  • Everlaw, a legal tech firm, is using the model to navigate the “high-stakes setting” of litigation discovery. In legal cases where millions of records must be parsed, Gemini’s ability to index images and videos alongside text allows legal professionals to find “smoking gun” evidence that traditional text-search would miss.

Understanding the limits

In its announcement, Google was upfront about some of the current limitations of Gemini Embedding 2. The new model can vectorize individual inputs of up to 8,192 text tokens, 6 images (in a single batch), 128 seconds of video (2 minutes, 8 seconds), 80 seconds of native audio (1 minute, 20 seconds), and a 6-page PDF.

It is vital to clarify that these are input limits per request, not a cap on what the system can remember or store.

Think of it like a scanner. If a scanner has a limit of “one page at a time,” it doesn’t mean you can only ever scan one page; it means you have to feed the pages in one by one.

  • Individual File Size: You cannot “embed” a 100-page PDF in a single call. You must “chunk” the document—splitting it into segments of 6 pages or fewer—and send each segment to the model individually.

  • Cumulative Knowledge: Once those chunks are converted into vectors, they can all live together in your database. You can have a database containing ten million 6-page PDFs, and the model will be able to search across all of them simultaneously.

  • Video and Audio: Similarly, if you have a 10-minute video, you would break it into 128-second segments to create a searchable “timeline” of embeddings.
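The chunking described above is ordinary batching; a minimal sketch, using the 6-page and 128-second per-request limits quoted in the announcement:

```python
# Plain batching under the per-request limits (6 PDF pages, 128 seconds
# of video per call); nothing here is Gemini-specific.
def chunk(items, size):
    """Split a sequence into consecutive pieces of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

pages = list(range(1, 101))        # a 100-page PDF
page_batches = chunk(pages, 6)     # 17 requests of at most 6 pages each

seconds = list(range(600))         # a 10-minute video, second by second
segments = chunk(seconds, 128)     # 5 embeddable segments
```

Each piece is embedded separately, and the resulting vectors all live together in the database, so cumulative searchable knowledge is unbounded even though any single request is not.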

Licensing, pricing, and availability

As of March 10, 2026, Gemini Embedding 2 is officially in Public Preview.

For developers and enterprise leaders, this means the model is accessible for immediate testing and production integration, though it is still subject to the iterative refinements typical of “preview” software before it reaches General Availability (GA).

The model is deployed across Google’s two primary AI gateways, each catering to a different scale of operation:

  • Gemini API: Targeted at rapid prototyping and individual developers, this path offers a simplified pricing structure.

  • Vertex AI (Google Cloud): The enterprise-grade environment designed for massive scale, offering advanced security controls and integration with the broader Google Cloud ecosystem.

It’s also already integrated with the heavy hitters of AI infrastructure: LangChain, LlamaIndex, Haystack, Weaviate, Qdrant, and ChromaDB.

In the Gemini API, Google has introduced a tiered pricing model that distinguishes between “standard” data (text, images, and video) and “native” audio.

Gemini 2 Embedding pricing on Google Gemini API. Credit: Google

  • The Free Tier: Developers can experiment with the model at no cost, though this tier comes with rate limits (typically 60 requests per minute) and uses data to improve Google’s products.

  • The Paid Tier: For production-level volume, the cost is calculated per million tokens. For text, image, and video inputs, the rate is $0.25 per 1 million tokens.

  • The “Audio Premium”: Because the model natively ingests audio data without intermediate transcription—a more computationally intensive task—the rate for audio inputs is doubled to $0.50 per 1 million tokens.
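
Using the rates quoted above, a back-of-the-envelope cost estimate is straightforward. The sketch below hard-codes the article’s paid-tier numbers; it is illustrative only, and the live pricing page should be consulted before budgeting.

```python
# Paid-tier rates from the article, in USD per 1 million input tokens.
RATE_PER_MILLION = {
    "text": 0.25,
    "image": 0.25,
    "video": 0.25,
    "audio": 0.50,  # native audio ingestion is billed at double the standard rate
}

def embedding_cost(tokens_by_modality):
    """Estimate embedding cost for a dict of {modality: token_count}."""
    return sum(
        tokens * RATE_PER_MILLION[modality] / 1_000_000
        for modality, tokens in tokens_by_modality.items()
    )

# 40M text tokens plus 10M audio tokens: 40 * 0.25 + 10 * 0.50 = $15.00
total = embedding_cost({"text": 40_000_000, "audio": 10_000_000})
```

The audio premium matters at scale: an archive that is even one-third call recordings pays noticeably more per indexed token than a text-only corpus of the same size.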

For large-scale deployments on Vertex AI, the pricing follows an enterprise-centric “Pay-as-you-go” (PayGo) model. This allows organizations to pay for exactly what they use across different processing modes:

  • Flex PayGo: Best for unpredictable, bursty workloads.

  • Provisioned Throughput: Designed for enterprises that require guaranteed capacity and consistent latency for high-traffic applications.

  • Batch Prediction: Ideal for re-indexing massive historical archives, where time-sensitivity is lower but volume is extremely high.

By making the model available through these diverse channels and integrating it natively with libraries like LangChain, LlamaIndex, and Weaviate, Google has ensured that the “switching cost” for businesses isn’t just a matter of price, but of operational ease. Whether a startup is building its first RAG-based assistant or a multinational is unifying decades of disparate media archives, the infrastructure is now live and globally accessible.

In addition, the official Gemini API and Vertex AI Colab notebooks, which contain the Python code necessary to implement these features, are licensed under the Apache License, Version 2.0.

The Apache 2.0 license is highly regarded in the tech community because it is “permissive.” It allows developers to take Google’s implementation code, modify it, and use it in their own commercial products without having to pay royalties or “open source” their own proprietary code in return.

How enterprises should respond: migrate to Gemini 2 Embedding or not?

For Chief Data Officers and technical leads, the decision to migrate to Gemini Embedding 2 hinges on the transition from a “text-plus” strategy to a “natively multimodal” one.

If your organization currently relies on fragmented pipelines — where images and videos are first transcribed or tagged by separate models before being indexed — the upgrade is likely a strategic necessity.

This model eliminates the “translation tax” of using intermediate LLMs to describe visual or auditory data, a move that partners like Sparkonomy found reduced latency by up to 70% while doubling semantic similarity scores. For businesses managing massive, diverse datasets, this isn’t just a performance boost; it is a structural simplification that reduces the number of points where “meaning” can be lost or distorted.

The effort to switch from a text-only foundation is lower than one might expect due to what early users describe as excellent “API continuity”.

Because the model integrates with industry-standard frameworks like LangChain, LlamaIndex, and Vector Search, it can often be “dropped into” existing workflows with minimal code changes. However, the real cost and energy investment lies in re-indexing. Moving to this model requires re-embedding your existing corpus to ensure all data points exist in the same 3,072-dimensional space.

While this is a one-time computational hurdle, it is the prerequisite for unlocking cross-modal search—where a simple text query can suddenly “see” into your video archives or “hear” specific customer sentiment in call recordings.
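
The re-indexing step amounts to one batched pass over the corpus. Here is a minimal sketch of that loop, with a stand-in `embed_batch` stub in place of a real API client (in production this call would go through the Gemini API or Vertex AI SDK):

```python
def embed_batch(texts):
    # Stand-in for a real embedding call; returns placeholder
    # 3,072-dimensional vectors, one per input text.
    return [[0.0] * 3072 for _ in texts]

def reindex(corpus, batch_size=32):
    """Re-embed every document so all vectors share one 3,072-dim space."""
    index = {}
    for i in range(0, len(corpus), batch_size):
        batch = corpus[i:i + batch_size]
        vectors = embed_batch([doc["text"] for doc in batch])
        for doc, vec in zip(batch, vectors):
            index[doc["id"]] = vec
    return index

index = reindex([{"id": n, "text": f"doc {n}"} for n in range(100)])
```

The loop itself is trivial; the cost lies in the embedding calls, which scale linearly with corpus size and are paid once per migration.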

The primary trade-off for data leaders to weigh is the balance between high-fidelity retrieval and long-term storage economics. Gemini Embedding 2 addresses this directly through Matryoshka Representation Learning (MRL), which allows you to truncate vectors from 3072 dimensions down to 768 without a linear drop in quality.

This gives CDOs a tactical lever: you can choose maximum precision for high-stakes legal or medical discovery—as seen in Everlaw’s 20% lift in recall—while utilizing smaller, more efficient vectors for lower-priority recommendation engines to keep cloud storage costs in check.
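
In practice, MRL truncation is just keeping the leading dimensions of the vector and re-normalizing. The sketch below uses the 3,072 and 768 sizes from the article; whether the re-normalization step is needed, or handled server-side, depends on the SDK you use.

```python
import math

def truncate_embedding(vec, dims=768):
    """Keep the leading `dims` components of an MRL vector, then re-normalize
    so cosine similarity remains well-defined on the shorter vector."""
    head = vec[:dims]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

full = [1.0 / math.sqrt(3072)] * 3072   # a unit-length 3,072-dim vector
small = truncate_embedding(full, 768)   # 4x cheaper to store and compare
```

Because MRL orders information from most to least significant dimension, the truncated vector keeps most of the retrieval quality while cutting storage and similarity-search cost by a factor of four.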

Ultimately, the ROI is found in the “lift” of accuracy; in a landscape where an AI’s value is defined by its context, the ability to natively index a 6-page PDF or 128 seconds of video directly into a knowledge base provides a depth of insight that text-only models simply cannot replicate.

Tech

Metropolis (1927) Created The Blueprint For Modern Science Fiction Worlds

Metropolis, an iconic silent German production from 1927 directed by Fritz Lang, continues to cast a long shadow over the science fiction genre nearly a century after its release. Many consider it the foundational work of the genre. Its cityscapes, characters, and concepts reappear in later stories, ranging from towering dystopias to gnawing conflicts between humans and robots.



Lang constructs a world split cleanly in two, with a privileged elite living lavishly in gleaming towers high above the city while the working class toils away in the gloomy depths below, keeping the machines alive at great personal cost. Into this divide steps Freder, the son of the city’s all-powerful master, who ventures underground for the first time, falls for a kind and idealistic worker named Maria, and gets a brutal firsthand look at just how punishing life is down there. Things take a darker turn when Rotwang, a brilliant but dangerous inventor, builds a robot in Maria’s likeness and unleashes it on the masses to sow discord and keep the lower classes firmly under control. What follows is mayhem on a grand scale, including a flood that threatens to swallow the entire underground city, and it takes Freder stepping forward as an unlikely peacemaker to finally pull things back from the brink.


Lang’s ambitions were remarkably advanced for the time. Inspired by a trip to New York, he saw buildings as emblems of power. The sets combined Art Deco elements with Gothic shadows and a variety of futuristic gadgets. The choreography of the workers, who move in perfect synchrony like the components of a gigantic clock, was also impressive for a film of that era. To achieve the special effects, the crew used techniques such as miniatures, reflections, and creative lighting. The robot, a sleek mechanical figure with eerily human-like gestures, was a piece of art that grabbed viewers from the start.

Lang’s themes remain relevant today: the privileged live in a bubble, disconnected from the people who keep the system running; the machines promise progress, but all they accomplish is turning people into extensions of themselves; and the robot raises thorny questions about control, deception, and what truly makes someone real. These ideas are more than leftovers of the industrial past; they remain central to our current arguments about automation and inequality.

Metropolis has inspired generations of films, as evidenced by Blade Runner’s rainy streets and towering skyscrapers, as well as the golden protocol droid in Star Wars. The Matrix took the entire concept of underground toil and people gradually waking up to their controlled reality. Directors and artists have borrowed Lang’s vertical cityscapes, with elegant gardens above and depressing blackness below, across media ranging from movies to music videos.

What makes Metropolis feel so urgent even now is that the story it tells has never really gone out of date. A world carved up between those who have everything and those who have nothing, locked in a state of uneasy tension, is hardly a difficult concept to relate to in 2026. Set in a future that in many ways has already arrived, the film is a sharp reminder of how easily technology can widen the gap between people rather than close it. Lang saw a society where machines amplify our worst instincts rather than our best ones, and that particular warning feels more relevant than ever in an age of artificial intelligence and mass surveillance. Metropolis may not have predicted every twist the future had in store, but it shaped the way generations of people have imagined tomorrow, and that kind of influence doesn’t fade easily.

Tech

DJI Osmo Pocket 3 Creator Combo gets a 19% discount in Amazon’s sale event

Capturing smooth, cinematic footage on the move normally means carrying bulky camera gear, but compact creator tools are getting surprisingly capable, especially when a strong discount makes them easier to justify.

And with the DJI Osmo Pocket 3 Creator Combo now £405, down from £499, that growing appeal becomes even clearer for creators who want smooth footage without carrying a full camera kit.


We awarded the Pocket 3 4.5 stars in our review, noting: “If you’re looking for a vlogging camera that makes it as easy to record smooth 4K footage for TikTok and Instagram as YouTube, the DJI Osmo Pocket 3 is ideal”.

The Creator Combo bundle expands its usefulness by including a DJI Mic 2 transmitter, letting vloggers record clearer dialogue without relying entirely on the camera’s built-in microphones.

Accessories such as a mini tripod, battery handle and protective case make the DJI Osmo Pocket 3 Creator Combo more practical straight out of the box, particularly for travel, interviews or spontaneous filming sessions.

Back to the camera: at its core sits a 1-inch CMOS sensor capable of recording 4K video at up to 120 frames per second, which gives creators more flexibility when capturing fast movement or slowing footage down smoothly during editing.

That sensor size matters because it gathers more light than smaller action camera sensors, helping footage retain clearer detail and colour when shooting indoors, at sunset or during unpredictable lighting conditions.

Another highlight is the integrated three-axis mechanical stabilisation system, which works to counter small shakes and walking motion so clips remain steady even when filming handheld while moving through crowded streets or uneven paths.

The rotating two-inch touchscreen plays a bigger role than you might expect, allowing users to switch instantly between horizontal and vertical framing so the same camera can capture YouTube footage one moment and social media clips the next.

Face and object tracking also simplify solo filming since the camera can automatically follow a subject as they move through the frame, keeping the focus steady without needing someone behind the lens.

For creators who want smooth 4K footage, simple subject tracking and portable gear that fits in a jacket pocket, this current discount makes the DJI Osmo Pocket 3 Creator Combo far easier to recommend.

Tech

vivo Y51 Pro 5G Launched in India With 7,200mAh Battery, Dimensity 7360 Turbo

vivo has been on a roll recently with flagships like the X300 Pro and the X200T. Now, to cater to mid-range buyers, the Chinese smartphone maker has expanded its Y-series lineup in India with the launch of the vivo Y51 Pro 5G. The device comes with a massive 7,200mAh battery, a MediaTek Dimensity processor, and IP68/IP69 durability ratings. The smartphone is priced at ₹24,999 for the 8GB + 128GB variant and ₹27,999 for the 8GB + 256GB model, and it will go on sale starting March 11, 2026, via the vivo India website, Flipkart, and partner retail stores. Here’s everything you need to know about it.

7,200mAh Battery With Fast Charging

The main highlight of the Y51 is its 7,200mAh battery, which is among the largest in its segment. vivo claims the phone can last 23.7 hours of video streaming, 15.4 hours of gaming, 21 hours of social media usage, and 95 hours of music playback before needing a recharge.

On the charging front, the Y51 Pro supports 44W FlashCharge and includes features such as Battery Health Algorithms, Battery Life Extender technology, and bypass charging to reduce battery wear. According to vivo, the battery is built to maintain performance for up to six years of use. There’s also support for reverse charging.

MediaTek Dimensity 7360 Turbo Processor

Under the hood, the vivo Y51 Pro 5G is powered by the MediaTek Dimensity 7360 Turbo chipset, built on a 4nm process. vivo claims the chip delivers an AnTuTu benchmark score of over 920,000, promising smooth multitasking and gaming performance. The processor is paired with 8GB of RAM, and storage options of 128GB or 256GB.

OriginOS 6, on top of Android 16, runs the show here. The new skin comes with a myriad of AI features like AI Creation for content generation, AI Notes for organizing documents, AI Transcript Assist, and AI Captions for real-time translation. The phone also supports Circle to Search and Google Gemini, enabling smarter search and productivity features. Beyond these, there’s Private Space for secure storage of files and apps, Free Transfer for quick PC connectivity, and Face Unlock that works even when wearing certain types of helmets.

Design & Cameras

On the front, the Y51 features a 6.75-inch display with a 120Hz refresh rate and up to 1250 nits peak brightness. The screen is TÜV Rheinland Low Blue Light certified to help reduce eye strain during long usage sessions. The design follows vivo’s recent trend, with a camera island on the back and a 50MP lens capable of 4K video recording at 30 FPS. The setup also includes features such as Electronic Image Stabilization (EIS), dual-view video recording, live photo capture, and multiple AI scene modes for portraits, night shots, and professional-style photography.

Thanks to its IP68 rating, the phone also supports underwater photography at depths of up to 1.5 meters for 30 minutes, though we’d recommend avoiding it, as water damage isn’t covered under warranty.


Copyright © 2025