
Tech

Google’s Gemini Embedding 2 arrives with native multimodal support to cut costs and speed up your enterprise data stack

Yesterday amid a flurry of enterprise AI product updates, Google announced arguably its most significant one for enterprise customers: the public preview availability of Gemini Embedding 2, its new embeddings model — a significant evolution in how machines represent and retrieve information across different media types.

While previous embedding models were largely restricted to text, this new model natively integrates text, images, video, audio, and documents into a single numerical space, cutting latency by as much as 70% for some customers and lowering total cost for enterprises that use AI models powered by their own data to complete business tasks.

VentureBeat collaborator Sam Witteveen, co-founder of AI and ML training company Red Dragon AI, received early access to Gemini Embedding 2 and published a video of his impressions on YouTube. Watch it below:

Who needs and uses an embedding model?

For those who have encountered the term “embeddings” in AI discussions but find it abstract, a useful analogy is that of a universal library.

In a traditional library, books are organized by metadata: author, title, or genre. In the “embedding space” of an AI, information is organized by ideas.

Imagine a library where books aren’t organized by the Dewey Decimal System, but by their “vibe” or “essence”. In this library, a biography of Steve Jobs would physically fly across the room to sit next to a technical manual for a Macintosh. A poem about a sunset would drift toward a photography book of the Pacific Coast, with all thematically similar content organized in beautiful hovering “clouds” of books. This is basically what an embedding model does.

An embedding model takes complex data—like a sentence, a photo of a sunset, or a snippet of a podcast—and converts it into a long list of numbers called a vector.

These numbers represent coordinates in a high-dimensional map. If two items are “semantically” similar (e.g., a photo of a golden retriever and the text “man’s best friend”), the model places their coordinates very close to each other in this map. Today, these models are the invisible engine behind:

  • Search Engines: Finding results based on what you mean, not just the specific words you typed.

  • Recommendation Systems: Netflix or Spotify suggesting content because its “coordinates” are near things you already like.

  • Enterprise AI: Large companies use them for Retrieval-Augmented Generation (RAG), where an AI assistant “looks up” a company’s internal PDFs to answer an employee’s question accurately.
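
All three of those uses rest on the same primitive: measuring how close two vectors sit on the map, usually with cosine similarity. A minimal sketch in plain Python, with toy three-dimensional vectors standing in for real embeddings (which have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Score how closely two embedding vectors point in the same direction:
    1.0 = same meaning, values near 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-D vectors standing in for real embeddings.
golden_retriever_photo = [0.9, 0.8, 0.1]
mans_best_friend_text = [0.8, 0.9, 0.2]
tax_form_text = [0.1, 0.0, 0.9]

print(cosine_similarity(golden_retriever_photo, mans_best_friend_text))  # ~0.99
print(cosine_similarity(golden_retriever_photo, tax_form_text))          # ~0.16
```

A search engine, recommender, or RAG system ranks candidates by exactly this score: the nearest vectors are the most semantically similar results.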

The concept of mapping words to vectors dates back to the 1950s with linguists like John Rupert Firth, but the modern “vector revolution” began in the early 2000s when Yoshua Bengio’s team first used the term “word embeddings”. The real breakthrough for the industry was Word2Vec, released by a team at Google led by Tomas Mikolov in 2013. Today, the market is led by a handful of major players:

  • OpenAI: Known for its widely-used text-embedding-3 series.

  • Google: With the new Gemini and previous Gecko models.

  • Anthropic and Cohere: Providing specialized models for enterprise search and developer workflows.

By moving beyond text to a natively multimodal architecture, Google is attempting to create a singular, unified map for the sum of human digital expression—text, images, video, audio, and documents—all residing in the same mathematical neighborhood.

Why Gemini Embedding 2 is such a big deal

Most leading models are still “text-first.” If you want to search a video library, the AI usually has to transcribe the video into text first, then embed that text.

Google’s Gemini Embedding 2 is natively multimodal.

As Logan Kilpatrick of Google DeepMind posted on X, the model allows developers to “bring text, images, video, audio, and docs into the same embedding space”.

It understands audio as sound waves and video as motion directly, without needing to turn them into text first. This reduces “translation” errors and captures nuances that text alone might miss.

For developers and enterprises, the “natively multimodal” nature of Gemini Embedding 2 represents a shift toward more efficient AI pipelines.

By mapping all media into a single 3,072-dimensional space, developers no longer need separate systems for image search and text search; they can perform “cross-modal” retrieval—using a text query to find a specific moment in a video or an image that matches a specific sound.

And unlike its predecessors, Gemini Embedding 2 can process requests that mix modalities. A developer can send a request containing both an image of a vintage car and the text “What is the engine type?”. The model doesn’t process them separately; it treats them as a single, nuanced concept. This allows for a much deeper understanding of real-world data where the “meaning” is often found in the intersection of what we see and what we say.
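
Conceptually, such a request bundles both modalities into one content list that the model embeds as a whole. A hypothetical sketch (the field names and the `build_mixed_request` helper are invented for illustration; they are not the actual Gemini API schema):

```python
import base64

def build_mixed_request(image_bytes, question, model="gemini-embedding-2"):
    """Bundle an image and a text question into one embedding request so the
    model can treat them as a single, combined concept. Field names here are
    illustrative only, not the real Gemini API schema."""
    return {
        "model": model,
        "content": [
            {"type": "image", "data": base64.b64encode(image_bytes).decode("ascii")},
            {"type": "text", "data": question},
        ],
    }

request = build_mixed_request(b"<raw JPEG bytes>", "What is the engine type?")
print(request["model"], len(request["content"]))
```

The key point is that the image and the question travel together in one call, so the resulting vector encodes their intersection rather than two separate meanings.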

One of the model’s more technical features is Matryoshka Representation Learning. Named after Russian nesting dolls, this technique allows the model to “nest” the most important information in the first few numbers of the vector.

An enterprise can choose to use the full 3,072 dimensions for maximum precision, or "truncate" them down to 768 or 1,536 dimensions to save on database storage costs with minimal loss in accuracy.
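
Mechanically, that truncation is just a slice of the leading dimensions followed by re-normalization so cosine similarity still behaves; a sketch (the helper is illustrative, not Google's API):

```python
import math

def truncate_embedding(vector, dims):
    """Keep only the first `dims` Matryoshka dimensions, then re-normalize
    so cosine similarity on the shorter vector still behaves correctly."""
    head = vector[:dims]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

full = [0.01] * 3072                     # stand-in for a real 3,072-dim embedding
compact = truncate_embedding(full, 768)  # 4x smaller to store and compare

print(len(compact))  # 768
```

Because MRL front-loads the most important information, the 768-dimension slice keeps most of the retrieval quality at a quarter of the storage cost.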

Benchmarking the performance gains of moving to multimodal

Gemini Embedding 2 establishes a new performance ceiling for multimodal depth, specifically outperforming previous industry leaders across text, image, and video evaluation tasks.

Google Gemini Embedding 2 benchmarks. Credit: Google

The model’s most significant lead is found in video and audio retrieval, where its native architecture allows it to bypass the performance degradation typically associated with text-based transcription pipelines.

Specifically, in video-to-text and text-to-video retrieval tasks, the model demonstrates a measurable performance gap over existing industry leaders, accurately mapping motion and temporal data into a unified semantic space.

The technical results show a distinct advantage in the following standardized categories:

  • Multimodal Retrieval: Gemini Embedding 2 consistently outperforms leading text and vision models in complex retrieval tasks that require understanding the relationship between visual elements and textual queries.

  • Speech and Audio Depth: The model introduces a new standard for native audio embeddings, achieving higher accuracy in capturing phonetic and tonal intent compared to models that rely on intermediate text-transcription.

  • Contextual Scaling: In text-based benchmarks, the model maintains high precision while utilizing its expansive 8,192 token context window, ensuring that long-form documents are embedded with the same semantic density as shorter snippets.

  • Dimension Flexibility: Testing across the Matryoshka Representation Learning (MRL) layers reveals that even when truncated to 768 dimensions, the model retains a significant majority of its 3,072-dimension performance, outperforming fixed-dimension models of similar size.

What it means for enterprise databases

For the modern enterprise, information is often a fragmented mess. A single customer issue might involve a recorded support call (audio), a screenshot of an error (image), a PDF of a contract (document), and a series of emails (text).

In previous years, searching across these formats required four different pipelines. With Gemini Embedding 2, an enterprise can create a Unified Knowledge Base. This enables a more advanced form of RAG, wherein a company’s internal AI doesn’t just look up facts, but understands the relationship between them regardless of format.

Early partners are already reporting drastic efficiency gains:

  • Sparkonomy, a creator economy platform, reported that the model’s native multimodality slashed their latency by up to 70%. By removing the need for intermediate LLM “inference” (the step where one model explains a video to another), they nearly doubled their semantic similarity scores for matching creators with brands.

  • Everlaw, a legal tech firm, is using the model to navigate the “high-stakes setting” of litigation discovery. In legal cases where millions of records must be parsed, Gemini’s ability to index images and videos alongside text allows legal professionals to find “smoking gun” evidence that traditional text-search would miss.

Understanding the limits

In its announcement, Google was upfront about some of the current limitations of Gemini Embedding 2. The new model can vectorize individual inputs of up to 8,192 text tokens, 6 images (in a single batch), 128 seconds of video (2 minutes, 8 seconds), 80 seconds of native audio (1 minute, 20 seconds), and a 6-page PDF.

It is vital to clarify that these are input limits per request, not a cap on what the system can remember or store.

Think of it like a scanner. If a scanner has a limit of "one page at a time," that doesn't mean you can only ever scan one page; it means you have to feed the pages in one by one.

  • Individual File Size: You cannot “embed” a 100-page PDF in a single call. You must “chunk” the document—splitting it into segments of 6 pages or fewer—and send each segment to the model individually.

  • Cumulative Knowledge: Once those chunks are converted into vectors, they can all live together in your database. You can have a database containing ten million 6-page PDFs, and the model will be able to search across all of them simultaneously.

  • Video and Audio: Similarly, if you have a 10-minute video, you would break it into 128-second segments to create a searchable “timeline” of embeddings.
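
The chunking logic itself is trivial; a sketch using the per-request limits from the announcement (the `chunk` helper is illustrative, not part of any Google SDK):

```python
def chunk(items, limit):
    """Split a sequence into consecutive pieces no larger than `limit`."""
    return [items[i:i + limit] for i in range(0, len(items), limit)]

# A 100-page PDF becomes 6-page segments (16 full chunks plus one of 4 pages)...
pdf_chunks = chunk(list(range(100)), 6)

# ...and a 10-minute (600-second) video becomes 128-second segments.
video_chunks = chunk(list(range(600)), 128)

print(len(pdf_chunks), len(video_chunks))  # 17 5
```

Each chunk is embedded in its own request, and the resulting vectors all land in the same database, where they can be searched together.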

Licensing, pricing, and availability

As of March 10, 2026, Gemini Embedding 2 is officially in Public Preview.

For developers and enterprise leaders, this means the model is accessible for immediate testing and production integration, though it is still subject to the iterative refinements typical of “preview” software before it reaches General Availability (GA).

The model is deployed across Google’s two primary AI gateways, each catering to a different scale of operation:

  • Gemini API: Targeted at rapid prototyping and individual developers, this path offers a simplified pricing structure.

  • Vertex AI (Google Cloud): The enterprise-grade environment designed for massive scale, offering advanced security controls and integration with the broader Google Cloud ecosystem.

It’s also already integrated with the heavy hitters of AI infrastructure: LangChain, LlamaIndex, Haystack, Weaviate, Qdrant, and ChromaDB.

In the Gemini API, Google has introduced a tiered pricing model that distinguishes between “standard” data (text, images, and video) and “native” audio.


Gemini Embedding 2 pricing on the Google Gemini API. Credit: Google

  • The Free Tier: Developers can experiment with the model at no cost, though this tier comes with rate limits (typically 60 requests per minute) and uses data to improve Google’s products.

  • The Paid Tier: For production-level volume, the cost is calculated per million tokens. For text, image, and video inputs, the rate is $0.25 per 1 million tokens.

  • The “Audio Premium”: Because the model natively ingests audio data without intermediate transcription—a more computationally intensive task—the rate for audio inputs is doubled to $0.50 per 1 million tokens.
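
Those rates make back-of-the-envelope budgeting easy; a sketch using the published paid-tier prices (the token volumes below are invented for illustration):

```python
# Paid-tier Gemini API rates from the announcement, in dollars per 1M tokens.
RATE_STANDARD = 0.25  # text, image, and video inputs
RATE_AUDIO = 0.50     # native audio inputs

def embedding_cost(standard_tokens, audio_tokens):
    """Estimate an embedding bill in dollars for a given token volume."""
    return (standard_tokens / 1_000_000) * RATE_STANDARD \
         + (audio_tokens / 1_000_000) * RATE_AUDIO

# Hypothetical month: 2B text/image/video tokens plus 200M audio tokens.
print(embedding_cost(2_000_000_000, 200_000_000))  # 600.0
```

Note how the audio premium shows up directly: the 200 million audio tokens cost as much as 400 million standard ones.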

For large-scale deployments on Vertex AI, the pricing follows an enterprise-centric “Pay-as-you-go” (PayGo) model. This allows organizations to pay for exactly what they use across different processing modes:

  • Flex PayGo: Best for unpredictable, bursty workloads.

  • Provisioned Throughput: Designed for enterprises that require guaranteed capacity and consistent latency for high-traffic applications.

  • Batch Prediction: Ideal for re-indexing massive historical archives, where time-sensitivity is lower but volume is extremely high.

By making the model available through these diverse channels and integrating it natively with libraries like LangChain, LlamaIndex, and Weaviate, Google has ensured that the “switching cost” for businesses isn’t just a matter of price, but of operational ease. Whether a startup is building its first RAG-based assistant or a multinational is unifying decades of disparate media archives, the infrastructure is now live and globally accessible.

In addition, the official Gemini API and Vertex AI Colab notebooks, which contain the Python code necessary to implement these features, are licensed under the Apache License, Version 2.0.

The Apache 2.0 license is highly regarded in the tech community because it is “permissive.” It allows developers to take Google’s implementation code, modify it, and use it in their own commercial products without having to pay royalties or “open source” their own proprietary code in return.

How enterprises should respond: migrate to Gemini Embedding 2 or not?

For Chief Data Officers and technical leads, the decision to migrate to Gemini Embedding 2 hinges on the transition from a “text-plus” strategy to a “natively multimodal” one.

If your organization currently relies on fragmented pipelines — where images and videos are first transcribed or tagged by separate models before being indexed — the upgrade is likely a strategic necessity.

This model eliminates the “translation tax” of using intermediate LLMs to describe visual or auditory data, a move that partners like Sparkonomy found reduced latency by up to 70% while doubling semantic similarity scores. For businesses managing massive, diverse datasets, this isn’t just a performance boost; it is a structural simplification that reduces the number of points where “meaning” can be lost or distorted.

The effort to switch from a text-only foundation is lower than one might expect due to what early users describe as excellent “API continuity”.

Because the model integrates with industry-standard frameworks like LangChain, LlamaIndex, and Vector Search, it can often be “dropped into” existing workflows with minimal code changes. However, the real cost and energy investment lies in re-indexing. Moving to this model requires re-embedding your existing corpus to ensure all data points exist in the same 3,072-dimensional space.

While this is a one-time computational hurdle, it is the prerequisite for unlocking cross-modal search—where a simple text query can suddenly “see” into your video archives or “hear” specific customer sentiment in call recordings.

The primary trade-off for data leaders to weigh is the balance between high-fidelity retrieval and long-term storage economics. Gemini Embedding 2 addresses this directly through Matryoshka Representation Learning (MRL), which allows you to truncate vectors from 3072 dimensions down to 768 without a linear drop in quality.

This gives CDOs a tactical lever: you can choose maximum precision for high-stakes legal or medical discovery—as seen in Everlaw’s 20% lift in recall—while utilizing smaller, more efficient vectors for lower-priority recommendation engines to keep cloud storage costs in check.

Ultimately, the ROI is found in the “lift” of accuracy; in a landscape where an AI’s value is defined by its context, the ability to natively index a 6-page PDF or 128 seconds of video directly into a knowledge base provides a depth of insight that text-only models simply cannot replicate.


We Just Got Our First Look Ever At The B-21 Raider Performing This Risky Maneuver


The B-21 Raider is a stealth bomber developed by Northrop Grumman. It isn't yet in active duty but, as of February 2026, is scheduled to enter service in 2027. To make that happen, Air & Space Forces Magazine reports, Congress committed an enormous $4.5 billion investment to speed up the process of its development. There's tremendous faith in the U.S. Air Force, then, that the aircraft is going to be a huge asset. True enough, several factors make the B-21 Raider stealth bomber special compared to other jets, and now the world has gotten its first look at one of the test models performing a very risky maneuver: approaching a tanker for midair refueling.

This image, captured by X's @minor_triad, shows the sleek, enigmatic B-21 refueling through a connection to a KC-135R Stratotanker:

This was one of the very first public sightings of the Air Force’s formidable new asset, and a spokesperson for the military branch moved quickly to address all the speculation and confirm the identities of the two aircraft. In a statement provided to Defense One on March 11, the day after images were posted, they noted that “a test event involving a close-proximity flight” took place between a B-21 and a KC-135R. Furthermore, it was just one flight in a series of maneuvers, tests, and trials that are intended “to validate the B-21’s capabilities and operational readiness.” The specific tanker in question, according to The War Zone, is a veteran of midair refueling flight tests, operating from Edwards Air Force Base in southern California. If the B-21 is to have a long service life ahead of it (and the significant investment in it suggests that’s the intent), it’s critical to know that it can perform these sorts of risky midair maneuvers. 

The great importance, and potential risks, of mid-air refueling

The Air Force highlighted that the incredibly close proximity of the two aircraft was a key element of this particular test flight. Though midair refueling is the primary purpose of a KC-135R, it remains one of aviation's most dangerous maneuvers, and confidence and experience are crucial. As the Hill Aerospace Museum emphasizes, there's an enormous size disparity between a tanker built for capacity and the aircraft it refuels, and the two have to stay close enough throughout for the connection to be established and maintained while the fuel is transferred. It's a feat that demands enormous skill, and there may be adverse weather conditions or other environmental factors to account for.

Nonetheless, it’s vital for some bombers and other aircraft to be able to be refueled in this way, and that’s why the B-21 engaged in a flight exercise that brought it so very close to a KC-135R. The operation must be perfected, and adapted to the capabilities of the tanker and the unique physics of each aircraft in need of refueling. Depending on where a bomber is operating, the mission it’s engaged in, and other factors, it may well not have the luxury of anywhere to land to refuel conventionally, and operations would be sorely limited in scope without this capacity. This is why, while commercial planes don’t refuel in the sky, it’s generally important that military aircraft like bombers can.


NASA astronauts to venture into the void on a historic day

NASA is preparing to conduct its first spacewalk at the International Space Station (ISS) in nearly a year, ending an unusually long break for the activity.

Truth be told, NASA had a spacewalk planned for early January but called it off after one of the two participating astronauts experienced a serious health issue that ultimately forced the early return to Earth of a SpaceX crew.

The space agency is currently targeting March 18 for a spacewalk involving NASA astronauts Jessica Meir and Chris Williams.

Coincidentally, the spacewalk is scheduled for the 61st anniversary of the first-ever spacewalk. The milestone was achieved by Alexei Leonov, who exited his spacecraft for around 10 minutes during the Voskhod 2 mission in 1965. This was followed about three months later by the first-ever U.S. spacewalk, performed by NASA astronaut Ed White during the Gemini 4 mission.

Meir and Williams have been getting ready for their upcoming extravehicular activity by inspecting their spacesuits, trying them on, and checking the Quest airlock from where they will exit the space-based facility.

The pair will spend around six-and-a-half hours in the vacuum of space, installing a modification kit and routing cables on the port side of the orbital outpost as part of preparations for a future roll-out solar array. The seventh roll-out solar array will be installed on a later spacewalk to augment the main solar arrays’ power generation capabilities, NASA said.

This will be the fourth spacewalk for Meir, who participated in her first one in 2019, followed by two more several months later. Meir arrived at the space station last month as part of SpaceX’s Crew-12.

Williams, on the other hand, is on his first space mission and so the upcoming spacewalk will mark his debut outside the station. The American astronaut arrived at the ISS before Meir, in November 2025, aboard a Russian Soyuz spacecraft.


US Air Force Sends B-21 Bomber Production Into Overdrive

For close to three decades, the Northrop Grumman B-2 “Spirit” carried the mantle of being the only stealth bomber in the U.S. Air Force arsenal. Alongside the Lockheed Martin F-117 Nighthawk stealth attack aircraft, the B-2 is one of the most advanced stealth planes ever made. It continues to be one of the mainstays of the U.S. nuclear triad, even in 2026. While the aircraft remains operational, there is no denying the B-2s are slowly approaching retirement age and will need to be replaced by an equally capable — or even better — stealth bomber in the years to come.

As it turns out, the U.S. Air Force already has that successor in sight. A small number of next-generation stealth bombers have begun entering the USAF inventory, with at least two test aircraft delivered so far. Known as the Northrop Grumman B-21 Raider, this new platform is expected to gradually take over the B-2’s role in the decades to come. While visually similar to the aging B-2 bombers, the new B-21 features several changes from the B-2, including fewer engines and smaller dimensions.

The B-21 bomber should greatly enhance the U.S. Air Force’s strike-anywhere capabilities. To that end, the Department of the Air Force signed a new agreement with Northrop Grumman, essentially directing the manufacturer to speed up production of the aircraft. The U.S. Air Force is slated to receive at least two more B-21 test aircraft in FY2026, and the new agreement means that the U.S. Air Force now expects to start fielding B-21s in 2027.

The USAF needs B-21s, and it needs them fast

The signing of the new agreement between Northrop Grumman and the U.S. Air Force to enhance the B-21’s production capacity was publicly announced in February 2026. As per the revised terms, the manufacturer will increase the annual production rate of the B-21 by around 25%. According to the U.S. Air Force, this increase in production rate will allow it to acquire B-21s faster than originally anticipated. This move will also ensure that more B-21s will be combat-ready for any future conflicts. In addition, the compressed delivery schedule should ensure that the program doesn’t massively exceed the projected budget, as more aircraft would be delivered in a shorter timespan.

This move requires some serious money. The U.S. Air Force will spend an additional $4.5 billion, which had already been authorized and appropriated under the FY2025 Reconciliation Act (also known as the One Big Beautiful Bill). It is pertinent to note that, unlike several other crucial U.S. military projects that are running way behind schedule (like the heavily delayed USS Enterprise) or have been plagued by cost overruns, the B-21 program has largely stuck to its schedule. It will be interesting to see whether the accelerated delivery requirements change anything in this regard.




If A Mechanic Refuses To Release Your Car, Here’s What You’ll Have To Do

There may be several reasons for your mechanic’s refusal to give your car back. Maybe the bill came in way higher than what you were expecting, and you are unable to pay it in full. Whatever the reason, if you find yourself in such a situation, it’s likely because of a legal guarantee called a mechanic’s lien. It essentially lets repair shops hold onto your car until the bill is settled, similar to how collateral works at a bank. Every state in the U.S. has some version of this on the books, though the specific rules around those can vary quite a bit. 

For instance, in some states, the shop has to give you written notice of the lien before they can even enforce it. Others are stricter and demand that the shop file paperwork with local authorities on top of that. Some states even let the shop sell the car to recover anything that’s owed to them. In Louisiana, for example, that window is 45 days after the lien notice goes out.

Of course, those are the rules when everything is done properly and by the book. The good news is that not every shop actually follows them correctly, which gives you some room to push back.

The first thing to do

Before you escalate anything, figure out whether the mechanic’s lien is even valid from their end. Knowing how to avoid getting ripped off by a car mechanic starts with understanding that you have the right to approve every charge before the work begins. Most states require written authorization for repairs above a certain dollar amount. 

The most common way repair shops get themselves into legal trouble is by hitting customers with bills for work they never approved. For example, someone drops off their car, and when they come back to pick it up, there's a $5,000 invoice just waiting for them. The shop says the work was necessary, but the customer maintains they never signed off on any of it. To prevent this kind of miscommunication from the start, there is a specific phrase you should never say to your mechanic, or you may find yourself the victim of a common car mechanic scam.

Anyway, your first move in the situation should always be to request an itemized bill and compare it against the original estimate. If those numbers don’t add up, or if the work was done poorly or left incomplete, the lien might not hold up at all. Some states will even let you pay under protest – you basically settle the bill to get your car back, and the shop has to note that it was paid under protest on the receipt, which protects you for what comes next.

Getting your car back (and getting even)

If the shop still won’t budge after all of that, or if the shop won’t communicate with you at all, the first real step is sending a formal demand letter (preferably by email and certified mail) requesting an update on the vehicle and a deadline for its return. Doing so ensures that any silence from the shop becomes a problem for them legally. It essentially shows they’re potentially detaining your vehicle without any real justification for doing so.

From there, you can file a complaint with your state’s consumer protection agency or attorney general’s office. In some states, like North Carolina, there’s also this neat legal mechanism where you can post a bond with the Clerk of Superior Court for the disputed amount, and then the court will order the shop to release your car while the whole dispute gets sorted out.

If the bill is small enough, small claims court is always on the table for something like this. You represent yourself, lay out all the evidence, and let a judge decide on it. For bigger amounts, or if you suspect outright fraud, hiring a consumer protection attorney is probably worth the cost. In some states, if the shop violated the law, you could actually be entitled to triple your losses plus legal fees on top of those.


A Radio Power Amplifier For Not A Lot

When building a radio transmitter, unless it’s a very small one indeed, there’s a need for an amplifier before the antenna. This is usually referred to as the power amplifier, or PA. How big your PA is depends on your idea of power, but at the lower end of the scale a PA can be quite modest. QRP, as low-power radio operation is known, uses transmit powers in the milliwatts or single-digit watts. [Guido] is here with a QRP PA that delivers about a watt from 1 to 30 MHz, is made from readily available parts, and costs very little.

Inspired by a circuit from [Harry Lythall], the prototype is built on a piece of stripboard. It’s getting away with using those cheap transistors without heatsinking because it’s a class C design. In other words, it’s in no way linear; instead it’s efficient, but creates harmonics and can’t be used for all modes of transmission. This PA will need a low-pass filter to avoid spraying the airwaves with spurious emissions, and on the bands it’s designed for, is for CW, or Morse, only.
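
That low-pass filter is a straightforward design exercise. As a sketch of the arithmetic (not a circuit from [Guido]'s build), here are the element values for a classic 3-pole Butterworth pi-network in a 50-ohm system, with the 8 MHz cutoff chosen arbitrarily to suit a 7 MHz CW transmitter:

```python
import math

def butterworth_pi_lpf(cutoff_hz, z0=50.0):
    """3-pole Butterworth low-pass pi-network for a z0-ohm system.
    Normalized prototype values: g1 = g3 = 1 (shunt caps), g2 = 2 (series L)."""
    w = 2 * math.pi * cutoff_hz
    c_shunt = 1.0 / (w * z0)   # farads, both shunt capacitors
    l_series = 2.0 * z0 / w    # henries, series inductor
    return c_shunt, l_series

c, l = butterworth_pi_lpf(8e6)
print(round(c * 1e12), round(l * 1e6, 2))  # ~398 pF shunt caps, ~1.99 uH inductor
```

With the cutoff just above the operating frequency, the second harmonic at 14 MHz already falls well into the filter's stopband.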

We like it though, as it’s proof that building radios can still be done without a large bank balance. Meanwhile if the world of QRP interests you, it’s something we have explored in the past.


Why Falling Cats Always Seem To Land On Their Feet

An anonymous reader quotes a report from the New York Times: In a paper published last month in the journal The Anatomical Record, researchers offered a novel take on falling felines. Their evidence suggests new insights into the so-called falling cat problem, particularly that cats have a very flexible segment of their spines that allows them to correct their orientation midair. […] People have been curious about falling cats perhaps as long as the animals have been living with humans, but the method to their acrobatic abilities remains enigmatic. Part of the difficulty is that the anatomy of the cat has not been studied in detail, explains Yasuo Higurashi, a physiologist at Yamaguchi University in Japan and lead author of the study. […]

Modern research has split the falling cat problem into two competing models. The first, “legs in, legs out,” suggests that cats correct their falling trajectory by first extending their hind limbs before retracting them, using a sequential twist of their upper and then lower trunk to gain the proper posture while in free fall. The second model, “tuck and turn,” suggests that cats turn their upper and lower bodies in simultaneous juxtaposed movements. […]

The researchers found that the feline spine was extremely flexible in the upper thoracic vertebrae, but stiffer and heavier in the lower lumbar vertebrae. The discovery matches video evidence showing the cats first turn their front legs, and then their lower legs. The results suggest the cat quickly spins its flexible upper torso to face the ground, allowing it to see so that it can correctly twist the rest of its body to match. “The thoracic spine of the cat can rotate like our neck,” Dr. Higurashi said.

Experiments on the spine show the upper vertebrae can twist an astounding 360 degrees, he says, which helps cats make these correcting movements with ease. The results are consistent with the “legs in, legs out” model, but definitively determining which model is correct will take more work, Dr. Higurashi says. The results also yielded another discovery: Cats, like many animals, appear to have a right-side bias. One of the dropped cats corrected itself by turning to the right eight out of eight times, while the other turned right six out of eight times.


IEEE Launches Global Virtual Career Fairs


Last year IEEE launched its first virtual career fair to help strengthen the engineering workforce and connect top talent with industry professionals. The event, which was held in the United States, attracted thousands of students and professionals. They learned about more than 500 job opportunities in high-demand fields including artificial intelligence, semiconductors, and power and energy. They also gained access to career resources.

Hosted by IEEE Industry Engagement, the event marked a milestone in the organization’s expanding workforce development efforts to bridge the gap between academic training and industry needs while bolstering the technical talent pipeline, says Jessica Bian, 2025 chair of the IEEE Industry Engagement Committee (IEC). The IEC works to strengthen connections with industry professionals, companies, and technology sectors through global career fairs, as well as its Industry Newsletter, AI-powered career guidance tools, and World Technology Summits, where industry leaders discuss solutions to societal challenges.

“We are bringing together companies, universities, and young professionals to help meet the demand for technical talent in critical sectors,” Bian says. “It is part of our commitment to preparing the next generation of innovators.”

The virtual career fairs are expanding to more IEEE regions this year. One was held last month for Region 9 (Latin America). One is scheduled next month for Region 8 (Europe, Middle East, and Africa) and another in May for Region 7 (Canada).


A global career fair is slated for June.

Registration information for all the fairs is available at careerfair.ieee.org.

Innovative recruitment events

The fairs, which use the vFairs virtual platform, provide interactive sessions with representatives from hiring companies, direct chats with recruiters, video interviews, and access to downloadable job resources. The features help remove geographic barriers and increase visibility for employers and job seekers.

The career fair platform features interactive engagement tools including networking roundtables, a live activity feed, a leaderboard, and a virtual photobooth to encourage participants to remain active throughout the day.


Bringing together thousands of professionals

STEM students participated in the U.S. and Latin America events, along with early-career professionals and seasoned engineers—almost 8,000 participants in all. They represented diverse fields including software engineering, AI, semiconductors, and power systems.

Siemens, Burns & McDonnell, and Morgan Stanley were among the dozens of companies that participated in the U.S. event. More than 500 internships, co-op opportunities, and full-time positions were promoted.

“I found the overall process highly efficient and the platform intuitive—which made for a great sourcing experience,” said a recruiter from Burns & McDonnell, a design and construction firm. “I was able to join a session, short-list several high-potential candidates, review their résumés, and initiate contact with a couple of them.

“I am optimistic that we will be able to extend at least one offer from this pipeline.”


Participating students described the fair as impactful.

“I gained valuable hiring insights from industry leaders, like Siemens, TRC Companies, and Schweitzer Engineering Laboratories,” said Michael Dugan, an electrical and computer engineering graduate student at Rice University, in Houston.

New tools elevating the candidate experience

Attendees had access to AI-guided job-matching tools and career development programs and resources.

Prior to the fair, registrants could use the IEEE Career Guidance Counselor (ICGC), an AI-powered career advisor. The tool analyzes candidates’ skills and experience to suggest aligned roles and provides tailored professional development plans.


The ICGC also makes personalized recommendations for mentors, job opportunities, training resources, and career pathways.

Pre-event workshops and mock interview sessions helped participants refine their résumé, strengthen interview strategies, and manage expectations. They also provided tips on how to engage with recruiters.


During the Future Ready Engineers: Essential Skills and Networking Strategies to Stand Out at a Career Fair workshop, Shaibu Ibrahim, a senior electrical engineer and member of IEEE Young Professionals, shared networking strategies for career fairs and industry events as well as tips on preparation, engagement, and effective follow-up.


“The workshop offered advice that shaped my approach to the fair,” Dugan said. “It truly helped manage expectations and maximize my preparation.”

Learning more about IEEE

To help participants learn about IEEE and its volunteering opportunities, its societies and councils set up roundtables and technical community booths at the fairs. They were hosted by IEEE Technical Activities, IEEE Future Networks, and the IEEE Signal Processing Society.

“While exploring volunteer opportunities, I was excited to learn about IEEE Future Networks,” Dugan said. “Connecting with dedicated IEEE members, like Craig Polk, was a definite highlight.” Polk is an IEEE senior member and a senior program manager for IEEE Future Networks.

A commitment to career development

IEEE created the career fairs as free, accessible platforms that serve as a trusted bridge between companies seeking top technical talent and members dedicated to advancing their careers. It is our responsibility to support members by connecting them with meaningful career opportunities.


In today’s unpredictable job landscape, IEEE is stepping up to help our talented members navigate change, build resilience, and connect with future employers.


Google Play will let you try a game before you buy it


Google Play has introduced a new feature called Game Trials, which will let you play a portion of paid games for free before you commit to buying them. It’s now rolling out to select paid games on mobile, and it’s coming soon to Google Play Games on PC. Titles that offer Game Trials will show a button marked “Try” on their profile pages. When you click it, you’ll see how long you can play the game before you have to buy it. In Google’s example, the survival and horror game Dredge will give you 60 minutes of free play time, after which you’ll get the option to either buy the game or delete it from your device.

Google has also announced that it’s releasing more paid indie games over the coming months, including Moonlight Peaks, Sledding Game and Low-Budget Repairs. It has launched a new section in the Play store, as well, to feature games optimized for Windows PCs. You can wishlist the games from that section to get a notification when they’re on sale.

Finally, the company is rolling out Play Games Sidekick, the Gemini-powered Android overlay it announced last year, to select games downloaded from Play. Sidekick can show you relevant info and tools for whatever game you’re playing without having to do a search query. But if you’d rather ask other people for gaming advice instead of an AI, you can also look at a game’s Community Posts, a feature now available in English for select titles on their Play pages.


Atlassian layoffs impact 63 workers in Washington as CTO steps down


Rajeev Rajan. (LinkedIn Photo)

Enterprise collaboration software giant Atlassian is laying off 63 workers in Washington, according to a WARN notice filed with state regulators.

Atlassian announced Wednesday that it will lay off about 10% of its staff, or 1,600 employees, as the 24-year-old software firm transitions to an “AI-first company.” Atlassian CEO Mike Cannon-Brookes wrote that AI is changing the mix of skills and number of roles required in certain areas.

“This is primarily about adaptation,” he said. “We are reshaping our skill mix and changing how we work to build for the future.”

Atlassian opened an office in Bellevue, Wash., in 2024. The WARN notice indicates that nearly all the employees affected by layoffs in Washington state are remote workers. About half of the affected workers are in engineering or data science roles.

The company also announced Wednesday that CTO Rajeev Rajan, who is based in the Seattle region, will step down after nearly four years with Atlassian. “Atlassian is thankful for Mr. Rajan’s many contributions in building a world-class R&D organization and congratulates the promotion of next generation AI talent in Taroon Mandhana (CTO Teamwork) and Vikram Rao (CTO Enterprise and Chief Trust Officer),” the company wrote in an SEC filing.


Rajan was previously a VP of engineering at Meta, where he led the company’s Pacific Northwest engineering hub. He also spent more than two decades at Microsoft in various leadership roles.

Several tech companies have cut staff in the Seattle area this year, including Amazon, Expedia, T-Mobile, and Smartsheet. Many corporations are slashing headcount to address pandemic-fueled corporate “bloat” while juggling economic uncertainty and impact from AI tools.

The recent rise of AI tools has also spooked some investors, and some software stocks have taken a hit. Atlassian shares are down more than 50% this year.


BlueSG to relaunch its car-sharing service as Flexar in 2026


The services will be rolled out under a new brand, Flexar

Singapore car-sharing company BlueSG is preparing to roll out a new service under a new brand, Flexar.

In comments to CNA, BlueSG confirmed that Flexar is currently in the “beta phase” of its shared car mobility service. It is slated to launch later this year.

The move comes around seven months after the company ceased operations on Aug 8, 2025, and let go of staff.

The new brand will have the same operating concept, which allows users to pick up a car from a station near them and drop it off at another location in Singapore.


Flexar is recruiting early testers

Image Credit: BlueSG

In response to The Straits Times, a BlueSG spokesperson shared that the team behind Flexar is currently focused on “testing and refining a range of exciting new offerings designed to enable flexible urban mobility.”

The new service will introduce a revamped platform, a refreshed fleet featuring a different mix of vehicles, and an expanded network of pick-up and drop-off points. It is also expected to deliver “greater reliability and a smoother user experience.”

However, the spokesperson declined to share further details, such as pricing or the total number of pick-up and drop-off points, until the official launch.

Between Jan and Mar 2026, BlueSG has also been hiring for several roles, including automotive technicians, an operations manager, and customer service agents, across various job portals.

On Mar 9, the company reached out to its community to recruit early testers ahead of the official launch. The invitation, shared in a BlueSG Telegram user group, asked interested participants to complete a questionnaire. Shortlisted users will be able to try the revamped service and provide feedback.

Screengrab from the Flexar website

Flexar has also launched a website, with more details listed as “coming soon.”

“Access reliable cars when you need them, where you need them. No ownership hassles, no long-term commitments, just seamless A-to-B journeys across Singapore,” the company wrote on its homepage.

BlueSG ceased operations & laid off staff in Aug 2025

Back in Aug 2025, BlueSG announced a “pause” to its services and retrenched the majority of its employees shortly after.

At the time, the company said it planned to return with an “upgraded” service powered by “advanced technology, deep expertise, and enhanced operational capabilities.”

The overhaul was driven by the company’s observations of changes in Singapore’s car-sharing landscape and the opportunity to scale its user base.


Since the pause, BlueSG’s fleet of around 190 electric Opel Corsa‑e hatchbacks has been sold to car dealers or listed on used-car marketplace Sgcarmart.

Meanwhile, about 790 units of the purpose-built Blue Car were scrapped after the Land Transport Authority did not permit the vehicles to be transferred for uses outside the electric car-rental trial scheme. The vehicles were initially expected to be sold to Tribecar, another car-sharing operator in Singapore.

BlueSG’s pause also had ripple effects across the industry. French energy giant TotalEnergies, which previously served as BlueSG’s main charging infrastructure provider, exited Singapore’s EV charging market. By the end of 2025, it had transferred its network of more than 1,400 public charging points to other operators.

BlueSG was first launched in 2017 under the EV car-sharing programme by the Land Transport Authority. It was initially a subsidiary of the French Bolloré Group, but in 2021, the service was acquired by Singapore-based Goldbell Group.


Featured Image Credit: Wirestock Creators via Shutterstock.com/ Flexar
