Tech

Earthset and eclipse, oh my! NASA releases magnificent images from Artemis mission’s moon flyby

Wide-angle view of Earthset during Artemis 2 lunar flyby
A crescent Earth sinks behind the moon’s disk in a wide-angle version of the Artemis 2 crew’s “Earthset” picture. (NASA Photo)

A day after the Artemis 2 mission’s historic lunar flyby, NASA has released a stunning set of high-resolution images documenting Earthset and Earthrise, a solar eclipse that set the moon aglow, and other views of the lunar far side and the astronauts who took the pictures.

The photographs were taken during a seven-hour lunar observation period at the farthest point of the Orion space capsule’s 10-day odyssey. The mission marked the first crewed trip around the moon since Apollo 17 in 1972, and the farthest-ever voyage by space travelers (252,756 miles from Earth, and more than 4,000 miles beyond the moon).

The Earthset photo was captured just as our home planet was sinking beneath the lunar horizon, followed about 40 minutes later by a picture of Earth rising above the horizon on the other side of the moon. The pictures rekindled the spirit of NASA’s original Earthrise photo, taken by astronaut Bill Anders during Apollo 8’s round-the-moon mission in 1968.

As Artemis 2’s astronauts prepared to take their own Earthrise photo, NASA astronaut Christina Koch said she was inspired by the original. “I had the photo up in my room as a kid, and it was part of what inspired me to keep working hard to achieve things I dreamed about,” she said.

The original Earthrise is one of the best-known photos from the Apollo era, but it took decades to confirm who actually took the shot. Anders wasn’t the sort of person to make a fuss over attribution. After a long career at NASA, at the Nuclear Regulatory Commission, in the diplomatic corps and in private industry, he settled down in Western Washington and founded the Heritage Flight Museum in Burlington, Wash. Two years ago, he died in a plane crash in waters off the San Juan Islands at the age of 90.

Anders and the original Earthrise aren’t the only connections linking Artemis 2 with the Pacific Northwest. The success of the mission depends in part on components built in the Seattle area. L3Harris’ Aerojet Rocketdyne facility in Redmond worked on Orion’s main engine and built some of its thrusters, while Karman Space Systems’ Mukilteo facility provided mechanisms for Orion’s parachute deployment system and emergency hatch release system.

Artemis 2’s four astronauts — Koch, NASA mission commander Reid Wiseman, pilot Victor Glover and Canadian astronaut Jeremy Hansen — were scheduled for off-duty periods today as Orion coasted toward Friday’s Pacific Ocean splashdown. The astronauts took questions from the crew of the International Space Station during a ship-to-ship chat.

“Basically, every single thing we learned on ISS is up here,” Koch said. The big difference? “I found myself noticing not only the beauty of the Earth, but how much blackness there was around it,” she said. “It just made it even more special. It truly emphasized how alike we are, how the same thing keeps every single person on planet Earth alive. … We have some shared things about how we love and live that are just universal. The specialness and preciousness of that really is emphasized when you notice how much else there is around it.”

Meanwhile, NASA’s image-processing team put in long hours overnight to work on the pictures taken by Artemis 2’s astronauts during the flyby. Pictures are being posted to NASA’s lunar flyby gallery. Check out these highlights, and click on the images to feast your eyes on higher-resolution views:

Solar eclipse with dark moon surrounded by sun's glow
This Artemis 2 image shows the moon fully eclipsing the sun. From the crew’s perspective, the moon appears large enough to block the sun completely, creating nearly 54 minutes of totality and extending the view far beyond what is possible from Earth. The dark lunar disk is surrounded by a glowing halo of scattered sunlight. Also visible are stars, typically too faint to see when imaging the moon. The faint glow of the near side of the moon is visible along the left edge of the disk, due to illumination by Earth’s reflected light. (NASA Photo)
The Artemis 2 crew – Christina Koch (top left), Jeremy Hansen (bottom left), Reid Wiseman (bottom right) and Victor Glover – used eclipse glasses to protect their eyes at key moments during the solar eclipse. This was the first use of eclipse glasses at the moon for safe viewing of a partial solar eclipse. The glasses weren’t needed during the eclipse’s total phase. (NASA Photo)
This image shows the sun beginning to peek out from behind the moon as the eclipse transitions out of totality. Only a portion of the moon is visible in frame, its curved edge revealing a bright sliver of sunlight returning after nearly an hour of darkness. Space artist Don Davis posted a processed version of the image that brings out details of the sun’s corona. (NASA Photo)
Earthset picture from Artemis 2: Crescent Earth dips beneath lunar horizon
Artemis 2’s Earthset picture, captured as Earth sank beneath the lunar horizon, is reminiscent of the classic Earthrise picture that was taken by Apollo 8 astronaut Bill Anders in 1968. Earthset came at the beginning of a communications blackout for the Artemis 2 crew, and was followed 40 minutes later by Earthrise and the resumption of communications. (NASA Photo)
Our home planet appears as a delicate crescent in Artemis 2’s Earthrise photo, captured as the Earth emerged from behind the lunar disk. The moon itself is shrouded in darkness on the right half of the image. (NASA Photo)
This photo, taken just before the Artemis 2 crew began their official lunar observation period, zeroes in on a 600-mile-wide impact crater known as Orientale Basin. The black patch in the center of the crater is a mass of ancient lava that punched through the moon’s crust in an eruption billions of years ago. Orientale Basin lies along the transition between the near and far sides and is sometimes partly visible from Earth. The small, bright crater to its left is Byrgius, which has 250-mile rays extending out from its basin. (NASA Photo)
The heavily cratered terrain of the eastern edge of the South Pole-Aitken Basin is seen with the shadowed terminator – the boundary between lunar day and night – at the top of the image. The South Pole-Aitken Basin is the largest and oldest basin on the moon, providing a glimpse into an ancient geologic history built up over billions of years. NASA is targeting the moon’s south polar region for the Artemis program’s first crewed lunar landing, which is scheduled for no earlier than 2028. (NASA Photo)
Artemis 2 pilot Victor Glover and mission specialist Christina Koch peer out of the darkness of Orion’s cabin to observe the moon and acquire images during the lunar flyby. Over the course of about seven hours, the astronauts took turns looking out Orion’s windows as they flew around the moon’s far side. At closest approach, they came within 4,067 miles of the lunar surface. (NASA Photo)
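The captions above note that, from the crew’s vantage point, the moon looked large enough to blot out the sun for nearly an hour. A rough geometry check bears that out; this sketch uses the 4,067-mile closest-approach figure from the last caption, with approximate values for the lunar and solar radii:

```python
import math

MILES_TO_KM = 1.609344
MOON_RADIUS_KM = 1737.4
SUN_RADIUS_KM = 695_700.0
SUN_DISTANCE_KM = 149_600_000.0  # roughly 1 AU

# Closest approach was 4,067 miles above the surface, so the distance
# to the moon's center is that altitude plus the lunar radius.
moon_distance_km = 4067 * MILES_TO_KM + MOON_RADIUS_KM

def angular_diameter_deg(radius_km, distance_km):
    return math.degrees(2 * math.asin(radius_km / distance_km))

print(angular_diameter_deg(MOON_RADIUS_KM, moon_distance_km))  # ~24 degrees
print(angular_diameter_deg(SUN_RADIUS_KM, SUN_DISTANCE_KM))    # ~0.53 degrees
```

From Earth, both bodies subtend about half a degree, which is why terrestrial totality lasts only minutes; at roughly 24 degrees across, the nearby moon covers the sun with enormous margin.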

Artemis updates from Alan Boyle’s Cosmic Log


Tech

Amazon brings dark mode to Kindle Colorsoft and Scribe Colorsoft

Amazon has today announced a software update for both the Kindle Colorsoft and Kindle Scribe Colorsoft which will bring dark mode to both e-readers. Even better, users will be able to toggle the settings for specific menus on both devices, so if they want their library dark and their notebook light, they can. Given the option is available on plenty of other Kindle devices, its omission here always felt like something Amazon was just getting around to addressing.

In addition, the update brings Smart Shapes to notebooks, enabling users to add pre-drawn lines, arrows, circles, triangles and rectangles from the toolbar, while a hold-to-snap tool lets you draw a shape freehand, after which it’ll pull itself into a nice tidy design. Both should help folks who want to add some graphical zing to their note-taking but can’t manage all those fancy journal designs on their own.

The update is rolling out across the ecosystem over the next few days, further empowering would-be journal scribes using these tablets. For tablets like the Kindle Scribe Colorsoft, it’s clear Amazon needs to build out the Scribe half of the equation, which looks like a poor relative compared to its competition. As Cherlynn Low wrote in her review, it’s a fine e-reader, but one that’s sorely lacking in many areas.

Tech

The GPS III Rollout Is Almost Complete, But What Is It?

Considering how integral it is to our modern way of life, you could be excused for thinking that the Global Positioning System (GPS) is a product of the smartphone era. But the first satellites actually came online back in 1978, although the system didn’t reach full operational status until April of 1995. While none of the active GPS satellites currently in orbit are quite that old, several of them were launched in the early 2000s — and despite a few tweaks and upgrades, their core technology isn’t far removed from their 1990s era predecessors.

But in the coming years, that’s finally going to change. Just last week, the tenth GPS III satellite was placed in orbit by a SpaceX Falcon 9 rocket. Once it’s properly configured and operational, it will join its peers to form the first complete “block” of third-generation GPS satellites. Over the next decade, as many as 22 revised GPS III satellites are slated to take their position over the Earth, eventually replacing all of the aging satellites that billions of people currently rely on.

So what new capabilities do these third-generation GPS satellites offer, and why has it taken so long to implement needed upgrades in such a critical system?

GPS Is Good, But Could Be Better

To understand the future of GPS, it’s helpful to look at its past. Developed by the United States military during the Cold War, what we now call GPS was originally known as Navigation System with Timing and Ranging (NAVSTAR). While the intent was always to allow civilian use of NAVSTAR, the equipment necessary to receive the signal and get a position was cumbersome and expensive.

There was little public interest in the system until Korean Air Lines Flight 007 was shot down in 1983 after mistakenly entering the Soviet Union’s airspace. With the lifesaving potential of NAVSTAR clearly evident, pressure started building on the industry to develop smaller and more affordable receivers — GPS as we know it was born.

NAVSTAR Satellite

That the development of such devices was possible in the first place was thanks to the design of NAVSTAR. Each satellite in the constellation broadcasts a timed radio signal that receivers on the ground use to compute their distance from the source. By comparing the signals from multiple satellites, a receiver can plot its position without the need for any local infrastructure. Since the process is entirely one-way, the signal can be freely used by any device that can receive and decode it.
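The ranging scheme described above can be sketched in two dimensions; a real receiver solves the same geometry in 3D and tracks a fourth satellite to cancel out its own clock error. The function and variable names here are illustrative, not NAVSTAR's actual algorithms:

```python
import math

def trilaterate_2d(anchors, ranges):
    """Recover (x, y) from three known anchor positions and measured
    distances -- the same idea a GPS receiver applies in 3D."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    # Subtracting the circle equations pairwise leaves a 2x2 linear system.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1  # solve by Cramer's rule
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Anchors stand in for satellites; the ranges would come from timing
# how long each satellite's signal took to arrive.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
truth = (3.0, 4.0)
ranges = [math.dist(truth, a) for a in anchors]
print(trilaterate_2d(anchors, ranges))  # ≈ (3.0, 4.0)
```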

But while this operational simplicity was key to the proliferation of cheap, ubiquitous GPS receivers, there’s certainly room for improvement given more modern technology. When NAVSTAR was designed, knowing where a receiver was located to within a few meters was more than sufficient, but today there’s a demand for greater accuracy from both civilian and military users. Given the essentially incalculable value of GPS to the global economy, improving reliability is also paramount. Not only have GPS jamming and spoofing become trivial, but even without the involvement of bad actors, legacy GPS struggles in urban environments.

Plans to deliver improved performance in these areas have been in the works for decades, with the United States Congress first authorizing the work on what would become GPS III all the way back in 2000. But when working on a system so critical that even a few minutes of downtime could put the entire planet into turmoil, such changes don’t come easy.

Can You Hear Me Now?

While modern GPS receivers are more sensitive than those in the past, there’s simply no getting over the fact that signals coming from a satellite more than 20,000 kilometers away will, by their very nature, be weak. So not only is it relatively easy for adverse environmental conditions to block or hinder the signal, it also doesn’t take much to override it with a local transmitter if somebody is looking to cause trouble.

As such, one of the key goals of the GPS III program was to deliver higher transmission power. This will lead to better reception for all GPS users across the board, but the new satellites also include some special modes that offer even greater performance.

In addition to the backwards compatible signals transmitted by GPS III satellites, there’s also a new “Safety of Life” signal. This signal is transmitted at a different frequency, 1176 MHz, and at a higher power, so compatible receivers should hear it come in at approximately 3 dB above the “classic” signal. It’s intended primarily for high-performance applications such as aviation, but as compatible receivers get cheaper, it will start to show up in more devices.

These improvements should be enough for civilian use, but the military has higher expectations and operates under more challenging conditions. In such cases, future GPS III satellites will come equipped with a high-gain directional antenna that can project a “spot beam” signal anywhere on Earth. For receivers located within the beam, which is estimated to be a few hundred kilometers in diameter, the received signal from the satellite will be boosted by up to 20 dB. In contested environments, this should make it far more resistant to jamming and spoofing.
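For a sense of scale, the decibel figures above convert to linear power ratios with a one-line formula; a minimal sketch:

```python
def db_to_power_ratio(db: float) -> float:
    # Decibels are a logarithmic power scale: ratio = 10 ** (dB / 10).
    return 10 ** (db / 10)

print(round(db_to_power_ratio(3), 2))  # 2.0 -- the Safety of Life signal's edge
print(db_to_power_ratio(20))           # 100.0 -- the spot beam's maximum boost
```

So "3 dB above" means roughly double the received power, while the military spot beam's 20 dB boost is a hundredfold increase.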

Speaking New Languages

The new signals being transmitted by GPS III satellites won’t just be louder than their predecessors, they’ll gain some new features as well.

For one thing, GPS III satellites will transmit a standardized signal known as L1C which offers interoperability with other global navigation systems such as Europe’s Galileo, China’s BeiDou, the Indian Regional Navigation Satellite System (IRNSS), and Japan’s Quasi-Zenith Satellite System. In theory a compatible receiver will be able to process signals from any combination of these systems simultaneously, improving overall performance.

The new satellites will also support the L2C signal. While this signal was technically available on earlier-generation satellites, it’s still not considered fully operational, and its adoption is expected to accelerate as more GPS III satellites come online. Compared with the legacy GPS protocol, L2C offers faster signal acquisition, better error correction, and a more capable packet format.

To make GPS III transmissions even more secure, the military is also getting its own signal, known as M-code. As you might expect, little is publicly known about M-code at this point, but it’s a safe bet that it utilizes encryption and other features to make it more difficult for adversaries to create spoofed transmissions. For what it’s worth, a recent press release from the US Space Force claims that the use of M-code makes the next-generation GPS satellites “three-times more accurate and eight times more resistant to jamming than the previous constellation.”
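M-code’s actual design is classified, so purely as an illustration of the general idea — authenticating navigation data so a receiver can reject forged messages — here is a sketch using an HMAC from Python’s standard library. The key, message format, and function names are all hypothetical:

```python
import hashlib
import hmac

# Hypothetical shared secret; in a real system, only the operator and
# authorized receivers would hold the key material.
KEY = b"not-the-real-key-material"

def sign_nav_message(message: bytes) -> bytes:
    """Attach an authentication tag to a navigation message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify_nav_message(message: bytes, tag: bytes) -> bool:
    """A receiver rejects any message whose tag doesn't check out."""
    return hmac.compare_digest(sign_nav_message(message), tag)

nav = b"svid=07 week=2400 tow=345600"  # made-up navigation payload
tag = sign_nav_message(nav)
print(verify_nav_message(nav, tag))              # True
print(verify_nav_message(b"spoofed data", tag))  # False
```

A spoofer without the key can still jam the signal, but can no longer feed a receiver plausible-looking fake positions.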

Testing Out New Toys

Although all ten GPS III satellites are now in orbit, that doesn’t mean the constellation is complete. Starting in 2027, a new fleet of revised satellites known as GPS IIIF will start launching. They will take the lessons learned from the initial GPS III deployment to create a smaller, lighter, and more efficient platform that should have a service life of at least 15 years.

Artist impression of a future GPS IIIF satellite.

They’ll also include new in-development equipment that wasn’t quite ready for deployment when the current GPS III satellites were being assembled. This includes optical reflectors that will allow ground stations to more accurately track the position of each satellite, laser data links that will allow high-speed communication between satellites, and an improved atomic clock known as the Digital Rubidium Atomic Frequency Standard (DRAFS).

Of course, the vast majority of the people who use GPS every day will never be aware of all the changes and improvements happening behind the scenes. When they get a new phone with a GPS III-compatible receiver, they may notice that their navigation app locks on a bit faster or that the position shown on the screen is a little closer to where they are actually standing, but only if they are particularly attentive. But that’s entirely by design — the most important aspect of implementing GPS III is making the whole process as invisible as possible.


Tech

Amazon Gets Exemption From Trump FCC Router (Extortion) Ban, Doesn’t Say How

from the cybersecurity-shakedown dept

Late last month we noted how the Trump FCC under Brendan Carr announced a new “ban” on all routers made overseas (which means pretty much all of them). At the time, we also noted how this was less of a ban and more of a shakedown, with router manufacturers required to beg the Trump FCC for conditional waivers (fees, favors, whatever) to continue doing business in the States.

Not long after, Netgear, which does a lot of work with the U.S. government, announced it had received an exemption from the Trump FCC, though neither Netgear nor the government transparently indicated what Netgear had to do to get the exemption. Pay a bribe? Host Brendan Carr for a game of golf? Install a surreptitious backdoor for CIA and ICE access? Nobody knows.

Now Amazon is the latest to get an exemption for both its Eero consumer routers and its Leo low Earth orbit (LEO) routers. Amazon showed up on the exemption list, but again there’s absolutely no indication of what the company had to actually do to get it, or the standards the Trump FCC is using to determine what hardware can be trusted. An Amazon announcement is painfully vague:

“We’re pleased to share that the U.S. government has recognized eero as a trusted and secure provider of routers.”

How did this happen? Does anybody trust the Trump administration to make this determination? Are there concerns about backdoors in exchange for being allowed to continue to do business? Nobody knows, though the FCC has indicated the ban has been expanded to include personal hotspots.

This would all likely be less alarming if the Trump administration wasn’t aggressively transactional, unethical, and authoritarian. Little to nothing Brendan Carr and Donald Trump do is genuinely for the public interest; and while this ban is being proposed as an act to protect national security, with their other hand they’ve taken countless steps to ensure consumers are less secure than ever.

That’s ranged from the firing of officials responsible for online election security and hack investigations, to the relentless “deregulation” (really, the elimination of corporate oversight) of a U.S. telecom sector that was just the target of one of the worst cybersecurity incidents in U.S. history (in large part because telecom executives failed to change default router admin passwords).

Most press coverage of this new router ban acts as if the Trump FCC is still a trusted actor when it comes to the public interest, but that’s a pretty broad assumption given all the dodgy, unethical, and illegal behavior we’ve seen from the agency and administration more generally.

I don’t think most U.S. journalism is journalism. It’s some weird simulacrum designed to not offend. Why would you not at least include one sentence or paragraph on how nothing about this is transparent? Or that the administration has a bad track record on ethics and transparency?

Similarly, no outlets have been inclined to mention that the Trump administration’s open corruption and mindless dismantling of corporate oversight and consumer protection have most certainly endangered national security and consumer cybersecurity and privacy in ways we’ve not yet begun to calculate. “You can trust us on this,” isn’t something anybody, especially media outlets, should be accepting as an answer.

Filed Under: backdoors, cybersecurity, extortion, fcc, hardware, hotspots, privacy, routers

Companies: amazon


Tech

Get Ready for More Brain-Scanning Consumer Gadgets

The next gadget you put on your head could scan your brain. Neurable, a Boston-based company that embeds its noninvasive brain-scanning technology into hardware to monitor a person’s focus levels, announced on Tuesday that it is transitioning to a licensing platform model. By certifying third parties, Neurable expects its tech to be in a “flood” of consumer gadgets this year and next.

Neurable has until now focused its efforts on a pair of consumer-grade headphones—made in partnership with audio brand Master & Dynamic. It also has a contract with the US Department of Defense to see how its technology can monitor blast overpressure and potentially help diagnose mild traumatic brain injuries in soldiers. With the licensing model, we could see more of Neurable’s tech in everyday head-based wearables.

The headphones use built-in electroencephalography (EEG) sensors to monitor brain waves. That information is sent to a companion app and lets wearers know when they need a “brain break,” nudging them to take a breather before they feel burnt out to maximize productivity. The app also lets users discover their cognitive readiness for the day, their brain age, and other metrics, such as mental recovery, cognitive strain, and anxiety resilience. WIRED staff writer Emily Mullin tested the original headphones in 2024, though she found it difficult to verify the accuracy of Neurable’s algorithms.
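Neurable hasn’t published its algorithms (as noted above, they’re hard to verify), but EEG metrics like these are commonly derived from the relative power in frequency bands such as alpha and beta. The following is a generic sketch using synthetic data, not Neurable’s method:

```python
import cmath
import math

fs = n = 256  # one second of synthetic "EEG" sampled at 256 Hz
# Synthetic signal: a strong 10 Hz (alpha-band) component plus a weaker
# 20 Hz (beta-band) component standing in for real brain activity.
signal = [2.0 * math.sin(2 * math.pi * 10 * i / fs)
          + 0.5 * math.sin(2 * math.pi * 20 * i / fs) for i in range(n)]

def band_power(sig, lo_hz, hi_hz):
    """Sum spectral power over a frequency band via a naive DFT.
    With fs == n, DFT bin k corresponds to exactly k Hz."""
    total = 0.0
    for k in range(lo_hz, hi_hz + 1):
        coeff = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(sig))
        total += abs(coeff) ** 2
    return total

alpha = band_power(signal, 8, 12)   # alpha band: relaxed wakefulness
beta = band_power(signal, 13, 30)   # beta band: active concentration
print(alpha > beta)  # True for this synthetic signal
```

A production pipeline would use an FFT with windowing and artifact rejection; the point is that “focus” scores are ultimately ratios of numbers like these.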

Now, HP-owned gaming brand HyperX is releasing a gaming headset with Neurable’s technology, and it’s all about improving human performance during esports play. The headphones are purported to help wearers ease into the right state of mind for the best performance. Ramses Alcaide, Neurable cofounder and CEO, tells WIRED that the company has published a white paper showing improved performance among gamers using Neurable’s tech, with reduced response times in first-person shooter games and a small increase in accuracy.

The improvements may sound minor, but milliseconds are precious in the fast-paced world of esports gaming. And Alcaide says it could translate similarly to other fields: It could help a student reduce anxiety before an exam, while athletes could condition their nerves ahead of a race or game. Neurable is hardware-agnostic; Alcaide says it can be embedded in headphones, smart glasses, hats, or helmets. “There’s a whole landscape of technology that touches your head that’s yet to be embedded with our platform,” he says.

He likens it to when Fitbit made the idea of a wrist-worn heart-rate tracker popular. In the beginning, no one knew how fitness wearables would be received, but now no one blinks an eye at one on a wrist. Soon, no one will think twice about brain-scanning tech in headphones—or, at least, that’s the idea. Neurable’s tech is “invisible” in these types of gadgets.

Companies licensing Neurable’s tech can integrate it into existing hardware, Alcaide says, and will control the entire experience from product design to the software experience; these products will be advertised as “Powered by Neurable AI.” The user data still flows to Neurable’s servers for processing, but Neurable sets the data privacy protections. User identifiers are separated from the data, and while partner companies host the user-facing layer, Neurable says it keeps control of the underlying system and data handling. Neurable has previously said its business model is not to sell user data.

“Any time there’s a new transition to technology, there’s always going to be some anxiety,” Alcaide says. “We’ve been very careful when it comes to that transition. We’re protecting the data, being as ethical as possible.”

Neurable is one of many brain-computer interface (BCI) companies in the growing category. Elemind uses EEGs to improve sleep quality, and Sabi wants to turn thoughts into text. Even Apple filed a patent for EEG-sensing AirPods, though they’re not yet available.


Tech

At Nvidia, compute already costs more than employees. The rest of corporate America is catching up


At Nvidia, that shift is already visible. “For my team, the cost of compute is far beyond the costs of the employees,” Bryan Catanzaro, vice president of applied deep learning at Nvidia, told Axios.

Tech

Towson Apple Store employee union strikes back, alleges unfair treatment after closure

Apple is claiming that the Towson Apple Store employee contract prevents guaranteed employment at other locations, and the union has filed an unfair labor practice charge alleging unlawful discrimination over the matter.

IAM Union logo on blue background, featuring a white gear with red and blue sections, central mechanical emblem, and bold white IAM UNION text to the right
IAM Union lobs Unfair Labor Practice charge at Apple after alleged discrimination against unionized Towson workers | Image Credit: IAM

On Monday, the International Association of Machinists and Aerospace Workers (IAM) Union officially filed an Unfair Labor Practice (ULP) charge against Apple. The charge, which has been submitted to the National Labor Relations Board (NLRB), alleges that the company unlawfully discriminated against unionized workers at its Towson, Maryland retail location.
Towson, Maryland was the first unionized Apple retail location in the United States. It was also one of three locations Apple would be closing permanently in June.

Tech

DJI Mic Mini 2 vs DJI Mic Mini: tiny upgrade, massive price cut, but there’s a Mini 2S on the horizon which will add a key feature


We rated the DJI Mic Mini as the best small wireless mic when it was launched in 2024, and it now has a successor in the shape of the Mic Mini 2. Both are 5-star products for content creators wanting an affordable, lightweight, and simple mic for better audio on the go.

If you already own a Mic Mini, there’s very little reason to upgrade to the Mic Mini 2 because performance is practically the same; both mics feature clear 24-bit audio, two-level noise reduction, a transmission range up to 400m, healthy battery life, and a lightweight 11g build.


Tech

‘Human lives are already being lost’: Open letter signed by hundreds of Google employees requests CEO reject ‘unethical and dangerous’ US military AI use


  • Google employees sign open letter to CEO over concerns of military AI use
  • AI developers do not want their technology used for ‘classified purposes’
  • Google is currently negotiating a contract with the Pentagon

Over 600 Google employees have signed a letter calling on CEO Sundar Pichai to reject any uses of its AI technology for military purposes.

The open letter highlights the serious ethical concerns the staff have, stating, “Human lives are already being lost and civil liberties put at risk at home and abroad from misuses of the technology we are playing a key role in building.”


Tech

15 best employers in S’pore to grow your career in 2026

On Apr 28, LinkedIn unveiled its 2026 Top Companies list, naming the 15 best places to work in Singapore.

The rankings are based on LinkedIn’s own data, with companies assessed on elements of career progression such as how well they help employees advance and build new skills.

Here are this year’s top companies to grow your career in Singapore, according to LinkedIn:

1. DBS Bank

Image Credit: DBS Bank

Claiming the top spot once again is DBS Bank, Southeast Asia’s largest bank. The financial giant is currently hiring for over 200 roles here, including:

You can view their job openings here.

2. Microsoft

Image Credit: Shutterstock.com

Microsoft is a technology company that develops software, hardware and cloud‑based services. Singapore serves as a key regional hub for its Asia‑Pacific operations, supporting customers across consumer, enterprise and public sector markets.

It is also the parent company of Activision Blizzard, GitHub, Skype, LinkedIn and others. LinkedIn and its employees are excluded from Microsoft’s score.

The company is looking for new hires for these positions:

Click here to view their full job list.

3. Goldman Sachs

Image Credit: Paulo Fridman

Goldman Sachs is a financial services firm that provides investment banking, asset management and financial advisory services. It has offices across Asia, including Singapore, serving corporations, governments and institutional investors in the region.

These are some of the jobs the firm is hiring for:

View Goldman Sachs’ full job list here.

4. Roche

Image Credit: SCA Design

Originally founded in Switzerland, Roche is a multinational healthcare company that focuses on research and development of medical solutions for major disease areas such as oncology, immunology, and neuroscience.

Some of its available positions include:

See its full job list here.

5. JPMorgan Chase

Image Credit: Anim Farm via Google

The fifth largest bank in the world, JPMorgan Chase & Company first opened in Singapore back in 1964 and has established itself as a global financial services firm across 17 markets in the Asia-Pacific region.

The firm is looking for fresh faces for these roles:

Here’s the bank’s full list of available roles.

6. HP

Image Credit: Travel_Adventure/ Shutterstock.com

A heavyweight in the global IT industry, HP is a technology company that manufactures a range of monitors, laptops and desktops. It also produces and offers services around printers and 3D printers.

The tech company is currently looking to fill these roles:

You can browse through HP’s full job listings here.

7. Standard Chartered

Image Credit: Standard Chartered

Another notable bank on the list, Standard Chartered offers banking services across 52 markets worldwide.

The bank’s on the lookout for people to fill these positions:

You can look at Standard Chartered’s full job list here.

8. MSD

Image Credit: MSD

Known as Merck in the United States and Canada, MSD is a pharmaceutical company that specialises in producing prescription medicines, vaccines and animal health products.

MSD is currently hiring for the following roles:

Browse through their full job list here.

9. Genting Berhad

Image Credit: Genting

Genting Berhad is a diversified company with businesses in leisure, hospitality, energy and plantations.

The group’s Singapore subsidiary, Genting Singapore Limited, has a significant presence in the city-state linked to its regional leisure and hospitality activities.

It is currently hiring for these roles in Singapore:

Click here to view their full job list.

10. Alphabet

Image Credit: Shutterstock

Alphabet is the parent company behind tech powerhouses, including Google and YouTube.

It is currently hiring for the positions below:

View Alphabet’s full job list here.

11. Barclays

Image Credit: Shutterstock.com

Barclays is a financial services company providing banking, lending, investment and wealth management services. It serves individuals, businesses and institutional clients through retail and corporate banking operations.

These are some of the roles it is hiring for in Singapore:

You can look at Barclays’ full job list here.

12. Apple

Image Credit: Shutterstock.com

The company behind the all-familiar iPhone, Apple, first opened its facility in Singapore in 1981 and has since grown its presence in the city-state with three outlets in Orchard, Marina Bay Sands and Jewel Changi.

Apple has close to 100 openings listed on LinkedIn as of writing, including:

View all of Apple’s job openings here.

13. Micron Technology

Image Credit: Micron Technology

Micron Technology is a semiconductor company that designs and manufactures memory and storage products. These components are used in computers, mobile devices, data centres and other electronic systems.

The firm is currently hiring for these positions:

Click here to view Micron Technology’s full job list.

14. Rockwell Automation

Image Credit: Shutterstock.com

Rockwell Automation is an industrial technology company that provides hardware, software and services for manufacturing and production operations. Its products help businesses automate processes and manage industrial systems.

In Singapore, it has 38 job openings, including:

View Rockwell Automation’s full job listing here.

15. Citi

Image Credit: Bloomberg

Citi operates as a full-service bank in Singapore. It provides individuals, corporations, governments, investors and institutions with a range of financial products and banking services.

The bank’s on the lookout for people to fill these positions:

You can view their job openings here.

Featured Image Credit: Shutterstock.com/ Micron Technology/ Standard Chartered/ Bloomberg

RAG precision tuning can quietly cut retrieval accuracy by 40%, putting agentic pipelines at risk

Enterprise teams that fine-tune their RAG embedding models for better precision may be unintentionally degrading the retrieval quality those pipelines depend on, according to new research from Redis.

The paper, “Training for Compositional Sensitivity Reduces Dense Retrieval Generalization,” tested what happens when teams train embedding models for compositional sensitivity: the ability to catch sentences that look nearly identical but mean something different — “the dog bit the man” versus “the man bit the dog,” or a negation flip that reverses a statement’s meaning entirely. That training consistently degraded dense retrieval generalization, the measure of how well a model retrieves correct results across broad topics and domains it wasn’t specifically trained on. Performance dropped by 8 to 9 percent on smaller models and by 40 percent on a current mid-size embedding model that teams are actively using in production.

The findings have direct implications for enterprise teams building agentic AI pipelines, where retrieval quality determines what context flows into an agent’s reasoning chain. A retrieval error in a single-stage pipeline returns a wrong answer. The same error in an agentic pipeline can trigger a cascade of wrong actions downstream.

Srijith Rajamohan, AI Research Leader at Redis and one of the paper’s authors, said the finding challenges a widespread assumption about how embedding-based retrieval actually works. 

“There’s this general notion that when you use semantic search or similar semantic similarity, we get correct intent. That’s not necessarily true,” Rajamohan told VentureBeat. “A close or high semantic similarity does not actually mean an exact intent.”

The geometry behind the retrieval tradeoff

Embedding models work by compressing an entire sentence into a single point in a high-dimensional space, then finding the closest points to a query at retrieval time. That works well for broad topical matching — documents about similar subjects end up near each other. The problem is that two sentences with nearly identical words but opposite meanings also end up near each other, because the model is working from word content rather than structure.
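The failure mode is easy to see with a deliberately crude stand-in. The toy sketch below uses a bag-of-words vector in place of a real dense encoder (an assumption made purely for illustration); real embedding models are far richer, but they share the same underlying bias toward word content over word order, which is why sentences with identical words and opposite meanings score as near-duplicates.

```python
from collections import Counter
from math import sqrt

def bow_embed(sentence: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. A real dense encoder
    # is much richer, but similarity is still driven largely by which
    # words appear, not by sentence structure.
    return Counter(sentence.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

s1 = "the dog bit the man"
s2 = "the man bit the dog"  # opposite meaning, identical word content

# Identical word counts produce identical vectors: similarity is 1.0,
# so nearest-neighbor retrieval cannot tell the two apart.
print(cosine(bow_embed(s1), bow_embed(s2)))
```

In this degenerate case the two sentences are literally the same point in the space; real encoders separate them only slightly, which is what the fine-tuning in the paper tries to fix.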

That is what the research quantified. When teams fine-tune an embedding model to push structurally different sentences apart — teaching it that a negation flip that reverses a statement’s meaning is not the same as the original — the model repurposes representational space it was previously using for broad topical recall. The two objectives compete for the same vector.

The research also found the regression is not uniform across failure types. Negation and spatial flip errors improved measurably with structured training. Binding errors — where a model confuses which modifier applies to which word, such as which party a contract obligation falls on — barely moved. For enterprise teams, that means the precision problem is harder to fix in exactly the cases where getting it wrong has the most consequences.

The reason most teams don’t catch it is that fine-tuning metrics measure the task being trained for, not what happens to general retrieval across unrelated topics. A model can show strong improvement on near-miss rejection during training while quietly regressing on the broader retrieval job it was hired to do. The regression only surfaces in production.

Rajamohan said the instinct most teams reach for — moving to a larger embedding model — does not address the underlying architecture.

“You can’t scale your way out of this,” he said. “It’s not a problem you can solve with more dimensions and more parameters.”

Why the standard alternatives all fall short

The natural instinct when retrieval precision fails is to layer on additional approaches. The research tested several of them and found each fails in a different way.

Hybrid search. Combining embedding-based retrieval with keyword search is already standard practice for closing precision gaps. But Rajamohan said keyword search cannot catch the failure mode this research identifies, because the problem is not missing words — it is misread structure.

“If you have a sentence like ‘Rome is closer than Paris’ and another that says ‘Paris is closer than Rome,’ and you do an embedding retrieval followed by a text search, you’re not going to be able to tell the difference,” he said. “The same words exist in both sentences.”
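Rajamohan’s point can be checked in one line: a keyword index matches on term presence, and both sentences contain exactly the same terms, so no amount of text-search weighting can separate them.

```python
q1 = "Rome is closer than Paris"
q2 = "Paris is closer than Rome"

# A keyword index sees only which terms appear. The two sentences have
# identical term sets, so a text search scores them the same even
# though they assert opposite things.
print(set(q1.lower().split()) == set(q2.lower().split()))
```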

MaxSim reranking. Some teams add a second scoring layer that compares individual query words against individual document words rather than relying on the single compressed vector. This approach, known as MaxSim or late interaction and used in systems like ColBERT, did improve relevance benchmark scores in the research. But it completely failed to reject structural near-misses, assigning them near-identity similarity scores. 

The problem is that relevance and identity are different objectives. MaxSim is optimized for the former and blind to the latter. A team that adds MaxSim and sees benchmark improvement may be solving a different problem than the one they have.

Cross-encoders. These work by feeding the query and candidate document into the model simultaneously, letting it compare every word against every word before making a decision. That full comparison is what makes them accurate — and what makes them too expensive to run at production scale. Rajamohan said his team investigated them. They work in the lab and break under real query volumes.

Contextual memory. Also sometimes referred to as agentic memory, these systems are increasingly cited as the path beyond RAG, but Rajamohan said moving to that type of architecture does not eliminate the structural retrieval problem. Those systems still depend on retrieval at query time, which means the same failure modes apply. The main difference is looser latency requirements, not a precision fix.

The two-stage fix the research validated

The common thread across every failed approach is the same: a single scoring mechanism trying to handle both recall and precision at once. The research validated a different architecture: stop trying to do both jobs with one vector, and assign each job to a dedicated stage.

Stage one: recall. The first stage works exactly as standard dense retrieval does today — the embedding model compresses documents into vectors and retrieves the closest matches to a query. Nothing changes here. The goal is to cast a wide net and bring back a set of strong candidates quickly. Speed and breadth are what matter at this stage, not perfect precision.

Stage two: precision. The second stage is where the fix lives. Rather than scoring candidates with a single similarity number, a small learned Transformer model examines the query and each candidate at the token level — comparing individual words against individual words to detect structural mismatches like negation flips or role reversals. This is the verification step the single-vector approach cannot perform.

The results. Under end-to-end training, the Transformer verifier outperformed every other approach the research tested on structural near-miss rejection. It was the only approach that reliably caught the failure modes the single-vector system missed.

The tradeoff. Adding a verification stage costs latency. The latency cost depends on how much verification a team runs. For precision-sensitive workloads like legal or accounting applications, full verification at every query is warranted. For general-purpose search, lighter verification may be sufficient. 
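The two stages can be sketched end to end. Everything below is a hypothetical miniature: the corpus, vectors, and query are hand-made, and the stage-two check is a toy negation-parity predicate standing in for the paper’s learned Transformer verifier. The shape of the pipeline, not the verifier itself, is the point: stage one ranks broadly by cosine similarity, stage two rejects the structural near-miss that stage one cannot see.

```python
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Hypothetical corpus of (text, vector) pairs. The vectors are hand-made
# stand-ins for real embeddings; note the negated "refunds" document
# sits almost on top of the affirmative one.
docs = [
    ("refunds are allowed after 30 days",     [0.90, 0.10]),
    ("refunds are not allowed after 30 days", [0.88, 0.12]),
    ("shipping takes five business days",     [0.10, 0.90]),
]

def recall_stage(query_vec, k=2):
    # Stage one: standard dense retrieval — rank every document by
    # cosine similarity and keep the top-k. Breadth over precision.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return ranked[:k]

def negation_verifier(query_text, doc_text):
    # Stage two stand-in: the paper's verifier is a small learned
    # Transformer comparing tokens; this toy check only requires the
    # query and document to agree on the presence of "not".
    return ("not" in query_text.split()) == ("not" in doc_text.split())

query_text = "are refunds allowed after 30 days"
query_vec = [0.90, 0.10]

candidates = recall_stage(query_vec)
verified = [text for text, _ in candidates
            if negation_verifier(query_text, text)]
print(verified)  # only the affirmative refund policy survives
```

Stage one returns both refund documents (they are near-neighbors in the space); stage two filters out the negated near-miss that a single similarity score would have passed through.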

The research grew out of a real production problem. Enterprise customers running semantic caching systems were getting fast but semantically incorrect responses back — the retrieval system was treating similar-sounding queries as identical even when their meaning differed. The two-stage architecture is Redis’s proposed fix, with incorporation into its LangCache product on the roadmap but not yet available to customers.

What this means for enterprise teams

The research does not require enterprise teams to rebuild their retrieval pipelines from scratch. But it does ask them to pressure-test assumptions most teams have never examined — about what their embedding models are actually doing, which metrics are worth trusting and where the real precision gaps live in production.

Recognize the tradeoff before tuning around it. Rajamohan said the first practical step is understanding the regression exists. He evaluates any LLM-based retrieval system on three criteria: correctness, completeness and usefulness. Correctness failures cascade directly into the other two, which means a retrieval system that scores well on relevance benchmarks but fails on structural near-misses is producing a false sense of production readiness.

RAG is not obsolete — but know what it can’t do. Rajamohan pushed back firmly on claims that RAG has been superseded. “That’s a massive oversimplification,” he said. “RAG is a very simple pipeline that can be productionized by almost anyone with very little lift.” The research does not argue against RAG as an architecture. It argues against assuming a single-stage RAG pipeline with a fine-tuned embedding model is production-ready for precision-sensitive workloads.

The fix is real but not free. For teams that do need higher precision, Rajamohan said the two-stage architecture is not a prohibitive implementation lift, but adding a verification stage costs latency. “It’s a mitigation problem,” he said. “Not something we can actually solve.”
