
Technology

Confluent platform update targets developer choice, security


Data streaming specialist Confluent on Tuesday unveiled its latest platform update, including new security capabilities and support for the Table API that makes the Apache Flink platform accessible to Java and Python developers.

The release, which includes generally available features as well as some in preview, closely follows Confluent’s Sept. 9 acquisition of WarpStream, another streaming data vendor.

Based in Mountain View, Calif., Confluent develops a streaming data platform built on Apache Kafka, an open source technology developed by Confluent co-founders Jay Kreps, Neha Narkhede and Jun Rao when they were working at LinkedIn. Kafka, which was first released in 2011, enables users to ingest and process data as it is produced in real time.

Using Kafka as a foundation, Confluent offers Confluent Cloud as a managed service and Confluent Platform for on-premises users.


Apache Flink, meanwhile, was launched in 2014 and is a processing framework for data streaming similar to Confluent’s proprietary platforms. Flink provides a compute layer that enables users to filter, combine and enrich data as it’s produced and processed to foster real-time analysis.

Confluent unveiled support for Flink in March, giving users the option of running it as a managed service rather than managing it themselves.

New capabilities

Just as support for Flink gave Confluent users more choice in building their streaming data infrastructure, support for the Table API — now in open preview — adds further choice to the Confluent platform while opening it to a new set of potential users.

When Confluent first provided customers with Flink as an option, it did so with a SQL API that enabled developers to build data streams using SQL code. However, not all developers know SQL, and even among those who do, it may not be their preferred way to write code.


The Table API, like the SQL API, is a tool that enables Flink users to develop pipelines by writing code. But rather than SQL, the Table API enables developers to use Java and Python.
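
For a rough sense of what that looks like in practice, here is a minimal sketch using the open source Apache Flink Table API in Python (PyFlink). It is an illustration only — the table contents and column names are hypothetical, and it omits Confluent-specific configuration:

    from pyflink.table import EnvironmentSettings, TableEnvironment
    from pyflink.table.expressions import col

    # Create a streaming TableEnvironment
    table_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

    # Hypothetical in-memory source; in production this would typically be a Kafka-backed table
    orders = table_env.from_elements(
        [(1, "widget", 12.50), (2, "gadget", 7.25)],
        ["order_id", "product", "amount"],
    )

    # Filter and project with Python expressions instead of SQL
    large_orders = orders.filter(col("amount") > 10.0).select(col("order_id"), col("amount"))
    large_orders.execute().print()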

Choice is important as developers create environments for data management and analytics. It not only enables enterprises to avoid vendor lock-in but also lets them use the tools that best fit their needs for a given task or that users know best and prefer. Therefore, Confluent’s addition of support for the Table API is a logical step for the vendor following its initial support for Flink, according to David Menninger, an analyst at ISG’s Ventana Research.


“It will be significant to developers that would prefer to write code rather than SQL statements,” he said. “In some cases, developers may not be very well versed in SQL. In some cases, it may just be a preference.”

Beyond support for the Table API, Confluent’s addition of new security features is important, according to Menninger.


Specifically, Confluent’s platform now offers private networking support for Flink so users of private networks rather than public clouds can take advantage of Flink’s capabilities. In addition, the platform now includes client-side field-level encryption, which enables customers to encrypt individual fields within data streams to ensure security and regulatory compliance.
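
To illustrate the general idea of field-level encryption, here is a conceptual sketch using Python’s cryptography package. It is not Confluent’s client-side encryption API, and the record fields and key handling are hypothetical:

    import json
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice the key would come from a managed key service
    cipher = Fernet(key)

    record = {"order_id": 42, "customer_email": "jane@example.com", "amount": 19.99}

    # Encrypt only the regulated field; consumers without the key can still
    # read the non-sensitive fields of the record.
    record["customer_email"] = cipher.encrypt(record["customer_email"].encode()).decode()

    payload = json.dumps(record).encode("utf-8")  # this payload would then be produced to a stream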

Data volume is growing at an exponential rate, and so is the complexity of data. To keep sensitive information private, many organizations maintain hybrid data storage environments, with less-regulated data stored in public clouds such as AWS and Azure and more heavily regulated data, such as personally identifiable information, kept on premises or in private clouds.

By enabling customers to use Flink in private networks, Confluent opens its streaming data capabilities to potential customers that previously may not have been able to use its platform due to security concerns.

Specific features of Confluent’s private networking support for Flink, which is generally available on AWS for Confluent Enterprise users, include:

  • Safeguards for in-transit data, including a private network to provide secure connections between private clouds and Flink.
  • Simple configuration that enables users without extensive networking expertise to set up private connections between their private data storage environments and Flink.
  • Flexible data stream processing of Kafka clusters within the secure environment so that private cloud users can benefit from the same speed and efficiency as other Confluent users.

“It may not be very sexy, but new security features including private networking and client-side field-level encryption will be welcomed additions,” Menninger said. “Enterprises have a heightened focus on governance, compliance and security. The lack of these capabilities may, in fact, have prevented certain organizations from using Flink previously.”

Confluent’s impetus for including support for the Table API and the new security features — along with an extension for the Visual Studio Code development platform — came from a combination of customer interactions and observation of market trends, according to Jean-Sébastien Brunner, Confluent’s director of product management.

Confluent maintains a feedback loop with its users and takes information gathered from that feedback into account when deciding what to add in any given platform update, he said.

In addition, the vendor pays close attention to industry trends to make sure its tools are consistent with those being offered by competing platforms such as Cloudera, Aiven and streaming data tools from tech giants such as AWS, Google Cloud and Microsoft.

Finally, with its roots in the open source community, a focal point for Confluent is making sure that technologies such as Kafka and Flink are accessible and easy to use.


“We look at several signals,” he said.

While Confluent’s platform update aims to meet customer needs and respond to industry trends, the vendor’s acquisition of WarpStream was designed to expand Confluent’s reach within an enterprise’s data stack by adding new applications for its platform, according to Kreps, Confluent’s CEO.

Confluent, which was founded in 2014, provides certain capabilities and is a good fit for certain companies. WarpStream provides different capabilities such as a bring-your-own-cloud (BYOC) architecture that enables users to deploy the streaming data platform in their own clouds rather than a vendor’s.

In a sense, BYOC is similar to Confluent’s private networking support for Flink. However, as a native architecture, it is a foundation rather than an add-on.


“Our goal is to make data streaming the central nervous system of every company,” Kreps said. “To do that we need to make it something that is a great fit for a vast array of use cases and companies. The big thing they did that got our attention was their next-generation approach to BYOC architectures.”

Once integrated, WarpStream’s BYOC capabilities should help Confluent accomplish its aim of providing customers with more deployment options, according to Menninger.

He noted that some vendors offer a managed cloud service or a self-managed option that can be run in the cloud. Other vendors that are more mature offer both. Both options have benefits and drawbacks. For example, managed cloud versions reduce management burdens but can be expensive. Self-managed versions can be less expensive but require more labor.

WarpStream provides a third choice.


“WarpStream offers an option in between,” Menninger said. “Enterprises can offload some of the management and administrative responsibilities, but not all of them.”

Diagram: How data streaming works.

Plans

As Confluent plots future platform updates, adding security and networking capabilities to ensure regulatory compliance remains a focus, according to Brunner. So is enabling customers to connect to external sources to better foster real-time analysis and insights.

“We remain focused on helping our customers get insights faster by making data accessible once it’s generated,” Brunner said.

Menninger, meanwhile, suggested that Confluent could further meet the needs of customers by enabling them to more easily combine streaming data with data at rest.


While streaming data is imperative for real-time decision-making, it can have broader applications when used together with data at rest. For example, as enterprises increasingly develop generative AI tools, streaming data could be used to keep models current.

However, despite potential real-world applications for streaming data and data at rest being used together, the two are too often kept separate, according to Menninger. Therefore, anything vendors such as Confluent can do to bring streaming data together with data at rest would be beneficial.

“The worlds of streaming data and data at rest are coming closer together, but they are still largely separate worlds that can be integrated or co-exist,” Menninger said. “I’d like to see Confluent and others create a more unified platform across both streaming data and data at rest.”

Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.



Technology

What is CrowdStrike? Everything You Need to Know


In this video, we delve into what CrowdStrike is, how its Falcon software works, and the recent update incident that impacted millions of Windows machines.


Science & Environment

Creature that washed up on New Zealand beach may be world’s rarest whale — a spade-toothed whale



Video: Endangered whale species finds home in waters off New York, New Jersey (02:20)

Wellington, New Zealand — Spade-toothed whales are the world’s rarest, with no live sightings ever recorded. No one knows how many there are, what they eat, or even where they live in the vast expanse of the southern Pacific Ocean. However, scientists in New Zealand may have finally caught a break.

The country’s conservation agency said Monday a creature that washed up on a South Island beach this month is believed to be a spade-toothed whale. The five-meter-long creature, a type of beaked whale, was identified from its color patterns and the shape of its skull, beak and teeth after it washed ashore on an Otago beach.

“We know very little, practically nothing” about the creatures, Hannah Hendriks, Marine Technical Advisor for the Department of Conservation, told The Associated Press. “This is going to lead to some amazing science and world-first information.”

In this photo provided by the Department of Conservation, rangers Jim Fyfe and Tūmai Cassidy walk alongside what’s believed to be a rare spade-toothed whale on July 5, 2024, after it was found washed ashore on a beach near Otago, New Zealand.
Department of Conservation / AP


If the cetacean is confirmed to be the elusive spade-toothed whale, it would be the first specimen found in a state that would permit scientists to dissect it, allowing them to map the whale’s relationship to the few other specimens of the species, learn what it eats and perhaps find clues about where the whales live.

Only six other spade-toothed whales have ever been pinpointed, and those found intact on New Zealand’s North Island beaches had been buried before DNA testing could verify their identification, Hendriks said, thwarting any chance to study them.

This time, the beached whale was quickly transported to cold storage and researchers will work with local Māori iwi (tribes) to plan how it will be examined, the conservation agency said.


New Zealand’s Indigenous people consider whales a taonga – a sacred treasure – of cultural significance. In April, Pacific Indigenous leaders signed a treaty recognizing whales as “legal persons,” although such a declaration is not reflected in the laws of participating nations.

Nothing is currently known about the whales’ habitat. The creatures deep-dive for food and likely surface so rarely that it has been impossible to narrow their location further than the southern Pacific Ocean, home to some of the world’s deepest ocean trenches, Hendriks said.

In this photo provided by the Department of Conservation, rangers inspect what’s believed to be a rare spade-toothed whale on July 5, 2024, after it was found washed ashore on a beach near Otago, New Zealand.
Department of Conservation / AP


“It’s very hard to do research on marine mammals if you don’t see them at sea,” she said. “It’s a bit of a needle in a haystack. You don’t know where to look.”

The conservation agency said the genetic testing to confirm the whale’s identification could take months.

It took “many years and a mammoth amount of effort by researchers and local people” to identify the “incredibly cryptic” mammals, Kirsten Young, a senior lecturer at the University of Exeter who has studied spade-toothed whales, said in emailed remarks.

The fresh discovery “makes me wonder – how many are out in the deep ocean and how do they live?” Young said.


The first spade-toothed whale bones were found in 1872 on New Zealand’s Pitt Island. Another discovery was made at an offshore island in the 1950s, and the bones of a third were found on Chile’s Robinson Crusoe Island in 1986. DNA sequencing in 2002 proved that all three specimens were of the same species – and that the species was distinct from other beaked whales.

Researchers studying the mammal couldn’t confirm whether the species had gone extinct. Then in 2010, two whole spade-toothed whales, both dead, washed up on a New Zealand beach. At first they were mistaken for one of New Zealand’s 13 other, more common types of beaked whale, but tissue samples – taken after the animals were buried – revealed them as the enigmatic species.

New Zealand is a whale-stranding hotspot, with more than 5,000 episodes recorded since 1840, according to the Department of Conservation.  


Technology

5 data governance framework examples


Organizations that treat their data as an asset have a data governance framework that matches their style, structure and culture.

Organizations of all sizes collect, store and process data from numerous sources, including customers, employees and business partners. Without proper oversight, they can miss valuable insights, violate privacy regulations and make decisions based on inaccurate or incomplete information. They might encounter other data issues such as inconsistencies, errors and duplications.

A data governance framework provides a structured, consistent approach to manage data assets and treat data as a valuable resource that supports business strategy. Data governance frameworks consist of policies, processes and standards that define how to collect, store, use and protect data throughout its lifecycle.

Establishing guidelines and best practices through a data governance framework helps organizations ensure regulatory compliance, improve data quality and develop transparent data management practices. A practical framework gives data users confidence in data reporting, analysis and planning. By demonstrating a commitment to responsible data management, organizations can build trust among their stakeholders, including customers, partners and regulators.


Pillars of a data governance framework

An effective data governance framework is built on four essential pillars: data quality, data security and privacy, data ownership and accountability, and data governance metrics.

Data quality

Data quality measures whether the data fits its intended purpose. High-quality data is accurate, complete, consistent and timely. It is the core of any data initiative.

A governance framework needs four essential procedures to establish and maintain good-quality data:

  1. Regular profiling to assess data and identify quality issues.
  2. Cleansing and validating to remove errors and inconsistencies.
  3. Defining data quality metrics to measure and monitor quality over time.
  4. Implementing data quality controls throughout the data lifecycle.
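
As a rough illustration of steps 1 and 3 above — profiling data and tracking quality metrics — here is a minimal Python sketch using pandas. The file name, columns and checks are hypothetical examples, not part of any standard framework:

    import pandas as pd

    df = pd.read_csv("customers.csv")  # hypothetical dataset

    quality_metrics = {
        "row_count": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        # Completeness: share of non-null values per column
        "completeness": (1 - df.isna().mean()).round(3).to_dict(),
    }

    # A simple rule-based validity check that could be monitored over time
    quality_metrics["email_validity_rate"] = round(
        float(df["email"].str.contains("@", na=False).mean()), 3
    )

    print(quality_metrics)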

Data security and privacy

Organizations are susceptible to a wide range of security issues, and IT teams might not have a clear picture of where their vulnerabilities lie. Organizations should ensure their data governance framework includes security and privacy measures to safeguard their data assets and comply with relevant regulations, such as GDPR and CCPA. A strong governance framework contains several elements to regulate data access and usage by members of the organization:

  • Implement strong access controls and authentication mechanisms.
  • Encrypt sensitive data at rest and as it moves around the network.
  • Monitor data access and usage to detect and prevent unauthorized activities.
  • Develop and enforce data privacy policies and procedures.
  • Conduct security audits and risk assessments to identify and address vulnerabilities.

Data ownership and accountability

Clear ownership and accountability ensure consistent and responsible data management throughout the data lifecycle. Well-defined data ownership and stewardship roles can prevent the data silos, inconsistencies and errors that arise from uncoordinated or unassigned data management. A framework should support key aspects of accountability:

  • Establish data owners and stewards responsible for disparate data domains or assets.
  • Define transparent processes for data access, modification and sharing.
  • Create a data governance council or committee to oversee data governance initiatives.
  • Promote a culture of data responsibility and collaboration across the organization.

Data governance metrics

Measuring the effectiveness of data governance initiatives enables continuous management and demonstrates the value of data governance to stakeholders. Organizations should define key performance indicators or metrics to track the progress of their data governance efforts:

  • Develop dashboards and reports to monitor data quality, security and usage.
  • Track the adoption and compliance with data governance policies and standards.
  • Measure the relationship of data governance to business objectives.

Data governance framework examples

Organizations can adopt various approaches when implementing a data governance framework depending on their specific needs, structure and culture. Organizations should consider five standard data governance framework models.

Center-out model

The center-out model establishes a centralized data governance body, such as a data governance council or committee, that defines and oversees governance policies and standards. The model balances enterprise-wide consistency with some flexibility for individual business units’ specific needs. It also promotes collaboration and communication between the central governance team and data stakeholders.

Diagram: Example of a center-out governance model.

The Data Governance Institute’s Data Governance Framework is a center-out model that emphasizes establishing a Data Governance Office and Board. The framework comprises 10 critical components that address the rules, people, organizational bodies and processes required for effective data governance. The components include defining the mission and vision, establishing goals and metrics, identifying data rules and definitions, assigning decision rights and accountabilities, and implementing controls. The framework also emphasizes the importance of data stakeholders and assigns data stewards to ensure the program’s success. It has hybrid flexibility for business units and data domains, but the Data Governance Office remains critical.

PwC, one of the world’s largest professional services networks, also recommends a centralized approach to meet compliance requirements and avoid business risks. However, PwC sees it as a step to help organizations better monetize their data assets. The PwC model emphasizes a centralized data governance program for consistency across business lines and to reduce the risk of data silos and missed opportunities.

Diagram: Example of a top-down governance model.

Top-down model

In the top-down model, executive leadership and senior management drive data governance. The approach ensures that governance initiatives align with the organization’s strategic goals and priorities. It also provides the necessary authority and resources to enforce policies and standards across the enterprise. However, it might require more support from business units and stakeholders that feel disconnected from the governance process.

The global management consulting firm McKinsey describes a framework for effective governance that focuses on creating business value and enabling digital and analytics initiatives. The framework’s key elements involve securing top management’s attention and buy-in for data governance and integrating data governance with primary business transformation themes. McKinsey’s model proposes a central data management office, but emphasizes that executive leadership reinforces a solid top-down process.

Hybrid model

The hybrid approach combines concepts from other models to create a customized strategy that suits the organization’s unique needs and structure. For example, an organization might establish a central data governance council to set enterprise-wide policies and standards while empowering business units to implement local governance practices aligned with the central framework. The approach offers the benefits of centralized control and decentralized flexibility.

The Eckerson Group’s Modern Data Governance Framework is a hybrid model. It combines centralized governance of methods, processes and technology with decentralized flexibility, emphasizing people, culture and the need to adapt to the specific requirements of different stakeholders. The framework stresses the importance of involving a wide range of people, from sponsors and owners to stewards, curators, coaches and consumers. The goal is to build a roadmap for governance as a living document that is revisited regularly to adapt to changes in priorities or needs.

Diagram: Example of a hybrid governance model.

Bottom-up model

The bottom-up model involves data stakeholders and subject matter experts from across the organization in the governance process. The model encourages collaboration and buy-in from people closest to the data, which ensures that governance policies and procedures are practical, relevant and effective.

Diagram: Example of a bottom-up governance model.

DAMA, a well-known community of data management professionals, developed the Data Management Body of Knowledge (DMBOK). The comprehensive model defines the standard functions and activities related to data management within an organization. Its functions cover the entire data lifecycle, including planning, architecture, development, operations and quality management.

The DAMA-DMBOK framework is flexible, but it’s commonly used as a bottom-up approach because DAMA members typically drive initiatives from the IT department. Over time, they gain executive support and a more formalized adoption as stakeholders can show their success.

Silo-in model

A silo-in model allows individual business units or departments to establish data governance practices and standards for their specific needs and requirements.

However, it can lead to inconsistency, duplication of effort and a lack of enterprise-wide coordination. The model suits organizations with highly independent business units or limited enterprise-wide data integration needs.

Few consulting firms or software vendors recommend the silo-in model because data silos are seen as problematic for governance. In practice, many data governance programs start tentatively in isolated business units that have problems to deal with or data that is especially valuable or vulnerable.


When organizations do not adopt a formal data governance framework, they often default to a silo-in approach due to practical considerations. Business units may recognize the need for governance within their domain and take the initiative to establish practices and standards that address their specific pain points or opportunities. The factors that drive the need for governance within the specific unit can include the following:

  • Regulatory compliance requirements that apply to specific business units or data domains.
  • The need to improve data quality and consistency within a particular business function.
  • The desire to use data assets for competitive advantage or innovation within a specific market or product line.

Localized efforts can deliver benefits within their limited scope, but they often fail to consider the broader enterprise context and can create challenges down the line. As the organization matures and seeks to use data assets holistically, siloed governance practices can become barriers to integration, interoperability and scale.

If a silo-in approach emerges as the starting point for data governance in an organization, it is essential to recognize its limitations and plan for a transition to a more enterprise-wide model over time:

  • Identify and prioritize data domains that cut across business units and require a more coordinated approach to governance.
  • Establish cross-functional data governance bodies and processes to align practices and standards across the organization.
  • Develop and communicate a shared vision and roadmap for data governance that balances local needs with enterprise goals.
  • Invest in data integration and master data management capabilities to break down silos and enable data sharing and collaboration.

Choosing a framework

Treating data as an asset involves giving it the same care and maintenance that the most valued corporate assets receive. Implementing a data governance framework requires more than technical effort. The cultural element around ownership and accountability requires consideration of an organization’s unique needs, structure and ethos.

The key to success for any of the five models is involving the necessary stakeholders, securing executive buy-in for more comprehensive programs and developing a continuous improvement process. A data governance framework should integrate with the organization’s overall business strategy, ensuring that data management aligns with goals and objectives.

Donald Farmer is principal of TreeHive Strategy and advises software vendors, enterprises and investors on data and advanced analytics strategies. He has worked on some of the leading data technologies in the market and previously led design and innovation teams at Microsoft and Qlik.



Science & Environment

Earth will get a second “mini-moon” for 2 months this year



Earth will get a second moon for about two months this year when a small asteroid begins to orbit our planet. The asteroid was discovered in August and is set to become a mini-moon, revolving around Earth in a horseshoe shape from Sept. 29 to Nov. 25.

Researchers at the Asteroid Terrestrial-impact Last Alert System, an asteroid monitoring system funded by NASA, spotted the asteroid using an instrument in Sutherland, South Africa and labeled it 2024 PT5. 

Scientists from the Universidad Complutense de Madrid have tracked the asteroid’s orbit for 21 days and determined its future path. 2024 PT5 is from the Arjuna asteroid belt, which orbits the sun, according to their study published in Research Notes of the AAS.


But Earth’s gravitational pull will draw 2024 PT5 towards it and, much like our moon, it will orbit our planet — but only for 56.6 days.

Other near-Earth objects, or NEOs, have entered Earth’s orbit before, though some don’t complete full revolutions of the planet. Those that do become so-called mini-moons.

An asteroid called 2020 CD3 was bound to Earth for several years before leaving the planet’s orbit in 2020 and another called 2022 NX1 became a mini-moon of Earth in 1981 and 2022 and will return again in 2051. 

2024 PT5, which is larger than some of the other mini-moons, will also return to Earth’s orbit — in 2055. 


Earth’s gravity will pull it into orbit, and the asteroid will have negative geocentric energy, meaning it can’t escape Earth’s gravitational pull. It will orbit Earth in a horseshoe shape before reverting to a heliocentric orbit, meaning it will once again revolve around the sun, like the other planets and NEOs in our solar system.
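
As background (this formula is not from the study as reported here), the standard two-body criterion for being gravitationally bound is a negative specific orbital energy: ε = v²/2 − μ/r < 0, where v is the asteroid’s speed relative to Earth, r is its distance from Earth’s center and μ is Earth’s gravitational parameter (G times Earth’s mass). While ε is negative the object is temporarily captured; once ε climbs back above zero, it returns to an orbit around the sun.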

Even after it leaves orbit, it will stay near Earth for a few months, making its closest approach on Jan. 9, 2025. Soon after, it will leave Earth’s neighborhood until its path puts it back into our orbit in about 30 years.

The study’s lead author Carlos de la Fuente Marcos told Space.com the mini-moon will be too small to see with amateur telescopes or binoculars but professional astronomers with stronger tools will be able to spot it.

CBS News has reached out to Marcos for further information and is awaiting a response.


Technology

Netflix teases the next seasons of Avatar, Squid Game and Arcane at Geeked Week


At its in-person fan event for Geeked Week this year, Netflix has shown teasers and sneak peeks of its upcoming shows, including the second season of Avatar: The Last Airbender. In addition to revealing that the new season is already in production, Netflix has also announced that Miya Cech (Are You Afraid of the Dark?) is playing earthbending master Toph.

A teaser for Squid Game season 2 shows Lee Jung-jae wearing his player 456 uniform again to compete in another round of deadly games with other contestants hoping to win millions of dollars. The next season of Squid Game will start streaming on December 26.

The streaming giant has also revealed that One Piece live action’s Mr. 0 and Miss All-Sunday will be portrayed by Joe Manganiello and Lera Abova, respectively. And for Wednesday fans, Netflix has released a teaser for the second season of Wednesday, which will arrive sometime in 2025.

For animation fans, Netflix has released a teaser for Tom Clancy’s Splinter Cell: Deathwatch, with Liev Schreiber voicing protagonist Sam Fisher. It has also given viewers a short look at a new Devil May Cry animated series by Korean company Studio Mir, which is coming in April 2025.


Netflix has teased a new Tomb Raider animated series that’s coming in October and a Rebel Moon game that’s arriving in 2025, as well. Finally, the company has given Arcane fans a clear schedule for the final chapter of the critically acclaimed show: Act 1 will be available to stream on November 9, followed by Act 2 on November 16. A third and final Act will close out the show with a proper ending on November 23.


Science & Environment

Climate change is making days longer, according to new research



Video: How melting Arctic glaciers contribute to rising sea levels (05:25)

Climate change is making days longer, as the melting of glaciers and polar ice sheets causes water to move closer to the equator, fattening the planet and slowing its rotation, according to a recent study.

Research published in the Proceedings of the National Academy of Sciences used both observations and reconstructions to track variations of mass at Earth’s surface since 1900.

Researchers found that over the 20th century, climate-induced changes added between 0.3 milliseconds and 1 millisecond per century to the length of a day. Since 2000, they found, that rate has accelerated to 1.3 milliseconds per century.

“We can see our impact as humans on the whole Earth system, not just locally, like the rise in temperature, but really fundamentally, altering how it moves in space and rotates,” Benedikt Soja of ETH Zurich in Switzerland told Britain’s Guardian newspaper. “Due to our carbon emissions, we have done this in just 100 or 200 years, whereas the governing processes previously had been going on for billions of years. And that is striking.”


Researchers said that, under high greenhouse gas emission scenarios, the climate-induced increase in the length of a day will continue to grow and could reach a rate twice as large as the present one. This could have implications for a number of technologies humans rely on, like navigation.

“All the data centers that run the internet, communications and financial transactions, they are based on precise timing,” Soja said. “We also need a precise knowledge of time for navigation, and particularly for satellites and spacecraft.”

