Technology

Exploding interest in GenAI makes AI governance a necessity


Enterprises need to heed a warning: Ignore AI governance at your own risk.

AI governance is essentially a set of policies and standards designed to mitigate the risks associated with using AI — including generative AI — to inform business decisions and automate processes previously carried out by human beings.

It's now needed, and cannot be ignored, because enterprise interest in generative AI is exploding, which has spurred more interest in traditional AI as well.

Historically, AI and machine learning models and applications were developed and used mostly by small data science teams and other experts within organizations, never leaving their narrow purview. The tools were used for forecasting, scenario planning and other types of predictive analytics, as well as to automate certain repetitive processes also overseen by small groups of experts.

Now, however, sparked by OpenAI’s November 2022 launch of ChatGPT, which represented significant improvement in large language model capabilities, enterprises want to extend their use of AI tools to more employees to drive more rapid growth. LLMs such as ChatGPT and Google Gemini enable true natural language processing that was previously impossible.

When combined with an enterprise’s data, true NLP lets any employee with an internet connection and the requisite clearance query and analyze data in ways that previously required expert knowledge, including coding skills and data literacy training. In addition, when applied to enterprise data, generative AI technology can be trained to relieve experts of repetitive tasks, including coding, documentation and even data pipeline development, thus making developers and data scientists more efficient.

That combination of enabling more employees to make data-driven decisions and improving the efficiency of experts can result in significant growth.

If done properly.

If not, enterprises risk serious consequences, including decisions based on bad AI outputs, data leaks, legal noncompliance, customer dissatisfaction and lack of accountability, all of which could lead to financial losses.

Therefore, as interest in expanding the use of both generative AI and traditional AI transitions to more actual use of AI, and as more employees with less expertise get access to data and AI tools, enterprises need to ensure their AI tools are governed.

Some organizations have heeded the warning, according to Kevin Petrie, an analyst at BARC U.S.

“It is increasing,” he said. “Security and governance are among the top concerns related to AI — especially GenAI — so the demand for AI governance continues to rise.”

However, according to a survey Petrie conducted in 2023, only 25% of respondents said their organization has the proper AI governance controls to support AI and machine learning initiatives, while nearly half said their organization lacks the proper governance controls.

Diby Malakar, vice president of product management at data catalog specialist Alation, similarly said he has noticed a growing emphasis on AI governance.

Alation customers, like those of most data management and analytics vendors, have expressed interest in developing and deploying generative AI-driven tools. And as they build and implement conversational assistants, code translation tools and automated processes, they are concerned with ensuring proper use of the tools.

“In every customer call, they are saying they’re doing more with GenAI, or at least thinking about it,” Malakar said. “And one of the first few things they talk about is how to govern those assets — assets as in AI models, feature stores and anything that could be used as input into the AI or the machine learning lifecycle.”

Governance, however, is hard. Data governance has been a challenge for enterprises for years. Now, AI governance is taking its place alongside data governance as a requirement as well as a challenge.

A graphic displays the components of an AI governance framework.

Surging need

Data has long been a driver of business decisions.

For decades, however, data stewardship and analysis were the domain of small teams within organizations. Data was kept on premises, and even high-level executives had to request that IT personnel develop charts, graphs, reports, dashboards and other data assets before they could use them to inform decisions.

The process of requesting information, developing an asset to analyze the information and reaching a decision was lengthy, taking at a minimum a few days and — depending on how many requests were made and the size of the data team — even months. With data so controlled, there was little need for data governance.

Then, a bit less than 20 years ago, self-service analytics began to emerge. Vendors such as Tableau and Qlik developed visualization-based platforms that enabled business users to view and analyze data on their own, with proper training.

With data no longer the sole domain of trained experts, and business users empowered to take action on their own, organizations needed guidelines.

And with data in the hands of more people within an enterprise — still only about a quarter of all employees, but more than before — more oversight was needed. Otherwise, organizations risked noncompliance with government regulations and data breaches that could reveal sensitive information or cost a company its competitive advantage.

A similar circumstance is now taking place with AI — albeit at a much faster rate, given all that has happened in less than two years — that necessitates AI governance.

Just as data was once largely inaccessible, so was AI. And just as self-service analytics enabled more people within organizations to use data, necessitating data governance, generative AI is enabling more people within organizations to use AI, necessitating AI governance.

Donald Farmer, founder and principal of TreeHive Strategy, noted a similarity between the rising need for AI governance and events that necessitated data governance.

“That is a parallel,” he said. “It’s a reasonable one.”

However, what is happening with AI is taking place much more quickly and on a much larger scale than what happened with self-service analytics, Farmer continued.

AI has the potential to completely alter how businesses conduct themselves, if properly governed. Farmer compared what AI can do for today’s enterprises to what electricity did for businesses at the turn of the 20th century. At the time, widespread electrical use was dangerous. In response, organizations employed what were then known as CEOs — chief electricity officers — who oversaw the use of electricity and made sure safety was maintained.

“This is a very fundamental shift that we’re just seeing the start of,” Farmer said. “It’s almost as fundamental as [electricity] — everything you do is going to be affected by AI. The comparison with self-service analytics is accurate, but it’s even more fundamental than that.”

Alation’s Malakar similarly noted parallels to be drawn between self-service analytics and the surging interest in AI. Both are rooted in less-technical employees wanting to use technology to help make decisions and take action.

“What we see is that the business analyst who doesn’t know coding wants less and less reliance on IT,” Malakar said. “They want to be empowered to make decisions that are data-related.”

First, that was enabled to some degree by self-service analytics. Now, it can be enabled to a much larger degree by generative AI. Every enterprise has questions such as how to reduce expenses, predict churn or implement the most effective marketing campaign. AI can provide the answers.

And with generative AI, it can provide the answers to virtually any employee.

“They’re all AI/ML questions that were not being asked to the same degree 10 years ago,” Malakar said. “So now all these things like privacy, security, explainability [and] accountability become very important — a lot more important than it was in the world of pure data governance.”

Elements of AI governance

At its core, AI governance is a lot like — and connected to — data governance.

Data governance frameworks are documented sets of guidelines to ensure the proper use of data, including policies related to data privacy, quality and security. In addition, data governance includes access controls that limit who can do what with their organization’s data.

AI governance is linked to data governance and essentially builds on it, according to Petrie.

AI governance applies the same standards as data governance — practices and policies designed to ensure the proper use of AI tools and accuracy of AI models and applications. But without good data governance as a foundation, AI governance loses significance.

Before AI models and applications can be effective and used to inform decisions and automate processes, they need to be trained using good, accurate data.
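That training-data requirement can be enforced mechanically. Below is a minimal sketch, assuming a simple record-based dataset, of a quality gate that blocks training when required fields are too sparse; the field names and threshold are invented for illustration.

```python
# Hypothetical sketch: a minimal data-quality gate that training data must
# pass before it feeds an AI model. Field names and the null-ratio
# threshold are invented for illustration.

def passes_quality_gate(records, required_fields, max_null_ratio=0.05):
    """Return True if the records are complete enough to train on."""
    if not records:
        return False
    for field in required_fields:
        nulls = sum(1 for r in records if r.get(field) is None)
        if nulls / len(records) > max_null_ratio:
            return False  # too many missing values in this field
    return True

records = [
    {"customer_id": 1, "churned": False},
    {"customer_id": 2, "churned": True},
    {"customer_id": 3, "churned": None},
]
# One of three "churned" values is missing, far above the 5% threshold.
print(passes_quality_gate(records, ["customer_id", "churned"]))  # False
```

A real governance framework would layer schema checks, freshness checks and lineage on top of this, but the principle is the same: data that fails the gate never reaches the model.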

“[Data governance and AI governance] are inextricably linked and overlap quite a bit,” Petrie said. “All the risks of AI have predecessors when it comes to data governance. You should view data governance as the essential foundation of AI governance.”

Most enterprises do have data governance frameworks, he continued. But the same cannot be said for AI governance, as Petrie’s 2023 survey demonstrated.

“That signals a real problem,” he said.

The problem could be one that puts an organization at a competitive disadvantage: It isn't ready to develop and deploy AI models and applications and reap their benefits while competitors are doing so. Potentially more damaging, however, is an enterprise that is developing and deploying AI tools but isn't properly managing how they're used. Rather than simply holding back growth, that could lead to negative consequences.

But AI governance is about more than just protection from potential problems. It’s also about enabling confident use of AI tools, according to Farmer.

Good data governance frameworks strike a balance between putting limits on data use aimed to protect the enterprise from problems and supporting business users so that they can work with data without fearing that they’re going to unintentionally put their organization in a precarious position.

Good AI governance frameworks need to strike that same balance so that someone asking a question of an AI assistant isn’t afraid that the response they get and subsequent action they take will have a negative effect. Instead, that user needs to feel empowered.

“People are beginning to come around to the idea that a well-governed system gives people more confidence in being able to apply it at scale,” Farmer said. “Good governance isn’t a restricting function. It should be an enabling function. If it’s well governed, you give people freedom.”

Specific elements of a good framework for AI governance combine belief in the need for a system to manage the use of AI, guidelines that properly enforce the system and technological tools that assist in its execution, according to Petrie.

“AI governance is defining and enforcing policies, standards and rules to mitigate risks related to AI,” he said. “To do that, you need people and process and technology.”

The people aspect starts with executive support for an AI governance program led by someone such as a chief data officer or chief data and analytics officer. Also involved in developing and implementing the framework are those in supporting roles, such as data scientists, data architects and data engineers.

The process aspect is the AI governance framework itself — the policies that address security, privacy, accuracy and accountability.

The technology is the infrastructure. Among other tools, it includes data and machine learning observability platforms that look at data quality and pipeline performance, catalogs that organize data and include governance capabilities, master data management capabilities to ensure consistency, and machine learning lifecycle management platforms to train and monitor models and applications.

Together, the elements of AI governance should lead to confidence, according to Malakar. They should lead to conviction in the outputs of AI models and applications so that end users can confidently act. They should also lead to faith that the organization is protected from misuse.

“AI governance is about being able to use AI applications and foster an environment of trust and integrity in the use of those AI applications,” Malakar said. “It’s best practices and accountability. Not every company will be good at each one of [the aspects of AI governance], but if they at least keep those principles in mind, it will lead to better leverage of AI.”

Benefits and consequences

Confidence is perhaps the most significant outcome of a good AI governance framework, according to Farmer.

When the data used to feed and train AI models can be trusted, so can the outputs. And when the outputs can be trusted, users can take the actions that lead to growth. Similarly, when the processes automated and overseen by AI tools can be trusted, data scientists, engineers and other experts can use the time they’ve been given by being relieved of mundane tasks to take on new ones that likewise lead to growth.

“The benefit is confidence,” Farmer said. “There’s confidence to do more with it when you’re well governed.”

More tangibly, good AI governance leads to regulatory compliance and avoiding the financial and reputational harm that comes with regulatory violations, according to Petrie. Europe, in particular, has stringent regulations related to AI, and the U.S. is similarly expected to increase regulatory restrictions on exactly what AI developers and deployers can and cannot do.

Beyond regulatory compliance, good AI governance results in good customer relationships, Petrie continued. AI models and applications can provide enterprises with hyperpersonalized information about customers, efficiently enabling personalized shopping experiences and cross-selling opportunities that can increase profits.

“Those benefits are significant,” Petrie said. “[But] if you’re going to take something to customers — GenAI in particular — you better make sure you’re getting it right, because you’re playing with your revenue stream.”

If enterprises get generative AI — or traditional AI, for that matter — wrong, i.e., if the governance framework controlling how AI models and applications are developed and deployed is poor, the consequences can be severe.

“All sorts of bad things can happen,” Petrie said.

Some of them are the polar opposite of what can happen when an organization has a good AI governance framework. Instead of regulatory compliance, organizations can wind up with inquisitive regulators, and instead of strong customer relationships, they can wind up with poor ones.

But those are the end results.

First, what leads to inquisitive regulators and poor customer relationships, among other things, includes poor accuracy, biased outputs and mishandling of intellectual property.

“If those risks are not properly controlled and mitigated, you can wind up with … regulatory penalties or costs related to compliance, angry or alienated customers, and you can end up with operational processes that hit bottlenecks because the intended efficiency benefits of AI will not be delivered,” Petrie said.

A lack of data security, explainability and accountability is another result of poor AI governance, according to Malakar.

Without the combination of good data governance and AI governance, there can be security breaches as well as improperly prepared data — personally identifiable information that hasn’t been anonymized, for example — that seeps into models and gets exposed. In addition, without good governance, it can be difficult to explain and fix bad outputs in a timely fashion or know whom to hold accountable.
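The anonymization concern above is typically addressed by scrubbing data before it reaches a model. Here is a minimal sketch, assuming email addresses and US-style Social Security numbers are the only patterns of interest; real pipelines need far broader coverage (names, addresses, account numbers and so on).

```python
import re

# Hypothetical sketch: redact obvious personally identifiable information
# from free text before it is used to train or ground a model. Only two
# patterns are covered here for illustration.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_pii(text):
    """Replace emails and US-style SSNs with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = SSN.sub("[SSN]", text)
    return text

print(redact_pii("Contact jane.doe@example.com, SSN 123-45-6789."))
# Contact [EMAIL], SSN [SSN].
```

Production systems usually combine pattern matching like this with named-entity recognition, since regexes alone miss most PII.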

“You don’t want to build a model where it can’t be trusted,” Malakar said. “That’s a risk to the entire culture of the company and can drive morale issues.”

Ultimately, just as good AI governance engenders confidence, bad AI governance leads to a lack of confidence, according to Farmer.

If one competing company trusts its AI models and applications and another doesn’t, the one that can act with confidence will reap the benefits, while the other will be stuck in place and miss out on the growth opportunities enabled by generative AI’s significant potential.

“Given that the shift is so fundamental, not being well governed is really going to hold you back,” Farmer said. “Governance is the difference between the ability to move swiftly and with confidence, and being held back and taking dangerous risks.”

Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.

Technology

The true cost of your subscriptions


Nearly everything we consume today comes through subscriptions. From streaming our favorite shows and movies to listening to music, accessing fitness content, or even keeping up with the latest news, subscriptions have become a way of life. But with this convenience comes an overwhelming sense of commitment, both financially and mentally.

Let’s explore how much we’re truly spending on subscriptions and how we can combat the rising issue of subscription fatigue.

The Rise of Streaming and Subscription Services

In recent years, streaming services like Netflix, Hulu, Disney+, and Amazon Prime have become essential parts of our daily entertainment. Gone are the days of waiting for your favorite show to air weekly or heading to the movie theater for the latest blockbuster. Now, with just a few clicks, we can binge-watch entire seasons, access exclusive content, and stream music and podcasts—all on demand.

The average person consumes hours of entertainment each week. More specifically, U.S. adults spend more than four hours a day watching TV or streaming content, and that number is increasing as more content becomes available across multiple platforms. But as our media consumption grows, so does the number of subscriptions we're managing and paying for.

How Much Are People Spending on Subscriptions?

Subscription costs vary across the globe, but one thing is clear: many of us are spending more than we realize. According to a survey by ExpressVPN, people in different countries are spending significant amounts monthly on digital subscriptions:

– France: 23% of people spend between 11-20 EUR (12-22 USD) monthly on digital subscriptions, while a quarter spends 21-40 EUR (23-44 USD).
– Germany: 27% of respondents also spend between 21-40 EUR monthly.
– UK: Nearly 30% of Brits spend between 21-40 GBP (16-30 USD) every month on subscriptions.
– U.S.: In the U.S., 26% of respondents are spending between 21-40 USD each month, with a notable percentage exceeding 100 USD monthly to keep up with multiple subscriptions.

These numbers may seem manageable at first glance, but when combined with other household expenses, the monthly costs of these services can quickly add up. A few dollars for a music streaming service, a few more for fitness apps, and suddenly, you’re spending hundreds of dollars each year on digital content.
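A quick back-of-the-envelope calculation shows how that happens. The services and prices below are invented examples, not figures from the survey:

```python
# Hypothetical monthly subscription stack; prices are invented examples.
subscriptions = {
    "video streaming": 15.49,
    "music": 10.99,
    "fitness app": 9.99,
    "news": 6.99,
    "cloud storage": 2.99,
}

monthly = sum(subscriptions.values())
print(f"Monthly: ${monthly:.2f}, yearly: ${monthly * 12:.2f}")
# Monthly: $46.45, yearly: $557.40
```

Five modest fees quietly become more than $500 a year, which is exactly the drift the survey numbers describe.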

What is Subscription Fatigue?

The convenience of subscription services comes with a hidden cost: the mental strain of managing them all. This growing issue is known as subscription fatigue—the feeling of being overwhelmed by the number of services you’ve signed up for but no longer fully utilize.

Nearly 40% of ExpressVPN’s survey respondents feel burdened by the sheer number of subscriptions they manage. It’s not just the financial cost that causes stress; it’s also the mental load of juggling multiple accounts, remembering passwords, and keeping track of renewal dates.

Subscription fatigue often creeps up slowly. You start with one or two services, but before you know it, you’re subscribed to a long list of platforms that you barely use. Many people hold onto subscriptions, even if they aren’t making full use of them, because of the fear of missing out or not wanting to lose access to content they might enjoy later. But this can lead to a significant drain on both finances and mental energy.

How to Combat Subscription Fatigue

If you find yourself overwhelmed by the number of subscriptions you’re juggling, you’re not alone. However, there are effective strategies to manage the load and regain control of your digital life.

1. Consolidate Your Subscriptions

Instead of subscribing to multiple standalone services, look for bundled options. Some platforms offer comprehensive packages that combine music, video, and other content into a single subscription. For example, Amazon Prime offers streaming video and music and even free shipping on products. Consolidating your services can reduce both the mental burden and the total cost.

2. Use a Subscription Management App

Managing your subscriptions doesn’t have to be a headache. There are several apps, such as Rocket Money (formerly Truebill), that allow you to track all your subscriptions in one place. These platforms provide an overview of what you’re paying for each month, and some even allow you to cancel subscriptions with just a few clicks.

3. Prioritize Quality Over Quantity

It’s easy to accumulate subscriptions, but it’s important to regularly assess which services offer the most value. Instead of spreading your budget across a dozen platforms, focus on a few that you use most frequently and that provide comprehensive content.

4. Set Reminders for Renewal Dates

One of the most frustrating aspects of subscriptions is unexpected charges when a service auto-renews. Set reminders in your phone or calendar for each subscription’s renewal date. This will allow you to assess whether you’re still using the service and decide if you want to keep it before the next billing cycle.
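For anyone comfortable with a little scripting, the reminder arithmetic is trivial. A minimal sketch, assuming a three-day lead time:

```python
from datetime import date, timedelta

# Hypothetical sketch: compute a reminder date a few days before each
# subscription renews, leaving time to cancel before the charge lands.

def reminder_date(renewal: date, lead_days: int = 3) -> date:
    """Return the date to be reminded, lead_days before renewal."""
    return renewal - timedelta(days=lead_days)

print(reminder_date(date(2024, 10, 15)))  # 2024-10-12
```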

5. Leverage Family or Group Plans

Many streaming services and apps offer family plans that provide access for multiple users under a single account. Family plans typically offer better value, allowing everyone in your household to enjoy content without paying for individual subscriptions. It’s a great way to reduce costs and streamline your subscriptions.

6. Evaluate Subscriptions Regularly

Take time to periodically review your subscriptions. Are you still using all of them? If not, it might be time to cancel or downgrade services that no longer fit your needs. Many platforms allow you to pause subscriptions, giving you time to decide if you really need the service before fully canceling it.

Take Control of Your Subscriptions

Subscription services have become a regular part of modern life, but managing them doesn’t have to be overwhelming. By consolidating services, using management apps, and regularly evaluating their value, you can reduce both financial and mental stress. If you’re feeling the weight of subscription fatigue, now is the time to take action.

Take control of your subscriptions today and enjoy the content and services that truly add value to your life without the added stress.

Technology

Is Google Dominating AI After Google I/O?

Stay up-to-date with the latest tech news as we dive into the top 5 must-know AI Google I/O 2024 announcements that you simply can’t miss!

Technology

Understand Microsoft Copilot security concerns

Microsoft Copilot can improve end-user productivity, but it also has the potential to create security and data privacy issues.

Copilot streamlines workflows in Microsoft 365 applications. By accessing company data, it can automate repetitive tasks, generate new content and ideas, summarize reports and improve communication.

Productivity benefits depend on the data Copilot can access. But security and data privacy issues can arise if Copilot uses data that it shouldn’t have access to. Understanding and mitigating various Copilot security concerns requires a high-level understanding of how Copilot for Microsoft 365 works.

How Copilot accesses company data

Like other AI chatbots, such as ChatGPT, users interact with Copilot via prompts. The prompt is displayed within Microsoft Office applications, such as Microsoft Word or Excel, or within the Microsoft 365 Web portal.

When a user enters a request into the prompt, Copilot uses a technique called grounding to improve the quality of the response it generates. The grounding process expands the user’s prompt — though this expansion is not visible to the end user — based on Microsoft Graph and Microsoft Semantic Index. These components rewrite the user’s prompt to include key words and data references that are most likely to generate the best results.

After modifying the prompt, Copilot sends it to a large language model. LLMs use natural language processing to interpret the modified prompt and enable Copilot to converse in written natural language with the user.
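The grounding flow described above can be pictured as a prompt-expansion step. The sketch below is a loose illustration of that idea, not Microsoft's actual Graph or Semantic Index implementation; the keyword index and document references are invented.

```python
# Illustrative sketch of "grounding": expand a user's prompt with
# context references before sending it to an LLM. The index maps
# invented keywords to invented document references.

def ground_prompt(user_prompt, context_index):
    """Append references whose keywords appear in the prompt."""
    refs = [doc for kw, doc in context_index.items()
            if kw in user_prompt.lower()]
    expanded = user_prompt
    if refs:
        expanded += "\n\nRelevant context:\n" + "\n".join(refs)
    return expanded

index = {"q3 report": "doc://finance/q3-summary.docx",
         "churn": "doc://analytics/churn-model.xlsx"}
print(ground_prompt("Summarize the Q3 report", index))
```

As in Copilot's described behavior, the expansion happens before the LLM call and is never shown to the end user.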

Screenshot of Copilot open in Microsoft Word.
Users can open Microsoft Copilot in other Microsoft applications, such as Word, and interact with it via prompts.

The LLM formulates a response to the end user’s prompt based on the available data. Data can include internet data, if organization policies allow Copilot to use it. The response usually pulls from Microsoft 365 data. For example, a user can ask Copilot to summarize the document they currently have open. The LLM can formulate a response based on that document. If the user asks a more complex question that is not specific to one document, Copilot will likely pull data from multiple documents.

The LLM respects any data access controls the organization currently has in place. If a user does not have access to a particular document, Copilot should not reference that document when formulating a response.
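That access-control behavior amounts to filtering candidate documents against the requesting user's permissions before they can inform a response. A minimal sketch, with a hypothetical ACL structure standing in for a real directory:

```python
# Sketch of permission-aware retrieval: only documents the requesting
# user can read may feed the response. The ACL is an invented stand-in
# for a real identity and access system.

def visible_documents(user, documents, acl):
    """Return only the documents the user is allowed to read."""
    return [d for d in documents if user in acl.get(d, set())]

acl = {"merger-plan.docx": {"ceo"}, "handbook.docx": {"ceo", "analyst"}}
docs = ["merger-plan.docx", "handbook.docx"]
print(visible_documents("analyst", docs, acl))  # ['handbook.docx']
```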

Before the LLM sends a response to the user, Copilot performs post processing checks to review security, privacy and compliance. Depending on the outcome, the LLM either displays the response to the user or regenerates. The response is only displayed when it adheres to security, privacy and compliance requirements.
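That post-processing step can be sketched as a check-and-regenerate loop. The check functions and retry limit below are invented for illustration; they stand in for whatever security, privacy and compliance reviews an organization requires.

```python
# Sketch of post-processing: release a response only if it passes all
# compliance checks; otherwise regenerate, and eventually give up rather
# than release a non-compliant answer. Checks and limit are invented.

def release_response(generate, checks, max_attempts=3):
    for _ in range(max_attempts):
        response = generate()
        if all(check(response) for check in checks):
            return response
    return None  # withhold rather than release a failing response

no_ssn = lambda r: "SSN" not in r
responses = iter(["Here is an SSN: ...", "Here is the summary."])
print(release_response(lambda: next(responses), [no_ssn]))
# Here is the summary.
```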

How Copilot threatens data privacy and security

Copilot can create data security or privacy concerns despite current safeguards.

The first potential issue is users having access to data that they shouldn’t. The problem tends to be more common in larger organizations. As a user gets promoted or switches departments, they might retain previous access permissions that they no longer need.

It’s possible that a user might not even realize they still have access to the data associated with their former role, but Copilot will. Copilot uses any data that is available to it, even if it’s a resource that the user should not have access to.

A second concern is Copilot referencing legitimately accessed data that it shouldn’t. For example, it might be better if Copilot is not able to formulate responses based upon documents containing your organization’s confidential information. Confidential or sensitive data might include plans for mergers or acquisitions that have not been made public or data pertaining to future product launches.

An organization’s data stays within its own Microsoft 365 tenant. Microsoft does not use an organization’s data for the purpose of training Copilot. Even so, it’s best to prevent Copilot from accessing the most sensitive data.

If a user has legitimate access to this sensitive data, it still can be harmful to let that user access it through Copilot. Some users who create and share Copilot-generated documents might not take the time to review them and could accidentally leak sensitive data.

Mitigate the security risks

Before adopting Copilot, organizations should engage in an extremely thorough access control review to determine who has access to what data. Security best practices stipulate that organizations should practice least user access. Normally, LUA is in response to compliance requirements or as a way of limiting the damage of a potential ransomware infection — ransomware cannot encrypt anything that the user who triggered the infection does not have access to. In the case of a Copilot deployment, adopting the principles of LUA is the best option to ensure Copilot does not expose end users to any data that they should not have access to.
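The access-control review described above can be partially automated by diffing each user's grants against what their current role actually requires. A minimal sketch with invented roles and grants:

```python
# Hypothetical sketch of a least-user-access review: flag grants that
# exceed what each user's current role requires. Roles, users and
# resources are invented example data.

role_needs = {"analyst": {"sales_db"}, "engineer": {"sales_db", "ml_store"}}

grants = {
    "avery": ("analyst", {"sales_db", "hr_db"}),   # stale hr_db grant
    "blake": ("engineer", {"sales_db", "ml_store"}),
}

def excess_grants(grants, role_needs):
    """Map each user to the access they hold beyond their role's needs."""
    return {user: sorted(access - role_needs[role])
            for user, (role, access) in grants.items()
            if access - role_needs[role]}

print(excess_grants(grants, role_needs))  # {'avery': ['hr_db']}
```

Revoking the flagged grants before a Copilot rollout keeps the assistant from surfacing data a user should no longer see.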

Restricting Copilot from accessing sensitive data can be a tricky process. Microsoft recommends applying sensitivity labels through Microsoft Purview. Configure the sensitivity labels to encrypt sensitive data and ensure users do not receive the Copy and Extract Content (EXTRACT) permission. Withholding EXTRACT prevents users from copying sensitive documents and blocks Copilot from referencing them.

Brien Posey is a 22-time Microsoft MVP and a commercial astronaut candidate. In his more than 30 years in IT, he has served as a lead network engineer for the U.S. Department of Defense and a network administrator for some of the largest insurance companies in America.

Technology

Qualcomm is reportedly eyeing a takeover of Intel

It seems that Qualcomm sees Intel’s struggling business as a potential opportunity. The San Diego-based chipmaker has reportedly expressed an interest in taking over Intel “in recent days,” according to a new report in The Wall Street Journal.

Though the report cautions that such a deal is “far from certain,” it would be a major upheaval in the US chip industry. It would also, as The WSJ notes, likely raise antitrust questions. But Qualcomm’s reported interest in a takeover underscores just how much Intel’s business has struggled over the last year.

Intel announced plans to cut jobs last month as its quarterly losses climbed to $1.6 billion. Its foundry business is also struggling, with an operating loss of $2.8 billion last quarter. CEO Pat Gelsinger announced plans earlier this week to split the foundry business into a unit separate from the rest of Intel.

Intel declined to comment on the report. Qualcomm didn’t immediately respond to a request for comment.

Technology

Dragon Age looks better, plays better than ever in The Veilguard

BioWare hasn’t exactly had a good reputation for the last few years. Between adding multiplayer to Dragon Age: Inquisition and the failure of Anthem, to say nothing of the rumors it was planning to make Dragon Age 4 a live-service game, fan confidence in the legendary studio has withered. Now all hopes are hanging on its next game, Dragon Age: The Veilguard. Coming out 10 years after Inquisition and 15 years after Origins, Veilguard brings with it the hope that the studio can return to its strengths.

Well, having played seven hours of Veilguard during a special preview event, I’m happy to report that, thus far at least, Bioware seems to be doing just that. Dragon Age: The Veilguard is everything fans of the series have been asking for in terms of lore and story, and everything they might not have known they needed in terms of gameplay.

While at the preview event, I had a glimpse of several different areas and characters within the game, and played a few different missions. I also got to toy with the character creator, building my own Rook from the ground up and testing several of the different options.

Before I get to anything else in the game, I have to shout out Veilguard’s character creator. Whatever character you’ve ever wanted to make in the Dragon Age setting, you can make them here. You’ve got gorgeous hair options, asymmetry, makeup options that for once don’t feel gaudy and ridiculous. I don’t say this lightly: Veilguard has a better character creator than Baldur’s Gate 3.




Thedas: 10 Years Later

I’ll refrain, in this preview, from speaking about the story at length. To be clear, I don’t believe what I was shown in the preview consisted of the most spoiler-heavy parts of the game – in fact, I have my doubts that I saw any of the most shocking moments the game has in store. However, I also know how long fans such as myself have been waiting for this game, and that they want to have as pure and unspoiled an experience as possible. To that end, I’ll keep this preview mostly focused on gameplay to avoid even tripping over spoilers.

All I’ll say about the story that I saw in the preview is that protagonist Rook feels like they’re privy to some of the parts of Thedas that have thus far only been hinted at. They tangle with factions that have only existed on the periphery in previous games, and visit places that have only been spoken of by other characters. And not a single one of these places or people disappoints when they’re revealed. Rook themself isn’t a figure of myth like the Inquisitor or the Hero of Ferelden, but they feel like they are familiar with and connected to more of the world.

One of the places I visited was Treviso, a gorgeous city in Antiva, a land whose lush beauty has only been implied in the mellifluous accents of Zevran and Josephine. It features some stunningly beautiful art design, a hybrid between European romance and Near Eastern scale. Absurdly, my first thought upon seeing it was that I now understood why so many characters saw Ferelden, the medieval-style country where Origins takes place, as the biggest shithole in Thedas.

My one complaint — and, again, I won’t go into specifics for fear of spoilers — is that the dialogue and choice system, a staple of Bioware games, sometimes feels a bit over-explain-y. When you’ve had an encounter with a character that has lasting impacts of some kind, a text box pops up onscreen telling you so. And it’s not just “X approves” or “Y will remember you said that.” It’s more like “X feels he let you down” or “You and Y traded banter while you were on this mission.” It’s okay to leave some of that to the player’s imagination, Bioware.


Becoming the hero Thedas needs

The first thing veteran players will notice is a difference in movement of every kind. Both in combat and out of it, characters are far more dexterous and nimble than they have been previously. If I had to find a good point of comparison, it might be Horizon Forbidden West. Rook feels far less restrained than the Inquisitor, though the places they navigate are smaller or more contained.

Combat has shifted away from RPG tactical control over the player character and their companions to action-style focus on Rook. That said, the titular Veilguard can still sync with Rook mid-battle. In fact, the game rewards you for doing so, as setting off two complementary abilities in rapid succession unleashes devastating combos. It’s certainly more fast-paced than any previous title, and that’s to its benefit: It means that Rook can fight bigger, tougher enemies without the game’s pace slowing to a crawl.

Perhaps my favorite part of the gameplay: The mage is actually fun to play now! No more are you relegated to just parking on the backlines wildly swinging a stick around while your companions get up close and personal (which felt like 90% of all mage gameplay in previous Dragon Age titles). Now you have options: The mage gameplay is split between the traditional staff and a dagger-and-orb combo that incorporates melee combat. Like the other classes, they also have a long-range attack that can best be described as a magical laser beam.

The focus on Rook does leave your companions sometimes feeling a little unfocused in combat. They can’t die, or if they could it never happened while I was playing, which robs the gameplay of a bit of complexity. It’s not a dealbreaker, by any means — but I suspect it might take some getting used to for game veterans.

Rook takes Queen, Checkmate

In short, Dragon Age: The Veilguard appears to be a return to form for Bioware. I got to speak with John Epler, the franchise’s creative director, who told me, “For us, Dragon Age: The Veilguard is about getting back to what the studio was built to build. We were always a studio about single-player RPGs and character-driven narratives. For The Veilguard, getting to go back to what we did and even deeper, it’s been really exciting. I’ve been at Bioware for 17 years, so it’s really been a great feeling to see that resurgence of excitement.”


It remains to be seen if Dragon Age: The Veilguard can live up to the potential it showed during this preview event and in the footage Bioware has thus far shown. However, all signs are positive at the moment, so here’s hoping Halloween — when the Veil is thin — will deliver the Dragon Age experience fans have been craving for a decade.



Technology

OceanGate’s ill-fated Titan sub relied on a hand-typed Excel spreadsheet


A former OceanGate contractor, Antonella Wilby, testified before a U.S. Coast Guard panel on Friday that the company’s Titan submarine, which imploded last year during a dive to the Titanic’s wreckage, relied on an incredibly convoluted navigation system.

As Wilby described it during the US Coast Guard Marine Board of Investigation hearing, the Titan’s GPS-like ultra-short baseline (USBL) acoustic positioning system generated data on the sub’s velocity, depth, and position using sound pings.

That information is typically automatically loaded into mapping software to keep track of a sub’s position. But Wilby said that for the Titan, the coordinate data was transcribed into a notebook by hand and then entered into Excel before loading the spreadsheet into mapping software to track the sub’s position on a hand-drawn map of the wreckage.

The OceanGate team tried to perform these updates at least every five minutes, but it was a slow, manual process done while communicating with the gamepad-controlled sub via short text messages. When Wilby recommended the company use standard software to process ping data and plot the sub’s telemetry automatically, the response was that the company wanted to develop an in-house system, but didn’t have enough time.
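For context, the automation Wilby recommended is straightforward in principle: software ingests each acoustic fix as it arrives and derives quantities like speed over ground, with no notebook or spreadsheet in the loop. The sketch below is purely illustrative — the record format, field names, and units are assumptions for demonstration, not OceanGate’s actual data or any real USBL vendor’s output.

```python
import math
from dataclasses import dataclass

@dataclass
class Fix:
    """One USBL position fix. The field layout here is hypothetical."""
    t: float      # seconds since start of dive
    x: float      # metres east of the surface reference
    y: float      # metres north of the surface reference
    depth: float  # metres below the surface

def parse_fix(record: str) -> Fix:
    """Parse a 't,x,y,depth' text record -- a stand-in for whatever
    format a real USBL transceiver actually emits."""
    t, x, y, depth = (float(v) for v in record.split(","))
    return Fix(t, x, y, depth)

def horizontal_speed(a: Fix, b: Fix) -> float:
    """Speed over ground between two consecutive fixes, in m/s."""
    return math.hypot(b.x - a.x, b.y - a.y) / (b.t - a.t)

# Two pings five seconds apart: the sub drifted 2.5 m east while descending.
a = parse_fix("0,10.0,5.0,3800.0")
b = parse_fix("5,12.5,5.0,3805.0")
print(horizontal_speed(a, b))  # 0.5 m/s
```

From here, each parsed fix would be appended directly to whatever track log the mapping software reads — the step that, per Wilby’s testimony, was instead done by hand-typing coordinates into Excel every few minutes.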


Wilby was later taken off the team and flew home after telling supervisors, “This is an idiotic way to do navigation.” She also testified that after Dive 80 in 2022, a loud bang / explosion was heard during the Titan’s ascent and that it was loud enough to be heard from the surface.

This mirrors testimony given yesterday by OceanGate’s former scientific director, Steven Ross. Like Wilby, he said that the sound was attributed to a shifting of the pressure hull in its plastic cradle, although Wilby testified that there were only “a few microns” of damage.

According to Ross, six days before the Titan submarine imploded, the sub’s pilot and the company’s co-founder, Stockton Rush, crashed the vessel into a launch mechanism bulkhead while the vessel was attempting to resurface from Dive 87. The incident was caused by a malfunction with a ballast tank, which inverted the submarine, causing other passengers to “tumble about,” according to the Associated Press. No one was injured during the incident, but Ross said he did not know if an inspection of the sub was carried out afterward.



Copyright © 2017 Zox News Theme. Theme by MVP Themes, powered by WordPress.