Thought Bubble: Four Exciting Ideas from CES

The ringing in of a new year was quickly followed by the ringing in of a new era of technological innovation at CES 2023. Brands across categories unveiled their latest, greatest, and most forward-thinking solutions to consumer pain points.

Here are a few innovations introduced at CES that stood out to Mintel’s tech and media experts:

Brian Benway, Research Analyst – Gaming and Entertainment

Sony showed several interesting new pieces of hardware at CES 2023, including its take on an accessibility controller, tentatively called Project Leonardo. However, the bigger reveal was the much-anticipated PlayStation VR2. With Meta/Oculus/Facebook seemingly in a cooling-off period after laying off 11,000 workers shortly after revealing its Meta Quest Pro, Sony’s sequel product could be poised to take a more dominant position in the virtual reality space. Featuring 4K visuals, intelligent eye-tracking that almost makes a player’s vision another controller, and an adjustable wide field of view, the product is a feature-rich match for the PlayStation 5. That said, it still requires a PlayStation 5 to run, bringing the total price tag for the experience closer to the Meta Quest Pro’s level. Starting with a digital-only PS5 at $399, adding the VR2 hardware for $549 brings the entry-level experience to roughly $950. That’s without factoring in additional costs such as controller charging, high-quality audio options, or even games.

Image Source: PlayStation

Speaking of software, Horizon Call of the Mountain is reportedly as amazing a VR experience as the current VR gaming king, Half-Life: Alyx, but to reach these heights with PlayStation VR2, Sony had to forgo backward compatibility with previously purchased titles. Sony, and the budding virtual reality industry in general, continue to reach great heights of technological achievement, but may be weighed down by equally great costs for the foreseeable future.

John Poelking, Research Manager – Tech, Media, and Telecom
One of the most important advancements at CES 2023 wasn’t a device or a service, but a new standard called Matter. Matter was developed in collaboration with many big brands (including Apple, Google, Samsung, and Amazon) to create a consistent standard that makes it easier to set up and connect a wide range of devices. Matter also connects devices locally, so traffic doesn’t need to go through the cloud, cutting down on latency. A new standard like this should also encourage innovation from brands that won’t need to worry as much about cross-device compatibility.

Image Source: Connectivity Standards Alliance

Smart home systems have become more instrumental in consumers’ lives as the technology powering them has become more intuitive and less expensive. The promise of a more cohesive cross-brand ecosystem could ease the pain points of today’s disjointed setups, and create more justification for internet providers to include new smart home hardware in upcoming promotions.

Jenni Nelson, Research Analyst – Tech and Media
The most buzzed-about home health tracking devices at CES 2023 are going down the toilet – literally. At least four different devices on display claimed to monitor biomarkers in urine to provide a nearly instant snapshot of the user’s health. The most popular in terms of press coverage is Withings’ Wi-Fi-enabled U-Scan prototype. U-Scan is a flat, circular device that hangs over the front edge of a toilet bowl, much like a toilet bowl cleaner. It can sense when someone is urinating, prompting a pump to pull a small amount of urine into the unit. Inside is a cartridge with many microfluidic assays, which are run in front of the device’s infrared scanner to check specific gravity (ie the concentration of particles in the urine), pH, vitamin C, and ketone levels. Results and actionable insights are shared via Withings’ Health Mate app. The cartridges are refreshed after each use, and the company claims the device is smart enough to distinguish between users in the same household. Currently, two cartridges are in the works: one to measure nutrition and hydration, and another to track menstrual cycles.

Image Source: Withings

The U-Scan hasn’t yet been approved by the FDA, so a US release will follow the European launch scheduled for Q2 2023. The most basic model will cost about $500, with replacement cartridges around $30. As personal health tracking matures, moving beyond a smartwatch or ring is the next step – wearable devices have reached their limit in terms of what biometrics they can track without piercing the skin. Given that urine analysis is already widely used to detect a plethora of illnesses, a home-based version isn’t far-fetched. It could be a game changer for those monitoring kidney disease, liver disease, or diabetes, either for themselves or for others within the home. It could also cut down on the number of clinic visits and lab tests, saving time, effort, and expense for patients and the healthcare industry alike.

Nicole Bond, Associate Director – Marketing Strategy
While new TVs with enhanced specs and capabilities are nothing new for CES, start-up Displace TV flipped the script by delivering a new kind of “wireless” TV that opens the door to the next chapter of TV device advancements. The 55-inch 4K TV promises to alleviate the burden of wire clutter and deliver a truly innovative at-home entertainment experience. The Displace TV will be powered by a proprietary hot-swappable battery system, weigh less than 20 lbs, bypass the need to permanently damage walls with a mount, and offer the flexibility of multiple screens operating at once, redefining the role of the TV in the home. The start-up’s innovation uses active-loop vacuum technology to stick to any wall, and the TV can be moved around the home at the user’s preference. While sticking a TV to a wall and trusting that it stays put may make some consumers nervous, the appeal of not having to deal with cord management is likely universal.

Image Source: Displace TV

The device is also modular, meaning multiple Displace displays can be combined to form customizable TV sizes. The flexibility and aesthetic appeal of the new device mark a shift in the role of the TV in the home. It is no longer just about having the best specs; it is now about the entire user experience from start to finish. It puts control back in the hands of consumers, and opens the door to immense opportunities to change the way TVs are perceived. Displace has addressed common consumer pain points while innovating on modern technology to build a one-of-a-kind (for now) experience with a household staple. It pointed to how the future of TVs can and will evolve to better serve the needs of consumers at home. Notably, LG also presented the M3 – a wireless, port-less 97-inch OLED TV that would operate on Wi-Fi 6 and offer the same cordless appeal. Despite the teaser, LG did not mention a price or release date for its wireless model, which suggests it may be a ways off. Meanwhile, reservations for the Displace TV are open, and units are expected to ship by late 2023.

If you are a Mintel client, please reach out to your Account Manager for additional information and key takeaways from CES. Otherwise, for more information on how Mintel experts can help your brand strategize future initiatives, click here.

Be the disruptor or be disrupted: The effects global tech layoffs could have on SEA

In November 2022, Meta cut 11,000 jobs (about 13% of its staff) in the biggest tech layoff of 2022. It wasn’t just Meta: many other tech industry giants made cuts and “right-sized” amid uncertain economic conditions.

This means that many skilled immigrants were forced to quickly find a new job or leave the countries they were employed in. Considering the emerging tech and start-up scene in Southeast Asia (SEA), the region might appeal to many of these young, innovative, and determined individuals.

Let’s not forget that for some, it could be the perfect reason to go “home” – wherever it may be. 

What does this mean for SEA? How might its tech industry develop and how could its societies change? What would it mean for (non-tech industry) brands?

Let’s explore some possibilities.

SEA is going through a period of digitalisation

We’ve seen various unicorns – Bukalapak (Indonesia), Grab (Malaysia), Lazada (Singapore), etc – emerge from the region in the past decade, and we don’t expect it to end there. In many ways, SEA is still “playing catch up” with the modern world.

There remain many challenges to be solved throughout the region – in healthcare, education, banking, entertainment, and so on – and solutions often bring about new challenges (or opportunities).

While tech skills are growing among locals, there’s arguably still a lack of practical experience. SEA companies will be on the lookout for skilled individuals who have experienced failure and success – to help them avoid missteps, mentor local talent, and inspire new ideas for the future.

To attract these individuals, certain SEA nations – Thailand, Malaysia, and Indonesia – have started offering “digital nomad visas” for skilled foreign talent and other high-net-worth individuals.

To clarify, SEA’s tech ecosystem was not immune to layoffs in 2022, and funding appears constricted in the current economic environment. However, experts continue to project economic growth in the region, even if it seems a little slow right now.

One reason for this is demographic: in our ageing world, the World Economic Forum predicts that Gen Zs and Millennials will make up 75% of ASEAN consumers by 2030. This is a change-driving cohort, so expect many to start their own tech- and purpose-led businesses.

So beyond e-commerce, fintech, edtech, and digital entertainment, expect to also see investments in agritech, foodtech, healthtech, greentech, SaaS (Software as a Service), EVs, automation, and more.

Will SEA consumers welcome these tech-led changes?

Depending on one’s perspective, tech can be viewed as a force for good or one for evil. It’s been called the great isolator but also the great connector. It has “stolen” jobs but created new ones. It has given us the power of anonymity but also a path to fame.

It won’t just be technically skilled individuals (eg engineers, programmers, designers) in high demand, but those with keen human understanding as well.   

With many new tech innovations, adoption is part of the product itself. Social media platforms cannot exist without users, and solutions are meaningless without a genuine need to meet (eg e-grocery).

How well a company or start-up understands the market, the community, its potential audience, and that audience’s needs is key to its success or failure – along with an offering that is convenient, safe, and pleasing to use. This means there will be immense opportunities for collaboration (eg between local and foreign talent or companies), but also heightened user expectations to meet.

What will this mean for brands in SEA that do not operate in the tech space?

Even if a brand does not directly operate in tech, it undeniably has to understand how to operate in a digital world. Tech is not limited to software, platforms, data, or devices. Technology drives innovation at all levels and changes how consumers interact with the world.

Advancements in technology (eg cell-cultured protein) will change how consumers think about the sustainability of their food, and will in turn change the approach taken in the agriculture industry, the food industry, the healthcare industry, and so on.

The proliferation of local start-ups will make collaboration between tech and non-tech brands easier. 

Hence, established brands should prepare themselves for a wave of anti-brand brands that promise – through technological innovation – ways to improve the lives of consumers while saving them money, much as start-ups like Airbnb, WhatsApp, and Groupon emerged in 2008 amidst the Great Recession of 2007-09.

These start-ups will be worthy, agile challengers that quickly respond to the latest technologies and changes in consumer behaviour.

Across all industries, brands will be the disruptor or be disrupted

But let’s bring it back to the skilled individuals who may soon find themselves in SEA. Commanding relatively higher salaries, these consumers will likely have more disposable income than the majority of locals. Expect neighborhoods surrounding tech hubs to gentrify as they seek artisanal luxuries, spaces for leisure and community, and various other ways to improve their quality of life.

They will potentially bring about new opportunities for local and international brands, accelerate tech ecosystems, and hopefully benefit local communities.

Southeast Asia’s tech and start-up scene could dramatically change in the next five years, and, influenced by their surroundings (ie by these changemakers and trendsetters), so could Southeast Asian consumers.

NYT Mini Crossword today: puzzle answers for Saturday, September 21

The New York Times has introduced the next title coming to its Games catalog following Wordle’s continued success — and it’s all about math. Digits has players adding, subtracting, multiplying, and dividing numbers. You can play its beta for free online right now. 
In Digits, players are presented with a target number that they need to match. Players are given six numbers and have the ability to add, subtract, multiply, or divide them to get as close to the target as they can. Not every number needs to be used, though, so this game should put your math skills to the test as you combine numbers and try to make the right equations to get as close to the target number as possible.

Players will get a five-star rating if they match the target number exactly, a three-star rating if they get within 10 of the target, and a one-star rating if they get within 25 of the target number. Currently, players can also access five different puzzles with increasingly large numbers. I solved today’s puzzle and found it to be an enjoyable number-based game that should appeal to inquisitive minds that like puzzle games such as Threes or other New York Times titles like Wordle and Spelling Bee.
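
For readers who want to poke at the mechanics, the rules are simple enough to brute-force. Below is a minimal Python sketch (entirely unofficial, not affiliated with The New York Times) that enumerates every way of combining the starting numbers and applies the star thresholds described above; it assumes, as early coverage of the beta suggests, that operations must always produce whole, non-negative numbers.

```python
from itertools import combinations

def best_digits_score(numbers, target):
    """Brute-force a Digits-style puzzle: repeatedly merge any two of the
    remaining numbers with +, -, *, or /, tracking every value reachable.
    Returns the closest achievable value and its star rating."""
    reachable = set(numbers)
    states = {tuple(sorted(numbers))}
    while states:
        next_states = set()
        for state in states:
            for i, j in combinations(range(len(state)), 2):
                a, b = state[i], state[j]
                rest = [state[k] for k in range(len(state)) if k not in (i, j)]
                results = {a + b, a * b, abs(a - b)}  # assume no negative results
                big, small = max(a, b), min(a, b)
                if small and big % small == 0:  # assume whole-number division only
                    results.add(big // small)
                for r in results:
                    reachable.add(r)
                    next_states.add(tuple(sorted(rest + [r])))
        states = next_states  # each pass merges one pair, so the loop ends
    closest = min(reachable, key=lambda v: abs(v - target))
    diff = abs(closest - target)
    # Star thresholds from the article; the zero-star fallback is a guess.
    stars = 5 if diff == 0 else 3 if diff <= 10 else 1 if diff <= 25 else 0
    return closest, stars

# Example (hypothetical puzzle): best_digits_score([3, 5, 7, 11, 20, 25], 449)
```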
In an article unveiling Digits and detailing The New York Times Games team’s approach to game development, The Times says the team will use this free beta to fix bugs and assess whether it’s worth moving into a more active development phase “where the game is coded and the designs are finalized.” So play Digits while you can, as The New York Times may move on from the project if it doesn’t get the response it is hoping for.
Digits’ beta is available to play for free now on The New York Times Games’ website.

Meta Quest 3S will be affordable, reveals price leak

Meta has been working on a new VR headset for the better part of the year. The upcoming device is expected to be called the Meta Quest 3S, a name recently confirmed via an official store listing. Now, the price of the Meta Quest 3S has leaked ahead of launch, and it suggests that Meta’s next VR offering will be a highly affordable one.

The leaked price suggests that the Meta Quest 3S will be an affordable offering.

The price of the Meta Quest 3S VR headset appeared in an Amazon advert on the streaming platform Peacock. A Reddit user spotted the advertisement on the OTT platform, managed to record it, and shared the clip showing the Meta Quest 3S’ price on Reddit. The headset will reportedly cost just $299, making it one of the most affordable VR headsets on offer. Notably, the rumor mill had suggested similar pricing earlier.

The $299 Quest 3S variant will have 128GB of storage

The advert on Peacock leaked the pricing of the 128GB storage variant of the headset, putting it at the same price as the 64GB Oculus Quest 2 from 2020. The company will likely offer other storage variants as well. Notably, the advertisement corroborates recent images of the device that leaked online, showing full-color passthrough tech at a lower resolution than the Quest 3.

Meta Quest 3S will offer the Snapdragon XR2 Gen 2 SoC, just like the Quest 3

A Snapdragon XR2 Gen 2 chip will power the Meta Quest 3S – the same chipset that powers the Quest 3. The device will use the older Fresnel lenses from the Quest 2 to achieve the affordable $299 price tag. It could also have downward-facing side cameras in the same positions as on the Quest 3.

Furthermore, reports indicate that the Meta Quest 3S will offer a slightly lower battery capacity than the Quest 3 and will lack a headphone jack. The brand is expected to unveil the new VR headset at its Connect event on September 25.

News of The Week: eBay’s New Background Enhancing AI Tool

Discover how eBay’s new background enhancement tool can help sellers create stunning, professional-grade photos effortlessly.

GenAI demands greater emphasis on data quality

Data quality has perhaps never been more important. And a year from now, then a year beyond that, it will likely be even more important than it is now.

The reason: AI, and in particular, generative AI.

Given its potential benefits, including exponentially increased efficiency and more widespread use of data to inform decisions, enterprise interest in generative AI is exploding. But for enterprises to benefit from generative AI, the data used to inform models and applications needs to be high-quality. The data must be accurate for the generative AI outputs to be accurate.

Meanwhile, generative AI models and applications require massive amounts of data to understand how to respond to a user’s query. Their outputs aren’t based on individual data points, but instead on aggregations of data. So, even if the data used to train a model or application is high-quality, if there’s not enough of it, the model or application will be prone to deliver an incorrect output called an AI hallucination.

With so much data needed to reduce the likelihood of hallucinations, data pipelines need to be automated. Therefore, with data pipelines automated and humans unable to monitor every data point or data set at every step of the pipeline, it’s imperative that the data be high-quality from the start and there be checks on outputs at the end, according to David Menninger, an analyst at ISG’s Ventana Research.

Otherwise, not only inaccuracies, but also biased and potentially offensive outputs could result.

“Data quality affects all types of analytics, but now, as we’re deploying more and more generative AI, if you’re not paying attention to data quality, you run the risks of toxicity, of bias,” Menninger said. “You’ve got to curate your data before training the models, and you have to do some postprocessing to ensure the quality of the results.”
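
What might “postprocessing to ensure the quality of the results” look like in practice? At its simplest, it is a gate between the model and the user. The Python sketch below is purely illustrative (the expected fields, the banned-terms list, and the bounds are all invented for the example), but it captures the shape of an automated check at the end of the pipeline.

```python
import json

# Stand-ins for a real toxicity/bias screen; a production system would use
# a curated list or a classifier, not two placeholder words.
BANNED_TERMS = {"badword1", "badword2"}

def gate_model_output(raw_output: str) -> dict:
    """Illustrative end-of-pipeline check: accept a generated response only
    if it parses as the expected JSON shape, stays within sane bounds, and
    passes a crude toxicity screen. Anything else is kicked to a human."""
    data = json.loads(raw_output)  # malformed output fails loudly here
    if not {"answer", "confidence"} <= set(data):
        raise ValueError("output missing required fields")
    if not 0.0 <= float(data["confidence"]) <= 1.0:
        raise ValueError("confidence out of range")
    if any(term in str(data["answer"]).lower() for term in BANNED_TERMS):
        raise ValueError("output failed toxicity screen")
    return data
```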

In response, enterprises are placing greater emphasis on data quality than in the past, according to Saurabh Abhyankar, chief product officer at longtime independent analytics vendor MicroStrategy.

“We’re actually seeing it more than expected,” he said.

Likewise, Madhukar Kumar, chief marketing officer at data platform provider SingleStore, said he is seeing increased emphasis on data quality. And it goes beyond just accuracy, he noted. Security is an important aspect of data quality. So is the ability to explain decisions and outcomes.

“The reason you need clean data is because GenAI has become so common that it’s everywhere,” Kumar said. “That is why it has become supremely important.”

However, ensuring data quality to get the benefits of AI isn’t simple. Nor are the consequences of bad data quality.

The rise of GenAI

The reason interest in generative AI is exploding — the “why” behind generative AI being everywhere and requiring that data quality become a priority — is that it has transformative potential in the enterprise.

Data-driven decisions have proven to be more effective than those not informed by data. As a result, organizations have long wanted to get data in the hands of more employees to enable them to get in on the decision-making process.

But despite the desire to broaden analytics use, only about a quarter of employees within most organizations use data and analytics as part of their workflow. And that has been the case for years, perhaps dating back to the start of the 21st century.

The culprit is complexity. Analytics and data management platforms are intricate. They largely require coding to prepare and query data, and data literacy training to analyze and interpret it.

Vendors have attempted to simplify the use of their tools with low-code/no-code capabilities and natural language processing features, but to little avail. Low-code/no-code capabilities don’t enable deep exploration, and the NLP capabilities developed by data management and analytics vendors have limited vocabularies and still require data literacy training to use.

Generative AI lowers the barriers that have held back wider analytics use. Large language models have vocabularies as large as any dictionary and therefore enable true natural language interactions that reduce the need for coding skills. In addition, LLMs can infer intent, further enabling NLP.

When generative AI is combined with an enterprise’s proprietary data, suddenly any employee with a smartphone and proper clearance can work with data and use analytics to inform decisions.

“With generative AI, for the first time, we have the opportunity to use natural language processing broadly in various software applications,” Menninger said. “That … makes technology available to a larger portion of the enterprise. Not everybody knows how to use a piece of software. You don’t have to know how to use the software; you just have to know how to ask a question.”

Generative AI chatbots — tools that enable users to ask questions using natural language and get responses in natural language — are not foolproof, Menninger added.

“But they’re a huge improvement,” he said. “Software becomes easier to use. More people use it. You get more value from it.”

Meanwhile, data management and analytics processes — integrating and preparing data to make it consumable; developing data pipelines; building reports, dashboards and models — require tedious, time-consuming work by data experts. Even more tedious is documenting all that work.

Generative AI changes that as well. NLP reduces coding requirements by enabling developers to write commands in natural language that generative AI can translate to code. In addition, generative AI can be trained to carry out certain repetitive tasks on its own, such as writing code, creating data pipelines and documenting work.

“There are a lot of tasks humans do,” Abhyankar said. “People are overworked, and if you ask them what they are able to do versus what they’d like to be able to do, most will say they want to do five or 10 times more. One benefit of good data with AI on top of it is that it becomes a lever and a tool to help the human being be potentially multiple times more efficient than they are.”

Eventually, generative AI could wind up being as transformational for knowledge workers as the industrial revolution was for manual laborers, he said. Just as an excavator is multiple times more efficient at digging a hole than a construction worker with a shovel, AI-powered tools have the potential to make knowledge workers multiple times more efficient.

Donald Farmer, founder and principal of TreeHive Strategy, likewise noted that one of the main potential benefits of effective AI is efficiency.

“It enables enterprises to scale their processes with greater confidence,” he said.

However, the data used to train the AI applications that enable almost anyone within an organization to ask questions of their data and use the responses to inform decisions had better be right. Similarly, the data used to train the applications that take on time-consuming, repetitive tasks that dominate data experts’ time had better be right.

The need for data quality

Data quality has always been important. It didn’t just become important in November 2022 when OpenAI’s launch of ChatGPT — which represented a significant improvement in LLM capabilities — initiated an explosion of interest in developing AI models and applications.

Bad data has long led to misinformed decisions, while good data has always led to informed decisions.

A graphic lists six elements of data quality: accuracy, completeness, consistency, timeliness, uniqueness and validity.
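
Four of those six elements reduce to mechanical rules that can run automatically; accuracy and consistency generally require reference data or cross-system comparison, which is where the dedicated tooling discussed later comes in. Here is a minimal Python sketch of such rules, with the field names and thresholds invented purely for illustration:

```python
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = ("id", "email", "amount", "updated_at")  # hypothetical schema

def check_quality(records, max_age_days=30):
    """Flag completeness, uniqueness, validity, and timeliness problems in a
    batch of ingested rows (each a dict; `updated_at` assumed tz-aware)."""
    issues, seen_ids = [], set()
    now = datetime.now(timezone.utc)
    for i, row in enumerate(records):
        for field in REQUIRED_FIELDS:          # completeness: nothing missing
            if row.get(field) in (None, ""):
                issues.append((i, f"completeness: missing {field}"))
        if row.get("id") in seen_ids:          # uniqueness: no duplicate keys
            issues.append((i, "uniqueness: duplicate id"))
        seen_ids.add(row.get("id"))
        amount = row.get("amount")
        if amount is not None and amount < 0:  # validity: values within range
            issues.append((i, "validity: negative amount"))
        email = row.get("email")
        if email and "@" not in email:         # validity: expected format
            issues.append((i, "validity: malformed email"))
        ts = row.get("updated_at")
        if ts and now - ts > timedelta(days=max_age_days):  # timeliness: fresh
            issues.append((i, "timeliness: stale record"))
    return issues
```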

But the scale and speed of decision-making were different before generative AI. So were the checks and balances. As a result, both the benefits of good data quality and consequences of bad data quality were different.

Until the onset of self-service analytics spurred by vendors such as Tableau and Qlik some 15 years ago, data management and analytics were isolated to teams of IT professionals working in concert with data analysts. Consumers — the analysts — usually had to submit a request to data stewards, who would then take the request and develop a report or dashboard that could be analyzed to inform a decision.

The process took days at a minimum and often months. And even when the report or dashboard was developed, it often had to be redone multiple times as the end user realized the question they asked wasn’t quite right or the resulting data product led to follow-up questions.

During the development process, IT teams worked closely with the data used to inform the reports and dashboards they built. They were hands-on, and they had time to make sure the data was accurate.

Self-service analytics altered the paradigm, removing some of the control from centralized IT departments and enabling end users with the proper skills and training to work with data on their own. In response, enterprises developed data governance frameworks to both set limits on what self-service users could do with data — to protect against self-service users going too far — and also give the business users freedom to explore within certain parameters.

The speed and scale of data management and analytics-based decision-making increased, but it was still limited to a group of trained users who, with their expertise, were usually able to recognize when something seemed off in the data and not hastily take actions.

Now, just as generative AI changes who within an organization can work with data and what experts can do with it, it changes the speed and scale of data-informed decisions and actions. To feed that speed and scale with good data, automated processes — overseen by humans who can intervene when necessary — are required, according to Farmer.

“It puts an emphasis on processes that can be automated, identifying data-cleaning processes that require less expertise than before,” Farmer said. “That’s where it’s changing. We’re trying to do things at much greater scale, and you just can’t have a human in the loop at that scale. Whether the process can be audited is very important.”

Abhyankar compared the past and present to the difference between a small, Michelin-starred gourmet restaurant and a fast-food chain.

The chef at the small restaurant, each day, can shop for the ingredients of every dish and then oversee the kitchen as each dish gets made. At a chain, the scale of what needs to be bought and the speed with which the food needs to be made make it impossible for a chef to oversee every detail. Instead, a process ensures no bad meat or produce makes it into meals served to consumers.

“[Data quality] is really important in a world where you’re going from hand-created dashboards and reports to a world where you want AI to do [analysis] at scale,” Abhyankar said. “But you can’t scale unless you have a system in place so [the AI application] can be precise and personalized to serve many more people with many more insights on the fly. To do that, the data quality simply has to be there.”

Benefits and consequences

The whole reason enterprise interest is rising in developing AI models and applications and using AI to inform decisions and automate processes — all of which need high-quality data as a foundation — is the potential benefits.

The construction worker who now has an excavator to dig a hole rather than a shovel can be multiple times more efficient. And in concert with a few others at the controls of excavators, they can dig the foundation for a new building perhaps a hundred times faster than they could by hand.

A construction worker with a cement mixer can follow up and pour the foundation multiple times faster than if they had to mix the cement and pour it by hand. Next, the girders can be moved into place by cranes rather than carried by humans, and so on.

It adds up to an exponentially more efficient construction process.

The same is true of AI in the enterprise. Just as construction teams can rely on the engines and controls in excavators, cement mixers, cranes and other vehicles that scale the construction process, if the data fueling AI models and applications is trustworthy, organizations can confidently scale business processes with AI, according to Farmer.

And scale in the business world — being able to do exponentially more without having to expand staff — means growth.

“Data quality enables enterprises to scale their processes with greater confidence,” he said. “It enables them to build fine-grained processes like hyperpersonalization with greater confidence. Next-best offers, recommendation engines, things that can be highly optimized for an individual — that sort of thing becomes very possible.”

Beyond retail, another common example is fraud detection, according to Menninger. Detecting fraud amid millions of transactions can be nearly impossible for humans. AI models can check all those transactions, while even teams of humans lack the capacity to look at them all, much less find patterns and relationships between them.

“If accurate data is being fed into the models to detect fraud, and you can improve the detection even just slightly, that ends up having a large impact,” Menninger said.

But just as the potential benefits of good-quality data at the core of AI are greater than good data without AI, the consequences of bad data at the core of AI are greater than the consequences of bad data without AI. The speed and scale that AI models and applications enable result in the broader and faster spread of fallout from poor decisions and actions.

Back when IT teams controlled their organizations’ data and when a limited number of self-service users contributed to decisions, the main risk of bad data was lack of trust in data-informed decisions and the resulting loss of efficiencies, according to MicroStrategy’s Abhyankar. In rare cases, it could lead to something more severe, but there was usually time for someone to step in and stop something from happening before it spread.

Now, the potential exists to not only scale previous problems, but also create new ones.

If AI models and applications are running processes and making decisions without someone checking them before actions are taken, it could lead to significant ethical problems such as baselessly denying an applicant a credit card or mortgage. Similarly, if a human uses AI outputs to make decisions, but the output is misinformed, it could result in serious ethical issues.

“You scale the previous problems,” Abhyankar said. “But it’s actually worse than that. In scenarios where the AI is making decisions, you’re making bad decisions at scale. If you run into ethical problems, it’s catastrophically bad for an organization. But even when AI is just delivering information to a human being, you’re scaling the problems.”

Farmer noted that AI doesn’t deliver outputs based on single data points. AI models and applications are statistical, looking at broad swaths of data to inform their actions. As long as most of the data used to train a model or application is correct, the model or application will be useful.

“If a data set is poor quality, you’ll get poor results,” Farmer said. “But if one piece of data is wrong, it’s not going to make much difference to the AI because it’s looking at statistics as a whole.”

That is, unless it’s that fine-grained decision about an individual such as whether to approve a mortgage application. In that case, if the data is wrong, it can lead to serious ethical consequences. Even more catastrophically, in a healthcare scenario, bad data could lead to the difference between life and death.

“If we’re using AI to make decisions about individuals — are we going to give someone a mortgage — then having high-quality individual data becomes extremely important, because then we have given this system over,” Farmer said. “If we’re talking about AI making fine-grained decisions, then the data has to be very high-quality.”

Ensuring data quality

With data quality so critical to the success of AI, as well as reaping the benefits of broader use of technologies and exponentially increased efficiency, the obvious question is how enterprises can ensure good data goes into models and applications so that good outputs result.

There is, unfortunately, no simple solution — no fail-safe.

Data quality is difficult. Enterprises have always struggled to ensure only good-quality data is used to inform decisions. In the era of AI, including generative AI, that’s no different.

“The problem is still hard,” Abhyankar said.

But there are steps that organizations can take to lessen the likelihood of bad data slipping through the cracks and affecting the accuracy of models and applications. There are technologies they can use and processes they can implement.

Ironically, many of the technologies that can detect bad data use AI to do so.

Vendors such as Informatica and Oracle offer tools designed specifically to monitor data quality. These tools can look at data characteristics such as metadata and data lineage, sometimes have master data management capabilities, and in general are built to detect problematic data. Other vendors such as Alation and Collibra provide data catalogs that help enterprises organize and govern data, including descriptions of data, to provide users with information before they operationalize any data.

Still other vendors including Acceldata and Monte Carlo offer data observability platforms that use AI to monitor data as it moves through data pipelines, detecting irregularities as they occur and automatically alerting customers to potential problems. But unlike data quality tools and data catalogs that address data quality while data is at rest before being used to train AI models and applications, observability tools monitor data while it is in motion on its way to a model or application.
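
In spirit, the in-motion checks these platforms run can be as simple as a rolling baseline plus an alert threshold. The sketch below is not how Acceldata or Monte Carlo are actually built; it is a toy Python illustration of the idea, watching a single pipeline metric (say, rows per batch) and flagging batches that deviate sharply from recent history.

```python
from collections import deque
from statistics import mean, stdev

class PipelineMonitor:
    """Toy data-observability check: track a per-batch metric and alert
    when a new batch lands far outside the recent baseline."""

    def __init__(self, window=30, threshold=3.0):
        self.history = deque(maxlen=window)  # rolling window of recent batches
        self.threshold = threshold           # alert beyond this many std devs

    def observe(self, value, alert=print):
        if len(self.history) >= 5:  # need a few batches before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                alert(f"anomaly: batch metric {value} is "
                      f"{abs(value - mu) / sigma:.1f} std devs from baseline {mu:.0f}")
        self.history.append(value)

# monitor = PipelineMonitor()
# for rows in batch_row_counts:  # hypothetical stream of batch row counts
#     monitor.observe(rows)
```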

“Increasingly, AI is actually in a sense running its own data quality,” Farmer said. “Many of those tools work on inferences, work on discovering patterns of the data. It turns out that AI is very good at that and doing it at scale.”

More important than any tooling, however, is that humans always remain involved and check any output before it is used to take action.

Just as a hybrid approach emerged as ideal for cloud computing — including on-premises, private cloud and public cloud — a hybrid approach that uses technology to augment humans is emerging as the ideal approach to working with the data used to train AI, according to SingleStore’s Kumar.

“First and foremost is to allow humans to have control,” he said.

Humans simply know more about their organization’s data than machines and can better spot when something seems off. They have been working with that data since the organization’s founding, which in some cases means there are decades’ worth of code behind dashboards and reports, carrying context that humans understand and can faithfully reproduce, but that a machine might not know.

Humans, in a simple example, know whether their company’s fiscal year starts on Jan. 1 or some other date, while a model might assume it starts on Jan. 1.

“Hybrid means human plus AI,” Kumar said. “There are things AI is really good at, like repetition and automation, but when it comes to quality, there’s still the fact that humans are a lot better because they have a lot more context about their data.”

If there’s a human at the end of the process to check outputs, organizations can better ensure actions taken will have their intended results, and some potentially damaging actions can be avoided.

If there’s a person to confirm whether a mortgage application should be rejected or approved, it benefits the organization’s bottom line. The approved mortgage will result in profits, as well as avoid the serious consequences of mistakenly declining someone’s application based on biased data, while the declined mortgage will avoid potential losses related to a default.

If there’s a healthcare worker to check whether a patient is allergic to a recommended medication or that medication might interact badly with another medication the patient is taking, it could save a life.

The AI models and applications, fueled by data, can be left to do their work. They can automate repetitive processes, generate code to develop applications, write summaries and documentation, respond to user questions in natural language and so on. They’re good at those tasks, when informed by good-quality data.

But they’re not perfect, even when the data used to train them is as good as possible.

“There always has to be human intervention,” Menninger said.

Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.

Cards Against Humanity is suing SpaceX for trespassing and filling its property with ‘space garbage’

Cards Against Humanity is the latest entity to take on Elon Musk in court. The irreverent party game company filed a $15 million lawsuit against SpaceX for trespassing on property it owns in Texas, which happens to sit near SpaceX facilities.

According to the complaint filed in a federal court in Texas, Musk’s rocket company has been using its land without permission for the last six months. SpaceX took what was previously a “pristine” plot of land “and completely fucked that land with gravel, tractors, and space garbage,” CAH wrote in a statement.

As you might expect from the card game company known for its raunchy sense of humor and headline-grabbing stunts, there’s an amusing backstory to how it became neighbors with SpaceX in Texas in the first place. In 2017, the company bought land along the US-Mexico border as part of a crowdfunded effort to protest then-President Donald Trump’s plan to build a border wall. Since then, the company writes, it has maintained the land with regular mowing, fencing, and “no trespassing” signs.

SpaceX later purchased adjacent land and, earlier this year, allegedly began using CAH’s land amid some kind of construction project. From the lawsuit (emphasis theirs):

The site was cleared of vegetation, and the soil was compacted with gravel or other substance to allow SpaceX and its contractors to run and park its vehicles all over the Property. Generators were brought in to run equipment and lights while work was being performed before and after daylight. An enormous mound of gravel was unloaded onto the Property; the gravel is being stored and used for the construction of buildings by SpaceX’s contractors along the road. Large pieces of construction equipment and numerous construction-related vehicles are utilized and stored on the Property continuously. And, of course, workers are present performing construction work and staging materials and vehicles for work to be performed on other tracts. In short, SpaceX has treated the Property as its own for at least six (6) months without regard for CAH’s property rights nor the safety of anyone entering what has become a worksite that is presumably governed by OSHA safety requirements.

SpaceX, according to the filing, “never asked for permission” to use the land and has “never reached out to CAH to explain or apologize for the damage.” The rocket company did, however, give “a 12-hour ultimatum to accept a lowball offer for less than half our land’s value,” according to a statement posted online. A spokesperson for CAH said the land in question is about an acre in size.


What CAH’s Texas land looked like prior to SpaceX’s alleged trespassing. (Christopher Markos / Cards Against Humanity)

In response to the ultimatum, CAH filed a $15 million lawsuit against SpaceX for trespassing and damaging its property. The game company, which was originally funded via a Kickstarter campaign, says that if it’s successful in court, it will share the proceeds with the 150,000 fans who helped purchase the land in 2017. It created a page where subscribers can sign up for a chance to get up to $150 of the potential $15 million payout should the lawsuit succeed. (A disclaimer notes that “Elon Musk has way more money and lawyers than Cards Against Humanity, and while CAH will try its hardest to get me $100, they will probably only be able to get me like $2 or most likely nothing.”)

SpaceX didn’t immediately respond to a request for comment. But CAH isn’t the only Texas landowner that has raised questions about the company’s tactics. SpaceX has been aggressively growing its footprint in Southern Texas in recent years. The expansion, which has resulted in many locals selling their land to SpaceX, has rankled some longtime residents, according to an investigation by Reuters.

CAH says that Musk’s past behavior makes SpaceX’s actions “particularly offensive” to the company known for taking a stance on social issues.

“The 2017 holiday campaign that resulted in the purchase of the Property was based upon CAH undertaking efforts to fight against ‘injustice, lies, [and] racism,’” it states. “Thus, it is particularly offensive that these egregious acts against the Property have been committed by the company run by Elon Musk. As is widely known, Musk has been accused of tolerating racism and sexism at Tesla and of amplifying the antisemitic ‘Great Replacement Theory.’ Allowing Musk’s company to abuse the Property that CAH’s supporters contributed money to purchase for the sole purpose of stopping such behavior is totally contrary to both the reason for the contribution and the tenets on which CAH is based.”
