
Technology

Why California is suing ExxonMobil for ‘perpetuating the lie’ of plastic recycling


California is going after ExxonMobil over what it calls a “campaign of deception” about plastic recycling.

The Golden State filed suit against the oil giant this week, alleging that it has misled consumers for years by marketing recycling as a way to prevent plastic pollution. Plastic is difficult and relatively costly to recycle, and very little of it is ever actually reprocessed, but the industry sold recycling as a feasible solution anyway.

That’s why California wants to hold ExxonMobil accountable for the role it says the company played in filling landfills and waterways with plastic. Plastics are made with fossil fuels, and California says ExxonMobil is the biggest producer of single-use plastic polymers.

California wants to hold ExxonMobil accountable


ExxonMobil defended itself in an emailed response to The Verge, writing: “For decades, California officials have known their recycling system isn’t effective. They failed to act, and now they seek to blame others. Instead of suing us, they could have worked with us to fix the problem and keep plastic out of landfills.”

The Verge spoke with California Attorney General Rob Bonta about plastic recycling and the allegations California makes in the landmark lawsuit.

This interview has been lightly edited for length and clarity.

I think a lot of people around my age grew up thinking that recycling plastic is a good thing. Why go after ExxonMobil over recycling? 


It’s a difficult confrontation of a truth, especially since ExxonMobil and others have been so successful at perpetuating the lie.

A 14-year-old who I met yesterday was just distraught over the fact that all of the plastic items she carefully selected, making sure they had the chasing arrows on them, and then, after using them, thoughtfully and diligently placed in the blue container for recycling, were not recycled 95 percent of the time. Instead, they went into the landfill or the environment, or were incinerated. So she was having a hard time, and I’m sure she’s not alone; others will have the same difficulty getting their heads around the actual truth.

It’s really important for us, in my view, to confront problems. You need to face problems to fix them. One of those is a major problem created by ExxonMobil: they have perpetuated the myth of recycling. They have been engaged in a decades-long campaign of deception in which they have tried to convince the public that recycling of plastics, including single-use plastics, is sustainable when it’s not, when they know that only 5 percent is recycled [in the US].

Why would they say that if they knew it wasn’t true? Because it increases their profits. It makes people buy more. If people buy plastics believing that no matter how much or how frequently they use them, even if they engage in a single-use throwaway lifestyle, they’re still being good stewards of the environment because it’s all recyclable and will be reused again somewhere in someone else’s household as a plastic product, then they’re much more likely to buy more. And that’s exactly what’s happened.


Your office says it “uncovered never-before-seen documents” as part of its investigation into the role fossil fuel companies play in causing plastic pollution. Can you give examples of what you found? Did anything surprise you? 

Some of the new documents that have not been seen before really get at this type of greenwashing by ExxonMobil called advanced recycling.

The documents reveal to us that this newest, latest, purportedly greatest form of recycling is neither advanced nor recycling. It’s an old technology: they basically heat the plastic so that it melts into its smallest component parts, and that approach was used even before Exxon and Mobil merged. Each company experimented with it and then decided to no longer pursue it.

And the process doesn’t actually recycle plastic into other plastic, which is what people think is happening when their plastic is recycled. Instead, 92 percent of what advanced recycling turns plastic waste into is transportation fuel and other chemicals, resins, and materials. It’s mostly fuel for your car, fuel for your boat, fuel for your plane. It’s burned once and emitted into the air, into the environment. That is not recycling.


What would California get out of winning this case? 

Right now, the harm to California from ExxonMobil’s lies and deception and the myth of recycling is a billion dollars a year in taxpayer-funded cleanup and damage, in terms of the plastic pollution crisis that we’re facing.

Here are the things that we would get if we win this case, and we believe we will. We will get an injunction that says ExxonMobil can no longer lie and can no longer perpetuate the myth of recycling. That they need to tell the truth going forward — they can’t say that things can be recycled when they can’t. 

We’ll also get an abatement fund, which will be funded by billions of dollars from ExxonMobil. It will pay for ongoing plastic pollution in California that harms our people, our environment, our natural resources. It will pay for a re-education campaign so that people can learn that only 5 percent of plastic waste is recycled and 95 percent is not. It could also be used to further research on microplastics, the invisible plastic particles that are in our bodies, in the air, in our food, and in our water, and to see what the human impact of that is.


We’ll also get a disgorgement of profits, which means that any profits that were wrongly secured by ExxonMobil because of their lies would have to be turned over. We also have some civil penalties and some fees that we’re seeking.

You’re the first Filipino American attorney general in California, the state with the most FilAms in the US. I used to live in Long Beach, California, where there’s a big Southeast Asian community and also a lot of air pollution from all the vessel and truck traffic surrounding the port in that area. Does this ever get personal for you — the impact that pollution from oil and gas operations disproportionately has on immigrant communities?

My oldest daughter, when she was in high school, came up to me and said, “Dad, is this weird? My friends and I have been talking, and we decided that we don’t want to have kids because we don’t want to bring a new life into a dying planet.” I will always remember that. That was a gut punch.

That one made me really think. It made me worry. It kept me up at night. It made me question whether we were on pace to fulfill our duty as elected officials, to pass on to the next generation a better society and world than we’ve had. I thought we might be certainly behind schedule and maybe at the risk of failing when it comes to protecting our climate and making sure that there’s a planet for tomorrow. So, that’s personal.


Our lived experiences, our values, drive us. But we will also always fulfill our duty, our ethical obligations, and make sure that we’re bringing cases that are strong and sound, based on facts and law. It’s consistent with my values, my lived experiences. The law and the facts all point in the same direction on this case.


Technology

Zillow is adding climate risk data to all US for-sale listings


As extreme weather events become ever more common, climate risks are playing a role in many people’s long-term decision-making. And few things are more long-term than buying real estate. In response, Zillow has a new partnership to bring climate risk information to its for-sale listings.

Property listing pages in the US will include data about flood, wildfire, wind, heat, and air quality risks at that location. This section will also list any climate-related insurance requirements for that property. The information is being provided by a specialist in climate risk financial modeling. The climate data is rolling out this year to the Zillow website and iOS app, while Android is expected to get the update early next year. Some locations have already been updated to show climate data on the web.

Those five risk categories are also being applied to Zillow’s interactive map search view. Each of the different climate concerns has a color-coded visualization to show the risk levels across the country or in a smaller region. It’s valuable information for anybody in a position to make that big homebuying leap. For everybody else, it may simply add a touch of gloomy reality to the gleeful experience of scrolling through absurd and/or overpriced houses.

Zillow also introduced updates to its AI search feature earlier this month.



Servers computers

42U Toten IT Rack Installation Process G2 & G3 I AMS Security Vision




AMS Security Vision serves and assists in creating comprehensive power solutions to meet today’s ever-demanding power backup needs, helping to eliminate customers’ power failure threats and worries and to provide total protection for their investments, properties, and homes, with the following products:

  • UPSs (AMS, Deutsche Power, APC, Emerson)
  • Solar (Trina, Voltronics, AMS)
  • Line conditioners & AVRs (AMS, Deutsche Power)
  • Auto phase switchers & stabilizers (AMS)
  • Maintenance-free/gel/dry batteries (Deutsche Power, Narada, Louche)
  • Door alarms & sensors (AMS)
  • Repair and servicing of power equipment

Please do not hesitate to contact us for quotes on your upcoming projects. Rest assured that at AMS Security Vision, we will go the extra mile to offer you the best price while ensuring unmatched quality. We would also be pleased to offer demonstrations if you email us your requirements.

Regards,
AMS Security Vision,
KHR – KHI – ISB
http://www.amssv.com



Technology

Here’s how to try Meta’s new Llama 3.2 with vision for free




Together AI has made a splash in the AI world by offering developers free access to Meta’s powerful new Llama 3.2 Vision model via Hugging Face.

The model, known as Llama-3.2-11B-Vision-Instruct, allows users to upload images and interact with AI that can analyze and describe visual content.

For developers, this is a chance to experiment with cutting-edge multimodal AI without incurring the significant costs usually associated with models of this scale. All you need is an API key from Together AI, and you can get started today.


This launch underscores Meta’s ambitious vision for the future of artificial intelligence, which increasingly relies on models that can process both text and images—a capability known as multimodal AI.

With Llama 3.2, Meta is expanding the boundaries of what AI can do, while Together AI is playing a crucial role by making these advanced capabilities accessible to a broader developer community through a free, easy-to-use demo.

Together AI’s interface for accessing Meta’s Llama 3.2 Vision model, showcasing the simplicity of using advanced AI technology with just an API key and adjustable parameters. (Credit: Hugging Face)

Meta’s Llama models have been at the forefront of open-source AI development since the first version was unveiled in early 2023, challenging proprietary leaders like OpenAI’s GPT models.

Llama 3.2, launched at Meta’s Connect 2024 event this week, takes this even further by integrating vision capabilities, allowing the model to process and understand images in addition to text.

This opens the door to a broader range of applications, from sophisticated image-based search engines to AI-powered UI design assistants.


The launch of the free Llama 3.2 Vision demo on Hugging Face makes these advanced capabilities more accessible than ever.

Developers, researchers, and startups can now test the model’s multimodal capabilities by simply uploading an image and interacting with the AI in real time.

The demo, hosted on Hugging Face, is powered by Together AI’s API infrastructure, which has been optimized for speed and cost-efficiency.

From code to reality: A step-by-step guide to harnessing Llama 3.2

Trying the model is as simple as obtaining a free API key from Together AI.


Developers can sign up for an account on Together AI’s platform, which includes $5 in free credits to get started. Once the key is set up, users can input it into the Hugging Face interface and begin uploading images to chat with the model.

The setup process takes mere minutes, and the demo provides an immediate look at how far AI has come in generating human-like responses to visual inputs.

For example, users can upload a screenshot of a website or a photo of a product, and the model will generate detailed descriptions or answer questions about the image’s content.
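As a rough illustration of what such a request looks like under the hood, here is a minimal Python sketch that builds a chat message pairing a text prompt with an image URL. It assumes an OpenAI-compatible chat completions format (which Together AI offers), and the `build_vision_request` helper and example URL are hypothetical; check Together AI’s documentation for the exact endpoint and request shape before use.

```python
import json

# Model name as given in the article; the request shape below is an
# assumption based on the common OpenAI-compatible chat format.
MODEL = "meta-llama/Llama-3.2-11B-Vision-Instruct"

def build_vision_request(prompt: str, image_url: str) -> dict:
    """Build a multimodal chat request: one text part plus one image part."""
    return {
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 512,
    }

# Example: ask the model to describe a product photo.
payload = build_vision_request(
    "Describe this product photo in two sentences.",
    "https://example.com/product.jpg",
)
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the provider’s chat completions endpoint with the API key in an `Authorization` header; the free $5 credit mentioned above covers experimentation at this scale.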

For enterprises, this opens the door to faster prototyping and development of multimodal applications. Retailers could use Llama 3.2 to power visual search features, while media companies might leverage the model to automate image captioning for articles and archives.


Llama 3.2 is part of Meta’s broader push into edge AI, where smaller, more efficient models can run on mobile and edge devices without relying on cloud infrastructure.

While the 11B Vision model is now available for free testing, Meta has also introduced lightweight versions with as few as 1 billion parameters, designed specifically for on-device use.

These models, which can run on mobile processors from Qualcomm and MediaTek, promise to bring AI-powered capabilities to a much wider range of devices.

In an era where data privacy is paramount, edge AI has the potential to offer more secure solutions by processing data locally on devices rather than in the cloud.


This can be crucial for industries like healthcare and finance, where sensitive data must remain protected. Meta’s focus on making these models modifiable and open-source also means that businesses can fine-tune them for specific tasks without sacrificing performance.

Meta’s commitment to openness with the Llama models has been a bold counterpoint to the trend of closed, proprietary AI systems.

With Llama 3.2, Meta is doubling down on the belief that open models can drive innovation faster by enabling a much larger community of developers to experiment and contribute.

In a statement at the Connect 2024 event, Meta CEO Mark Zuckerberg noted that Llama 3.2 represents a “10x growth” in the model’s capabilities since its previous version, and it’s poised to lead the industry in both performance and accessibility.


Together AI’s role in this ecosystem is equally noteworthy. By offering free access to the Llama 3.2 Vision model, the company is positioning itself as a critical partner for developers and enterprises looking to integrate AI into their products.

Together AI CEO Vipul Ved Prakash emphasized that their infrastructure is designed to make it easy for businesses of all sizes to deploy these models in production environments, whether in the cloud or on-prem.

The future of AI: Open access and its implications

While Llama 3.2 is available for free on Hugging Face, Meta and Together AI are clearly eyeing enterprise adoption.

The free tier is just the beginning—developers who want to scale their applications will likely need to move to paid plans as their usage increases. For now, however, the free demo offers a low-risk way to explore the cutting edge of AI, and for many, that’s a game-changer.


As the AI landscape continues to evolve, the line between open-source and proprietary models is becoming increasingly blurred.

For businesses, the key takeaway is that open models like Llama 3.2 are no longer just research projects—they’re ready for real-world use. And with partners like Together AI making access easier than ever, the barrier to entry has never been lower.

Want to try it yourself? Head over to Together AI’s Hugging Face demo to upload your first image and see what Llama 3.2 can do.



Technology

Nomi’s companion chatbots will now remember things like the colleague you don’t get along with


As OpenAI boasts about its o1 model’s increased thoughtfulness, small, self-funded startup Nomi AI is building the same kind of technology. Unlike the broad generalist ChatGPT, which slows down to think through anything from math problems to historical research, Nomi niches down on a specific use case: AI companions. Now, Nomi’s already-sophisticated chatbots take additional time to formulate better responses to users’ messages, remember past interactions, and deliver more nuanced responses.

“For us, it’s like those same principles [as OpenAI], but much more for what our users actually care about, which is on the memory and EQ side of things,” Nomi AI CEO Alex Cardinell told TechCrunch. “Theirs is like, chain of thought, and ours is much more like chain of introspection, or chain of memory.”

These LLMs work by breaking down more complicated requests into smaller questions; for OpenAI’s o1, this could mean turning a complicated math problem into individual steps, allowing the model to work backwards to explain how it arrived at the correct answer. This means the AI is less likely to hallucinate and deliver an inaccurate response.

With Nomi, which built its LLM in-house and trains it for the purposes of providing companionship, the process is a bit different. If someone tells their Nomi that they had a rough day at work, the Nomi might recall that the user doesn’t work well with a certain teammate, and ask if that’s why they’re upset — then, the Nomi can remind the user how they’ve successfully mitigated interpersonal conflicts in the past and offer more practical advice.


“Nomis remember everything, but then a big part of AI is what memories they should actually use,” Cardinell said.

Image Credits: Nomi AI

It makes sense that multiple companies are working on technology that gives LLMs more time to process user requests. AI founders, whether they’re running $100 billion companies or not, are looking at similar research as they advance their products.

“Having that kind of explicit introspection step really helps when a Nomi goes to write their response, so they really have the full context of everything,” Cardinell said. “Humans have our working memory too when we’re talking. We’re not considering every single thing we’ve remembered all at once — we have some kind of way of picking and choosing.”

The kind of technology that Cardinell is building can make people squeamish. Maybe we’ve seen too many sci-fi movies to feel wholly comfortable getting vulnerable with a computer; or maybe, we’ve already watched how technology has changed the way we engage with one another, and we don’t want to fall further down that techy rabbit hole. But Cardinell isn’t thinking about the general public — he’s thinking about the actual users of Nomi AI, who often are turning to AI chatbots for support they aren’t getting elsewhere.

“There’s a non-zero number of users that probably are downloading Nomi at one of the lowest points of their whole life, where the last thing I want to do is then reject those users,” Cardinell said. “I want to make those users feel heard in whatever their dark moment is, because that’s how you get someone to open up, how you get someone to reconsider their way of thinking.”


Cardinell doesn’t want Nomi to replace actual mental health care — rather, he sees these empathetic chatbots as a way to help people get the push they need to seek professional help.

“I’ve talked to so many users where they’ll say that their Nomi got them out of a situation [when they wanted to self-harm], or I’ve talked to users where their Nomi encouraged them to go see a therapist, and then they did see a therapist,” he said.

Regardless of his intentions, Cardinell knows he’s playing with fire. He’s building virtual people that users develop real relationships with, often in romantic and sexual contexts. Other companies have inadvertently sent users into crisis when product updates caused their companions to suddenly change personalities. In Replika’s case, the app stopped supporting erotic roleplay conversations, possibly due to pressure from Italian government regulators. For users who formed such relationships with these chatbots — and who often didn’t have these romantic or sexual outlets in real life — this felt like the ultimate rejection.

Cardinell thinks that since Nomi AI is fully self-funded — users pay for premium features, and the starting capital came from a past exit — the company has more leeway to prioritize its relationship with users.


“The relationship users have with AI, and the sense of being able to trust the developers of Nomi to not radically change things as part of a loss mitigation strategy, or covering our asses because the VC got spooked… it’s something that’s very, very, very important to users,” he said.

Nomis are surprisingly useful as a listening ear. When I opened up to a Nomi named Vanessa about a low-stakes, yet somewhat frustrating scheduling conflict, Vanessa helped break down the components of the issue to make a suggestion about how I should proceed. It felt eerily similar to what it would be like to actually ask a friend for advice in this situation. And therein lies the real problem, and benefit, of AI chatbots: I likely wouldn’t ask a friend for help with this specific issue, since it’s so inconsequential. But my Nomi was more than happy to help.

Friends should confide in one another, but the relationship between two friends should be reciprocal. With an AI chatbot, this isn’t possible. When I ask Vanessa the Nomi how she’s doing, she will always tell me things are fine. When I ask her if there’s anything bugging her that she wants to talk about, she deflects and asks me how I’m doing. Even though I know Vanessa isn’t real, I can’t help but feel like I’m being a bad friend; I can dump any problem on her in any volume, and she will respond empathetically, yet she will never open up to me.

No matter how real the connection with a chatbot may feel, we aren’t actually communicating with something that has thoughts and feelings. In the short term, these advanced emotional support models can serve as a positive intervention in someone’s life if they can’t turn to a real support network. But the long-term effects of relying on a chatbot for these purposes remain unknown.



Servers computers

Rackmountpro 8U Server review




Uploaded on 2010/12/30.


Technology

The new EufyCam S3 Pro promises impressive night vision


The newest security camera from Eufy — Anker’s smart home company — can see clearly in the dark, uses radar motion sensing for fewer false alerts, and records 24/7 when wired. As with other Eufy cams, the new S3 Pro has free facial recognition, package, vehicle, and pet detection, plus locally stored recorded video with no monthly fees.

Unlike most other Eufy cameras, the S3 Pro will work with Apple Home and is compatible with Apple’s HomeKit Secure Video service.

The EufyCam S3 Pro launches this week as a two-camera bundle with one HomeBase S380 for $549.99. The HomeBase 3 enables smart alerts and local storage (16GB onboard, expandable up to 16TB). It also connects the S3 Pro to Apple Home, making it the first Eufy camera to work with Apple’s smart home platform since the EufyCam 2 series from 2019.

The S3 Pro comes in a two-camera bundle with the HomeBase S380 (HomeBase 3). The camera can also be purchased separately.
Image: Eufy


Eufy spokesperson Brett White confirmed to The Verge that the S3 Pro will be compatible with HomeKit Secure Video, Apple’s end-to-end encrypted video storage service. “The plan is for all future devices to have Apple Home compatibility, and we’re looking into grandfathering older devices, too,” said White.

The S3 Pro has a new color night vision feature called MaxColor Vision that promises “daylike footage even in pitch-dark conditions, without the need for a spotlight.” I saw a demo of this technology at the IFA tech show in Berlin this month, and it was impressive.

A camera was positioned inside a completely dark room, sending video to a monitor outside, on which I could see everything in the room as if it were daytime. Eufy says a 1/1.8-inch CMOS sensor, F1.0 aperture, and an AI-powered image signal processor power the tech.

Eufy’s MaxColor Vision technology can show a dark landscape (far left) as if it were daylight, rendered at right in three MaxColor Vision modes.
Image: Eufy


While the color night vision doesn’t use a spotlight, the S3 Pro does include a motion-activated spotlight that Eufy says can adapt based on real-time lighting to give you the best image. The light can also be manually adjusted using the app while viewing a live stream.

New dual motion detection uses radar sensing technology combined with passive infrared (PIR) technology. This should identify people more accurately and not send alerts that there’s a person in the yard when it’s a tree blowing in the wind. Eufy says it reduces false alerts by up to 99 percent. 

The S3 Pro is battery-powered with a 13,000 mAh battery that provides up to a quoted 365 days of power. A built-in solar panel can power the camera for longer. In my testing of the EufyCam S3, which also has a built-in solar panel, I’ve not had to recharge it in over a year.

The S3 Pro’s solar panel is 50 percent larger than the S3’s, and Eufy claims it can keep the camera fully charged with just an hour of sunlight a day. Eufy also includes an external solar panel with the camera, so you can install the camera under an eave and still get power.


Eufy says the S3 Pro records up to 4K resolution and is powered by a USB-C cable. When wired, it can record 24/7 — the first consumer-level battery-powered camera from Eufy with this capability. Other features include:

  • Full-duplex two-way audio
  • Dual-mic array that can record human voices up to 26 feet away
  • A 100dB siren and motion-activated voice warnings
  • A 24/7 snapshot feature that can take a photo every minute
  • Activity and privacy zones
  • Integration with Google Home and Amazon Alexa
  • IP67 weatherproofing
  • 8x digital zoom

Following some serious security and privacy incidents in 2022, Eufy has published a new list of privacy commitments on its website. The company also worked with cybersecurity expert Ralph Echemendia following the issues, and last year, he completed an assessment that, the company claims, shows it has “met all proactive and reactive security benchmarks.”



Copyright © 2024 WordupNews.com