
Android Auto support abruptly stopped for older phones


Android Auto support has abruptly stopped working on older phones. Google had previously raised the minimum Android version required for the platform, but until now the company had continued to let it run on older releases.

Google stopped support for Android Auto on older versions of Android

Android Auto is one of the most powerful platforms for in-car navigation, letting drivers use an Android phone’s functions through the vehicle’s infotainment console.

Modern versions of Android have Android Auto baked in. On older versions, however, the platform was a separate, optional app: before Android 9, users had to download the Android Auto app from the Google Play Store and then sync it with their vehicle’s infotainment system.

Google had originally ensured Android Auto worked on versions as old as Android 6. In July 2022, however, the company revised the minimum Android version required for Android Auto.


At that point, the search giant raised the floor from Android 6 to Android 8. The same has now happened again: Google has reportedly stopped supporting Android Auto on Android 8 and older versions of the OS.

What is the minimum version of Android needed to run Android Auto?

Earlier this year, Google raised the minimum requirement for Android Auto to Android 9. It is, however, interesting to note that several Android 8 users were still able to use Android Auto on their devices. That concession appears to have ended.

According to a Reddit thread, Google seems to be strictly enforcing the minimum requirement rule. In other words, smartphones running Android 8 are being greeted by a notification that says, “This phone no longer supports Android Auto.”

It seems Google had been lenient and was not enforcing the requirement, presumably to give people more time to upgrade. Although the chances are slim, a few Android smartphones could still be running Android 8 in 2024. After all, the version arrived way back in 2017.


Google could be enforcing the requirement because Android 9 and newer versions of the OS have Android Auto integrated, which offers a more streamlined experience. It is also possible that Google will remove Android Auto from the Play Store as a standalone app and keep it only as an updatable system app.

Android Auto has had multiple bugs recently, and it even stopped functioning in one of the Android 15 betas. Moreover, the app could be breaching EU rules. Google may be enforcing the rule to limit such issues, since maintaining backward compatibility is often complex.


Playdate is officially getting a Season Two with ‘about a dozen games’ next year


Panic slipped some major news into its fall Playdate Update: Season Two is happening, and we’ll see it next year. Can I get a “hell yeah!”? It’s been over two years since Season One dropped, and in the time since, it’s remained unclear whether another would ever follow. But in today’s livestream, Panic’s Video & Podcast host Christa Mrgan confirmed that Season Two is a go, and it’s “happening next year.” Consolation for killing the Stereo Dock, perhaps?

There are a lot of details we still don’t know about Season Two, like how much it’ll cost (Season One was included with the purchase of a Playdate), but a PR person for Panic confirmed to Engadget that Playdate owners will have to buy it from the Catalog. Information on pricing and the exact number of games will be released in 2025. The first season brought two games per week over the course of 12 weeks, amounting to 24 games in all. According to Mrgan, Season Two so far includes “about a dozen games.” There’s also apparently another “really cool surprise thing” that we aren’t allowed to know the details about just yet, and my curiosity is definitely piqued.


In addition to the Season Two announcement, the fall update also highlighted some upcoming Catalog games to look out for in the coming weeks and into 2025: Owlet’s Embrace, a metroidvania about an owl who is scared to fly; Comet, a puzzle-adventure game about a girl who is forced to face her fear of the dark after her brother goes missing; Office Chair Curling, which is exactly what it sounds like and looks absolutely absurd in the best way; Bwirds, a cute word puzzle game; a pinball game called Devils on the Moon from the makers of the Tetris-like Pullfrog; and the top-down boat racing game RowBot Rally.

There’s also a huge Catalog sale going on right now that runs through November 14. Some of our favorite Playdate games are deeply discounted, so if you’ve been waiting for the right moment to scoop up all the titles on your “to play” list, now would be the time.


Update, October 31 2024, 2:45PM ET: This story has been updated to include additional information from Panic/Playdate’s PR.


Microsoft’s agentic AI OmniParser rockets up open source charts


Microsoft’s OmniParser is on to something.

The new open source model that converts screenshots into a format that’s easier for AI agents to understand was released by Redmond earlier this month, but just this week became the number one trending model (as determined by recent downloads) on AI code repository Hugging Face.

It’s also the first agent-related model to do so, according to a post on X by Hugging Face’s co-founder and CEO Clem Delangue.


But what exactly is OmniParser, and why is it suddenly receiving so much attention?

At its core, OmniParser is an open-source generative AI model designed to help large language models (LLMs), particularly vision-enabled ones like GPT-4V, better understand and interact with graphical user interfaces (GUIs).

Released relatively quietly by Microsoft, OmniParser could be a crucial step toward enabling generative tools to navigate and understand screen-based environments. Let’s break down how this technology works and why it’s gaining traction so quickly.

What is OmniParser?

OmniParser is essentially a powerful new tool designed to parse screenshots into structured elements that a vision-language model (VLM) can understand and act upon. As LLMs become more integrated into daily workflows, Microsoft recognized the need for AI to operate seamlessly across varied GUIs. The OmniParser project aims to empower AI agents to see and understand screen layouts, extracting vital information such as text, buttons, and icons, and transforming it into structured data.


This enables models like GPT-4V to make sense of these interfaces and act autonomously on the user’s behalf, for tasks that range from filling out online forms to clicking on certain parts of the screen.
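To make that concrete, here is a rough sketch of what a parsed, structured view of a simple login screen might look like, and the kind of grounded action a reasoning model can return. The schema and field names are illustrative assumptions for this article, not OmniParser’s documented output format:

```python
# Hypothetical parsed elements for a login screen; the field names
# are illustrative, not OmniParser's actual output schema.
screen_elements = [
    {"id": 0, "type": "text_field", "bbox": [120, 80, 520, 120], "label": "Email"},
    {"id": 1, "type": "text_field", "bbox": [120, 140, 520, 180], "label": "Password"},
    {"id": 2, "type": "button", "bbox": [120, 200, 260, 240], "label": "Sign in"},
]

# Given these elements and a goal such as "log in as alice@example.com",
# the reasoning model can return a grounded action referencing an element
# id, instead of guessing pixel coordinates from the raw screenshot:
action = {"action": "type", "element_id": 0, "text": "alice@example.com"}
```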

While the concept of GUI interaction for AI isn’t entirely new, the efficiency and depth of OmniParser’s capabilities stand out. Previous models often struggled with screen navigation, particularly in identifying specific clickable elements, as well as understanding their semantic value within a broader task. Microsoft’s approach uses a combination of advanced object detection and OCR (optical character recognition) to overcome these hurdles, resulting in a more reliable and effective parsing system.

The technology behind OmniParser

OmniParser’s strength lies in its use of different AI models, each with a specific role:

  • YOLOv8: Detects interactable elements like buttons and links by providing bounding boxes and coordinates. It essentially identifies what parts of the screen can be interacted with.
  • BLIP-2: Analyzes the detected elements to determine their purpose. For instance, it can identify whether an icon is a “submit” button or a “navigation” link, providing crucial context.
  • GPT-4V: Uses the data from YOLOv8 and BLIP-2 to make decisions and perform tasks like clicking on buttons or filling out forms. GPT-4V handles the reasoning and decision-making needed to interact effectively.

Additionally, an OCR module extracts text from the screen, which helps in understanding labels and other context around GUI elements. By combining detection, text extraction, and semantic analysis, OmniParser offers a plug-and-play solution that works not only with GPT-4V but also with other vision models, increasing its versatility.
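For a feel of how such a pipeline fits together, here is a minimal sketch in Python using off-the-shelf detection (ultralytics YOLO) and OCR (easyocr) libraries. The checkpoint path, confidence threshold, and output schema are assumptions of this sketch, not OmniParser’s actual code, which lives in Microsoft’s repository:

```python
# Illustrative OmniParser-style parsing pipeline (a sketch, not the
# project's real API).
from ultralytics import YOLO  # pip install ultralytics
import easyocr                # pip install easyocr

def parse_screenshot(image_path: str) -> list[dict]:
    """Convert a raw screenshot into structured, VLM-ready elements."""
    # 1. Detect interactable regions (buttons, links, icons).
    detector = YOLO("icon_detect.pt")  # hypothetical fine-tuned checkpoint
    boxes = detector(image_path)[0].boxes

    # 2. Extract visible text with OCR.
    reader = easyocr.Reader(["en"])
    ocr_results = reader.readtext(image_path)

    elements = []
    for i, xyxy in enumerate(boxes.xyxy.tolist()):
        elements.append({
            "id": i,
            "type": "interactable",
            "bbox": [round(v) for v in xyxy],
            # 3. In OmniParser, a captioning model (BLIP-2) fills in a
            # functional description here, e.g. "submit button".
            "caption": None,
        })
    for bbox, text, conf in ocr_results:
        if conf > 0.5:  # keep only confident OCR hits
            elements.append({"type": "text", "bbox": bbox, "text": text})
    return elements
```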

Open-source flexibility

OmniParser’s open-source approach is a key factor in its popularity. It works with a range of vision-language models, including GPT-4V, Phi-3.5-V, and Llama-3.2-V, making it flexible for developers regardless of which advanced foundation models they have access to.
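Because the parser’s output is plain structured data, swapping the downstream reasoning model largely means changing which endpoint receives the serialized elements. A minimal sketch of that model-agnostic hand-off (the prompt format here is an assumption, not a documented interface):

```python
import json

def elements_to_prompt(elements: list[dict], goal: str) -> str:
    """Serialize parsed UI elements into a model-agnostic text prompt."""
    return (
        f"Goal: {goal}\n"
        f"Screen elements:\n{json.dumps(elements, indent=2)}\n"
        'Reply with JSON: {"action": ..., "element_id": ...}'
    )

# The same prompt string can be sent to GPT-4V, Phi-3.5-V, or
# Llama-3.2-V; only the client and endpoint change, not the parser.
```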


OmniParser’s presence on Hugging Face has also made it accessible to a wide audience, inviting experimentation and improvement. This community-driven development is helping OmniParser evolve rapidly. Microsoft Partner Research Manager Ahmed Awadallah noted that open collaboration is key to building capable AI agents, and OmniParser is part of that vision.

The race to dominate AI screen interaction

The release of OmniParser is part of a broader competition among tech giants to dominate the space of AI screen interaction. Recently, Anthropic released a similar, but closed-source, capability called “Computer Use” as part of its Claude 3.5 update, which allows AI to control computers by interpreting screen content. Apple has also jumped into the fray with its Ferret-UI, aimed at mobile UIs, enabling its AI to understand and interact with elements like widgets and icons.

What differentiates OmniParser from these alternatives is its commitment to generalizability and adaptability across different platforms and GUIs. OmniParser isn’t limited to specific environments, such as only web browsers or mobile apps—it aims to become a tool for any vision-enabled LLM to interact with a wide range of digital interfaces, from desktops to embedded screens. 

Challenges and the road ahead

Despite its strengths, OmniParser is not without limitations. One ongoing challenge is the accurate detection of repeated icons, which often appear in similar contexts but serve different purposes—for instance, multiple “Submit” buttons on different forms within the same page. According to Microsoft’s documentation, current models still struggle to differentiate between these repeated elements effectively, leading to potential missteps in action prediction.


Moreover, the OCR component’s bounding box precision can sometimes be off, particularly with overlapping text, which can result in incorrect click predictions. These challenges highlight the complexities inherent in designing AI agents capable of accurately interacting with diverse and intricate screen environments. 

However, the AI community is optimistic that these issues can be resolved with ongoing improvements, particularly given OmniParser’s open-source availability. With more developers contributing to fine-tuning these components and sharing their insights, the model’s capabilities are likely to evolve rapidly. 



Amazon’s supercharged Alexa won’t arrive this year


In the week’s least surprising news, Amazon’s reinvention of its Alexa voice assistant has reportedly fallen even further behind. According to Bloomberg, the launch of a new Alexa — billed as a smarter, more capable AI-powered voice assistant — has been pushed back. Again. “A person familiar with the matter said Alexa AI teams were recently told that their target deadline had been moved into 2025,” writes Bloomberg.

The revamped voice assistant, first announced last September, was expected to arrive this year, toting ChatGPT-style intelligence and more natural, conversational interactions. But earlier this summer, Fortune reported that the new Alexa might never be ready. Then, for the first time in half a decade, fall came and went without a big splashy Amazon event, and the rumors appeared to be true.


As further evidence that the company is retrenching, Amazon has cut off access to the beta of the new Alexa. You used to be able to request access by saying, “Alexa, let’s chat” to an Echo device. Now, the assistant responds with, “Let’s Chat is no longer available. For now, you can ask me questions or do things like set a timer, play music, turn on a connected light, and more.”


Bloomberg’s sources say those beta users who did get to chat have been unimpressed (I requested access several times but with no luck). Responses were slow, sounded stiff, and weren’t “all that useful,” they said. Plus, the new Alexa messes up smart home integrations, hallucinates, and apparently tries to show off. Bloomberg reports:

One tester says the ongoing hallucinations aren’t always wrong, just uncalled for, as if Alexa is trying to show off its newfound prowess. For instance, before, if you asked Alexa what halftime show Justin Timberlake and Janet Jackson performed at, it might say the 2004 Super Bowl. Now, it’s just as likely to give a long-winded addendum about the infamous wardrobe malfunction.

The challenge appears to lie in integrating large language models with the command and control method of today’s voice assistants. It seems we can’t have a smarter Alexa and a more capable Alexa. According to Bloomberg’s sources, using pre-trained AI models allows Alexa to answer more complicated questions but makes it more likely to fail at setting a kitchen timer or controlling smart lights.

Old Alexa may have its issues, but it can (mostly) reliably control my smart lights. No one is asking for a digital assistant they can chat with at home, but who won’t get off the couch to turn out the lights. I have my husband for that.

Bloomberg reports that Amazon CEO Andy Jassy has yet to convey a compelling vision for an AI-powered Alexa to the company. While he’s said publicly, “We continue to re-architect the brain of Alexa … ”, there’s been scant information about what an LLM-powered Alexa will bring to its millions of users — beyond being able to converse more naturally. More importantly, it seems Amazon has yet to prove it can do this without diminishing the features customers use the assistant for every day.



While the company searches for its vision, Jassy has installed a new head of the devices and services division under which Alexa falls. Panos Panay has been at the company for a year now, and Bloomberg reports the former head of Microsoft’s Surface division has “brought a focus on higher-quality design to a group adept at utilitarian gadgets.”

As I wrote this week, Amazon’s prior tack of making copious amounts of cheap hardware at the expense of better software is partly why Alexa hasn’t gotten measurably smarter over the last decade. However, with better hardware and a focus on building on Alexa’s strengths, rather than simply turning it into a chatbot, the company could recapture Jeff Bezos’s original vision of creating Star Trek’s “Computer.” But whatever the plan is for a new Alexa, it looks like it won’t be here anytime soon.


Quordle today – hints and answers for Friday, November 1 (game #1012)


Quordle was one of the original Wordle alternatives and is still going strong now more than 1,000 games later. It offers a genuine challenge, though, so read on if you need some Quordle hints today – or scroll down further for the answers.

Enjoy playing word games? You can also check out my Wordle today, NYT Connections today and NYT Strands today pages for hints and answers for those puzzles.


iOS 18.1 is not just about Apple Intelligence; here are bug fixes that iPhone users need ASAP - The Week


Apple recently released the much-anticipated iOS 18.1, and users with the iPhone 15 Pro, iPhone 15 Pro Max and the iPhone 16 lineup are going gaga over the new Apple Intelligence features.


The Apple Intelligence features will not be available on the iPhone 15, iPhone 15 Plus or older models. But that does not mean iOS 18.1 is unnecessary for those users: the update is loaded with 28 security fixes for all iPhone users.

Major security fixes


The update fixes issues in the iOS kernel and in WebKit, the engine behind the Safari browser. A kernel issue, tracked as CVE-2024-44239, that allowed apps to leak sensitive kernel state has been patched.

Two WebKit issues have also been patched. The first, CVE-2024-44261, would have allowed an attacker to view restricted content from the lock screen, while the second, CVE-2024-44244, prevented the Content Security Policy from being enforced when maliciously crafted web content was downloaded.

Another issue was CVE-2024-44255, which would let a malicious app run shortcuts without user consent. Three privacy bugs in Siri and a flaw that allowed hackers to break out of the Web Content sandbox have also been fixed.

Other bug fixes


In Podcasts, there was an issue with unplayed episodes being incorrectly marked as played; iOS 18.1 fixes this.

Videos recorded at 4K 60 fps while the device was warm could stutter when scrubbing through playback in Photos. This issue has been fixed.

Another issue that has been fixed is digital car keys not unlocking or starting a vehicle with passive entry after restoring from a backup or transferring directly from another iPhone.

There were complaints that iPhone 16 and iPhone 16 Pro models failed to restart on certain occasions. This has also been fixed.


Call recordings

This feature lets you record live phone calls. An automatic announcement informs participants that the call is being recorded.

Camera Control

This new feature allows iPhone 16 series users to switch to the front TrueDepth camera using Camera Control.


Spatial camera mode

For users with the iPhone 15 Pro and above, a new spatial camera mode will be available, allowing them to capture spatial photos and videos.

Hearing Test and Hearing Aid

Hearing Test and Hearing Aid features require AirPods Pro 2 with firmware version 7B19 or later. The Hearing Test feature gives you scientifically validated hearing test results while the Hearing Aid feature provides personalised, clinical-grade assistance.


The Hearing Aid feature is intended for those with perceived mild to moderate hearing loss. It is automatically applied to sounds in your environment as well as music, videos and calls.

Other improvements

Control Centre gains new options to add connectivity controls individually and to reset your configuration. RCS Business Messaging lets you connect with businesses over RCS; this requires network provider support.


Venom 3 to win weekend box office, second spot up for grabs


Venom: The Last Dance is set to win the North American box office for the second weekend in a row, according to a forecast by Boxoffice Pro.

Sony Pictures’ sci-fi action movie is projected to earn between $17 million and $23 million at theaters across the U.S. and Canada after raking in $51 million on its debut a week ago.

Starring Tom Hardy (Venom, Mad Max: Fury Road), Chiwetel Ejiofor (12 Years a Slave, The Martian), and Juno Temple (Ted Lasso, Killer Joe), Venom: The Last Dance currently has an audience score of 80% on Rotten Tomatoes, but a paltry 39% rating from more than 160 reviews by professional critics. It also has a less-than-stellar 6.2 rating on IMDb, while Digital Trends gave it only 1.5/5.

The movie’s official logline reads: “Eddie and Venom, on the run, face pursuit from both worlds. As circumstances tighten, they’re compelled to make a heart-wrenching choice that could mark the end of their symbiotic partnership.”

Check out the trailer below:


VENOM: THE LAST DANCE – Official Trailer (HD)

Eyeing second spot at this weekend’s domestic box office is a new movie starring Tom Hanks called Here.

Forecast to earn between $3 million and $7 million, Here is described as “a generational story about families and the special place they inhabit, sharing in love, loss, laughter, and life.” The movie is directed by Robert Zemeckis and is notable for its use of generative AI technology to face-swap and de-age the actors.

Also starring Robin Wright (Forrest Gump, Unbreakable) and Paul Bettany (WandaVision, A Beautiful Mind), Here hasn’t got off to a great start on Rotten Tomatoes, currently scoring only 38% from just over 50 reviews by professional critics (the Guardian’s 1/5 review calls it “a total horror show”), and just 5.6 on IMDb.


Watch the trailer below:

Here – Official Trailer (HD)

There’s a chance that Smile 2, in its third weekend, could nab second spot from Here, as the horror movie is forecast to take between $3 million and $5 million. Its ratings are certainly better: it scores 85% among professional critics and 81% among audiences on Rotten Tomatoes, and 7.2 on IMDb. Check out the trailer below:

Smile 2 | Official Trailer (2024 Movie) – Naomi Scott, Lukas Gage
