Last week, OpenAI released its highly anticipated search product, ChatGPT Search, to take on Google. The industry has been bracing for this moment for months, prompting Google to inject AI-generated answers into its core product earlier this year, and producing some embarrassing hallucinations in the process. That mishap led many people to believe that OpenAI’s search engine would truly be a “Google killer.”
But after using ChatGPT Search as my default search engine (you can, too, with OpenAI’s extension) for roughly a day, I quickly switched back to Google. OpenAI’s search product was impressive in some ways and offered a glimpse of what an AI-search interface could one day look like. But for now, it’s still too impractical to use as my daily driver.
ChatGPT Search was occasionally useful for surfacing real-time answers to questions that would otherwise have required digging through ads and SEO-optimized articles. It presents concise answers in a clean format: links to the information’s sources appear on the right side, with headlines and short snippets that let you confirm the AI-generated text you just read is correct.
However, it often just felt impractical for everyday use.
In its current form, ChatGPT Search is unreliable for what people use Google for the most: Short, navigational queries. Queries shorter than four words represent the bulk of searches on Google; these are often just a few keywords that get you to the right webpage. They’re the kind of searches most people are barely even conscious they’re making all day, and it’s what Google tends to do very well.
I’m talking about “Celtics score,” “cotton socks,” “library hours,” “San Francisco weather,” “cafes near me,” and other queries that make Google the doorstep to the internet for billions of people.
My test run with ChatGPT Search was quite frustrating at times, and it made me conscious of just how many keyword searches I perform in a day. I couldn’t reliably find information using short queries, and for the first time in years, I actually longed for Google Search.
Don’t get me wrong, Google has declined in quality for the last decade or so, largely because it’s been flooded with ads and SEO. Still, I kept opening Google in a separate window during my test because ChatGPT Search couldn’t get me a correct answer or webpage.
Who would win: ChatGPT Search or short queries?
I typed in “Nuggets score” to check how a live NBA game between the Denver Nuggets and the Minnesota Timberwolves was going. ChatGPT told me the Nuggets were winning even though they were actually losing, and showed a Timberwolves score that was 10 points lower than it really was, according to a Google result at the same time.
Another time, I tried “earnings today” to check which companies were reporting quarterly results that could affect stock prices on Friday. ChatGPT told me that Apple and Amazon were reporting their results on Friday, even though both companies had already reported a day earlier. In other words, it hallucinated, presenting made-up information as fact.
In another test, I typed in a tech executive’s name to find their contact information. ChatGPT showed me a summary of the person’s Facebook profile, and hallucinated a link to their LinkedIn page, which produced an error message when I clicked it.
Another time, I typed in “baggy denim jeans,” hoping to shop. ChatGPT Search described to me what baggy denim jeans were in the first place (a definition I didn’t need), and recommended I go to Amazon.com for a nice pair.
I could go on, but you get the idea. Broken links, hallucinations and random answers defined my first day using ChatGPT Search.
Maybe a ‘Google killer’ someday, but not today
This was not an insignificant launch for OpenAI. Sam Altman praised the feature for being “really good,” even though he’s known for downplaying his startup’s AI capabilities. The reason this launch is different may have something to do with the fact that search is one of the biggest businesses on the internet, and OpenAI’s version could be a real threat to its biggest competitor, Google.
To be fair, Google Search is a 25-year-old product and ChatGPT Search is brand new. In a blog post, OpenAI says it plans to improve the feature based on user feedback in the coming months, and it seems more than likely this could be a significant area of investment for the startup.
To its credit, ChatGPT Search is rather good at answering long, written-out research questions. Something like, “What American professional sports league has the most diversity?” isn’t a question you could easily answer with Google, but ChatGPT Search is pretty good at scraping multiple websites and getting you a decent answer in just a couple of seconds. (Perplexity is also pretty good at these questions, and its search product has been around for well over a year.)
Compared to the traditional version of ChatGPT, which already had web access, the search feature feels like a better interface for browsing the web. There are now clearer links to the sources where ChatGPT gets its information; for news stories, ChatGPT taps the media companies it’s been striking all those licensing deals with.
The problem is that most searches on Google are not long questions like these. To really replace Google, OpenAI needs to get better at the practical, short searches people are already making throughout their day.
OpenAI is not shy about the fact that ChatGPT Search struggles with short queries.
“With ChatGPT search, we’ve observed that users tend to start asking questions in more natural ways than they have in the past with other search tools,” said OpenAI spokesperson Niko Felix in a statement emailed to TechCrunch. “At the same time, web navigational queries, which tend to be short, are quite common. We plan to improve the experience for these types of queries over time.”
That said, these short keyword queries have made Google indispensable, and until OpenAI gets them right, Google is still going to be the mainstay for many people.
There are a couple of reasons why OpenAI might be struggling with these short queries. The first is that ChatGPT relies on Microsoft Bing, which is widely regarded as an inferior search engine compared to Google. The second is that large language models may not be well suited to short prompts; LLMs typically need fully written-out questions to produce effective answers. Perhaps there needs to be some re-prompting, running short queries through an LLM to expand them into longer questions, before ChatGPT Search can handle such searches well.
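To make that idea concrete, here is a minimal sketch of what such a re-prompting step could look like using OpenAI’s Python client. The model name, prompt wording, and the expansion step itself are illustrative assumptions on my part, not anything OpenAI has said it does.

```python
# Hypothetical sketch: expand a terse keyword query into a fully
# written-out question before handing it to a search-capable model.
# The prompt and model name are illustrative, not OpenAI's pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def expand_query(short_query: str) -> str:
    """Rewrite a two- or three-word navigational query as an explicit question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Rewrite the user's keyword search as one clear, "
                        "fully written-out question. Reply with the question only."},
            {"role": "user", "content": short_query},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(expand_query("Nuggets score"))
    # e.g. "What is the current score of the Denver Nuggets game?"
```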
Though OpenAI has only now released its search product, Perplexity’s own AI search tool is already serving 100 million search queries a week. Perplexity has also been touted as a “Google killer,” but it runs into the same problems with short queries.
Aravind Srinivas, the CEO of Perplexity, discussed how people use his product differently compared to Google Search at TechCrunch Disrupt earlier this week: “The median number of words in a Google query is somewhere between two and three. In Perplexity, it’s around 10 to 11 words. So clearly, more of the usage in Perplexity is people coming and directly being able to ask a question. On the other hand, at Google, you’re typing in a few key words to instantly get to a certain link.”
I think the fact that people are not using these products for web navigation presents a bigger problem than OpenAI or Perplexity are letting on. It means that ChatGPT Search and Perplexity are not replacing Google Search for the task it’s best at: web navigation.
Instead, these AI products are filling a new niche, surfacing information that gets buried in traditional search. Don’t get me wrong, that’s valuable in its own right.
OpenAI and Perplexity both claim they will work on getting better at these short queries. Until then, I don’t think either of these products can fully replace Google. If OpenAI wants to replace the doorstep to the internet, it has to create a better one.
The Internet Archive is continuing the recovery process after a series of DDoS attacks took down its servers in early October. On Monday, the nonprofit digital library announced on X that its ‘Save Page Now’ service has been restored to the Wayback Machine.
The Wayback Machine resumed operation on October 14, and users can once again submit new web pages to be recorded and accessed later. As the X post notes, the Wayback Machine will begin collecting web pages that have not been archived since October 9, when the entire site was taken down.
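For anyone who wants to script this, Save Page Now has long been reachable with a plain HTTP request to web.archive.org/save/<url>. The sketch below assumes that anonymous form of the endpoint, which is rate-limited more aggressively than authenticated use.

```python
# Minimal sketch: trigger a Wayback Machine capture via the public
# Save Page Now endpoint (https://web.archive.org/save/<url>).
# Anonymous requests work but may be throttled; authenticated users
# get higher limits.
import requests

def save_page(url: str) -> str:
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=60)
    resp.raise_for_status()
    # On success, SPN typically redirects to the new snapshot URL,
    # so the final response URL points at the archived copy.
    return resp.url

if __name__ == "__main__":
    print(save_page("https://example.com"))
```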
The October DDoS attacks coincided with the Internet Archive’s move to disclose a data breach that saw more than 31 million records taken. Security researcher Troy Hunt, who runs the Have I Been Pwned service for monitoring compromised accounts, said that the two actions against the Internet Archive were “entirely coincidental” and likely taken by “multiple parties.”
For this week’s episode of Found we’re taking you backstage at TechCrunch Disrupt 2024. Becca Szkutak had the chance to talk with Dan Lorenc, the CEO and co-founder of cybersecurity startup Chainguard, following their conversation onstage with prominent investors, The Chainsmokers.
The pair discuss how the EDM duo’s venture fund MANTIS went from being viewed skeptically by traditional VCs to becoming a highly sought-after investment partner in the B2B space, how Lorenc scaled the company in a difficult time for cybersecurity, and what value celebrity investors can add to a startup.
In this conversation they also discuss:
Navigating tricky market timing after the SolarWinds attack in 2021
How luck can play a major role when it comes to fundraising
Pitching the value of Chainguard’s product to CISOs and CFOs
The unique value that MANTIS adds to the company as it scales and works to stand out from other security tech companies
The Mozilla Foundation laid off 30 percent of its workforce and completely eliminated its advocacy and global programs divisions, TechCrunch reports.
While Mozilla is best known for its Firefox web browser, the Mozilla Foundation — the parent of the Mozilla Corporation — describes itself as standing up “for the health of the internet.” With its advocacy and global programs divisions gone, its impact may be lessened going forward.
“The Mozilla Foundation is reorganizing teams to increase agility and impact as we accelerate our work to ensure a more open and equitable technical future for us all. That unfortunately means ending some of the work we have historically pursued and eliminating associated roles to bring more focus going forward,” Brandon Borrman, the Mozilla Foundation’s communications chief, said in an email to TechCrunch.
This is Mozilla’s second round of layoffs this year. In February, the Mozilla Corporation laid off around 60 workers and said it would be making a “strategic correction” that would involve cutting back its work on a Mastodon instance. Mozilla also shut down its virtual 3D platform and refocused its efforts on Firefox and AI. The Mozilla Foundation had around 120 employees before this more recent round of layoffs, according to TechCrunch.
In an email sent to all employees on October 30th, Nabiha Syed, the foundation’s executive director, said that the advocacy and global programs divisions “are no longer part of our structure.”
“Navigating this topsy-turvy, distracting time requires laser focus — and sometimes saying goodbye to the excellent work that has gotten us this far because it won’t get us to the next peak,” wrote Syed, who previously worked as the chief executive of The Markup, an investigative news site. “Lofty goals demand hard choices.”
The Mozilla Foundation did not immediately respond to The Verge’s request for comment.
Criminals are adding hundreds of malicious packages to npm
The packages try to fetch a stage-two payload to infect the machines
The crooks went to lengths to hide where they host the malware
Software developers, especially those working with cryptocurrencies, are once again facing a supply chain attack via open source code repositories.
Cybersecurity researchers from Phylum have warned that a threat actor has uploaded hundreds of malicious packages to the open source package repository npm. The packages are typosquatted versions of Puppeteer and Bignum.js. Developers who need these packages for their products might end up downloading the wrong version by mistake, since the names are deliberately similar.
If used, the package will connect to a hidden server, fetch the malicious second-stage payload, and infect the developers’ computers. “The binary shipped to the machine is a packed Vercel package,” the researchers explained.
Hiding the IP address
Furthermore, the attackers wanted to execute something else during package installation, but since the file wasn’t included in the package, the researchers couldn’t analyze it. “An apparent oversight by the malicious package author,” they say.
What makes this campaign stand out from other similar typosquatting supply chain campaigns is the lengths the crooks went to hide the servers they controlled.
“Out of necessity, malware authors have had to endeavor to find more novel ways to hide intent and to obfuscate remote servers under their control,” the researchers said. “This is, once again, a persistent reminder that supply chain attacks are alive and well.”
The server’s IP address does not appear in the first-stage code. Instead, the code first queries an Ethereum smart contract, where the IP is stored. This ended up being a double-edged sword: because the blockchain is permanent and immutable, the researchers could observe every IP address the crooks ever used.
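As a rough illustration of the mechanism Phylum describes, the sketch below reads a string out of an Ethereum contract with web3.py. The contract address, ABI, and method name are hypothetical placeholders, not the real campaign’s; the point is simply that a payload can fetch its server address from the chain instead of shipping it in the code.

```python
# Illustrative sketch of fetching a server address from an Ethereum
# contract, so no IP appears in the distributed code itself.
# CONTRACT, the ABI, and getString() are hypothetical placeholders.
from web3 import Web3

RPC_URL = "https://eth.llamarpc.com"  # any public Ethereum RPC endpoint
CONTRACT = "0x0000000000000000000000000000000000000000"  # placeholder
ABI = [{
    "name": "getString",
    "type": "function",
    "stateMutability": "view",
    "inputs": [],
    "outputs": [{"name": "", "type": "string"}],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
contract = w3.eth.contract(address=Web3.to_checksum_address(CONTRACT), abi=ABI)

# Read the stored value. Because every update to contract state is a
# recorded transaction, researchers can recover every address the
# attackers ever stored -- the "double-edged sword" noted above.
server_address = contract.functions.getString().call()
print(server_address)
```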
Since the targets are developers working with cryptocurrency, the goal was most likely to steal their seed phrases and gain access to their wallets.
Software developers, particularly those working in the Web3 space, are often targets of such attacks. Therefore, double-checking the names of all downloaded packages is a must.
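As a minimal illustration of that advice, a script can flag dependency names that are suspiciously close to, but not exactly, a well-known package. The allowlist below is a tiny illustrative sample, not a real registry of popular packages.

```python
# Sketch: flag dependencies whose names nearly match a popular package,
# a common sign of typosquatting. POPULAR is a toy sample list.
import difflib

POPULAR = {"puppeteer", "bignumber.js", "react", "lodash", "express"}

def check(name: str) -> None:
    if name in POPULAR:
        return  # exact match against a known-good name
    close = difflib.get_close_matches(name, POPULAR, n=1, cutoff=0.8)
    if close:
        print(f"WARNING: '{name}' looks like a typosquat of '{close[0]}'")

for dep in ["puppeter", "bignum.js", "lodash"]:
    check(dep)
# WARNING: 'puppeter' looks like a typosquat of 'puppeteer'
# WARNING: 'bignum.js' looks like a typosquat of 'bignumber.js'
```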
When you buy a Google Pixel 9 Pro or iPhone 16 Pro, you know how much you’re paying. Both phones have retail prices of $1,000. They’re expensive, but they’re in line with other flagship smartphones.
But is that the real price of the phones? That’s how much you pay, but how much do Google and Apple pay to make the handsets? Thanks to some new data, we finally have an answer.
Recent data indicates that the production costs for Google’s Pixel 9 Pro are lower than many expected. According to Nikkei, the Google Pixel 9 Pro costs Google approximately $406 to manufacture. This includes $80 for the device’s Tensor G4 chipset, $75 for the Samsung M14 display panel, and $61 for the camera components. Jukanlosreve on X (formerly Twitter) provided this breakdown.
The manufacturing cost of the Pixel 9 Pro is about 11% lower than that of the Pixel 8 Pro. However, the newer model features a smaller display and battery. The Pixel 9 Pro XL, not the Pixel 9 Pro, is more comparable to the Pixel 8 Pro. This year’s lineup includes three models — the standard Pixel 9, the Pixel 9 Pro, and the Pixel 9 Pro XL — marking the first time since the Pixel 4 XL was launched in 2019 that the Pixel series has featured three models.
The same Nikkei report revealed that Apple’s cost to produce the iPhone 16 Pro is $568 per unit. This includes $110 for the M14 display, $91 for the camera components, and $135 for the A18 chipset. The total cost is slightly lower than that of the iPhone 15 Pro.
The Pixel 9 Pro and the iPhone 16 Pro feature a 6.3-inch OLED display with a dynamic refresh rate of 120Hz. The Pixel 9 Pro has a 50-megapixel primary camera, a 48MP ultrawide camera, and a 48MP telephoto lens. In contrast, the iPhone 16 Pro offers a 48MP primary camera, a 48MP ultrawide camera, and a 12MP telephoto lens.
The Pixel 9 Pro and the iPhone 16 Pro start at $1,000 in the U.S. According to the bill of materials, Google appears to profit more per unit than Apple.
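For a rough sense of what that means, subtracting each reported build cost from the $1,000 retail price gives the implied per-unit hardware margin. This back-of-the-envelope math ignores R&D, software, marketing, and distribution costs, so it is only a directional comparison.

```python
# Implied per-unit hardware margin from the Nikkei build-cost figures.
RETAIL = 1000
pixel_cost, iphone_cost = 406, 568

print(f"Pixel 9 Pro:   ${RETAIL - pixel_cost} per unit")   # $594
print(f"iPhone 16 Pro: ${RETAIL - iphone_cost} per unit")  # $432
```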