A new report from Heimdal reveals that jobseekers worldwide are being targeted by scams exploiting those looking for work in sectors such as finance, IT, and healthcare.
Based on an analysis of over 2,670 social media posts and comments from victims in 2023 and 2024, the report highlights the common tactics used by scammers, the industries most affected, and the emotional toll these scams take on their victims.
The finance and IT sectors are the most targeted by job scams, accounting for 35.45% and 30.43% of reported cases respectively, while healthcare accounts for 15.41% of incidents.
These industries, especially those offering remote positions, have become prime targets for fraudsters, the report says, with 43% of scam-related posts involving remote jobs, compared to 42% for on-site roles and 15% for hybrid positions.
Both senior and entry-level roles are heavily targeted: 35% of scams are directed at managers, while 34% target entry-level job seekers. These roles are particularly attractive to scammers because of the volume of candidates and the appeal of potentially lucrative job offers.
Several tactics are commonly used by scammers to defraud unsuspecting victims, but suspicious contact information is the most frequent red flag, representing 41.1% of cases. Unrealistic salary offers (25.7%) and misleading job descriptions (10.6%) are also used to lure victims.
Email is the most popular method scammers use to reach their targets, responsible for 30.75% of cases, followed by social media (20.19%) and websites (19.79%). The convenience of digital communication platforms has made it easier for scammers to impersonate legitimate companies and deceive job seekers.
The report also outlines several warning signs job seekers should be aware of to avoid falling into scam traps. Requests for upfront payments, cited in 25.08% of cases, are a common tactic used by scammers. Phishing attempts (18.81%) and requests for confidential information (17.49%) also signal potential fraud. Additionally, a lack of an interview process (15.84%) or receiving a job offer without applying (12.21%) are major red flags. Furthermore, poorly written job descriptions, often containing spelling errors or inconsistencies, are another sign of a potential scam. These descriptions, present in 10.56% of the cases, can indicate a lack of professionalism and authenticity.
Beyond the financial damage, job scams leave a lasting emotional toll on victims. The report shows that 35.29% of victims reported distress, 23.53% experienced anxiety, and 9.41% felt anger. Victims often feel ashamed and question their value as candidates, particularly after facing multiple rejections in their job search. Many victims also feel a deep sense of injustice, believing that regulatory bodies and law enforcement are not adequately equipped to protect them. This lack of closure can lead to lingering emotional scars that persist long after the scam.
To avoid falling for job scams, checking company reviews and verifying company information are crucial steps, with 26.96% and 22.87% of victims citing these as helpful strategies. Consulting trusted friends and verifying email domains are also recommended to ensure job offers are legitimate.
“It’s clear that job platforms are struggling to keep up with the growing number of scammers,” said Valentin Rusu, Lead Machine Learning Engineer at Heimdal Security.
“That’s why job seekers must adopt a cybersecurity-first mindset: approach every email and job offer with caution. Verify email domains, check company websites, read reviews, and consult with trusted friends before proceeding. And most importantly, never disclose personal information unless you’re absolutely certain of the company’s legitimacy,” Rusu added.
The introduction of ChatGPT has brought large language models (LLMs) into widespread use across both tech and non-tech industries. This popularity is primarily due to two factors:
LLMs as a knowledge storehouse: LLMs are trained on vast amounts of internet data and are updated at regular intervals (GPT-3, GPT-3.5, GPT-4, GPT-4o, and so on);
Emergent abilities: As LLMs grow, they display abilities not found in smaller models.
Does this mean we have already reached human-level intelligence, which we call artificial general intelligence (AGI)? Gartner defines AGI as a form of AI that possesses the ability to understand, learn and apply knowledge across a wide range of tasks and domains. The road to AGI is long, with one key hurdle being the auto-regressive nature of LLM training that predicts words based on past sequences. As one of the pioneers in AI research, Yann LeCun points out that LLMs can drift away from accurate responses due to their auto-regressive nature. Consequently, LLMs have several limitations:
Limited knowledge: While trained on vast data, LLMs lack up-to-date world knowledge.
Limited reasoning: LLMs have limited reasoning capability. As Subbarao Kambhampati points out, LLMs are good knowledge retrievers but not good reasoners.
No Dynamicity: LLMs are static and unable to access real-time information.
To overcome these challenges, a more advanced approach is required. This is where agents become crucial.
Agents to the rescue
The concept of an intelligent agent in AI has evolved over two decades, with implementations changing over time. Today, agents are discussed in the context of LLMs. Simply put, an agent is like a Swiss Army knife for LLM challenges: It can help us reason, provide means to get up-to-date information from the internet (solving the dynamicity issue with LLMs) and can achieve a task autonomously. With an LLM as its backbone, an agent formally comprises tools, memory, reasoning (or planning) and action components.
Components of AI agents
Tools enable agents to access external information — whether from the internet, databases, or APIs — allowing them to gather necessary data.
Memory can be short-term or long-term. Agents use scratchpad memory to temporarily hold results from various sources, while chat history is an example of long-term memory.
The Reasoner allows agents to think methodically, breaking complex tasks into manageable subtasks for effective processing.
Actions: Agents perform actions based on their environment and reasoning, adapting and solving tasks iteratively through feedback. ReAct is one of the common methods for iteratively performing reasoning and action.
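As a rough illustration, the reason-act loop that ReAct describes can be sketched in plain Python. Everything here is hypothetical: `call_llm` stands in for a real model call (it returns a canned trace so the loop is runnable), and the single `search` tool is a stub.

```python
# Minimal sketch of a ReAct-style agent loop. `call_llm` and the `search`
# tool are stand-ins for a real LLM API and a real retrieval tool.

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a hosted or local model.
    # Here we return a canned trace so the loop below is runnable.
    if "Observation:" in prompt:
        return "Thought: I have the answer.\nFinal Answer: Paris"
    return "Thought: I should look this up.\nAction: search[capital of France]"

def search(query: str) -> str:
    # Stub tool; a real agent would hit a search API or database here.
    return "France's capital is Paris."

def react_agent(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        output = call_llm(prompt)           # Reason: model emits a thought
        if "Final Answer:" in output:
            return output.split("Final Answer:")[-1].strip()
        if "Action: search[" in output:     # Act: run the requested tool
            query = output.split("search[")[1].split("]")[0]
            observation = search(query)
            # Feed the observation back so the next step can reason over it
            prompt += f"\n{output}\nObservation: {observation}"
    return "No answer within step budget"

print(react_agent("What is the capital of France?"))
```

The key property is the feedback loop: each tool result is appended to the prompt as an observation, so reasoning and action alternate until the model commits to a final answer or the step budget runs out.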
What are agents good at?
Agents excel at complex tasks, especially when in a role-playing mode, leveraging the enhanced performance of LLMs. For instance, when writing a blog, one agent may focus on research while another handles writing — each tackling a specific sub-goal. This multi-agent approach applies to numerous real-life problems.
Role-playing helps agents stay focused on specific tasks to achieve larger objectives, reducing hallucinations by clearly defining parts of a prompt — such as role, instruction and context. Since LLM performance depends on well-structured prompts, various frameworks formalize this process. One such framework, CrewAI, provides a structured approach to defining role-playing, as we’ll discuss next.
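The role/instruction/context split described above can be approximated without any framework. The sketch below is illustrative only: the field names and helper are our own, not CrewAI's actual API, but they follow the same structure a role-playing framework formalizes.

```python
# Illustrative structured prompt builder. Field names (role, instruction,
# context) are our own convention, not CrewAI's API; they mirror the
# role-playing prompt structure discussed in the text.

def build_agent_prompt(role: str, instruction: str, context: str) -> str:
    return (
        f"You are {role}.\n"
        f"Instruction: {instruction}\n"
        f"Context: {context}\n"
        "Stay within your role and answer only what the instruction asks."
    )

# Two agents sharing one larger objective (writing a blog post):
researcher = build_agent_prompt(
    role="a research agent specializing in AI trends",
    instruction="List three recent developments in multi-agent systems.",
    context="The findings will be handed to a writer agent for a blog post.",
)
writer = build_agent_prompt(
    role="a technical writer",
    instruction="Turn the researcher's findings into a 200-word summary.",
    context="Audience: engineers new to AI agents.",
)
print(researcher)
```

Keeping each prompt narrowly scoped to one sub-goal is what helps the agent stay on task and reduces hallucination.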
Multi-agent vs single-agent
Take the example of retrieval augmented generation (RAG) using a single agent. It’s an effective way to empower LLMs to handle domain-specific queries by leveraging information from indexed documents. However, single-agent RAG comes with its own limitations, such as retrieval performance or document ranking. Multi-agent RAG overcomes these limitations by employing specialized agents for document understanding, retrieval and ranking.
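To make the division of labor concrete, here is a toy sketch of multi-agent RAG in which separate "agents" (plain functions here) handle query understanding, retrieval and ranking. In a real system each stage would be backed by an LLM; the corpus and keyword-overlap scoring below are purely illustrative.

```python
# Toy sketch of multi-agent RAG: query understanding, retrieval and
# ranking are split across specialized stages. The corpus and the
# keyword-overlap scoring are illustrative stand-ins for LLM-backed agents.

CORPUS = [
    "Agents combine tools, memory, reasoning and action.",
    "RAG grounds LLM answers in retrieved documents.",
    "Paris is the capital of France.",
]

def understand(query: str) -> set:
    # Query-understanding agent: reduce the query to lowercase keywords.
    return set(query.lower().split())

def retrieve(keywords: set) -> list:
    # Retrieval agent: keep documents sharing at least one keyword.
    return [d for d in CORPUS if keywords & set(d.lower().split())]

def rank(keywords: set, docs: list) -> list:
    # Ranking agent: order candidates by keyword overlap, best first.
    return sorted(docs,
                  key=lambda d: len(keywords & set(d.lower().split())),
                  reverse=True)

def multi_agent_rag(query: str) -> str:
    keywords = understand(query)
    candidates = retrieve(keywords)
    ranked = rank(keywords, candidates)
    return ranked[0] if ranked else "No relevant document found"

print(multi_agent_rag("how does rag ground llm answers"))
```

Because each stage is isolated, a weak retriever or ranker can be improved or replaced without touching the rest of the pipeline, which is the advantage over a single-agent design.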
In a multi-agent scenario, agents collaborate in different ways, similar to distributed computing patterns: sequential, centralized, decentralized or shared message pools. Frameworks like CrewAI, AutoGen, and LangGraph + LangChain enable complex problem-solving with multi-agent approaches. In this article, I have used CrewAI as the reference framework to explore autonomous workflow management.
Workflow management: A use case for multi-agent systems
Most industrial processes are about managing workflows, be it loan processing, marketing campaign management or even DevOps. Steps, either sequential or cyclic, are required to achieve a particular goal. In a traditional approach, each step (say, loan application verification) requires a human to perform the tedious and mundane task of manually processing each application and verifying them before moving to the next step.
Each step requires input from an expert in that area. In a multi-agent setup using CrewAI, each step is handled by a crew consisting of multiple agents. For instance, in loan application verification, one agent may verify the user’s identity through background checks on documents like a driving license, while another agent verifies the user’s financial details.
This raises the question: Can a single crew (with multiple agents in sequence or hierarchy) handle all loan processing steps? While possible, it complicates the crew, requiring extensive temporary memory and increasing the risk of goal deviation and hallucination. A more effective approach is to treat each loan processing step as a separate crew, viewing the entire workflow as a graph of crew nodes (using tools like LangGraph) operating sequentially or cyclically.
Since LLMs are still in their early stages of intelligence, full workflow management cannot be entirely autonomous. Human-in-the-loop is needed at key stages for end-user verification. For instance, after the crew completes the loan application verification step, human oversight is necessary to validate the results. Over time, as confidence in AI grows, some steps may become fully autonomous. Currently, AI-based workflow management functions in an assistive role, streamlining tedious tasks and reducing overall processing time.
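The graph-of-crews idea with a human gate can be sketched as a simple pipeline. Everything below is a hypothetical illustration: each "crew" is a stubbed function, and the human-review step is a placeholder that would be an actual approval UI in a real system built on something like LangGraph.

```python
# Toy sketch of a workflow as a sequence of "crew" steps with a
# human-in-the-loop gate. The crews are stubbed functions; a real system
# would use a graph framework such as LangGraph with crews as nodes.

def verify_identity(app: dict) -> dict:
    # Crew 1: background check on documents such as a driving license.
    app["identity_ok"] = bool(app.get("driving_license"))
    return app

def verify_finances(app: dict) -> dict:
    # Crew 2: check the applicant's financial details (threshold is made up).
    app["finances_ok"] = app.get("income", 0) >= 30000
    return app

def human_review(app: dict) -> dict:
    # Placeholder for a real approval UI; here we auto-approve only when
    # both automated checks passed, standing in for human validation.
    app["approved"] = app["identity_ok"] and app["finances_ok"]
    return app

# Sequential graph: each node runs after the previous one completes.
PIPELINE = [verify_identity, verify_finances, human_review]

def run_workflow(application: dict) -> dict:
    for step in PIPELINE:
        application = step(application)
    return application

result = run_workflow({"driving_license": "DL-123", "income": 45000})
print(result["approved"])  # True for this sample application
```

Keeping each step as its own node means a human checkpoint can later be swapped out for full automation without restructuring the rest of the workflow.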
Production challenges
Bringing multi-agent solutions into production can present several challenges.
Scale: As the number of agents grows, collaboration and management become challenging. Various frameworks offer scalable solutions; for example, LlamaIndex uses event-driven workflows to manage multi-agents at scale.
Latency: Agent performance often incurs latency as tasks are executed iteratively, requiring multiple LLM calls. Managed LLMs (like GPT-4o) are slow because of implicit guardrails and network delays. Self-hosted LLMs (with GPU control) can help solve these latency issues.
Performance and hallucination issues: Due to the probabilistic nature of LLMs, agent performance can vary with each execution. Techniques like output templating (for instance, JSON format) and providing ample examples in prompts can help reduce response variability. The problem of hallucination can be further reduced by training agents.
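Output templating can be enforced on the consuming side as well: parse the model's reply against a fixed schema and re-prompt on failure. The sketch below is a minimal illustration; `call_llm` is a stub that deliberately returns malformed output on the first attempt so the retry path is exercised.

```python
import json

# Sketch of output templating: request JSON matching a fixed schema and
# re-prompt on parse/validation failure. `call_llm` is a stub standing in
# for a real model call; its two-attempt behavior is contrived for demo.

REQUIRED_KEYS = {"answer", "confidence"}

def call_llm(prompt: str, attempt: int) -> str:
    # Stub: first attempt returns free text, second conforms to the schema.
    if attempt == 0:
        return "Sure! The answer is 42."
    return '{"answer": "42", "confidence": 0.9}'

def get_structured_answer(prompt: str, max_attempts: int = 3) -> dict:
    for attempt in range(max_attempts):
        raw = call_llm(prompt, attempt)
        try:
            parsed = json.loads(raw)
            if REQUIRED_KEYS <= parsed.keys():
                return parsed  # Conforms to the template
        except json.JSONDecodeError:
            pass  # Fall through and retry with a stricter template reminder
        prompt += ('\nRespond ONLY with JSON: '
                   '{"answer": ..., "confidence": ...}')
    raise ValueError("No valid structured output after retries")

print(get_structured_answer("What is 6 * 7?"))
```

Pinning the output to a schema this way turns free-form model text into something downstream code can consume deterministically, which is what reduces run-to-run variability.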
Final thoughts
As Andrew Ng points out, agents are the future of AI and will continue to evolve alongside LLMs. Multi-agent systems will advance in processing multi-modal data (text, images, video, audio) and tackling increasingly complex tasks. While AGI and fully autonomous systems are still on the horizon, multi-agents will bridge the current gap between LLMs and AGI.
CareYaya, a platform that matches people who need caregivers with healthcare students, is working to disrupt the caregiving industry. The startup, which exhibited as part of the Battlefield 200 at TechCrunch Disrupt, is looking to enhance affordable in-home support, while also helping students prepare for their future healthcare careers.
The startup was founded in 2022 by Neal Shah, who came up with the idea for the startup based on his own experience as a caregiver for his wife after she became ill with cancer and various other ailments. During this time, Shah was a partner at a hedge fund and had to wind down his fund to become a full-time caregiver for two years.
To get additional care for his wife, Shah hired college students who were studying healthcare to be caregivers for his wife. Shah learned that other families were doing the same thing informally by posting flyers at local campuses to find someone who was qualified to look after their loved one.
“I was like, wouldn’t it be nice to just build a formal system for them to do it, where you don’t have to go to your local nursing school or your local undergrad campus and post flyers,” Shah told TechCrunch. “This is what I was doing. So we were like, if you can bring that into a formal capacity through a tech platform, you can make a big impact.”
Fast-forward to 2024, and the platform now has over 25,000 students on its platform from numerous schools, including Duke University, Stanford, UC Berkeley, San Jose State, University of Texas at Austin, and more.
CareYaya performs background checks on students who want to join the platform and then completes video-based interviews with them. On the user side, people can join the platform and then detail the type of care their loved one needs. CareYaya then matches students to families, whether it’s for one-off sessions or continuous care. After the first session, both parties can leave ratings.
The startup says it can help families save thousands of dollars on recurring senior care. While at-home care costs an average of $35 per hour in the U.S., CareYaya charges between $17 and $20 per hour.
Since the students providing the care are tech savvy, CareYaya is equipping them with AI-powered technology to recognize and track disease progression in patients with Alzheimer’s and dementia. The company recently launched an LLM (large language model) that integrates with smart glasses to gather visual data to help students provide better real-time assistance and conduct early dementia screening.
In terms of the future, CareYaya wants to explore expanding beyond the United States, as the platform has seen interest from people in places like Canada, Australia, and the United Kingdom.
Some of the most beloved roguelikes are single-player — the likes of Hades, Balatro, and Dead Cells are all solo titles. But Windblown, the new roguelike from Motion Twin, the studio that created Dead Cells, showed me just how cool it can be to play a roguelike with other people.
In Windblown, your character, one of a few adorable animal adventurers like an axolotl or a bat, is shot out of a cannon into a mysterious giant tornado to fight your way through various zones. As in Dead Cells, you can equip up to two main weapons. I typically have one for close-range bouts and another for long-distance attacks. But with every weapon, you’re also able to pull off a combo that uses a special move from the other weapon, called an “Alterattack.”
Here’s an example. I love using a crossbow to attack enemies from a distance, and I pair it with a giant heavy blade. I rarely use the blade on its own; instead, I use its Alterattack that cracks open the earth in a straight line to continue to wallop on enemies at range. That turns a run into a steady rhythm of slinging arrows and using the Alterattack at exactly the right time, and with my five hours so far with the game, I haven’t gotten tired of the pattern.
Windblown just launched in early access, and you can already unlock more than a dozen weapons, meaning there are a lot of combinations that I haven’t messed around with. And with four different biomes to get through on a run, there’s a lot to see, too.
The bosses are no joke. Image: Motion Twin
All of that would be enough to make Windblown part of my regular rotation of roguelikes I use to wind down at the end of a long day. But the game’s multiplayer is making Windblown the game I turn to every time I turn on my Steam Deck.
Windblown’s multiplayer lobbies, which you unlock fairly early on, let you play a full run with a team of three people. You can use voice and text chat to communicate, but it’s not required; I haven’t used those at all, instead relying on four in-game emoji. I also like that you can name your lobbies. I created one titled “help me get 1st win” and immediately had two helpful people join up to help me tackle the tornado. (Sadly, we did not get the win.)
When playing solo, I’ve found that I’m somewhat cautious and strategic as I think about how to use weapons and positioning to take on the game’s aggressive enemies and dodge their attacks. With the help of a team, battles are speedier and become delightful explosions of light, color, sound, and damage. It’s so fun to absolutely annihilate baddies with other people, and it’s comforting to know that they’ve got your back in a pinch.
There are a lot of great roguelikes to play right now; Hades II just got a huge update, Balatro is nearly impossible to put down (especially now that it’s on mobile), and I’ve wanted to get back into Shogun Showdown, which I think everyone is sleeping on. Windblown needed more than just its Motion Twin pedigree to stand out, but so far, the multiplayer is the hook that keeps me coming back.
Third-party alternative tools are already available
Quick Share on Android is the equivalent of AirDrop, enabling files to be easily transferred between Android devices, Chromebooks, and Windows – and there are signs that Google is planning to add support for iPhones, iPads, and Macs.
As spotted by the team at Android Authority, a comment left by a Google engineer on code essential to Quick Share mentions iOS and macOS specifically – a comment which would make more sense if an app for these platforms was in the works.
It’s not the biggest of hints, but there could be something to it. There’s already an official Quick Share for Windows app, so it’s not too much of a stretch to imagine Google rolling out additional apps for Apple devices along the same lines.
As yet there’s no official word from Google about this, but we’ll let you know if that changes. Quick Share was originally called Nearby Share, and having launched out of beta in 2020, is now available on the majority of Android phones and tablets.
It’s good to share
Apple wouldn’t build something like this into its own software of course, so it would require iPhone, iPad, and Mac owners to download a separate app for this to work – which might be something of a stumbling block.
However, users with an Android phone who also make use of an iPad or Mac, for example, would most likely be happy to download and install another app if it meant more seamless file sharing between all of their devices.
As pointed out over on 9to5Google, there are already third-party tools available to do the same job. An official option from Google would probably offer a more convenient and seamless experience though (like Google’s tools for switching to iOS).
You might remember that earlier this year Google and Samsung combined both of their sharing protocols to improve file transfers between Pixel phones and Galaxy phones – and iPhones could be next on the list.
EV drivers may relish that charging networks are climbing over each other to provide needed juice alongside roads and highways.
But they may relish even more not having to make many recharging stops along the way, as their EV soaks up the bountiful energy coming straight from the sun.
That’s the bet from Aptera Motors, a crowdfunded, California-based maker of solar-powered electric vehicles.
Aptera says it just completed a successful test drive of ‘PI-2’, the first production-intent version of its futuristic-looking two-seater, three-wheel solar electric vehicle. The EV’s latest version was engineered to rigorously test performance metrics such as range, solar charging capability, and efficiency, Aptera says.
“Driving our first production-intent vehicle marks an extraordinary moment in Aptera’s journey,” said Steve Fambro, Aptera’s Co-Founder and Co-CEO, in a statement. “It demonstrates real progress toward delivering a vehicle that redefines efficiency, sustainability, and energy independence.”
Aptera says it already has over 50,000 reservations for its EV, with deliveries scheduled to start in the second quarter of 2025. Last year, it unveiled a $33,200 launch version featuring a sub-six-second 0-60 mph acceleration time, a battery pack providing a range of 400 miles, and a solar charge range of 40 miles per day.
The Aptera EV also features Tesla’s North American Charging Standard (NACS) port to charge its battery.
The company said its production-intent models will continue to evolve over time as they undergo further tests, including for key metrics such as solar charging rates and watt-hours per mile.
Other versions of the Aptera EV were said to provide as much as 1,000 miles of range with a 0-60 mph acceleration in 3.5 seconds.
Aptera has so far raised over $100 million since launching a crowdfunding program three years ago.
Solar-powered electric vehicles are also being developed by the likes of Germany’s Sono Motors and the Netherlands’ Lightyear, and by big automakers such as Hyundai and Mercedes-Benz.
We first got news about this feature a few months ago, and people who regularly use ChatGPT for information will love it. If you’re a free user, then we have some bad news: the ChatGPT Search feature is only for ChatGPT Plus users for the moment. OpenAI will make this functionality available for its free and Enterprise users over the next couple of weeks. So, you’ll need to wait a bit if you want to use this feature.
ChatGPT now has a search feature
Since the beginning of this whole AI explosion, one of the things that companies fantasized about was the AI-powered search engine. The AI search engine already exists, thanks to Perplexity. Well, OpenAI’s search engine is similar to that one.
When you search for something, you’ll see an AI-generated explanation of what you searched for. This section will take up most of the screen. That’s not very different from what we’ve seen so far. However, off to the right side, you’ll see a Citations section. This will house the sources where ChatGPT got its information. In the image provided by The Verge, we see a list of five sources listed to the side. We’re not sure if the list includes more sources off-screen.
Five sources is not a bad amount, and they’re shown pretty prominently. ChatGPT isn’t hiding them behind a button. This shows that the company is thinking about the sources it’s surfacing.
In the screenshot, we see image results as well. This is good, as it shows that ChatGPT is trying to be a proper search engine.
Another way this feature is great is that ChatGPT can now access current events. Before, if you used the chatbot, you’d have to deal with a knowledge cut-off date. For example, when ChatGPT first launched, the model it used was more than a year out of date. Now, if you’re using ChatGPT for research, you’ll have access to more recent events.
Should Google be worried? Probably not yet. However, with ChatGPT’s massive user base, it may only be a matter of time.