Let’s take a look at job scams: what they are and why they are surging. In this blog post, we will look into the reasons behind the surge, dig a little deeper into the challenges job boards face and the scope of the problem, and explore how a hybrid content moderation solution can help mitigate the risk.

Undoubtedly, the rapid advances in artificial intelligence (AI) and technology have fueled increased productivity and efficiency on a global scale. On the flip side, though, as technology evolves rapidly, so does fraud.

Job board scams are phishing for your personal details.

A closer look at the data

The pandemic generated record-high unemployment, with an ever-growing pool of applicants searching for new job opportunities. In the wake of Covid-19, job scams rose at an unprecedented rate: according to a report issued by CNBC, Americans lost a whopping $68 million to fraudulent job offers in the first quarter of 2022 alone.

The Federal Trade Commission (FTC) states that job fraud tripled over the last three years, while a study published by the Better Business Bureau (BBB) revealed that 14 million victims fell prey to job scams in 2022, with financial losses amounting to $2 billion.

Breaking down the anatomy of job scams

Scams have existed for a long time. They take various forms and can equally affect big corporations and less tech-savvy individuals. In fact, scammers have also targeted high-profile corporations, with Google and Facebook losing more than $100 million to business email compromise (BEC) frauds.

The turbulent job landscape is being exploited by fraudsters, who employ various deceitful tactics to defraud victims and steal data or sell it on the dark web. Scraping is one method fraudsters use to obtain personally identifiable information (PII), which can then be used to create fake passports, driving licenses, or even new bank accounts.

Scammers pose as legitimate employers on popular job boards using a variety of sophisticated scam schemes, ranging from cloned company websites to spear-phishing attacks that spread malware, which is then used to commit identity fraud or extract large sums of money.

Fake job listings on social media

Besides job boards, fraudsters share fake job offers on popular and trustworthy social media networks, capitalizing on the fact that 59% of the population uses social media and spends, on average, 2 hours and 29 minutes daily on social networks.

Let’s take a look at how this affects companies of all sizes.

LinkedIn

LinkedIn boasts more than 700 million users across the globe, and with that scale comes the potential for fake job ads. Alarmingly, over half a billion LinkedIn users have been targeted by fraudsters through scraping. Fraudsters use this method to gain access to publicly viewable data such as:

  • job titles
  • emails
  • former colleagues
  • accolades
  • names
  • phone numbers

This data is then sold to hackers for phishing scams.

Fraudsters can also distribute job descriptions as PDF attachments or links that contain malware.

Another technique recently seen on LinkedIn is impersonation fraud. Fraudsters use spoofing, a method where they steal companies’ logos and branding while hiding their actual webmail accounts.

“You appeared in [number] searches this week” is one common LinkedIn phishing technique, designed to steal users’ login credentials through fake LinkedIn landing pages.

A 232% spike in email phishing attacks that impersonate LinkedIn has been reported since February 2022.

Twitter

Twitter is another social media platform that is not immune to fake job offers. In particular, scammers use shortened URLs (e.g., Bitly) that lead users outside the platform to unverified web pages.

The bottom line is that fake accounts can be created quite easily, using either real or fabricated identities, and social media platforms still struggle to verify profiles that appear legitimate but are, in fact, populated with fake connections.

Content moderation matters

In-house moderation teams often lack the technological know-how and expertise to detect fake job listings, cloned company websites, or malicious URLs, among other phishing methods.

To successfully mitigate the ever-increasing risk of scams, job platforms need a highly sophisticated hybrid moderation solution powered by artificial intelligence (AI), machine learning, and a human workforce.

At Besedo, we pride ourselves on adapting to your platform’s moderation needs and goals. Our filter specialists are committed to supporting you in creating customized rules and filters that align with your platform’s specific guidelines.

Implio is the ultimate hybrid solution: it automates the handling of the bulk of fake job listings and flags any other questionable listings that require manual review. Besedo offers a pool of highly trained and experienced moderators who work as an extension of your in-house team, saving time and freeing up resources.

Besedo’s offering is unique because it leverages AI and human intelligence to protect your job platform from fraudulent listings and identify fake employer profiles, spoofed company websites, trojan horse attacks, and malicious links in real time.

Written by

Anamela Agrodimou

Sales And Marketing Specialist at Besedo

Anamela is currently based in Athens, Greece, where she works with marketing and sales. She speaks many languages and is keen to learn even more. She earned her Master’s degree in marketing at Jönköping University in Sweden and maintains that she liked the snow during the cold Nordic winters.


It is no surprise that during the festive season, record numbers of people use the internet. Customers turn to e-commerce for all their Christmas shopping needs, dating apps see spikes in users as people swipe right to find new people to talk to and online gaming proves a popular option for those with extra time off during the holidays, looking to relax and take part in some online recreational fun.

It is a huge opportunity for online retailers, especially online marketplaces, where people will be looking for the best deals available. But it also presents an opportunity too good to miss for scammers and fraudsters looking to take advantage of what can only be described as a chaotic season online.

Safety of the user, not profit for the platform

For brands, this is the time of year when the hard work put into creating good experiences and strengthening reputation really pays off. No one wants to be the brand that ruins Christmas or falls short at a critical moment. In online business, the safety of the user, not the profit of the platform, should come first: putting user experience at the heart of your priorities increases customer trust, acquisition, and retention.

Christmas is a high-pressure moment in the calendar; however, it is also a time when good experiences are more impactful and negative ones have greater consequences. While individuals should naturally take additional care online and remain vigilant, we should also expect platforms, whether shopping, dating, or gaming, to protect users from potential harm, even more so at times of peak traffic.

It’s the most fraudulent time of the year

All marketplaces need to take precautions to prevent fraud on their platforms in the lead up to Christmas and onwards, keeping user experience front of mind. Larger marketplaces may see a smaller percentage of fraudulent posts, but considering that they have a much larger user base, even small percentages can lead to thousands of users becoming victims and associating a negative experience with the site. These sorts of harmful experiences could risk long-term reputational damage and the potential for fraud on their platform to spiral out of control.

As passions run high, it is not surprising that bad actors take the opportunity to engage in scams and fraud during the festive period. Last year, we decided to prove it. Our moderators investigated nearly three thousand listings of popular items on six popular UK online marketplaces to understand whether marketplaces have content moderation pinned down or whether fraudulent activity was still slipping through the net. The findings revealed that 15% of items reviewed showed signs of being fraudulent or dangerous.

This year, the risk is set to be higher than ever: a survey from independent UK parcel carrier Yodel suggests that almost a third of people plan to do the entirety of their festive shopping online this year, more than a fourfold increase on last Christmas.

Good user experience, not just for Christmas

While this spike in usage means that brands have to be more vigilant than ever, these trends are also unlikely to reverse. That makes this Christmas an important learning opportunity, one that will stress-test businesses’ moderation systems at activity levels that might soon become the norm.

A positive and seamless customer experience at Christmas will not only drive sales in the short term but will also help to engage customers, building an emotional bond with the brand and in turn, increasing customer loyalty. But it should be remembered – the importance of a positive customer experience this Christmas is not only essential for the festive season but throughout the year.

With seasonal greetings

Axel Banér

Sales Director – EMEA

The term ‘sharing economy’ is famously difficult to define. For some, it refers to any digital platform that connects people more directly than traditional business models. For others, a business is only truly part of the sharing economy if it enables people to make money from things they would buy and own anyway.

What all forms of the sharing economy share, though, is a reliance on trust. Whether you are hailing a ride, staying in someone’s spare room, borrowing a lawn mower, or paying someone to do a small one-off job, you’re entering into a transaction that starts with a decision to trust a stranger.

The difficulty of encouraging that decision is exacerbated by the fact that, from the user’s perspective, getting this wrong is potentially a high-stakes issue: while sometimes it might mean merely getting a dissatisfying product, interactions like borrowing a car or renting a room can pose serious risks to health and wellbeing.

Content’s double-edged sword

The question for platforms, then, is what kinds of structures and tools best encourage both positive outcomes and – almost as importantly – a sense of trust amongst the userbase.

Alongside approaches like strict rules on what can be listed and physical checks of users’ offerings, many sharing economy platforms turn to user-generated content (UGC) for this purpose. User reviews, photos of what is on offer, communication options, and even selfies can all help to humanize the platform, validate that users are real people, and generate a sense of trustworthiness.

At the same time, however, allowing UGC can open the door to specific risks. Low-quality images, for example, can worry people and erode trust, while giving users more control over how listings are presented creates greater potential for scams, fraud, and fake profiles. A permissive approach to content can also lead to users conducting business off-platform, side-stepping both safety and monetization systems.

This is why there is such a variety of approaches to UGC in the sharing economy. Where some platforms, like Airbnb, encourage users to share as much about themselves and their property as possible, others, like Uber, allow only a small selfie and the ability to rate riders and drivers out of five stars. In between these open and defensive approaches, there are any number of combinations of content and sharing permissions a business might choose – but what delivers the best outcome?


Using the carrot, not just the stick

Intuitively, many might assume that the platforms which feel safest will be those with the strictest rules, only allowing interaction between users when absolutely necessary and banning those who engage in damaging behavior. In recent research, a group of organizational psychologists described this as ‘harsh’ regulation, as opposed to the ‘soft’ regulation of supporting users, encouraging interaction, and influencing them to engage in positive behavior.

Perhaps surprisingly, the research found that soft regulation has a stronger positive impact than harsh regulation. The sharing economy, after all, digitalizes something humans have always done in the physical world: try to help one another in mutually beneficial ways. Just as we take our cues on how to behave in everyday life from the people around us, seeing positive engagements on platforms sets a standard for how we treat each other – and trust each other – in digital spaces. Being able to talk, share, and humanize helps people to engage, commit, and trust.

This suggests that we may need to shift how we think about managing content in order to make the most of its potential to drive long-term growth. Content moderation is seen, first and foremost, as a way of blocking unwanted content – and that’s certainly something it achieves. At the same time, though, having clear insight into and control over how, when, and where content is presented gives us a route towards lifting the best of a platform into the spotlight and giving users clear social models of how to behave. In essence, it’s an opportunity to align the individual’s experience with the best a platform has to offer.

Ultimately, creating high-trust sharing economy communities is in everyone’s best interest: users are empowered to pursue new ways of managing their daily lives, and businesses create communities where people want to stay, and promote to their friends and family, for the long term. To get there, we need to focus on tools and approaches which enable and promote positive interactions.


The Christmas season is here, and as the festivities kick off, online retailers hold their breath and wait to see whether all of the preparations they have diligently made will pay off in revenue and sales during this ‘Golden Quarter.’ Will the website be able to handle the extra demand? Will all orders be shipped before Christmas?

Yet the National Cyber Security Centre (NCSC) has highlighted another pressing concern that can have a lasting impact on revenue. Last week it launched a major awareness campaign called Cyber Aware, advising potential customers to be aware of an increase in fraud on online platforms this year. Millions of pounds are stolen from customers through fraud every year, including a loss of £13.5m from November 2019 to the end of January 2020, according to the National Fraud Intelligence Bureau.

Fraud is a major concern for marketplaces, which are aware of the trust and reputational damage that such nefarious characters on their platform can create. While consumer awareness and education can help, marketplaces know that keeping only one eye on the ball when it comes to fraud, especially within user-generated content (UGC), is not enough. Fraudulent activity deserves full attention and careful monitoring. Tackling fraud is not a one-off activity but a dedication to constant, consistent, rigorous, and quality moderation where learnings are continuously applied for the ongoing safety of the community.

With that in mind, our certified moderators investigated nearly three thousand listings of popular items on six popular UK online marketplaces to understand whether marketplaces have content moderation pinned down or whether fraudulent activity is still slipping through the net. After conducting the analysis during the month of November, including the busy Black Friday and Cyber Monday shopping weekend, we found that:

  • 15% of items reviewed showed signs of being fraudulent or dangerous, rising to 19% on Black Friday and Cyber Monday
  • Pets and popular consumer electronics are particular areas of concern, with 22% of PlayStation 5 listings likely to be scams, rising to more than a third of PS5 listings flagged over the Black Friday weekend
  • 19% of iPhone 12 listings on marketplaces were also found to show signs of being scams
  • Counterfeit fashion items are also rife on popular UK marketplaces, with 15% of listings found to be counterfeits

The research demonstrates that, even after whatever filtering and user protection measures marketplaces have in place, a significant number of the products for sale leave customers open to having their personal details stolen or receiving counterfeit goods. We know that many large marketplaces already have a solution in place but are still allowing scams to pass through the net, while smaller marketplaces may not have thought about putting robust content moderation practices and processes in place.

Both situations are potentially dangerous if not tackled. While it is certainly a challenging process to quickly identify and remove problematic listings, it is deeply concerning that we are seeing such high rates of scams and counterfeiting in this data. Powerful technological approaches, using AI in conjunction with human analysts, can very effectively mitigate against these criminals. Ultimately, the safety of the user should be placed at the heart of every marketplace’s priorities. It is a false economy to treat fail-safe content moderation as too expensive a problem to deal with: in the longer term, addressing even the small amounts of fraud that slip through the net can have a large and positive long-term impact on the financial health of the marketplace through increased customer trust, acquisition, and retention.

2020 was a year we would not want to repeat from a fraud perspective; we have not yet won the battle against criminals. As we move into 2021, we will be working to help the industry towards a zero-scam future, one where we take the learnings and lessons from 2020 together to provide a better, safer community for users and customers, both for their safety and for the long-term, sustainable financial health of marketplaces.


From ancient Greek merchants attempting to claim insurance by sinking their ships to the Roman army selling the Emperor’s throne, fraudsters have been willing to try their luck whenever a system is open to exploitation.

And succeeding.

However, most historical crimes were essentially isolated events. A single fraudster could always be a repeat offender, but the sheer number of people who can now be duped across different digital channels means fraud has become much more of an everyday concern for all of us.

But how did we get to this point? What risks do we need to be aware of right now? What can we do about it?

A history of digital deviance

Similar to other forms of fraud, digital scams date back further than you might think. Email phishing allegedly first took place in the early 1970s, although it is generally accepted that the term was coined, and the practice became commonplace, in the mid-1990s.

Since then, the online world has seen con artists try their hand at everything from fake email addresses to using information gleaned from massive data breaches, with $47 million being the largest amount a single person has lost to an email scam.

Incidentally, the most famous email scam, the 419, aka Advance-fee, aka The Nigerian Prince scam, surfaced as mail fraud some 100 years ago.

But email isn’t the only digital channel that’s been hijacked. The very first mobile phone scams came about during the high-flying ’80s, when mobile phones first became available and long before they were popular.

Text messages purporting to come from a family member and requesting that funds be “quickly sent” to a specific account began soon after, though again didn’t surge in number until well into the ’90s, when uptake soared.

Of course, these aren’t the only forms of online fraud that surfaced at the start of the Internet’s popularity. Password theft, website hacks, and spyware – among others – proliferated at an alarming rate worldwide at a similar time.

One of the biggest problems we face today is the ease with which online fraud can take place.

Hackers continue to evolve their skills in line with advances in tech. But when you consider the number of websites that anyone can access, and the marketplaces and classifieds sites and apps that rely on user-generated content, pretty much anyone can find a way to cheat these systems and the people who use them.

Fraud follows trends

Scammers operate with alarming regularity all year round. However, they’re much more active around specific retail events.

However, while 2020 was shaping up to be a landmark year for fraudsters, given the many different sporting and cultural events such as the Euro 2020 and Copa América football tournaments and, of course, the Summer Olympics, it seems that fate had very different plans for all of us: the COVID-19 pandemic.

But true to form, scammers are not above using an international healthcare crisis to cheat others. COVID-19 has given rise to different challenges and opportunities for online businesses. For example:

  • Video conferencing services
  • Delivery apps
  • Dating websites
  • Marketplaces

These have largely been financially advantageous, given they are digital services.

However, given the knock-on economic factors of coronavirus, there may be wider-reaching behavioral shifts to consider.

Fraudulent behavior simply seems to adapt to any environment we find ourselves in. In the UK, research shows that over a third (36%) of people have been the target of scammers during the lockdown. Nearly two-thirds said they were concerned that someone they knew could be targeted.

Examples playing on the fear of contamination include the sale of home protection products, while other more finance-focused scams include fake government grants (requesting personal information), help with credit applications (for a fee), and even investment opportunities promising recession-proof returns for those with enough Bitcoin to put into such schemes.

The gap is closing

It’s clear that online fraud is closely aligned with wider trends. In fact, the newer something is, the more likely scammers are to step in. Looking at the timelines of many of these scams against when the underlying technology was first invented, it’s clear that the gap between the two is closing as time progresses.

There’s a very good reason for this: the pace of adoption. Basically, the more people there are using a particular device or piece of software, the more prolific specific types of scams become.

Consider the release of the latest iPhone: an event that gives fraudsters ample incentive to target innocent users.

The launch of a new iPhone lets scammers prey upon consumer desire, manipulating Apple fans with the promise of getting their hands on the latest technology. As with most e-commerce scams, this often comes with promises of delivery before the official launch date.

Not only that, but the accompanying hype around the launch provides the perfect opportunity for fraudsters to run other mobile phone-oriented scams.

Catch ’em all

In general, new tech and trends lead to new scams. For instance, Pokémon Go introduced new scams such as the Pokémon Taxi: so-called expert drivers were literally taking users for a ride, driving them to locations that rare Pokémon were said to frequent.

The fact it was so new and its popularity surged in such a short period of time made it a whole lot easier for fraud to materialize. Essentially, there was no precedent set – no history of usage. No one knew what exactly to anticipate. As a result, scams materialized as quickly as new users signed up.

In one case, players were tricked into paying for access because, supposedly, new server space was needed. Not paying the $12.99 requested would result in their Pokémon Go accounts being frozen. While that might be a small price to pay for one person, it adds up to a significant amount at scale.

Moderation methods

Regardless of their methods, fraudsters are ultimately focused on one thing. Whether they’re using ransomware to lock out users from their data, posting too-good-to-be-true offers on an online marketplace, or manipulating lonely and vulnerable people on a dating app – the end result is cold hard cash through scalable methods.

Hackers will simply barge their way into digital environments, but scammers targeting online marketplaces and classifieds sites essentially need to worm their way into their chosen environment. In doing so, they often leave behind a lot of clues that experienced moderators can detect.

For example, the practice of ad modification or the use of Trojan ads on public marketplaces follows particular patterns of user behavior that can cause alarm bells to ring.

Take appropriate steps

So what can marketplace and classified site owners do to stay ahead of fraudsters? A lot, in fact. Awareness is undoubtedly the first step to countering scams, but on its own it will only raise suspicion rather than act as a preventative measure.

Data analysis is another important step. But, again, the biggest issue is reviewing and moderating at scale. So how can a small moderation team police every single post or interaction when thousands are created daily?

This is where moderation technology – such as filters – can help weed out suspicious activity and flag possible fraud.

To stay ahead of fraudsters, you need a combination of human expertise, AI, and filters. While it’s possible for marketplace owners to train AI to recognize these patterns at scale, completely new scams won’t be picked up by AI (as it relies on being trained on a dataset). This is where experienced and informed moderators can really add value.

People who follow scam trends and spot new instances of fraud quickly are on full alert during big global and local events. They can very quickly create and apply the right filters and begin building the dataset for the AI to be trained on.
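To make this concrete, here is a minimal, hypothetical sketch of what such a filter might look like in Python. The field names, keywords, and price thresholds are illustrative assumptions, not rules from any real moderation system; the point is that each flagged listing, together with the moderator's final decision, becomes labeled data that an AI model can later be trained on.

```python
import re

# Hypothetical sketch of a simple rule-based filter that sends suspicious
# listings to manual review. Field names, keywords, and thresholds are
# illustrative assumptions only.
SCAM_PATTERNS = [
    re.compile(r"\b(western union|wire transfer|gift card)\b", re.IGNORECASE),
    re.compile(r"\bpay(ment)? (up ?front|in advance)\b", re.IGNORECASE),
]
PRICE_FLOOR = {"iphone": 300, "ps5": 250}  # rough "too good to be true" thresholds

def review_reasons(listing: dict) -> list[str]:
    """Return the reasons (if any) why a listing should go to manual review."""
    reasons = []
    text = f"{listing.get('title', '')} {listing.get('description', '')}"
    for pattern in SCAM_PATTERNS:
        if pattern.search(text):
            reasons.append(f"matched pattern: {pattern.pattern}")
    price = listing.get("price")
    for keyword, floor in PRICE_FLOOR.items():
        if keyword in text.lower() and price is not None and price < floor:
            reasons.append(f"price suspiciously low for '{keyword}'")
    return reasons

# Each flagged listing, plus the moderator's final decision, can be stored as
# a labeled example and later used to train an AI model.
print(review_reasons({"title": "iPhone 13 new", "price": 90,
                      "description": "Pay up front by wire transfer"}))
```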

Conclusion

Ultimately, as tech advances, so too will scams. And while we can’t predict what’s around the corner, adopting an approach to digital moderation that’s agile enough to move with the demands of your customers – and with fast intervention when new scam trends appear – is the only way to future-proof your site.

Prevention, after all, is much better than a cure. But a blend of speed, awareness, and action is just as critical where fraud is concerned.


The outbreak of COVID-19, or coronavirus, has thrown people all over the world into fear and panic for their health and economic situation. Many have been flocking to stores to stock up on essentials, emptying the shelves one by one. Scammers are taking advantage of the situation by maliciously playing on people’s fear. They target items that are hard to find in stores and make the internet, and especially online marketplaces, their hunting ground to exploit desperate and vulnerable individuals and businesses. Price gouging (charging unfairly high prices), fake medicine, and non-existent loans are all ways scammers try to exploit marketplace users.

In this worldwide crisis, now is a great time for marketplaces to step up and show social responsibility by making sure that vulnerable individuals don’t fall victim to corona-related scams and that malicious actors can’t profit from stockpiling and selling medical equipment sorely needed by nurses and doctors fighting to save lives.

Since the start of the Covid-19 epidemic we’ve worked closely with our clients to update moderation coverage to include Coronavirus related scams and have helped them put in place new rules and policies.

We know that all marketplaces are currently struggling to get on top of the situation, so to help, we’ve decided to share some best practices for handling moderation during the epidemic.

Here are our recommendations on how to tackle the Covid-19 crisis, protect your users and your brand, and retain the trust users have in your platform.

Refusal of coronavirus-related items

Ever since the outbreak started, ill-intentioned individuals have made the price of some items spike to unusually high rates. Many brands have already taken the responsible step of refusing certain items they wouldn’t usually reject, and some have set bulk-buying restrictions (just like some supermarkets have done) on ethical and integrity grounds.

Google stopped allowing ads for masks, and many other businesses have restricted the sale or price of certain items. Amazon removed thousands of listings for hand sanitizer, wipes and face masks and has suspended hundreds of sellers for price gouging. Similarly, eBay banned all sales of hand sanitizer, disinfecting wipes and healthcare masks on its US platform and announced it would remove any listings mentioning Covid-19 or the Coronavirus except for books.

In our day-to-day moderation work for clients all over the world, we’ve seen a surge of coronavirus-related scams, and we’ve developed guidelines based on the examples we’ve seen.

To protect your customers from being scammed or falling victim to price gouging, and to preserve your users’ trust, we recommend you refuse ads, or set up measures against stockpiling, for the following items.

  • Surgical masks and face masks (types FFP1, FFP2, FFP3, etc.) have been scarce and have seen their price tags spike dramatically. Overall, advertisements for all kinds of medical equipment associated with Covid-19 should be refused.
  • Hand sanitizer and disposable gloves are also very prone to being sold by scammers at incredibly high prices. We suggest either banning the ads altogether or setting regular prices on these items.
  • Empty supermarket shelves have caused toilet paper, usually a cheap item, to be sold online at extortionate prices; we suggest you monitor and ban these ads accordingly.
  • Any ads mentioning coronavirus or Covid-19 in the text should be manually checked to ensure that they aren’t created with malicious intent.
  • The sale of miracle medicines claiming to cure the virus should be refused.
  • Depending on the country and its physical distancing measures, ads for home services such as hairdressers, nail technicians, and beauticians should be refused.
  • In these uncertain times, scammers have been selling loans or cash online, preying on the most vulnerable. Make sure to look for these scams on your platform.
  • Similarly, scammers have been targeting students with claims about interest rates being adjusted.


Optimize your filters

Ever since the crisis started, scammers have grown more sophisticated by the day, finding loopholes to circumvent security measures. To promote their scams through alternative routes, they use different wordings such as “Sars-CoV-2” or describe masks by their reference numbers, such as “149:2001” or “A1 2009”. Make sure your filters are optimized and your moderators are continuously briefed and educated to catch all coronavirus-related ads.
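As an illustration, a simple keyword filter can be broadened to catch these evasions. The sketch below is a hypothetical Python example; the terms and reference numbers are taken from the examples above and are not an exhaustive or production-ready list.

```python
import re

# Illustrative sketch of a keyword filter that also catches common evasions.
# The exact terms and reference numbers are examples, not an exhaustive list.
COVID_FILTER = re.compile(
    r"""
    \b(
        covid[- ]?19 | corona[- ]?virus | sars[- ]?cov[- ]?2 |   # virus names
        ffp[123] | kn95 | n95 |                                  # mask types
        149:2001 | a1[ ]?2009                                    # standard/reference numbers
    )\b
    """,
    re.IGNORECASE | re.VERBOSE,
)

def needs_review(ad_text: str) -> bool:
    """Send any ad mentioning these terms to manual review or price checks."""
    return bool(COVID_FILTER.search(ad_text))

# Example: both an explicit and an evasive wording are caught.
assert needs_review("FFP2 masks, 10-pack, fast shipping")
assert needs_review("Protection against Sars CoV 2, limited stock")
```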

Right now, we suggest that you tweak your policies and moderation measures daily to stay ahead of the scammers. As the crisis evolves, malicious actors will without doubt continue to find new ways to exploit the situation. As such, it’s vital that you pay extra attention to your moderation efforts over the coming weeks.


The fraud landscape is continually shifting, with fraudsters’ methods growing more sophisticated by the day. With cybercrime profits rising to $1.5 trillion last year, scammers are getting increasingly creative, improving their deceptive and lucrative methods alongside the continuous development of the internet.
Let’s take a closer look at some of the most sophisticated scams currently surfacing on the internet.

Fake website scams

Scammers are getting more elaborate in their techniques, and they take more precautions than ever before to avoid being exposed and caught by the authorities.

A current trend making the headlines involves scammers setting up fake websites to replicate reputable online stores or marketplaces while carefully executing their scams.

A news story that broke in Spain earlier this year is considered the largest cyber-scam in the country’s history. A 23-year-old, along with his accomplices, managed more than 30 fraudulent websites generating an income estimated at €300,000 per month.

The fraudsters sold mainly consumer electronics, directing potential buyers to their own fraudulent pages, which looked like replicas of reputable websites and used companies’ logos and names to dupe users.

To avoid suspicion from the authorities, these sites would only stay live for a few days while being intensely advertised on search engines and social media before disappearing into thin air. The aim was to attract as many potential customers as possible in the shortest amount of time.

Victims were asked to send bank transfers for their orders, but of course, the goods would never arrive. The money received would be deposited into bank accounts set up by mules recruited to transfer money illegally on behalf of others. The scammers then only needed to withdraw the money at cashpoints.

Another technique used by fraudsters plays on the fact that most of us have been warned to check for secure websites when browsing the web. The presence of “https” and the lock icon is supposed to indicate that the connection is secure and that users can share their data safely. However, this can easily be replicated. By displaying those security symbols, scammers make it easier to fool potential victims.

Fake website scams, which are also popping up on sharing economy websites such as Airbnb, show that as people have become warier of online scams, fraudsters have taken their methods to the next level of sophistication.

Romance mule scams

Online dating sites have been one of romance scammers’ preferred hunting grounds to find their next victim.

The FBI’s online crime division recently flagged an alarmingly growing trend in the romance scam department. Online dating scammers have expanded their romance scam strategies by adding a new dark twist to it and taking advantage of their victims in a whole new way. This dating scam turns love-seekers into unwittingly recruited criminals, also known as ‘money mules’.

Fraudsters, under the guise of a relationship, are using online dating sites to recruit their next victim and dupe them into laundering stolen money.

Here’s how they work. The scammers, pretending to be American or European citizens living or working abroad, start grooming their victims over several months by establishing a supposedly trustworthy relationship with them. They then have their victim act as a financial middleperson in a variety of fraudulent activities. There are many ways victims can unwittingly help fraudsters. For instance, they can print and post packages or letters, often containing counterfeit checks, or pick up money at Western Union and forward it elsewhere. They might even open a bank account under the pretense of sending and receiving payments directly, except that these accounts are then, unbeknownst to the victim, used to aid criminal activities.

A study estimates that 30% of romance scam victims in 2018 were used as money mules.

Indeed, romance scam victims are the perfect mules for these sorts of frauds. Blindly in love, victims will not doubt their lovers’ intentions and likely never steal the money or ask for anything in exchange for their services, unlike professional fraudsters.

This romance mule scam makes it complicated for local authorities to follow the money, and even if the victims get caught, they do not know the actual identity and location of the scammers. Tracking the fraudsters’ movements becomes an intricate or impossible task.

Romance mule victims often do not know they are a part of these fraud schemes or criminal activities until it’s too late. Despite the victims not losing any money, the FBI warns that they might face legal or financial consequences for participating in such a scheme.

The Telegraph reported the story of a 61-year-old British man who was tricked by fraudsters into funding terrorists in exactly this way. He believed he was corresponding with a wealthy businesswoman he had met on an online dating site. The woman claimed she needed her money in the UK so she could pay her European employees. The man would then send checks on her behalf, inadvertently and unknowingly becoming part of a fraud scheme.


Impersonation scams

Phishing scams are very well known; however, a variation, the ‘impersonation scam’, has boomed over the past few years, impacting both users’ online safety and companies’ reputations.

These phishing emails might seem completely genuine as they look nearly identical to those of reputable and reliable websites, including Google or Apple, and often end up bypassing spam filters. Like many fraud schemes, impersonation scams are based on trust.

Last year, the popular streaming service Netflix was the target of an email phishing scam in Ireland, sent to thousands of subscribers on the pretext of a maintenance and verification issue. In a similar way to the fake website scams mentioned above, these malicious emails, looking like perfect replicas of Netflix emails, featured a link to update credit card information or login credentials. The link, however, did not direct users to the Netflix website but to a site managed by scammers.

Scams often target popular brands with a large user-base to lure subscribers into giving out personal information. Not only is this dangerous for the customer, but it also threatens brands’ reputation.

Staying ahead of the scammers

With fraudsters persistently refining their scamming techniques, companies must always be one step ahead in the prevention of these scams to protect their users and keep them safe on their site.

Marketplace leakage, which we wrote about in a previous article, refers to users leaving your platform and continuing their conversations beyond your site’s security measures. This technique of luring users away from your site is used in the scam examples mentioned in this article and dramatically increases the risk of your users being scammed. To ensure the safety of your users, online marketplaces need to keep interactions on their platforms by preventing the display of personal details and hidden URLs.

Marketplace leakage can be avoided by improving your content moderation efforts. With accurate automated moderation, you can instantly spot and block any content aimed at drawing users away from your platform.
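For illustration, here is a rough Python sketch of how such a leakage filter might look. The regular expressions and the obfuscation tricks they cover (such as "at"/"dot" spellings) are assumptions for the example and would need tuning for real traffic and languages.

```python
import re

# Rough sketch of a leakage filter: spot emails, phone numbers, and URLs
# (including lightly obfuscated variants) in user-generated content.
# Patterns are illustrative and will need tuning per market and language.
LEAKAGE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+\s*(@|\(at\))\s*[\w-]+\s*(\.|\(dot\))\s*\w{2,}", re.IGNORECASE),
    "phone": re.compile(r"(\+?\d[\d\s().-]{7,}\d)"),
    "url": re.compile(r"(https?://|www\.)\S+|\b\w+\s*(\.|\(dot\))\s*(com|net|org|io)\b", re.IGNORECASE),
}

def find_leakage(text: str) -> dict[str, list[str]]:
    """Return every suspected contact detail or link, grouped by type."""
    hits = {}
    for label, pattern in LEAKAGE_PATTERNS.items():
        matches = [m.group(0) for m in pattern.finditer(text)]
        if matches:
            hits[label] = matches
    return hits

print(find_leakage("Interested? Mail me at john (at) example (dot) com or call +44 7700 900123"))
```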

To learn more about setting up efficient automated moderation filters to protect your users, check out our Filter Creation Masterclass.


Internet fraud is getting more and more sophisticated. And for online marketplaces – or any other platform that relies on User Generated Content – it’s often well-hidden or undetectable. Scammers are an increasingly resourceful bunch, so if there’s a system to be gamed, you can bet they’ll find a way to work it.

However, with the right insight, awareness, and detection processes in place, site owners can keep their users safe – and stop scams before they endanger anyone.

Let’s have a look at some of the most common online scam types to be aware of on your online marketplace, how to stay on top of them, and ultimately how to prevent them.


Online Shopping Scams

One of the most common types of fraudsters plaguing digital marketplaces, online shopping scammers usually advertise high-ticket items for sale at low prices. Typically, these include:

  • Mobile phones
  • Video game consoles
  • Laptops
  • Jewelry
  • Cars
  • Commercial vehicles and heavy equipment (an increasingly common target)

They may be advertised along with a believable yet fabricated story, such as the claim that they’re selling “excess stock” or that the goods have “minor damage,” for example.

The reason scammers do this is simply to give some degree of credibility to their request for partial payment upfront. Of course, they have no intention of selling any goods at all. They simply aim to dupe users.

As a marketplace owner, it’s important to advise your users that if something sounds too good to be true, it usually is. It is also vital to warn them against sending any form of payment before obtaining any goods. They should also be wary of paying by direct transfer, using prepaid cards, or any requests to pay for goods using cryptocurrencies.

How scammers operate on online marketplaces

Here are a few tricks scammers use to cover their tracks.

Purchase old accounts

Scammers look for accounts that have a good reputation and buy them – to publish fraudulent ads.

Ad modification

Scammers submit mundane ads with real photos, prices, and reliable data. But once the ads are accepted, they change them, lowering prices and deleting or adding photos (as illustrated in the sketch further down).

Trojan ads

Scammers create accounts and begin publishing conventional ads. Once a reliable precedent is set, they begin posting fraudulent ads instead.

Change IP addresses

This is when scammers continually change IP addresses to avoid being tracked.

Multiple ad publications

Scammers submit ads in big quantities to disguise their fraudulent ones.
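As a concrete illustration of catching the ad modification tactic above, here is a hypothetical Python sketch that re-queues an already approved ad for review when an edit looks suspicious. The field names and the 30% price-drop threshold are assumptions for the example, not fixed rules.

```python
from dataclasses import dataclass

# Hypothetical illustration of catching the "ad modification" trick: re-queue
# an already-approved ad for review when an edit looks suspicious.
# Thresholds and field names are assumptions, not fixed rules.
@dataclass
class Ad:
    title: str
    price: float
    photo_ids: tuple[str, ...]

def needs_re_review(before: Ad, after: Ad, max_price_drop: float = 0.30) -> bool:
    """Flag approved ads whose edits drop the price sharply or swap out photos."""
    price_dropped = (
        before.price > 0
        and (before.price - after.price) / before.price > max_price_drop
    )
    photos_replaced = set(before.photo_ids) != set(after.photo_ids)
    return price_dropped or photos_replaced

original = Ad("iPhone 12, good condition", 550.0, ("img_1", "img_2"))
edited = Ad("iPhone 12, good condition", 180.0, ("img_9",))
print(needs_re_review(original, edited))  # True: price cut by ~67% and photos swapped
```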

Dating and Romance Scams

Dating scams are probably the best-known kind of online fraud – a topic we have covered extensively on our blog.

While many of us have used flattering photos of ourselves in online dating profiles, there’s a big difference between presenting ourselves in our best light and creating a fake online identity.

While TV shows and high-profile cases of this practice – known as catfishing – have raised awareness, it still remains a common issue on many dating sites.

Essentially, romance scams work when a scammer (posing as an attractive man or woman) reaches out to a user and builds a relationship with them exclusively online – sometimes over a period of months – before proceeding to either ask them for money or even to do favors for them: activities that could well be criminal in their nature.

Why does the scam work so well? Catfishers do whatever it takes to win their targets’ trust. And once that trust is established, the target is too emotionally invested to question the scammer’s motives.

While different official organizations – like the Online Dating Association – are doing more to raise awareness, dating sites themselves need to do more to highlight the dangers and behavior patterns fake users follow.

For example, catfishers use many keywords and phrases to make themselves sound more credible. They may claim to be religious or work in a trustworthy job – like the police or military.

A common struggle for many sites is that they’re not quick enough to remove scammers. Dating and romance scammers quickly move the conversation away from the site to avoid detection, so sites need to prevent that from happening from the outset. Learn how you can create filters to detect and block personal details automatically.

Fake Charity Scams

Many of us are wary of so-called “chuggers” (charity + muggers) approaching us on the street asking for donations, and we’d be right to – given the recent news that one scam in London was so well-orchestrated that even those collecting cash didn’t know it was a shady operation.

However, online – where donation platforms are becoming increasingly popular owing to their ease of use – how can those donating be sure their money ends up where it’s supposed to?

Transparency is key. The more information a site offers about the charities they’re working with; how much (if anything) they take as commission; and how long donations take to reach each charity, the more trustworthy they’re likely to be.

But what about online marketplaces and classified sites? Charity scams are just as likely here – particularly in the wake of high-profile disasters.

As a result, site owners must advise their users to exercise caution when those requesting funds…

  • say they’re from a charity they’ve never heard of
  • won’t or can’t give all the details about who they’re collecting for
  • seem to be pushing users to donate quickly
  • say they only want cash or wire transfers (credit card is much safer)
  • claim donations are tax-deductible
  • offer sweepstakes prizes for donations

When working with charities, online marketplaces and classified sites should ensure that rigorous security checks are in place. For example, as phishing is a common fake charity scam, it’s crucial that any in-platform messages providing a link to an external ‘charity site’ are detected early on.

Employment Scams

Online fraud and employment may sound like a fairly unlikely pairing, but it’s much more sophisticated than many might think.

There are numerous ways in which scammers abuse online marketplaces and classified sites, and most of the time, they’re looking to either extract money or steal your identity (more on that below too).

One of the most frequent employment-related scams is a fake job posting looking for people to handle “payment processing.” The scammer may find CVs/Resumes online, or they may post on credible boards – such as Craigslist.

The trick being played out here is one where the proceeds of crime are handled by the user (for a small commission) and transferred back to the fraudster – who is essentially using the ‘employee’ to launder money.

Another common job-related scam is one where “recruiters” coax candidates into paying for additional job training or career development courses – or when an “employer” asks candidates to cover the costs of a credit record check.

In all cases, employment-focused marketplace owners must be acutely aware of anyone asking users to hand over finance-related information or money.

However, these requests may not materialize until the conversation has been moved to email – away from the site – so it’s critical for those operating job boards to put some form of prevention and moderation effort in place.

Identity Theft

News on BBC’s website that a young journalist had a job application withdrawn by someone pretending to be him – via email – is alarming but not uncommon. However, impersonation takes on a whole new meaning when linked to identity theft.

While the most likely scenario of identity theft is an online data breach, internet shopping also puts users at risk. According to Experian, 43% of online shopping identity theft happens during the peak holiday shopping season (Black Friday onward).

Many scammers use familiar tricks – like phishing – to steal personal details, debit and credit card details, and social security numbers, using them to buy goods (often high-priced items in bulk), to claim refunds from ‘faulty’ items, or to open accounts in other peoples’ names to mask other fraudulent activities.

Scammers can buy stolen identities on the dark web very cheaply. And it’s not uncommon for fraudsters to advertise usually high-priced items at low prices for quick sale on marketplaces… and then steal shoppers’ credit card details.

While general advice is routinely given to consumers – such as vigilance over website security, visiting preferred stores directly rather than clicking search engine links, and not storing card details online – online marketplaces must prioritize monitoring and prevention too.

Preventing Scams on Online Marketplaces

With so many ways in which scammers can benefit, it’s clear that they’re not going to stop anytime soon.

This means that in an environment where trust is a limited commodity, the pressure increases on e-commerce sites, online marketplaces, and classified websites to maintain it.

While official bodies, governments, and consumer rights groups, as well as Facebook (as reported in TechCrunch this week) and other tech champions with considerable clout, are informing and empowering users to recognize and take an active stance against suspicious activity, online marketplaces also have a responsibility to detect and eliminate fraud.

As marketplaces scale and begin to achieve a network effect, they need to adopt more stringent cybersecurity protocols to protect their users – multi-factor authentication, for example. Similarly, mapping user behavior can help site owners to identify how genuine customers navigate it – giving them the intelligence they can use to benchmark suspicious activity.

Essentially, the better you know your users and how they behave, the more emphasis you put on transparency as a prerequisite for joining your community, and the greater the deterrent. But as discussed, there are ways for scammers to mask their behavior.

Being a step ahead of scammers is important (as our trust and security expert explains in this article). Therefore, it’s essential to anticipate the different times of the year when certain scams manifest – as outlined in our Scam Spikes Awareness Calendar.

However, by far the most effective way to prevent fraudulent activity on online marketplaces is to have a solid content moderation setup in place. While this could be a team or person manually monitoring the behaviors most likely to be scams, as a marketplace grows, this process needs to function, and be maintained, at scale.

Enter machine learning AI: a moderation solution trained to detect fraudulent content before it is published. Essentially, this works by feeding the AI data from which it learns to recognize suspicious behavioral patterns, allowing it to identify several possible fraud threats simultaneously.
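As a minimal sketch of that idea, the Python example below trains a tiny text classifier on past moderation decisions using scikit-learn. The toy listings and labels are invented for illustration; a real system would use far more data and features, such as user history, images, and prices.

```python
# Minimal sketch of training a text classifier on past moderation decisions,
# assuming scikit-learn is available. The listings and labels are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data: 1 = rejected as fraudulent by a moderator, 0 = approved.
ads = [
    "Brand new PS5, half price, pay by wire transfer before delivery",
    "iPhone 12 unlocked, meet in person, cash on collection",
    "Gaming laptop, light use, collection only, receipt included",
    "Sealed iPhone 12, pre-launch stock, gift cards accepted",
]
labels = [1, 0, 0, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(ads, labels)

# New listings get a fraud probability; high scores go to manual review.
new_ad = ["Cheap PS5 bundle, payment up front by wire transfer"]
print(model.predict_proba(new_ad)[0][1])  # probability the ad is fraudulent
```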

At Besedo, we fight fraud by giving marketplace owners the tools – not just the advice – they need to stop it before it is published.

All things considered, scammers are merely opportunists looking for an easy way to make money. The harder it becomes to do this on online marketplaces, the less inclined they’ll be to target them.


Over the years of helping online marketplaces moderate their real estate sections, we’ve gathered a long list of best practices for content moderation rules and guidelines specific to this industry.

If you work with a real estate site or platform, you know that you need to keep a close eye on the user-generated content published by users. If you don’t, your site quickly turns from a helpful tool for smooth interactions between renters and landlords into a scam-infested spam forum.

Today we share some actionable guidelines on what to direct your content moderation efforts at. You can use them to increase the quality of listings, protect your users and hopefully increase the conversion rates of your real estate site.

Edit or reject ads with contact details

While the whole industry should slowly be moving towards sites that monetize and provide value through value-added services, most sites are not there yet.

Unless your site already has a unique and strong offering of additional value, it’s likely still relying on sellers who use it as a lead generator. If that’s the case, you should remove all mentions of phone numbers, names, and email or physical addresses to prevent platform leakage.

Unconventional payment methods

Unless it’s your USP, all ads that mention unconventional payment methods should be removed. This is true both for swap and exchange suggestions, such as a car or cellphone in exchange for accommodation.

The same rule applies to unorthodox payment methods such as Bitcoin and other electronic currencies.

We advise against allowing such reimbursement options as the value comparison can be hard to get right and there’s a risk one of the two parties will end up dissatisfied. You don’t want those negative feelings associated with your platform and you definitely do not want to get involved in disputes concerning unconventional payment methods.

Finally, there’s also the additional risk that some of the commodities offered in exchange have been acquired illegally, and you don’t want your platform involved in what could essentially be seen as fencing activity.

Remove listings with multiple items

Whether you monetize your real estate platform by charging listing fees or not, you should remove listings that contain more than one item.

If you charge a listing fee, sellers who post multiple items in one go are circumventing the fee and negatively impacting your revenue. If you don’t, listings with many different offerings are still really bad as they make it harder for users to find relevant results when searching for accommodation, decreasing the user experience.

Links or references to competitor sites

It goes without saying that it’s best practice in content moderation to remove or edit any mention of competitors immediately, particularly if it includes outbound links.

Bouncing visitors are bad enough; it’s even worse if the content on your site is actively referring them to rivals in your space.

Pay attention to listing prices

We have an entire article focused on things to look out for to prevent scams on real estate sites, but one of the things we haven’t discussed in depth is listing prices.

Most areas will have a fairly stable baseline price range for similarly sized accommodations. For cities that are especially prone to being targeted by scammers, it’s a good idea to familiarize yourself with this range. Scammers often offer apartments for rent at too-good-to-be-true prices. If you know the realistic range, it’s easier for you to catch them before they get to your customers.
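As a simple illustration, a price-outlier check might look like the hypothetical Python sketch below. The baseline price table and the tolerance value are made-up examples; in practice, the ranges would come from historical listings per city and size bracket.

```python
# Hypothetical sketch: flag rental listings priced well below the local baseline.
# The price table and threshold are made-up examples for illustration.
BASELINE_MONTHLY_RENT = {
    ("paris", "1-bedroom"): (900, 1600),    # (low, high) in EUR
    ("berlin", "1-bedroom"): (700, 1300),
}

def is_suspiciously_cheap(city: str, size: str, price: float, tolerance: float = 0.7) -> bool:
    """Return True if the asking price is far below the expected range."""
    expected = BASELINE_MONTHLY_RENT.get((city.lower(), size))
    if expected is None:
        return False  # no baseline data: fall back to other checks
    low, _high = expected
    return price < low * tolerance

print(is_suspiciously_cheap("Paris", "1-bedroom", 400))  # True: well under the baseline
```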

We are currently working on building out a pricing database for some of the bigger cities in the world. If this project sounds interesting, be sure to subscribe to our blog and get informed when we have more information available.

Take a hard stance against discrimination

You’re probably already aware of the multiple lawsuits Airbnb has faced due to various instances of discrimination that have occurred through its platform.

To avoid getting into the same legal trouble and the ensuing PR storm, as well as to provide all your users with the best possible experience on your site, we advise taking a hard stance against any discriminatory listings. Reject any listing that singles out people of a specific race, religion, sexual orientation, etc.

Prohibit rentals that enable prostitution

For anyone who has followed the case of backpage.com and how its owners were indicted for earning over $500 million in prostitution-related revenue from the site, it should be second nature to have processes in place for finding and removing any content that facilitates prostitution.

Apart from the moral implications, allowing prostitution is illegal in many countries and could land your company (and you) in both legal and PR troubles.

If your platform isn’t offering hotel rooms or vacation homes, it’s often a good and safe practice to reject room-for-the-night type listings. That type of listing often advertises accommodations used for indecent interactions.

Remove duplicate items

Users will sometimes submit the same listing multiple times. The reasons vary, but the most common one is to try to bump up their ranking on your site. When users circumvent your rules it's never good: it usually hurts the user experience, violates legal commitments or, as in the case of duplicates, can get your site penalized in Google rankings.

The best course of action is to remove duplicates before they get published. That way you ensure the quality of your site and avoid a messy search experience for other users.
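
Exact duplicates are straightforward to catch automatically by fingerprinting the normalized listing text. The sketch below is a minimal illustration; near-duplicates with small wording changes need fuzzier matching than this, and in production you would keep the fingerprints in a database rather than in memory:

import hashlib
import re

seen_fingerprints: set[str] = set()  # in-memory store for illustration only

def fingerprint(listing_text: str) -> str:
    """Lowercase, collapse punctuation and whitespace, then hash the result."""
    normalized = re.sub(r"[\W_]+", " ", listing_text.lower()).strip()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def is_duplicate(listing_text: str) -> bool:
    """Return True if an identical (after normalization) listing was already seen."""
    fp = fingerprint(listing_text)
    if fp in seen_fingerprints:
        return True
    seen_fingerprints.add(fp)
    return False

print(is_duplicate("Sunny 2-bed flat in city center!"))  # False, first submission
print(is_duplicate("Sunny 2-bed flat in city center"))   # True, same text resubmitted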

Re-categorize listings placed in the wrong category

Vacation homes in the permanent residency category or for-sale houses in the for-rent section all contribute to irrelevant search results and a negative user experience.

It’s important to remove or re-categorize misplaced listings quickly to ensure a good experience for users.

Reject listings with poor descriptions

Depending on the category, the required details for the item for rent or sale may differ. What's always true, though, is that the description needs to be accurate and informative. Information like location, price, minimum rental period, etc. should be a given. But sellers are often in a hurry and don't want to spend much time on the admin work that goes into writing a good listing that converts.

Make sure you educate your users to create proper descriptions and titles for their listings; otherwise both bounce rate and conversion rate may suffer. In a study we did on user behavior, we found that irrelevant content leads to 73% of users never returning to the site.
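
A basic completeness check can also catch the worst offenders before they go live. The sketch below is a generic example; the required fields and the length threshold are assumptions you would adapt to your own categories:

REQUIRED_FIELDS = ("location", "price", "minimum_rent_time")
MIN_DESCRIPTION_LENGTH = 80  # characters; an arbitrary threshold for illustration

def description_issues(listing: dict) -> list[str]:
    """Return the reasons a listing should be sent back to the seller, if any."""
    issues = []
    for field in REQUIRED_FIELDS:
        if not listing.get(field):
            issues.append(f"missing {field}")
    if len(listing.get("description", "")) < MIN_DESCRIPTION_LENGTH:
        issues.append("description too short")
    return issues

print(description_issues({"location": "Madrid", "description": "Nice flat."}))
# ['missing price', 'missing minimum_rent_time', 'description too short']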

Following the guidelines outlined above will help you eliminate fraudulent listings and improve both the quality of your content and the overall user experience of your site.

4 most common refusal reasons on real estate sites in 2019

Now that we've gone through the best practices for content moderation, let's also take a quick look at where the biggest content moderation challenges lie for real estate sites.

In 2018, these were the top 4 rejection reasons for the real estate sites we help:

  • Multiple products
  • Duplicates
  • Wrong category
  • Poor description

As you can see, most of the rejected items either affect the user experience, and as a result your conversion rate, or impact your revenue more directly, as in the case of multiple products, where users circumvent the listing fee.

Curious about which other listings we reject for real estate sites? Check out our presentation, 9 listings to remove from your real estate site to optimize for monetization, from Property Portal Watch Bangkok 2018, where we go into more detail.

Want to know more about best practices for content moderation or expert advice on how to set up and optimize the content moderation on your site? Get in touch and hear how we can help out.

Every year, we deliver a scam awareness calendar to help online marketplaces prepare for scam spikes in the year to come. We base the calendar on trend observations from previous years and analysis of major events in the coming year. Our trust and safety team analyzes data day by day to find fraudulent behavior and proactively supports our clients with information to help them stay ahead of scammers.

Fraudulent behavior on marketplaces fluctuates constantly, with periods of increased and decreased scam activity. Scam spikes are typically triggered by holiday seasons, festivals, events, and other large happenings during the year.

For you and your moderation team to stay on top of scam spikes, you need to be aware of when and where scammers might appear. In this article, we will share some of the most common types of scams for 2019 and when you are likely to see them spike. If you want to learn more about the specific scam spikes, visit our scam awareness calendar, where we predict spikes on a month-by-month basis.

Tech release scams

As consumers, we are spoiled with new tech releases every year. In many ways it's remarkable how technology keeps outdoing itself, and we often see competing companies pushing each other to step up their game and drive development. One of the most recurring battles is between the two phone giants Apple and Samsung: when Samsung releases their phone of the year, Apple can't wait to release theirs.

These two annual releases are considered some of the most important product launches of the year by tech enthusiasts and consumers alike. Unfortunately, this also attracts scammers looking to deceive eager buyers.

As in previous years, we're expecting a scam spike in the weeks leading up to the launch of a new iPhone or Samsung flagship. To protect your users, be on the lookout for pre-order listings, prices well below market price, phrases such as 'item is new', 'as good as new' or 'brand new in box', as well as other deceptive phrases in the description.
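
If you moderate with filters, these warning signs translate directly into a simple rule. The sketch below is a generic illustration; the phrase list and the price threshold are assumptions, not tested values:

import re

# Phrases called out above as warning signs around big phone launches.
LAUNCH_SCAM_PHRASES = [
    r"\bpre[- ]?order\b",
    r"\bitem is new\b",
    r"\bas good as new\b",
    r"\bbrand new in box\b",
]

def needs_review(listing_text: str, price: float, market_price: float) -> bool:
    """Send a listing to manual review if it uses launch-hype phrasing
    or is priced far below market (the 0.6 threshold is illustrative)."""
    text = listing_text.lower()
    phrase_hit = any(re.search(p, text) for p in LAUNCH_SCAM_PHRASES)
    too_cheap = market_price > 0 and price < 0.6 * market_price
    return phrase_hit or too_cheap

print(needs_review("iPhone, brand new in box, ships after launch", 450, 999))  # True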

Samsung is rumored to release the Samsung Galaxy S10 on March 8th, with prices starting at $719. Rumors are also circulating online that Samsung will launch the world's first foldable smartphone in March this year.

Apple, on the other hand, usually hosts its big annual product release in early to mid-September, and if they stick to tradition, we're expecting the new iPhone to be launched on September 10th this year. Visit this page to stay on top of the latest news surrounding the next iPhone release.

Holiday booking scams

One of the most common targets for scammers is vacation and holiday bookings. When we're dreaming ourselves away to various destinations in front of our computer or phone, scammers strategically expose us to exclusive vacation deals that look stunning but in reality don't exist. At Besedo we witness these types of scams on a daily basis, but April and August are considered peak season for holiday scams, when we book our summer and winter vacations.

Make sure your users stay safe on your site. Be on the lookout for fraudulent holiday rental ads and offers that are 'too good to be true'. More concretely, your moderation team needs to look out for high-quality or stock pictures, free email domains, suspicious IP addresses, large group rentals, prices below market, requests for full payment in advance, etc.
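
Since no single indicator is conclusive on its own, many teams score listings and prioritize the riskiest ones for manual review. The sketch below is a generic illustration of that idea; the indicators mirror the list above, but the weights, thresholds, and field names are invented for the example:

FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def holiday_scam_score(listing: dict) -> int:
    """Add one point per risk indicator; higher scores get reviewed first."""
    score = 0
    if listing.get("uses_stock_photos"):
        score += 1
    if listing.get("contact_email", "").split("@")[-1].lower() in FREE_EMAIL_DOMAINS:
        score += 1
    if listing.get("sleeps", 0) >= 10:  # unusually large group rental
        score += 1
    if listing.get("price", 0) < 0.6 * listing.get("market_price", 0):
        score += 1
    if listing.get("full_payment_in_advance"):
        score += 1
    return score

print(holiday_scam_score({"uses_stock_photos": True, "contact_email": "host@gmail.com",
                          "price": 300, "market_price": 900, "full_payment_in_advance": True}))  # 4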

Want to learn more about holiday scams?

Shopping scams

Shopping, shopping, shopping. We all do it, and most of us love it. Phenomena like Black Friday, Cyber Monday, after-Christmas sales, Singles' Day, etc. are periods when consumers rush to get exclusive deals and discounts.

While offline consumers risk being trampled in packed stores, online shoppers need to be wary of scammers trying to capitalize on the shopping frenzy by deceiving them with 'super deals'. Be ready for a period of increased scams during and after the shopping peaks. Your team needs to be on the lookout for things like too-good-to-be-true prices, stock photos, and phishing emails.

Big events scams

Every year there are multiple events taking place, from sports events to concerts and festivals. Unfortunately, most large events also attract a wave of scammers. In 2019 there are two major sports events: the Asian Cup and the Copa América. For these kinds of events, your moderation team should pay extra attention to ads with many tickets for sale, low prices, miscategorized tickets, ultra-cheap airline tickets, addresses and phone numbers that are geographically disconnected, requests for bank transfer payment only, etc.

Besides the two football tournaments mentioned above, a lot of concerts and festivals are already sold out, which means tickets may end up for sale on your marketplace. Stay ahead of the scammers: learn more about ticket scams and how to keep your users safe.

Back to school scams

Being a student often means a tight budget and a need to find new accommodation, often in very specific and possibly unfamiliar areas. This naturally makes students vulnerable to fraudulent rental deals and loan offers. Make sure your moderation team pays attention to new users posting flats or flat shares, pricing, email addresses, stock photos, and dodgy loan offers.

New courses usually start twice a year, in January and September, and it is during these months that we typically see an increased number of scammers trying to trick students out of their money.

Stay ahead of the scammers

Most of the scams we've listed happen throughout the year, and your team should always be on the lookout for them. However, by knowing when a spike is likely, you can better prepare your team and staff accordingly.

By being aware of scam spikes and adjusting your moderation setup accordingly, you can keep your users safe, reduce time to site, and cut shrinkage. If your team size isn't flexible, a good way to manage spikes with minimal impact on the end user is to increase your automation levels when volumes grow.

With the right setup you can automate up to 80% of your moderation using filters alone, and with tailored AI you can reach even higher quality and automation levels.
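
Conceptually, filter-based automation boils down to simple routing logic: rules that can make a confident decision publish or reject the listing automatically, and everything else goes to the manual queue. The sketch below is a generic illustration of that logic, not Implio's actual rule syntax or API; the example rules and field names are invented:

from typing import Callable

# A rule takes a listing (a plain dict here) and returns "approve", "reject", or "review".
Rule = Callable[[dict], str]

def price_rule(listing: dict) -> str:
    # Hypothetical rule: suspiciously cheap listings go to manual review.
    return "review" if listing.get("price", 0) < 100 else "approve"

def payment_rule(listing: dict) -> str:
    # Hypothetical rule: unconventional payment methods are rejected outright.
    return "reject" if "bitcoin" in listing.get("description", "").lower() else "approve"

def route_listing(listing: dict, rules: list[Rule]) -> str:
    """Any hard rejection wins, any 'review' verdict sends the listing to the
    manual queue, and everything else is published automatically."""
    verdicts = [rule(listing) for rule in rules]
    if "reject" in verdicts:
        return "rejected"
    if "review" in verdicts:
        return "manual_review"
    return "published"

print(route_listing({"price": 950, "description": "Bright 2-bed flat"}, [price_rule, payment_rule]))  # published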

Want to know more? Get in touch with a content moderation solution expert today or test our moderation tool, Implio, for free.
