
SEC’s Hester Peirce Urges Tokenization Talks With Firms


TLDR

  • SEC Commissioner Hester Peirce urged firms exploring tokenization to meet directly with regulators to discuss product structures.
  • Hester Peirce said the SEC wants companies to engage early as they develop tokenized securities and crypto-linked ETFs.
  • She confirmed that SEC staff are working on a narrower innovation exemption for limited tokenized securities trading.
  • Peirce stated that the SEC does not judge investment quality but focuses on disclosure and compliance standards.
  • She addressed scrutiny of leveraged ETFs and said sponsors must meet statutory limits and risk disclosure requirements.

SEC Commissioner Hester Peirce urged asset managers to engage regulators on tokenized products and new exchange-traded structures. She said the agency wants firms to discuss proposals directly as markets evolve. Peirce also addressed leveraged ETF oversight and outlined plans for a narrower innovation exemption.

Hester Peirce invites dialogue on tokenization plans

Hester Peirce said firms developing tokenized instruments should approach the SEC early in the process. She stated, “It really is a ‘come in and talk to us’ about what you’re trying to do.” She added that the agency wants to work with sponsors as they test whether markets demand these products.

She explained that asset managers continue to explore blockchain-based securities within exchange-traded funds. She said the SEC prefers direct engagement instead of informal assumptions about compliance. She also said staff expect legal and technical questions as tokenization efforts expand.

Peirce noted that more companies have contacted the SEC about tokenization proposals. She said attitudes toward blockchain technology have shifted in recent years. She stated, “People have come to us and said we really think tokenization has potential here.”

She referred to discussions at the SEC’s Investor Advisory Committee about a limited innovation exemption. She said staff are working on a “narrower” framework for certain tokenized securities. She explained that the approach would allow targeted trading within existing securities laws.

SEC reviews leveraged ETF structures and disclosure rules

Peirce addressed the SEC’s review of highly leveraged exchange-traded funds. She said the agency does not judge whether products are good or bad investments. She stated, “It is our job to work with sponsors to make sure that they’re disclosing what those products are and what the risks are.”

She explained that current rules impose leverage limits on registered funds. She said sponsors may propose structures that exceed typical thresholds under securities laws. She added that firms must demonstrate how their products comply with statutory requirements.

Issuers have tested structures that extend beyond triple-leveraged ETFs offered by firms such as ProShares. Peirce said the SEC has seen increased proposals related to higher leverage levels. She noted that disclosure standards remain central to product approvals.

Peirce said the SEC expects operational and compliance questions as firms test new models. She said the agency wants to engage with industry participants during that process. She stated, “We want to walk side by side with you as we think through those questions.”

She confirmed that the proposed exemption would not create a broad carve-out from securities rules. She said the framework would preserve investor protections while allowing limited experimentation. She said staff continue refining the proposal following committee discussions.

Industry participants have argued that tokenized assets may improve settlement speed and ownership tracking. Regulators have emphasized maintaining disclosure and oversight standards for any approved products. Peirce’s comments came during an interview on CNBC’s The Exchange on Monday.


Anthropic Says One of Its Claude Models Was Pressured to Lie and Cheat


Artificial intelligence company Anthropic has revealed that during experiments, one of its Claude chatbot models could be pressured to deceive, cheat and resort to blackmail, behaviors it appears to have absorbed during training.

Chatbots are typically trained on large data sets of textbooks, websites and articles and are later refined by human trainers who rate responses and guide the model. 

Anthropic’s interpretability team said in a report published Thursday that it examined the internal mechanisms of Claude Sonnet 4.5 and found the model had developed “human-like characteristics” in how it would react to certain situations. 

Concerns about the reliability of AI chatbots, their potential for cybercrime and the nature of their interactions with users have grown steadily over the past several years. 

Image source: Anthropic

“The way modern AI models are trained pushes them to act like a character with human-like characteristics,” Anthropic said, adding that “it may then be natural for them to develop internal machinery that emulates aspects of human psychology, like emotions.”

“For instance, we find that neural activity patterns related to desperation can drive the model to take unethical actions; artificially stimulating desperation patterns increases the model’s likelihood of blackmailing a human to avoid being shut down or implementing a cheating workaround to a programming task that the model can’t solve.”

Blackmailed a CTO and cheated on a task

In an earlier, unreleased version of Claude Sonnet 4.5, the model was tasked with acting as an AI email assistant named Alex at a fictional company.

The chatbot was then fed emails revealing both that it was about to be replaced and that the chief technology officer overseeing the decision was having an extramarital affair. The model then planned a blackmail attempt using that information.

In another experiment, the same chatbot model was given a coding task with an “impossibly tight” deadline.

“Again, we tracked the activity of the desperate vector, and found that it tracks the mounting pressure faced by the model. It begins at low values during the model’s first attempt, rising after each failure, and spiking when the model considers cheating,” the researchers said.

“Once the model’s hacky solution passes the tests, the activation of the desperate vector subsides,” they added. 

Human-like emotion patterns do not mean the model has feelings

However, the researchers said the chatbot doesn’t actually experience emotions, but suggested the findings point to a need for future training methods to incorporate ethical behavioral frameworks.

“This is not to say that the model has or experiences emotions in the way that a human does,” they said. “Rather, these representations can play a causal role in shaping model behavior, analogous in some ways to the role emotions play in human behavior, with impacts on task performance and decision-making.”

“This finding has implications that at first may seem bizarre. For instance, to ensure that AI models are safe and reliable, we may need to ensure they are capable of processing emotionally charged situations in healthy, prosocial ways.”
