DJI says US customs is blocking its drone imports

DJI tells The Verge that it currently cannot freely import all of its drones into the United States — and that its latest consumer drone, the Air 3S, won’t currently be sold at retail as a result.

“A customs-related issue is hindering DJI’s ability to import select drones into the United States.”

That’s not because the United States has suddenly banned DJI drones. Rather, DJI believes the import restrictions are “part of a broader initiative by the Department of Homeland Security to scrutinize the origins of products, particularly in the case of Chinese-made drones.”

DJI recently sent a letter to distributors with one possible reason why DHS is stopping some of its drones: the company says US Customs and Border Protection is citing the Uyghur Forced Labor Prevention Act (UFLPA) as justification for blocking the imports. In the letter, which has been floating around drone sites and Reddit for several days, DJI claims it doesn’t use any forced labor to manufacture drones.


Reuters reported on the letter earlier today; DJI spokesperson Daisy Kong confirmed the letter’s legitimacy to The Verge as well.

In a just-published official blog post, DJI calls this all a “misunderstanding” and writes that it’s currently sending documentation to US Customs to prove that it doesn’t manufacture anything in the Xinjiang region of China, where Uyghurs have been forcibly detained; that it complies with US law and international standards; and that US retailers have audited its supply chain. DJI claims it manufactures all its products in Shenzhen or Malaysia.

US Customs and Border Protection didn’t reply to a request for comment.

While the US House of Representatives did pass a bill that would effectively ban DJI drones from being imported into the US, that ban would also need to pass the Senate. Last we checked, the Senate had removed the DJI ban from its version of the must-pass 2025 National Defense Authorization Act (though it did get reintroduced as an amendment and could potentially still make it into the final bill).


DJI says the “customs-related issue” has “primarily impacted” the company’s enterprise and agricultural drones, but has also now “limited us from offering the Air 3S to US customers beyond DJI.com.”

“We are actively working with U.S. Customs and Border Protection to resolve this issue and remain hopeful for a swift resolution,” writes DJI.

The US government has cracked down on DJI drones before, but not in a way that would keep stores from buying them, consumers from purchasing them, or individual pilots from flying them in the United States. Primarily, the US Department of Commerce’s “entity list” keeps US companies from exporting their technology to the Chinese company, and the US has sometimes restricted certain government entities from purchasing new DJI drones.

Even if DJI imports do get banned by Congress, the proposed law suggests existing owners could still use their drones — but the FCC could no longer authorize DJI gadgets with radios for use in the United States, which would effectively block all imports.

Magic Leap founder is back with $20M funding round for SynthBee

SynthBee, started by Magic Leap founder Rony Abovitz, has raised $20 million in funding to create a new kind of “computing intelligence.”

The company is based in Fort Lauderdale, Florida, the same place where Magic Leap grew up with the promise of mixed reality computing. The money is aimed at helping the company expand and build out the team around its proprietary computing intelligence platform.

Crosspoint Capital Partners led the round.


“Crosspoint is excited to partner with SynthBee and back a multi-successful entrepreneur,” said Andre Fuetsch, managing director at Crosspoint Capital, in a statement. “SynthBee is poised to revolutionize the way enterprises innovate and deliver high-value solutions to market.”

Fuetsch added, “Rony Abovitz has a proven track record and tremendous talent in designing and building highly impactful capabilities to significantly improve human potential and outcomes. From robotic surgical applications to state of the art spatial computing and augmented reality platforms, Rony’s latest vision for SynthBee will accelerate and enhance humans’ creative and problem-solving capabilities to an entirely new level.”

Rony Abovitz is the founder and CEO of SynthBee.

Abovitz previously started Mako Surgical (acquired by Stryker Corp. for $1.65 billion) and Magic Leap (a pioneer in spatial computing). Mako’s robotic surgery systems changed how complex procedures are performed and have benefited millions of patients.

SynthBee aims to apply its novel computing intelligence platform to safely accelerate human innovation across a wide array of industries.

“The SynthBee team is excited to partner with Andre and Crosspoint Capital to build out our novel computing intelligence platform,” said Abovitz, CEO of SynthBee, in a statement. “Current implementations of large-scale artificial intelligence systems have significant architectural challenges, security risks, and ethical flaws, leading to questionable governance and computational autocracies.”


Abovitz added, “SynthBee was founded to solve these problems and to forge alternate pathways for enterprise customers, the developer community, and eventually everyone, based on the philosophy of computational democracy. We are already commercially engaged with Fortune 500 customers with our solutions. We are also actively recruiting top tech talent to join our team.”

SynthBee said it is building safe, scalable, and reliable computing intelligence to amplify and accelerate human innovation.


This backpack solar generator can help you ignore nature

Bluetti has taken portable power to absurd levels with its new Handsfree Backpack Power Stations. They’re available in big or bigger versions depending on how long — and how much gear — you want to keep powered in the great outdoors. 

They’re primarily aimed at outdoor photographers, but Bluetti also thinks they’ll appeal to hikers, rock climbers, campers, adventurers, bikepackers, and motorcyclists… any nerd who wants to move beyond the cubicle with their gadgets. A side panel in the backpacks provides access to all the inputs and outputs without having to first remove the solar generator. Those ports can also be managed and monitored over Bluetooth from the Bluetti app.

You can carry a ton of gear and a small solar panel alongside that giant battery. (GIF: Bluetti)

The $299 Handsfree 1 solar generator includes a 42L BluePack 1 backpack and can power AC devices up to 300W from 268.8Wh of LFP storage. That’s enough to recharge a DJI Mavic 3 (about 77Wh) or a laptop about three times. The $399 Handsfree 2 bumps the output to 700W and the capacity to 512Wh inside a cavernous 60L pack.
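That “about three times” figure holds up as a back-of-envelope estimate (our arithmetic, not Bluetti’s, and the roughly 85 percent round-trip efficiency for the inverter and charger is an assumption):

$$ \frac{268.8\ \text{Wh}}{77\ \text{Wh}} \approx 3.5, \qquad 3.5 \times 0.85 \approx 3\ \text{full charges} $$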


The solar generators themselves are tall and relatively thin, with five outputs (1x AC, 2x 100W USB-C, and 2x 15W USB-A). Both have XT60 solar inputs to help keep the batteries charged: 12V-28V / 200W max for the Handsfree 1 and 12V-45V / 350W max for Handsfree 2. The Handsfree 1 measures 11.3 x 3.7 x 11 inches (286.5 × 95 × 280mm) and weighs 11 pounds (5kg), while the larger Handsfree 2 comes in at 12 × 4.1 × 15.2 inches (305 × 105 × 385mm) and 16.5 pounds (7.5kg). 

The backpacks have tons of pockets to help organize all your gear, with Molle straps for external carry. They’re only water resistant, but Bluetti throws in a rain fly. There’s no weight given for the packs themselves, though they do look heavy.

An upcoming iPhone feature will make it easier to detect spam calls

Spam and scam calls are an ever-increasing nuisance, so a reliable caller ID service, particularly one that can flag or auto-block known scam numbers, is near-essential. Sadly, this isn’t something Apple offers natively, and while there are some third-party caller ID services, these can be quite hit-and-miss, or cost extra. But finally, Apple is taking the first steps toward such a service.

The company has announced (via Engadget) that, next year, it will allow businesses enrolled in Apple Business Connect to register for Business Caller ID. With this, their company name, logo, and department will appear on the incoming call screen when they contact customers.

This should make it a lot easier to differentiate a legitimate call from a spam call since, if there’s no logo shown, there’s a high chance that it’s an unwanted call. If there is a logo, you can judge based on the company that’s calling whether it’s likely to be something you want to answer.

Some Apple Business Connect tools, including Business Caller ID (Image credit: Apple)

A good start

NYT Connections: hints and answers for Thursday, October 17

The New York Times’ Connections puzzle open in the NYT Games app on iOS. (Image: Sam Hill / Digital Trends)

Connections is the latest puzzle game from the New York Times. The game tasks you with categorizing a pool of 16 words into four secret (for now) groups by figuring out how the words relate to each other. The puzzle resets every night at midnight and each new puzzle has a varying degree of difficulty. Just like Wordle, you can keep track of your winning streak and compare your scores with friends.

Some days are trickier than others. If you’re having a little trouble solving today’s Connections puzzle, check out our tips and hints below. And if you still can’t get it, we’ll tell you today’s answers at the very end.

How to play Connections

In Connections, you’ll be shown a grid containing 16 words — your objective is to organize these words into four sets of four by identifying the connections that link them. These sets could encompass concepts like titles of video game franchises, book series sequels, shades of red, names of chain restaurants, etc.

There are generally words that seem like they could fit multiple themes, but there’s only one 100% correct answer. You’re able to shuffle the grid of words and rearrange them to help better see the potential connections.

Each group is color-coded. The yellow group is the easiest to figure out, followed by the green, blue, and purple groups.


Pick four words and hit Submit. If you’re correct, the four words will be removed from the grid and the theme connecting them will be revealed. Guess incorrectly and it’ll count as a mistake. You only get four mistakes before the game ends.

Hints for today’s Connections

We can help you solve today’s Connections by telling you the four themes. If you need more assistance, we’ll also give you one word from each group below.

Today’s themes

  • GRASSY AREA
  • DEAL WITH
  • MOVIES WITH “S” REMOVED
  • ___ LAW

One-answer reveals

  • GRASSY AREA – GREEN
  • DEAL WITH – ADDRESS
  • MOVIES WITH “S” REMOVED – CAR
  • ___ LAW – CRIMINAL

Today’s Connections answers

Still no luck? That’s OK. This puzzle is designed to be difficult. If you just want to see today’s Connections answers, we’ve got you covered below:

Connections grids vary widely and change every day. If you couldn’t solve today’s puzzle, be sure to check back in tomorrow.

SpaceX is suing the California Coastal Commission for not letting it launch more rockets

Last week, the California Coastal Commission rejected a plan for SpaceX to launch up to 50 rockets this year at Vandenberg Space Force Base in Santa Barbara County. The company responded yesterday with a lawsuit alleging that the state agency overreached its authority and discriminated against its CEO in denying the request.

The Commission’s goal is to protect California’s coasts and beaches, as well as the animals living in them. The agency has control over private companies’ requests to use the state coastline, but it can’t deny activities by federal departments. The denied launch request was actually made by the US Space Force on behalf of SpaceX, asking that the company be allowed to launch 50 of its Falcon 9 rockets, up from 36.

While the commissioners did raise concerns about SpaceX CEO Elon Musk’s political screeds and the spotty safety records at his companies during their review of the launch request, the assessment focused on the relationship between SpaceX and the Space Force. The Space Force’s case is that “because it is a customer of — and reliant on — SpaceX’s launches and satellite network, SpaceX launches are a federal agency activity,” the Commission stated. “However, this does not align with how federal agency activities are defined in the Coastal Zone Management Act’s regulations or the manner in [which] the Commission has historically implemented those regulations.” The California Coastal Commission claimed that at least 80 percent of the SpaceX rockets contain payloads for Musk’s Starlink company rather than payloads for government clients.

The SpaceX suit, filed in the Central District of California, seeks an order designating the launches as federal activity, which would cut the Commission’s oversight out of the company’s future launch plans.

Microsoft’s Differential Transformer cancels attention noise in LLMs

Improving the capabilities of large language models (LLMs) in retrieving in-prompt information remains an area of active research that can impact important applications such as retrieval-augmented generation (RAG) and in-context learning (ICL).

Microsoft Research and Tsinghua University researchers have introduced Differential Transformer (Diff Transformer), a new LLM architecture that improves performance by amplifying attention to relevant context while filtering out noise. Their findings, published in a research paper, show that Diff Transformer outperforms the classic Transformer architecture in various settings.

Transformers and the “lost-in-the-middle” phenomenon

The Transformer architecture is the foundation of most modern LLMs. It uses an attention mechanism to weigh the importance of different parts of the input sequence when generating output. The attention mechanism employs the softmax function, which normalizes a vector of values into a probability distribution. In Transformers, the softmax function assigns attention scores to different tokens in the input sequence.
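In standard notation (our formulas, added for reference; the article itself stays at the prose level), softmax turns a vector of raw scores into a probability distribution, and attention applies it to scaled query-key products:

$$ \operatorname{softmax}(x)_i = \frac{e^{x_i}}{\sum_j e^{x_j}}, \qquad \operatorname{Attention}(Q, K, V) = \operatorname{softmax}\!\left(\frac{QK^\top}{\sqrt{d}}\right)V $$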


However, studies have shown that Transformers struggle to retrieve key information from long contexts.

“We began by investigating the so-called ‘lost-in-the-middle’ phenomenon,” Furu Wei, Partner Research Manager at Microsoft Research, told VentureBeat, referring to previous research findings that showed that LLMs “do not robustly make use of information in long input contexts” and that “performance significantly degrades when models must access relevant information in the middle of long contexts.”

Wei and his colleagues also observed that some LLM hallucinations, where the model produces incorrect outputs despite having relevant context information, correlate with spurious attention patterns.

“For example, large language models are easily distracted by context,” Wei said. “We analyzed the attention patterns and found that the Transformer attention tends to over-attend irrelevant context because of the softmax bottleneck.”


The softmax function used in Transformer’s attention mechanism tends to distribute attention scores across all tokens, even those that are not relevant to the task. This can cause the model to lose focus on the most important parts of the input, especially in long contexts.

“Previous studies indicate that the softmax attention has a bias to learn low-frequency signals because the softmax attention scores are restricted to positive values and have to be summed to 1,” Wei said. “The theoretical bottleneck renders [it] such that the classic Transformer cannot learn sparse attention distributions. In other words, the attention scores tend to flatten rather than focusing on relevant context.”

Differential Transformer

Differential Transformer (source: arXiv)

To address this limitation, the researchers developed the Diff Transformer, a new foundation architecture for LLMs. The core idea is to use a “differential attention” mechanism that cancels out noise and amplifies the attention given to the most relevant parts of the input.

The Transformer uses three vectors to compute attention: query, key, and value. The classic attention mechanism performs the softmax function on the entire query and key vectors.

The proposed differential attention works by partitioning the query and key vectors into two groups and computing two separate softmax attention maps. The difference between these two maps is then used as the attention score. This process eliminates common noise, encouraging the model to focus on information that is pertinent to the input.
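Schematically, per the arXiv paper, with λ a learnable scalar that re-weights the second map before the subtraction:

$$ \operatorname{DiffAttn}(X) = \left( \operatorname{softmax}\!\left(\frac{Q_1 K_1^\top}{\sqrt{d}}\right) - \lambda\, \operatorname{softmax}\!\left(\frac{Q_2 K_2^\top}{\sqrt{d}}\right) \right) V $$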


The researchers compare their approach to noise-canceling headphones or differential amplifiers in electrical engineering, where the difference between two signals cancels out common-mode noise.
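For readers who think in code, here is a minimal, illustrative PyTorch sketch of differential attention (our reading of the mechanism, not the authors’ released implementation; the single head, the fixed λ, and the omission of the paper’s λ re-parameterization and per-head normalization are all simplifications):

```python
import torch
import torch.nn.functional as F

def diff_attention(x, w_q, w_k, w_v, lam=0.8):
    """Single-head differential attention (simplified sketch).

    x:        (batch, seq_len, d_model) input sequence
    w_q, w_k: (d_model, 2 * d_head) projections, later split into two groups
    w_v:      (d_model, d_head) value projection
    lam:      a learnable scalar in the paper; a fixed float here
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v

    # Partition the query and key projections into two groups.
    q1, q2 = q.chunk(2, dim=-1)
    k1, k2 = k.chunk(2, dim=-1)
    scale = q1.shape[-1] ** -0.5

    # Compute two separate softmax attention maps.
    a1 = F.softmax(q1 @ k1.transpose(-2, -1) * scale, dim=-1)
    a2 = F.softmax(q2 @ k2.transpose(-2, -1) * scale, dim=-1)

    # Their difference cancels common-mode noise, sharpening attention.
    return (a1 - lam * a2) @ v

# Example: 2 sequences of 16 tokens, d_model=64, d_head=32.
x = torch.randn(2, 16, 64)
out = diff_attention(x, torch.randn(64, 64), torch.randn(64, 64), torch.randn(64, 32))
print(out.shape)  # torch.Size([2, 16, 32])
```

One consequence of the subtraction is that rows of the combined map can contain negative values and no longer sum to 1; the paper compensates with additional per-head normalization, which this sketch leaves out.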

While Diff Transformer involves an additional subtraction operation compared to the classic Transformer, it maintains efficiency thanks to parallelization and optimization techniques.

“In the experimental setup, we matched the number of parameters and FLOPs with Transformers,” Wei said. “Because the basic operator is still softmax, it can also benefit from the widely used FlashAttention cuda kernels for acceleration.”

In retrospect, the method used in Diff Transformer seems like a simple and intuitive solution. Wei compares it to ResNet, a popular deep learning architecture that introduced “residual connections” to improve the training of very deep neural networks. Residual connections made a very simple change to the traditional architecture yet had a profound impact.


“In research, the key is to figure out ‘what is the right problem?’” Wei said. “Once we can ask the right question, the solution is often intuitive. Similar to ResNet, the residual connection is an addition, compared with the subtraction in Diff Transformer, so it wasn’t immediately apparent for researchers to propose the idea.”

Diff Transformer in action

The researchers evaluated Diff Transformer on various language modeling tasks, scaling it up in terms of model size (from 3 billion to 13 billion parameters), training tokens, and context length (up to 64,000 tokens).

Their experiments showed that Diff Transformer consistently outperforms the classic Transformer architecture across different benchmarks. A 3-billion-parameter Diff Transformer trained on 1 trillion tokens showed consistent improvements of several percentage points compared to similarly sized Transformer models.

Further experiments with different model sizes and training dataset sizes confirmed the scalability of Diff Transformer. Their findings suggest that in general, Diff Transformer requires only around 65% of the model size or training tokens needed by a classic Transformer to achieve comparable performance.

The Diff Transformer is more efficient than the classic Transformer in terms of both parameters and training tokens (source: arXiv)

The researchers also found that Diff Transformer is particularly effective in using increasing context lengths. It showed significant improvements in key information retrieval, hallucination mitigation, and in-context learning.

While the initial results are promising, there’s still room for improvement. The research team is working on scaling Diff Transformer to larger model sizes and training datasets. They also plan to extend it to other modalities, including image, audio, video, and multimodal data.

The researchers have released the code for Diff Transformer, implemented with different attention and optimization mechanisms. They believe the architecture can help improve performance across various LLM applications.

“As the model can attend to relevant context more accurately, it is expected that these language models can better understand the context information with less in-context hallucinations,” Wei said. “For example, for the retrieval-augmented generation settings (such as Bing Chat, Perplexity, and customized models for specific domains or industries), the models can generate more accurate responses by conditioning on the retrieved documents.”

