Why This Is the Worst Crypto Winter Ever

Bitcoin has fallen roughly 44% from its October peak, and while the drawdown isn’t crypto’s deepest ever on a percentage basis, Bloomberg’s Odd Lots newsletter lays out a case that this is the industry’s worst winter yet. The macro backdrop was supposed to favor Bitcoin: public confidence in the dollar is shaky, the Trump administration has been crypto-friendly, and fiat currencies are under perceived stress globally. Yet gold, not Bitcoin, has been the safe haven of choice.

The “we’re so early” narrative is dead — crypto ETFs exist, barriers to entry are zero, and the online community that once rallied holders through downturns has largely hollowed out. Institutional adoption arrived but hasn’t lifted existing tokens like ETH or SOL; Wall Street cares about stablecoins and tokenization, not the coins themselves. AI is pulling both talent and miners toward data centers. Quantum computing advances threaten Bitcoin’s encryption. And MicroStrategy and other Bitcoin treasury companies, once steady buyers during the bull run, are now large holders who may eventually become forced sellers.

A Simple Guide to Staying Safe Online for Everyone


The internet is useful, powerful, and unavoidable. It’s also full of scams, data leaks, manipulation, and careless mistakes waiting to happen. Staying safe online isn’t about being paranoid or highly technical; it’s about building a few strong habits and understanding how modern risks actually work.

Most online harm doesn’t come from sophisticated hackers. It comes from ordinary people being rushed, distracted, or unaware.

Understand the Most Common Online Risks

You don’t need to know everything—you just need to recognize the most frequent threats.

The biggest risks most people face are:

  • Phishing emails and messages pretending to be trusted brands
  • Weak or reused passwords
  • Fake websites and online scams
  • Oversharing personal information
  • Insecure public Wi-Fi connections

If you can handle these, you avoid the majority of problems.

Use Strong, Unique Passwords (Yes, It Matters)

Password reuse is still one of the biggest mistakes people make. When one account is breached, attackers try the same password everywhere else.

A strong password:

  • Is long (12+ characters)
  • Is unique for each important account
  • Doesn’t use personal information

The realistic solution is a password manager. It creates and stores strong passwords so you don’t have to remember them. This isn’t optional anymore; it’s basic digital hygiene.
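For a sense of what “strong” looks like in practice, here is a minimal sketch using Python’s standard secrets module (the function name and length are illustrative, not tied to any particular password manager):

    import secrets
    import string

    def generate_password(length: int = 16) -> str:
        """Build a random password from letters, digits, and punctuation.

        The secrets module draws from the operating system's
        cryptographically secure randomness source, unlike random.
        """
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())  # e.g. k#9Qv!t2Zr@8mLxD -- different every run

A password like that is effectively impossible to guess, which is exactly why you want software, not memory, managing it.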

Turn On Two-Factor Authentication

Two-factor authentication (2FA) adds a second step when logging in, usually a code sent to your phone or generated by an app.

Yes, it’s slightly inconvenient. That inconvenience is the point.

Even if someone steals your password, 2FA can stop them cold. Prioritize it for:

  • Email accounts
  • Banking apps
  • Social media
  • Cloud storage

This one step blocks a huge percentage of account takeovers.
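For the technically curious: most authenticator apps implement the TOTP standard (RFC 6238), where a shared secret plus the current time yields a short-lived six-digit code. Here is a minimal sketch using the third-party pyotp library (assumed here for illustration; any RFC 6238 implementation behaves the same way):

    # pip install pyotp
    import pyotp

    secret = pyotp.random_base32()  # shared once between the service and your app
    totp = pyotp.TOTP(secret)       # six digits, fresh every 30 seconds by default

    code = totp.now()               # what your authenticator app displays
    print(code, totp.verify(code))  # the server-side check at login time

Because each code expires in seconds, a stolen password alone is no longer enough to get in.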

Learn to Spot Phishing Attempts

Phishing isn’t always obvious. Modern scams look professional and urgent on purpose.

Red flags include:

  • Unexpected messages asking you to verify or confirm something
  • Links that don’t match the official website
  • Spelling errors or unusual formatting
  • Pressure to act immediately

Rule of thumb: Never click links in messages you weren’t expecting. Go directly to the website instead.

Be Careful What You Share Online

Oversharing makes you an easier target.

Information like your birthday, address, phone number, workplace, or travel plans can be used for identity theft or social engineering.

Ask before posting:

  • Does this reveal personal details?
  • Would this help someone guess security questions?
  • Do strangers need to know this?

Privacy isn’t secrecy; it’s control.

Use Public Wi-Fi With Caution

Free Wi-Fi is convenient but risky. Public networks are easier to intercept.

If you must use public Wi-Fi:

  • Avoid banking or sensitive accounts
  • Use secure (HTTPS) websites only
  • Consider a trusted VPN for added protection

Better yet, use your mobile data for anything important.

Keep Devices and Software Updated

Updates aren’t just new features; they fix security holes.

Ignoring updates leaves your device vulnerable to known exploits. Enable automatic updates for:

  • Operating systems
  • Browsers
  • Apps
  • Antivirus or security tools

Delaying updates is like leaving your door unlocked because locking it feels annoying.

Be Skeptical of “Too Good to Be True” Offers

Online scams often promise:

  • Easy money
  • Free prizes
  • Urgent refunds
  • Exclusive deals

If something triggers excitement or fear immediately, pause. Scammers rely on emotional reactions, not logic.

Real companies don’t pressure you to act instantly.

Teach Children and Older Adults Basic Safety

Online safety isn’t age-specific. Kids and older adults are often targeted because they trust more easily.

Simple rules help:

  • Don’t talk to strangers online
  • Don’t share personal details
  • Ask before downloading or clicking
  • Speak up if something feels wrong

Education is more effective than restriction.

Back Up Your Data Regularly

Accidents happen. Devices break. Files get deleted. Ransomware exists.

Backups protect you from loss, not just attacks. Use:

  • Cloud backups
  • External drives
  • Automatic backup schedules

If data matters, back it up. Once is not enough.

Trust Your Instincts, Then Verify

If something feels off, it probably is. Don’t ignore that instinct, but don’t panic either.

Slow down. Verify sources. Ask someone you trust. Most online damage happens when people rush.

Conclusion

You don’t need to be a tech expert to stay safe online. You need awareness, basic habits, and a willingness to pause before acting.

Online safety isn’t about fear. It’s about control.

The more intentionally you use the internet, the harder it is for anyone to take advantage of you.

MAGA Zealots Are Waging War On Affordable Broadband

from the fuck-the-poor dept

The Trump administration keeps demonstrating that it really hates affordable broadband. It particularly hates it when the government tries to make broadband affordable to poor people or rural school kids.

In just the last year the Trump administration has:

I’m sure I missed a few.

This week, the administration’s war on affordable broadband shifted back to attacking the FCC Lifeline program, a traditionally uncontroversial, bipartisan effort to try to extend broadband to low-income Americans. Brendan Carr (R, AT&T) has been ramping up his attacks on these programs, claiming (falsely) that they’re riddled with state-sanctioned fraud:

“Carr’s office said this week that the FCC will vote next month on rule changes to ensure that Lifeline money goes to “only living and lawful Americans” who meet low-income eligibility guidelines. Lifeline spends nearly $1 billion a year and gives eligible households up to $9.25 per month toward phone and Internet bills, or up to $34.25 per month in tribal areas.”

For one, $9.25 is a pittance. It barely offsets the incredibly high prices U.S. telecom monopolies charge. Those monopolies, it should be noted, only exist thanks to decades of coddling by corrupt lawmakers like Carr, who’ve effectively exempted them from all accountability. That’s resulted in heavy monopolization, limited competition, high prices, and low-quality service.

Two, there’s lots of fraud in telecom. Most of it, unfortunately, is conducted by our biggest companies with the tacit approval of folks like FCC boss Brendan Carr. AT&T, for example, has spent decades ripping off U.S. schools and various subsidy programs, and you’ll never see Carr make a peep about that. Fraud is, in MAGA world, only something involving minorities and poor people.

The irony is that the lion’s share of the fraud in the Lifeline program has involved big telecom giants like AT&T or Verizon, which, time and time again, have taken taxpayer money for low-income subscribers they just made up. This sort of fraud, where corporations are involved, isn’t of interest to Brendan Carr.

In this case, Carr is alleging (without evidence) that certain left-wing states are intentionally ripping off the federal government, throwing untold millions of dollars at dead people for Lifeline broadband access. It’s something the California Public Utilities Commission has had to spend the week debunking:

“The California Public Utilities Commission (CPUC) this week said that “people pass away while enrolled in Lifeline—in California and in red states like Texas. That’s not fraud. That’s the reality of administering a large public program serving millions of Americans over many years. The FCC’s own advisory acknowledges that the vast majority of California subscribers were eligible and enrolled while alive, and that any improper payments largely reflect lag time between a death and account closure, not failures at enrollment.”

Brendan Carr can’t overtly admit this (because he’s a corrupt zealot), but his ideal telecom policy agenda involves throwing billions of dollars at AT&T and Comcast in exchange for doing nothing. That’s it. That’s the grand Republican plan for U.S. telecom. It gets dressed up as something more ideologically rigid, but coddling predatory monopolies has always been the foundational belief structure.

This latest effort by Carr and Trump largely appears to be a political gambit targeting California Governor Gavin Newsom, suggesting they’re worried about his chances in the next presidential election. This isn’t to defend Newsom; I’ve certainly noted how his state has a mixed track record on broadband affordability. But it appears this is mostly about painting a picture of Newsom, as they did with Walz in Minnesota, as a political opponent that just really loves taxpayer fraud.

Again though, actually policing fraud is genuinely the last thing on Brendan Carr’s mind. If it was, he’d actually target the worst culprits on this front: corporate America.

Filed Under: affordability, brendan carr, broadband, fraud, lifeline, telecom

Apple’s iPhone Air MagSafe battery drops to an all-time-low price

We found the iPhone Air to have a pretty decent battery life for such a thin-and-light phone, somewhere in the region of 27 hours if you’re continuously streaming video. But it’s still a phone, arguably your most used device on a daily basis, so you may need to top it up during the day if you’re using it constantly. That’s where Apple’s iPhone Air MagSafe battery pack comes in, and it’s currently on sale for $79.

This iPhone Air battery pack is $20 off right now. (Image: Apple)

This accessory only works with the iPhone Air, but much like the phone it attaches to, it’s extremely slim at 7.5mm, so crucially it doesn’t add so much bulk when attached that it defeats the point of having a thin phone in the first place. The MagSafe Battery isn’t enormous at 3,149mAh (enough to add an extra 65 percent of charge to the Air), but it can wirelessly charge the AirPods Pro 3 as well, making it an even more useful travel companion. You can also charge your iPhone while charging the battery pack.

At its regular price of $99, the MagSafe battery pack is an admittedly pricey add-on to what is already an expensive phone, but with $20 off it’s well worth considering: Engadget’s Sam Rutherford called it an “essential accessory” for some users in his iPhone Air review.

Many Apple loyalists will always insist on having first-party accessories for their iPhone, but there are plenty of third-party MagSafe chargers out there too, a lot of them considerably cheaper than Apple’s lineup. Be sure to check out our guide for those.


Andrew Ng: Unbiggen AI

Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.


The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

When you say you want a foundation model for computer vision, what do you mean by that?

Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

What needs to happen for someone to build a foundation model for video?

Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.

It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

I expect they’re both convinced now.

Ng: I think so, yes.

Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”

How do you define data-centric AI, and why do you consider it a movement?

Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
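As a rough illustration of that kind of tooling (a sketch of the general idea, not Landing AI’s actual implementation), the following assumes annotation records of the form (image_id, annotator, label) and surfaces images whose annotators disagree:

    from collections import defaultdict

    # Hypothetical annotation records: (image_id, annotator, label)
    annotations = [
        ("img_001", "alice", "scratch"),
        ("img_001", "bob",   "scratch"),
        ("img_002", "alice", "dent"),
        ("img_002", "bob",   "scratch"),  # annotators disagree
        ("img_003", "alice", "pit_mark"),
    ]

    def find_inconsistent(records):
        """Return image ids that received more than one distinct label."""
        labels_by_image = defaultdict(set)
        for image_id, _annotator, label in records:
            labels_by_image[image_id].add(label)
        return [img for img, labels in labels_by_image.items() if len(labels) > 1]

    print(find_inconsistent(annotations))  # ['img_002'] -- relabel these first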

Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

When you talk about engineering the data, what do you mean exactly?

Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
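A minimal sketch of that style of error analysis, assuming each evaluation example carries a metadata tag (the background field here is hypothetical) so mistakes can be grouped by slice:

    from collections import Counter

    # Hypothetical evaluation results: (background_tag, was_correct)
    results = [
        ("quiet",     True),  ("quiet",     True),  ("quiet",     False),
        ("car_noise", False), ("car_noise", False), ("car_noise", True),
    ]

    errors = Counter(tag for tag, correct in results if not correct)
    totals = Counter(tag for tag, _ in results)

    for tag in totals:
        print(f"{tag}: {errors[tag] / totals[tag]:.0%} error rate")
    # car_noise shows the higher error rate -> collect more car-noise data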

What about using synthetic data, is that often a good solution?

Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

Do you mean that synthetic data would allow you to try the model on more data sets?

Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.

Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
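As a concrete example of the first of those options, here is a small augmentation sketch with the third-party torchvision library (an assumed tool chosen for illustration; the interview names no specific framework) that multiplies each defect image into several training variants:

    # pip install torch torchvision pillow
    from torchvision import transforms
    from PIL import Image

    # Random flips, small rotations, and lighting shifts produce new
    # training variants without collecting any new factory images.
    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.RandomRotation(degrees=10),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
    ])

    original = Image.open("defect_example.jpg")  # hypothetical sample image
    variants = [augment(original) for _ in range(8)]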

To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.

In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.

This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”

NASA will now allow astronauts to take their smartphones to space

Most people wouldn’t leave their phones behind when they so much as go for a drive, but NASA astronauts have had to leave their phones on Earth while they went to work 250 miles away at the International Space Station. That is, until now.

In an announcement, NASA Administrator Jared Isaacman shared that Crew-12 astronauts, and those on subsequent missions, will be allowed to bring smartphones along for the journey to the ISS and beyond. “We are giving our crews the tools to capture special moments for their families and share inspiring images and video with the world,” Isaacman said.

These won’t be the first smartphones in space — that distinction belongs to a trio of miniature phone-based satellites sent into Earth orbit in 2013, which succeeded where an earlier British attempt failed. But thanks to the upcoming Artemis II mission, we can look forward to the first smartphone images from the moon’s orbit. The March (for now) launch will be the agency’s first crewed moon mission since Apollo 17 in 1972.

The crews’ personal devices will be far less cumbersome to use than the dedicated cameras they were previously limited to for high-quality still images. Ideally, this means more spontaneous pictures that can be shared with friends and family back on Earth.

A Deep Dive Into Inductors

[Prof MAD] runs us through The Hidden Power of Inductors — Why Coils Resist Change.

The least often used of the passive components, the humble and mysterious inductor is the subject of this video. The essence of inductance is a conductor’s tendency to resist changes in current. When the current is steady, the inductor is invisible; when the current changes, it pushes back. The good old waterwheel analogy is given to explain what an inductor’s effect is like.

There are three things to notice about the effect of an inductor: increases in current are delayed, decreases in current are delayed, and when there is no change in current there is no noticeable effect. The inductor doesn’t resist current flow, but it does resist changes in current flow. This resistive effect only occurs when current is changing, and it is known as “inductive reactance”.
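For sinusoidal currents that opposition has a standard quantitative form (textbook AC theory, not something derived in the video): the inductive reactance is

    X_L = 2\pi f L

so a 10 mH coil, to pick illustrative numbers, presents about 3.1 Ω of reactance to a 50 Hz current but roughly 314 Ω at 5 kHz. The faster the current tries to change, the harder the coil pushes back.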

After explaining an inductor’s behavior the video digs into how a typical inductor coil actually achieves this. The basic idea is that the inductor stores energy in a magnetic field, and it takes some time to charge up or discharge this field, accounting for the delay in current that is seen.

There’s a warning about high voltages which can be seen when power to an inductor is suddenly cut off. Typically a circuit will include snubber circuits or flyback diodes to help manage such effects which can otherwise damage components or lead to electric shock.

[Prof MAD] spends the rest of the video with some math that explains how voltage across an inductor is proportional to the rate of change of current over time (the first derivative of current with respect to time). The inductance can then be defined as a constant of proportionality (L). This is the voltage that appears across a coil when current changes by 1 ampere per second, opposing the change. The unit is the volt-second per ampere (V·s·A⁻¹), known as the henry, named in honor of the American physicist Joseph Henry.
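In symbols, the relationship described above is

    v(t) = L \frac{di}{dt}

so, as a worked example with assumed values, a 1 H inductor with current ramping at 1 A/s develops 1 V across it, while a 2 H inductor with current changing at 3 A/s develops 6 V, always polarized to oppose the change.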

Inductance can sometimes be put to good use in circuits, but just as often it is unwanted parasitic induction whose effects need to be mitigated, for more info see: Inductance In PCB Layout: The Good, The Bad, And The Fugly.

OpenAI introduces Frontier, an easier way to manage all your AI agents in one place


  • OpenAI Frontier lets enterprises manage OpenAI, proprietary and third-party agents
  • Each AI agent gets its own unique identity, permissions and guardrails
  • The company sees this as a collaborative approach

OpenAI has launched Frontier, a new AI agent management platform where enterprise customers can build, deploy and manage agentic AI from both OpenAI and third-party companies.

In its announcement, the ChatGPT maker hinted that Frontier is designed to address agent sprawl, where fragmented tools, siloed data, and disconnected workflows reduce the efficacy of AI agents.

Lunar Radio Telescope to Unlock Cosmic Mysteries

Isolation dictates where we go to see into the far reaches of the universe. The Atacama Desert of Chile, the summit of Mauna Kea in Hawaii, the vast expanse of the Australian Outback—these are where astronomers and engineers have built the great observatories and radio telescopes of modern times. The skies are usually clear, the air is arid, and the electronic din of civilization is far away.

It was to one of these places, in the high desert of New Mexico, that a young astronomer named Jack Burns went to study radio jets and quasars far beyond the Milky Way. It was 1979, he was just out of grad school, and the Very Large Array, a constellation of 28 giant dish antennas on an open plain, was a new mecca of radio astronomy.

But the VLA had its limitations—namely, that Earth’s protective atmosphere and ionosphere blocked many parts of the electromagnetic spectrum, and that, even in a remote desert, earthly interference was never completely gone.

Could there be a better, even lonelier place to put a radio telescope? Sure, a NASA planetary scientist named Wendell Mendell told Burns: How about the moon? He asked if Burns had ever thought about building one there.

“My immediate reaction was no. Maybe even hell, no. Why would I want to do that?” Burns recalls with a self-deprecating smile. His work at the VLA had gone well, he was fascinated by cosmology’s big questions, and he didn’t want to be slowed by the bureaucratic slog of getting funding to launch a new piece of hardware.

But Mendell suggested he do some research and speak at a conference on future lunar observatories, and Burns’s thinking about a space-based radio telescope began to shift. That was in 1984. In the four decades since, he’s published more than 500 peer-reviewed papers on radio astronomy. He’s been an adviser to NASA, the Department of Energy, and the White House, as well as a professor and a university administrator. And while doing all that, Burns has had an ongoing second job of sorts, as a quietly persistent advocate for radio astronomy from space.

And early next year, if all goes well, a radio telescope for which he’s a scientific investigator will be launched—not just into space, not just to the moon, but to the moon’s far side, where it will observe things invisible from Earth.

“You can see we don’t lack for ambition after all these years,” says Burns, now 73 and a professor emeritus of astrophysics at the University of Colorado Boulder.

The instrument is called LuSEE-Night, short for Lunar Surface Electromagnetics Experiment–Night. It will be launched from Florida aboard a SpaceX rocket and carried to the moon’s far side atop a squat four-legged robotic spacecraft called Blue Ghost Mission 2, built and operated by Firefly Aerospace of Cedar Park, Texas.

In an artist’s rendering, the LuSEE-Night radio telescope sits atop Firefly Aerospace’s Blue Ghost 2 lander, which will carry it to the moon’s far side. Firefly Aerospace

Landing will be risky: Blue Ghost 2 will be on its own, in a place that’s out of the sight of ground controllers. But Firefly’s Blue Ghost 1 pulled off the first successful landing by a private company on the moon’s near side in March 2025. And Burns has already put hardware on the lunar surface, albeit with mixed results: An experiment he helped conceive was on board a lander called Odysseus, built by Houston-based Intuitive Machines, in 2024. Odysseus was damaged on landing, but Burns’s experiment still returned some useful data.

Burns says he’d be bummed about that 2024 mission if there weren’t so many more coming up. He’s joined in proposing myriad designs for radio telescopes that could go to the moon. And he’s kept going through political disputes, technical delays, even a confrontation with cancer. Finally, finally, the effort is paying off.

“We’re getting our feet into the lunar soil,” says Burns, “and understanding what is possible with these radio telescopes in a place where we’ve never observed before.”

Why Go to the Far Side of the Moon?

A moon-based radio telescope could help unravel some of the greatest mysteries in space science. Dark matter, dark energy, neutron stars, and gravitational waves could all come into better focus if observed from the moon. One of Burns’s collaborators on LuSEE-Night, astronomer Gregg Hallinan of Caltech, would like such a telescope to further his research on electromagnetic activity around exoplanets, a possible measure of whether these distant worlds are habitable. Burns himself is especially interested in the cosmic dark ages, an epoch that began more than 13 billion years ago, just 380,000 years after the big bang. The young universe had cooled enough for neutral hydrogen atoms to form, which trapped the light of stars and galaxies. The dark ages lasted between 200 million and 400 million years.

LuSEE-Night will listen for faint signals from the cosmic dark ages, a period that began about 380,000 years after the big bang, when neutral hydrogen atoms had begun to form, trapping the light of stars and galaxies. Chris Philpot

“It’s a critical period in the history of the universe,” says Burns. “But we have no data from it.”

The problem is that residual radio signals from this epoch are very faint and easily drowned out by closer noise—in particular, our earthly communications networks, power grids, radar, and so forth. The sun adds its share, too. What’s more, these early signals have been dramatically redshifted by the expansion of the universe, their wavelengths stretched as their sources have sped away from us over billions of years. The most critical example is neutral hydrogen, the most abundant element in the universe, which when excited in the laboratory emits a radio signal with a wavelength of 21 centimeters. Indeed, with just some backyard equipment, you can easily detect neutral hydrogen in nearby galactic gas clouds close to that wavelength, which corresponds to a frequency of 1.42 gigahertz. But if the hydrogen signal originates from the dark ages, those 21 centimeters are lengthened to tens of meters. That means scientists need to listen to frequencies well below 50 megahertz—parts of the radio spectrum that are largely blocked by Earth’s ionosphere.
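The arithmetic behind that stretching is simple (the redshift range below is an assumed ballpark for the dark ages; the article itself gives only the general picture):

    f_{obs} = \frac{1420\ \text{MHz}}{1 + z}

For redshifts of roughly z ≈ 30 to 150, the 1.42 GHz hydrogen line lands between about 46 MHz and 9 MHz, and the 21 cm wavelength stretches to 21 cm × (1 + z), or roughly 6.5 to 32 meters. Hence “tens of meters” and frequencies well below 50 MHz.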

Which is why the lunar far side holds such appeal. It may just be the quietest site in the inner solar system.

“It really is the only place in the solar system that never faces the Earth,” says David DeBoer, a research astronomer at the University of California, Berkeley. “It really is kind of a wonderful, unique place.”

For radio astronomy, things get even better during the lunar night, when the sun drops beneath the horizon and is blocked by the moon’s mass. For up to 14 Earth-days at a time, a spot on the moon’s far side is about as electromagnetically dark as any place in the inner solar system can be. No radiation from the sun, no confounding signals from Earth. There may be signals from a few distant space probes, but otherwise, ideally, your antenna only hears the raw noise of the cosmos.

“When you get down to those very low radio frequencies, there’s a source of noise that appears that’s associated with the solar wind,” says Caltech’s Hallinan. Solar wind is the stream of charged particles that speed relentlessly from the sun. “And the only location where you can escape that within a billion kilometers of the Earth is on the lunar surface, on the nighttime side. The solar wind screams past it, and you get a cavity where you can hide away from that noise.”

How Does LuSEE-Night Work?

LuSEE-Night’s receiver looks simple, though there’s really nothing simple about it. Up top are two dipole antennas, each of which consists of two collapsible rods pointing in opposite directions. The dipole antennas are mounted perpendicular to each other on a small turntable, forming an X when seen from above. Each dipole antenna extends to about 6 meters. The turntable sits atop a box of support equipment that’s a bit less than a cubic meter in volume; the equipment bay, in turn, sits atop the Blue Ghost 2 lander, a boxy spacecraft about 2 meters tall.

LuSEE-Night undergoes final assembly [top and center] at the Space Sciences Laboratory at the University of California, Berkeley, and testing [bottom] at Firefly Aerospace outside Austin, Texas. From top: Space Sciences Laboratory/University of California, Berkeley (2); Firefly Aerospace

“It’s a beautiful instrument,” says Stuart Bale, a physicist at the University of California, Berkeley, who is NASA’s principal investigator for the project. “We don’t even know what the radio sky looks like at these frequencies without the sun in the sky. I think that’s what LuSEE-Night will give us.”

The apparatus was designed to serve several incompatible needs: It had to be sensitive enough to detect very weak signals from deep space; rugged enough to withstand the extremes of the lunar environment; and quiet enough to not interfere with its own observations, yet loud enough to talk to Earth via relay satellite as needed. Plus the instrument had to stick to a budget of about US $40 million and not weigh more than 120 kilograms. The mission plan calls for two years of operations.

The antennas are made of a beryllium copper alloy, chosen for its high conductivity and stability as lunar temperatures plummet or soar by as much as 250 °C every time the sun rises or sets. LuSEE-Night will make precise voltage measurements of the signals it receives, using a high-impedance junction field-effect transistor to act as an amplifier for each antenna. The signals are then fed into a spectrometer—the main science instrument—which reads those voltages at 102.4 million samples per second. That high read-rate is meant to prevent the exaggeration of any errors as faint signals are amplified. Scientists believe that a cosmic dark-ages signature would be five to six orders of magnitude weaker than the other signals that LuSEE-Night will record.
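That sampling rate matches the science band. By the Nyquist criterion (a standard signal-processing constraint, not something the mission team is quoted on here), a digitizer captures frequencies only up to half its sampling rate:

    f_{Nyquist} = \frac{102.4\ \text{MHz}}{2} = 51.2\ \text{MHz}

which sits just above the 0.1 to 50 MHz range LuSEE-Night is designed to scan.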

The turntable is there to help characterize the signals the antennas receive, so that, among other things, an ancient dark-ages signature can be distinguished from closer, newer signals from, say, galaxies or interstellar gas clouds. Data from the early universe should be virtually isotropic, meaning that it comes from all over the sky, regardless of the antennas’ orientation. Newer signals are more likely to come from a specific direction. Hence the turntable: If you collect data over the course of a lunar night, then reorient the antennas and listen again, you’ll be better able to distinguish the distant from the very, very distant.

What’s the ideal lunar landing spot if you want to take such readings? One as nearly opposite Earth as possible, on a flat plain. Not an easy thing to find on the moon’s hummocky far side, but mission planners pored over maps made by lunar satellites and chose a prime location about 24 degrees south of the lunar equator.

Other lunar telescopes have been proposed for placement in the permanently shadowed craters near the moon’s south pole, just over the horizon when viewed from Earth. Such craters are coveted for the water ice they may hold, and the low temperatures in them (below -240 °C) are great if you’re doing infrared astronomy and need to keep your instruments cold. But the location is terrible if you’re working in long-wavelength radio.

“Even the inside of such craters would be hard to shield from Earth-based radio frequency interference (RFI) signals,” Leon Koopmans of the University of Groningen in the Netherlands said in an email. “They refract off the crater rims and often, due to their long wavelength, simply penetrate right through the crater rim.”

RFI is a major—and sometimes maddening—issue for sensitive instruments. The first-ever landing on the lunar far side was by the Chinese Chang’e 4 spacecraft, in 2019. It carried a low-frequency radio spectrometer, among other experiments. But it failed to return meaningful results, Chinese researchers said, mostly because of interference from the spacecraft itself.

The Accidental Birth of Radio Astronomy

Sometimes, though, a little interference makes history. Here, it’s worth a pause to remember Karl Jansky, considered the father of radio astronomy. In 1928, he was a young engineer at Bell Telephone Laboratories in Holmdel, N.J., assigned to isolate sources of static in shortwave transatlantic telephone calls. Two years later, he built a 30-meter-long directional antenna, mostly out of brass and wood, and after accounting for thunderstorms and the like, there was still noise he couldn’t explain. At first, its strength seemed to follow a daily cycle, rising and sinking with the sun. But after a few months’ observation, the sun and the noise were badly out of sync.

In 1930, Karl Jansky, a Bell Labs engineer in Holmdel, N.J., built this rotating antenna on wheels to identify sources of static for radio communications. NRAO/AUI/NSF

It gradually became clear that the noise’s period wasn’t 24 hours; it was 23 hours and 56 minutes—the time it takes Earth to turn once relative to the stars. The strongest interference seemed to come from the direction of the constellation Sagittarius, which optical astronomy suggested was the center of the Milky Way. In 1933, Jansky published a paper in Proceedings of the Institute of Radio Engineers with a provocative title: “Electrical Disturbances Apparently of Extraterrestrial Origin.” He had opened the electromagnetic spectrum up to astronomers, even though he never got to pursue radio astronomy himself. The interference he had defined was, to him, “star noise.”

Thirty-two years later, two other Bell Labs scientists, Arno Penzias and Robert Wilson, ran into some interference of their own. In 1965 they were trying to adapt a horn antenna in Holmdel for radio astronomy—but there was a hiss, in the microwave band, coming from all parts of the sky. They had no idea what it was. They ruled out interference from New York City, not far to the north. They rewired the receiver. They cleaned out bird droppings in the antenna. Nothing worked.

In the 1960s, Arno Penzias and Robert W. Wilson used this horn antenna in Holmdel, N.J., to detect faint signals from the big bang. GL Archive/Alamy

Meanwhile, an hour’s drive away, a team of physicists at Princeton University under Robert Dicke was trying to find proof of the big bang that began the universe 13.8 billion years ago. They theorized that it would have left a hiss, in the microwave band, coming from all parts of the sky. They’d begun to build an antenna. Then Dicke got a phone call from Penzias and Wilson, looking for help. “Well, boys, we’ve been scooped,” he famously said when the call was over. Penzias and Wilson had accidentally found the cosmic microwave background, or CMB, the leftover radiation from the big bang.

Burns and his colleagues are figurative heirs to Jansky, Penzias, and Wilson. Researchers suggest that the giveaway signature of the cosmic dark ages may be a minuscule dip in the CMB. They theorize that dark-ages hydrogen may be detectable only because it has been absorbing a little bit of the microwave energy from the dawn of the universe.

The Moon Is a Harsh Mistress

The plan for Blue Ghost Mission 2 is to touch down soon after the sun has risen at the landing site. That will give mission managers two weeks to check out the spacecraft, take pictures, conduct other experiments that Blue Ghost carries, and charge LuSEE-Night’s battery pack with its photovoltaic panels. Then, as local sunset comes, they’ll turn everything off except for the LuSEE-Night receiver and a bare minimum of support systems.

LuSEE-Night will land at a site [orange dot] that’s about 25 degrees south of the moon’s equator and opposite the center of the moon’s face as seen from Earth. The moon’s far side is ideal for radio astronomy because it’s shielded from the solar wind as well as signals from Earth. Arizona State University/GSFC/NASA

There, in the frozen electromagnetic stillness, it will scan the spectrum between 0.1 and 50 MHz, gathering data for a low-frequency map of the sky—maybe including the first tantalizing signature of the dark ages.

“It’s going to be really tough with that instrument,” says Burns. “But we have some hardware and software techniques that…we’re hoping will allow us to detect what’s called the global or all-sky signal.… We, in principle, have the sensitivity.” They’ll listen and listen again over the course of the mission. That is, if their equipment doesn’t freeze or fry first.

A major task for LuSEE-Night is to protect the electronics that run it. Temperature extremes are the biggest problem. Systems can be hardened against cosmic radiation, and a sturdy spacecraft should be able to handle the stresses of launch, flight, and landing. But how do you build it to last when temperatures range between 120 and −130 °C? With layers of insulation? Electric heaters to reduce nighttime chill?

“All of the above,” says Burns. To reject daytime heat, there will be a multicell parabolic radiator panel on the outside of the equipment bay. To keep warm at night, there will be battery power—a lot of battery power. Of LuSEE-Night’s launch mass of 108 kg, about 38 kg is a lithium-ion battery pack with a capacity of 7,160 watt-hours, mostly to generate heat. The battery cells will recharge photovoltaically after the sun rises. The all-important spectrometer has been programmed to cycle off periodically during the two weeks of darkness, so that the battery’s state of charge doesn’t drop below 8 percent; better to lose some observing time than lose the entire apparatus and not be able to revive it.
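Those figures imply a tight energy budget. Dividing the pack’s capacity by the length of a lunar night gives the average draw it can sustain (a back-of-the-envelope estimate, not a number from the mission team):

    P_{avg} \approx \frac{7160\ \text{Wh}}{14 \times 24\ \text{h}} \approx 21\ \text{W}

and that roughly 21 watts has to cover the receiver, the spectrometer, and the heaters that keep the electronics alive through two weeks of darkness.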

Lunar Radio Astronomy for the Long Haul

And if they can’t revive it? Burns has been through that before. In 2024 he watched helplessly as Odysseus, the first U.S.-made lunar lander in 50 years, touched down—and then went silent for 15 agonizing minutes until controllers in Texas realized they were receiving only occasional pings instead of detailed data. Odysseus had landed hard, snapped a leg, and ended up lying almost on its side.

ROLSES-1, shown here inside a SpaceX Falcon 9 rocket, was the first radio telescope to land on the moon, in February 2024. During a hard landing, one leg broke, making it difficult for the telescope to send readings back to Earth. Intuitive Machines/SpaceX

As part of its scientific cargo, Odysseus carried ROLSES-1 (Radiowave Observations on the Lunar Surface of the photo-Electron Sheath), an experiment Burns and a friend had suggested to NASA years before. It was partly a test of technology, partly to study the complex interactions between sunlight, radiation, and lunar soil—there’s enough electric charge in the soil sometimes that dust particles levitate above the moon’s surface, which could potentially mess with radio observations. But Odysseus was damaged badly enough that instead of a week’s worth of data, ROLSES got 2 hours, most of it recorded before the landing. A grad student working with Burns, Joshua Hibbard, managed to partially salvage the experiment and prove that ROLSES had worked: Hidden in its raw data were signals from Earth and the Milky Way.

“It was a harrowing experience,” Burns said afterward, “and I’ve told my students and friends that I don’t want to be first on a lander again. I want to be second, so that we have a greater chance to be successful.” He says he feels good about LuSEE-Night being on the Blue Ghost 2 mission, especially after the successful Blue Ghost 1 landing. The ROLSES experiment, meanwhile, will get a second chance: ROLSES-2 has been scheduled to fly on Blue Ghost Mission 3, perhaps in 2028.

NASA’s plan for the FarView Observatory lunar radio telescope array, shown in an artist’s rendering, calls for 100,000 dipole antennas to be spread out over 200 square kilometers. Ronald Polidan

If LuSEE-Night succeeds, it will doubtless raise questions that require much more ambitious radio telescopes. Burns, Hallinan, and others have already gotten early NASA funding for a giant interferometric array on the moon called FarView. It would consist of a grid of 100,000 antenna nodes spread over 200 square kilometers, made of aluminum extracted from lunar soil. They say assembly could begin as soon as the 2030s, although political and budget realities may get in the way.

Through it all, Burns has gently pushed and prodded and lobbied, advocating for a lunar observatory through the terms of ten NASA administrators and seven U.S. presidents. He’s probably learned more about Washington politics than he ever wanted. American presidents have a habit of reversing the space priorities of their predecessors, so missions have sometimes proceeded full force, then languished for years. With LuSEE-Night finally headed for launch, Burns at times sounds buoyant: “Just think. We’re actually going to do cosmology from the moon.” At other times, he’s been blunt: “I never thought—none of us thought—that it would take 40 years.”

“Like anything in science, there’s no guarantee,” says Burns. “But we need to look.”

This article appears in the February 2026 print issue as “The Quest To Build a Telescope That Can Hear the Cosmic Dark Ages.”

BreezyBox: A BusyBox-Like Shell And Virtual Terminal For ESP32

Much like how BusyBox crams many standard Unix commands and a shell into a single executable, so too does BreezyBox provide a similar experience for the ESP32 platform. A demo implementation is also provided, which uses the ESP32-S3 platform as part of the Waveshare 7″ display development board.

Although it invokes the BusyBox name, it’s not meant to be as standalone, since it uses the standard features provided by the FreeRTOS-based ESP-IDF SDK. In addition to the features provided by ESP-IDF, it adds things like a basic virtual terminal, current working directory (CWD) tracking, and a gaggle of Unix-style commands, as well as an app installer.

The existing ELF binary loader for the ESP32 is used to run executables from either a local path or a remote one, a local HTTP server is provided, and you even get ANSI color support. Some BreezyBox apps can be found here, many of which run on a POSIX-compatible system as well. This includes the xcc700 self-hosted C compiler.

You can get the MIT-licensed code either from the above GitHub project link or install it from the Espressif Component Registry if that’s more your thing.

Civilization VII Apple Arcade brings a big PC strategy game to your pocket

Civilization VII Apple Arcade is now available on iPhone, iPad, and Mac, so you can start a campaign on your phone and keep it going on a larger screen later. It’s the closest thing to true pocket Civ on Apple hardware, without ads, upsells, or a separate purchase for each device.

The appeal is simple. Civ rewards short bursts and long nights, and this version finally lets you do both with the same save. But it also draws a clear line around what it can’t do yet.

The portability comes with limits

Your progress syncs across Apple devices, so a match can follow you from iPhone to iPad to Mac. That’s the feature that makes the subscription feel practical, not just convenient. One save, one campaign, no starting over.

The tradeoff is in the borders. Multiplayer isn’t available at launch, and it’s planned for a later update. There’s also no cross-play, and your saves don’t move over to the PC or console releases, so it won’t merge with an existing campaign you already have elsewhere.

Touch-first Civ, with a safety net

This edition is built around touch, with taps and gestures doing the heavy lifting for unit moves, city choices, and the endless menu hopping Civ is known for. Controller support helps if you’d rather play in a familiar way on iPad or Mac.

It’s a better fit for solo play than anything else. You can take a few turns while waiting in line, then swap to a bigger screen when you want to plan a war, juggle districts, or untangle a late-game mess. It’s still Civ; it just fits your day.

What to check before installing

Your device will shape how far you can push it. The App Store listing calls for iOS 17 and an A17 Pro class iPhone or newer, and the largest map sizes are reserved for devices with more than 8GB of RAM.

If you want a portable way to scratch the Civ itch, Apple Arcade is the smoothest option on iPhone, especially if you’ll also play on Mac or iPad. If your Civ life revolves around online matches, mods, or huge scenarios, this is best as a side door for now.
