Starlink’s next-gen satellite network could provide 150 Mbps speeds by end of next year

Starlink is getting ready to launch its second generation of satellites, and it’s expected to match the speeds of a traditional terrestrial network. During a keynote at Mobile World Congress, Starlink execs detailed the roadmap for the company’s upgrade towards the next generation of satellites called V2.

“The goal of Starlink Mobile … is to provide a terrestrial-like connectivity when you’re connected to the satellite system,” Michael Nicolls, SpaceX’s senior vice president of Starlink engineering, said during the MWC keynote. “In the right conditions, it should look and feel like you’re connected to a high-performing 5G terrestrial network.”

Nicolls detailed that the V2 satellite constellation could offer download speeds up to 150 Mbps in ideal conditions, comparing it to a broadband experience. According to Starlink, next-gen satellites will offer 100 times the data density of their predecessors, which should help users with faster streaming and browsing as well as more reliable voice calls. Notably, Nicolls added that the V2 satellite constellation would offer better coverage to Earth’s polar regions, which are known to have unreliable coverage with traditional networks.

Nicolls said that SpaceX is planning to send out more than 50 V2 satellites on each SpaceX launch starting in mid-2027, with a goal of building out a full constellation in six months. Outside its MWC presser, Starlink also announced a partnership with German telecommunications company Deutsche Telekom. The partnership would help Deutsche Telekom address internet coverage gaps in Europe using Starlink’s constellation, starting in 2028.

Low Self-Discharge, High-Voltage Supercapacitors Using Porous Carbon

Supercapacitors rely mostly on double-layer capacitance to bridge the divide between chemical batteries and traditional capacitors, but they come with a number of weaknesses. Paramount among these are their relatively low voltage of around 2.7 V before their electrolyte begins to decompose, as well as their relatively high rates of self-discharge. A new design by [Shichao Zhang] et al., published in Carbon Research, uses lignin-derived porous carbon electrodes and a fluorinated diluent, and seems to address these issues.

Most notable are the relatively high voltage of 4 V, an energy density of 77 Wh/kg, and a self-discharge rate that’s much slower than that of conventional supercapacitors. The demonstrated cells are also superior to conventional supercapacitors in terms of recharge cycles, retaining 90% of capacity after 10,000 cycles, which together with the much higher energy density should prove quite useful.
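Part of why that 4 V figure matters: the energy a capacitor stores grows with the square of its voltage (E = ½CV²), so the jump from a conventional cell’s roughly 2.7 V limit is worth more than it might look. A quick illustrative calculation (voltages from the article; the script itself is just a sketch):

```python
# Energy stored in a capacitor scales with the square of voltage: E = 1/2 * C * V^2.
# Raising the working voltage from a conventional supercapacitor's ~2.7 V limit
# to the reported 4 V multiplies stored energy by (4 / 2.7)^2 at equal capacitance.

v_conventional = 2.7  # V, typical electrolyte decomposition limit
v_new = 4.0           # V, reported for the lignin-derived cell

gain = (v_new / v_conventional) ** 2
print(f"Energy gain from voltage alone: {gain:.2f}x")  # ~2.19x
```

That factor-of-two-plus from voltage alone, before any electrode improvements, goes a long way toward explaining the 77 Wh/kg figure.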

This feat is accomplished by using lignin as the base for the carbon electrodes to create a highly porous surface, along with a new electrolyte formulation consisting of a lithium salt (LiBF4) dissolved in sulfolane with TTE as a non-solvating diluent. The idea of using lignin-derived carbon for such a purpose has previously been pitched by [Jia Liu] et al. in 2022 and [Zhihao Ding] in 2025, with this seemingly being one of its first major applications.

Although the path towards commercialization from a lab-assembled prototype is a rough one, we may be seeing some of these improvements come to supercapacitors near you sooner rather than later.

Use AI to Find Your Next Rental Car With Turo’s ChatGPT App

ChatGPT can now help you find and book a rental car thanks to a new Turo integration that launched Monday. The Turo app for ChatGPT allows you to just tell the AI chatbot what you’re looking for — from pickup location and dates to number of seats, EV preference and more — using natural language, and be presented with real Turo rental cars, advice and links directly to the Turo website to book.

Turo is a peer-to-peer marketplace that lets private owners rent out their personal vehicles to travelers and locals. I like to think of it as the “Airbnb of cars” or drive-it-yourself Uber. Unlike traditional rental agencies that own and maintain large fleets of cars, Turo merely provides the tech, insurance and support to connect vehicle hosts with guest drivers. Turo has proven to be a popular alternative to the airport rental counter thanks to its more varied selection of unique car models (including luxury or high-tech vehicles), competitive pricing, and the convenience of having certain vehicles delivered.

And now, it has a ChatGPT integration. You can access the new Turo app within ChatGPT by first searching for and then adding Turo to the list of available agents in ChatGPT’s Apps menu. Once connected, adding “@Turo” to any chat with the AI bot will trigger the new functionality.

I fired up ChatGPT after setting it up for myself and typed the prompt: “@Turo, I’m going to be landing in Atlanta on Friday and would like to rent an EV for the weekend with enough range to make it to Augusta. What’s available?”

I used natural language to find available cars on the Turo service using ChatGPT. (Screenshot by Antuan Goodwin/CNET)

The app replied with listings of vehicles currently available to rent near the airport with enough range to make the approximately 300-mile round trip with as little as one quick top-up. Each listing featured photos, price estimates (including tax and fees), star ratings and the number of times each car had been rented. 
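The charging-stop arithmetic behind that “one quick top-up” claim is easy to sanity-check. The sketch below is purely illustrative (the function and the range figures are hypothetical, not from Turo’s listings):

```python
# Rough feasibility check for a ~300-mile round trip in an EV.
# Illustrative only: real trips depend on weather, speed, and charger placement.

def stops_needed(trip_miles, ev_range_miles, reserve_fraction=0.1):
    """Estimate charging stops required, keeping a safety reserve of battery."""
    usable = ev_range_miles * (1 - reserve_fraction)  # miles per full charge
    stops = 0
    remaining = trip_miles
    while remaining > usable:
        remaining -= usable
        stops += 1
    return stops

print(stops_needed(300, 250))  # a 250-mile EV with a 10% reserve -> 1 stop
print(stops_needed(300, 350))  # a longer-range EV -> 0 stops
```

Any listing whose rated range (minus a reserve) exceeds half the round trip can do it with at most one mid-trip charge, which matches what the app surfaced.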

Clicking on a listing took me straight to the Turo website (or app on mobile) to complete the booking. I also tried asking for “an EV near my home that seats six people” and “a hybrid that would be useful for moving,” and found the results to be adequate. 

In addition to the listings, ChatGPT and Turo provided details (like range) about each car as well as pros and cons, such as Tesla’s plentiful Superchargers between Atlanta and Augusta or the Kia EV6’s very fast charging speed. Overall, the new functionality looks like a fairly convenient and decent starting point for someone who knows nothing about cars to choose a rental.

Turo’s app for ChatGPT is the latest example of AI’s rapid advance into every aspect of the automotive industry, from natural language AI assistants in the dashboard to AI-powered inspection of rental car returns. 

(Disclosure: Ziff Davis, CNET’s parent company, filed a lawsuit against OpenAI last year, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Sony confirms AI frame generation is coming to PlayStation, just not this year

Speaking to Digital Foundry about Project Amethyst, Mark Cerny confirmed that Sony is developing AI-powered frame generation for PlayStation, but noted that no new releases are planned this year. He also declined to reveal whether the feature will be rolled out to the PS5 and PS5 Pro or be exclusive…

OpenAI rolls out ChatGPT Library to store your personal files

OpenAI is rolling out a new feature called ‘Library’ for ChatGPT, which allows you to store your personal files or images on OpenAI’s cloud storage.

OpenAI says ChatGPT Library requires a Plus, Pro, or Business subscription. It’s rolling out to customers across the world except the European Economic Area, Switzerland, and the United Kingdom.

I refreshed the ChatGPT web app, and Library automatically showed up in the sidebar.

To my surprise, it’s actually not empty, as ChatGPT has already saved some of the files I uploaded in the last two weeks.

It turns out that’s expected behaviour. By default, ChatGPT saves your uploaded files in a dedicated, secure location so they can be referenced in future chats.

“ChatGPT automatically saves uploaded and created files, including files uploaded in chats (for example: documents, spreadsheets, presentations, and images) in a dedicated, secure location so they can be easily accessed later,” OpenAI noted in a document.

On the other hand, if you use ChatGPT to generate AI images, they will continue to appear in the Images tab.

The Library section only has files you uploaded, and you can upload files by following these steps:

  • Open the composer menu (the attachment/add button).
  • Select Add from library.
  • Choose the file you want to use.

Files remain saved to your account until you delete them manually, and deleting a chat that contains a file does not remove that file from your Library.

To delete a file:

  • Select the file in the “Library” tab.
  • Click Delete, or click the trash icon next to the file.

OpenAI will remove files from its servers within 30 days of deletion.

It’s unclear why it takes nearly a month to purge files, but it is likely due to legal reasons.

What is the release date for The Pitt season 2 episode 12 on HBO Max?

Slowly but surely, the stakes are being raised in The Pitt season 2. Last week, a woman detained by two male ICE agents was brought in to see Dr. Robbie (Noah Wyle) and Cassie (Fiona Dourif).

Blood covered her arms as her handcuffs cut into her wrists, with the distressed woman clearly too scared to say anything. When Cassie asks her if there’s anybody she’d like to call, one ICE agent immediately responds that there are “no calls allowed.”

Credo Ventures closes $88M fifth fund to remain the first cheque for CEE’s most ambitious founders

The Prague and Krakow firm, whose earliest bets include UiPath and ElevenLabs, is doubling down on pre-seed in Central and Eastern Europe and its global diaspora, with a six-partner team and a $1–5M typical cheque.

Credo Ventures has closed Credo Stage 5, an $88 million fund raised in a single closing, continuing the Prague and Krakow firm’s fifteen-year strategy of writing the first institutional cheque for founders from Central and Eastern Europe and its diaspora.

The fund is the firm’s largest to date, stepping up from the €75 million fourth fund closed in 2022.

The firm’s founding partners Ondrej Bartos and Jan Habermann launched Credo in 2010 and have backed over 100 companies across four funds.

The two headline outcomes, UiPath, the Romanian-founded RPA platform that listed on the NYSE in 2021 at a $35 billion valuation, and ElevenLabs, the AI voice company most recently valued at $11 billion, are the cases Credo leads with, and with good reason: both were pre-seed investments led or co-led by the firm before either company was widely known. Maciek Gnutek, now a partner at Credo, was an early backer of ElevenLabs. 

The fifth fund is managed by six partners. Alongside Bartos and Habermann, the team includes Gnutek, who focuses on the Polish market and diaspora connections; Jakub Krikava, whose background spans public policy and the Czech defence ministry; Max Kolowrat-Krakowsky, with international investment experience and US networks; and Matej Micek, focused on infrastructure, AI, and developer tools.

The multi-generational structure is deliberate. Credo’s stated argument is that the new generation of GPs reinforces the firm’s positioning at a moment when the CEE ecosystem is maturing but its pre-seed layer remains structurally underserved.

The firm’s thesis for Fund 5 is a sharpened version of what it has always done. Typical cheques will fall in the $1–5 million range, though Krikava told start-up.ro the firm remains flexible.

Sectoral focus is deliberately loose; Credo describes itself as founder-first rather than theme-driven. But the team is specifically attuned to technical founders with global ambitions, and it has a growing eye on AI companies after the pace of growth in its fourth-fund portfolio.

The CEE region it covers has a combined population of around 170 million and GDP of roughly $2 trillion, and Credo argues that the diaspora element, particularly founders from the region building in San Francisco and London, is an equally important sourcing channel.

The competitive framing is quietly pointed. The firm notes that fragmentation and cultural divergence across CEE countries remain meaningful barriers for outside investors, creating a structural advantage for a firm that has built networks across the region for fifteen years.

Credo-backed companies have attracted follow-on from Sequoia, Andreessen Horowitz, Accel, and Index Ventures, which the firm cites as validation of the quality of its early-stage sourcing. Around two-thirds of the capital in the fund comes from institutional investors, with no public funding involved.

The fund size increase, from €75 million to $88 million, is modest rather than dramatic, which suggests Credo is not betting on a step-change in the regional output but on the compounding value of being consistently the first name a breakout founder from the region calls.

Court Says Pentagon Can’t Pick And Choose Which News Outlets Have Access

from the five-stars-don’t-beat-one-amendment dept

This was extremely wild shit to be happening anywhere, much less in the land of the First Amendment. No sooner had Donald Trump decided it was time to rename the Department of Defense to the Department of War than the head of DoD operations decided it would be sorting news agencies by level of subservience.

Pretending this was all about national security, the Defense Department basically kicked everyone out of the Pentagon’s press office and stated that only those that chose to play by the new rules would be allowed back inside.

Booted: NBC News, the New York Times, NPR. Welcomed back into the fold: OAN, Newsmax, Breitbart. The Pentagon wanted a state-run press, but without having to do all the heavy lifting that comes with instituting a state-run press in the Land of the Free.

Somewhat surprisingly, some of those explicitly invited to partake of the new Defense Department media wing refused to participate. Fox and Newsmax decided to stay out, rather than promise they’d never publish leaked documents. Those choosing to bend the knee were those who never needed this sort of coercion in the first place: One America News (OAN), The Federalist, and far-right weirdos, the Epoch Times. In other words, MAGA-heavy breathers that have never been known for their independence, much less their journalism.

That didn’t stop Hegseth and the department he’s mismanaging from attempting to take a victory lap. And it certainly didn’t stop news agencies like the New York Times from suing over this blatant violation of the First Amendment.

It’s so obvious it only took the NYT four months to secure a win in a federal court (DC) that is positively swamped with litigation generated by Trump’s swamp. (h/t Adam Klasfield)

The decision [PDF] makes it clear in the opening paragraph how this is going to go for the administration and its extremely selective “respect” of enshrined rights and freedoms.

A primary purpose of the First Amendment is to enable the press to publish what it will and the public to read what it chooses, free of any official proscription. Those who drafted the First Amendment believed that the nation’s security requires a free press and an informed people and that such security is endangered by governmental suppression of political speech. That principle has preserved the nation’s security for almost 250 years. It must not be abandoned now.

Amen.

The court notes that in the past, there has been some friction between national security concerns and reporting by journalists. In some cases, the friction has been little more than the government chafing a bit when something has been published that it would rather have kept a secret. In other cases, leaks involving sensitive information have provoked reform efforts on both sides of the equation, seeking to balance these concerns with serving the public interest.

Up until now, any efforts to expel reporters have been limited to backroom bitching. What’s happening now, however, is unprecedented.

Historically, though, even when Department leaders disliked a journalist’s reporting, they did not consider suspending, revoking, or not renewing the journalist’s press credentials in response to that reporting. Julian Barnes, Pete Williams, and Robert Burns—reporters who have spent decades covering the Pentagon—as well as former Pentagon officials, are not aware of the Department ever suspending, revoking, or not renewing a journalist’s credentials due to concern over the safety or security of Department personnel or property or based on the content of their reporting.

This may be new, but the court isn’t willing to make it the “new normal.” It’s the decades of precedent that truly matter, not the vindictive whims of the overgrown toddlers currently holding office.

The Pentagon claims that demanding journalists agree not to “solicit,” much less print, data or information not explicitly approved for release by the Defense Department doesn’t reach any further than existing laws governing the handling of classified documents. The court disagrees, noting that the new policy allows the government to conflate the illegal solicitation of classified material with the sort of soliciting — i.e., requests for information, etc. — journalists do every day in hopes of securing something newsworthy.

On top of allowing the government to punish people for things that weren’t previously considered unlawful, the demand for obeisance wasn’t created in a vacuum. Instead, it flowed directly from this administration’s constant attacks on the press by the president and pretty much everyone in his Cabinet.

The plaintiffs are correct: “The record is replete with undisputed evidence that the Policy is viewpoint discriminatory.” That evidence tells the story of a Department whose leadership has been and continues to be openly hostile to the “mainstream media” whose reporting it views as unfavorable, but receptive to outlets that have expressed “support for the Trump administration in the past.”

The story begins prior to the adoption of the Policy, when—following extensive reporting on Secretary Hegseth’s background and qualifications during his confirmation process—Secretary Hegseth and Department officials “openly complained about reporting they perceive[d] as unfavorable to them and the Department.” Then, in the weeks and months leading up to the issuance of the Policy, Department officials repeatedly condemned certain news organizations—including The Times—for their coverage of the Department. For example, in response to reporting by The Times on Secretary Hegseth’s alleged misuse of the messaging platform Signal, Mr. Parnell posted on X to call out The Times “and all other Fake News that repeat their garbage.” Mr. Parnell decried these news organizations as “Trump-hating media” who “continue[] to be obsessed with destroying anyone committed to President Trump’s agenda.” In other social media posts leading up to the issuance of the Policy, Department officials referred to journalists from The Washington Post as “scum” and called for their “severe punishment” in response to reporting on Secretary Hegseth’s security detail.

It was never about keeping loose lips from sinking ships. It was always about cutting off access to news agencies the administration didn’t like. And once you’ve gotten rid of the critics, you’re left with the functional equivalent of a state-run media, but without the nastiness of having to disappear people into concentration camps or usher them out of their cubicles at gunpoint.

The court won’t let this stand. The new policy violates both the First Amendment and Fifth Amendment (due to the vagueness of its ban on “soliciting” sensitive information). That’s never been acceptable before in this nation. Just because there’s an aspiring tyrant leaning heavily on the Resolute Desk these days doesn’t make it any more permissible.

The Court recognizes that national security must be protected, the security of our troops must be protected, and war plans must be protected. But especially in light of the country’s recent incursion into Venezuela and its ongoing war with Iran, it is more important than ever that the public have access to information from a variety of perspectives about what its government is doing—so that the public can support government policies, if it wants to support them; protest, if it wants to protest; and decide based on full, complete, and open information who they are going to vote for in the next election. As Justice Brandeis correctly observed, “sunlight is the most powerful of all disinfectants.”

The administration will definitely appeal this decision. And it almost definitely will try to bypass the DC Appeals Court and go straight to the Supreme Court by claiming not being able to expel reporters it doesn’t like is some sort of national emergency. It will probably even claim that the fight it picked in Iran justifies the actions it took months before it decided to involve us in the nation’s latest Afghanistan/Vietnam.

But it definitely shouldn’t win. This isn’t some obscure permutation of First Amendment law. This is the government crafting a policy that allows it to decide what gets to be printed and who gets to print it. That’s never been acceptable here. And it never should be.

Filed Under: 1st amendment, defense department, dod, free speech, leaks, pete hegseth, trump administration

Companies: ny times

Congress Is Dropping The Ball With A Clean Extension Of FISA

from the mass-surveillance-winning-again dept

Two years ago, Congress passed the “Reforming Intelligence and Securing America” Act (RISAA) that included nominal reforms to Section 702 of the Foreign Intelligence Surveillance Act (FISA). The bill unfortunately included some problematic expansions of the law—but it also included a relatively big victory for civil liberties advocates: Section 702 authorities were only extended for two years, allowing Congress to continue the important work of negotiating a warrant requirement for Americans as well as some other critical reforms.

However, Congress clearly did not continue this work. In fact, it now appears that Congress is poised to consider another extension of this program without even attempting to include necessary and common sense reforms. Most notably, Congress is not considering a requirement to obtain a warrant before looking at data on U.S. persons that was indiscriminately and warrantlessly collected. House Speaker Mike Johnson confirmed that “the plan is to move a clean extension of FISA … for at least 18 months.” 

Even more disappointing, House Judiciary Chair Jim Jordan, who has previously been a champion of both the warrant requirement and closing the data broker loophole, told the press he would vote for a clean extension of FISA, claiming that RISAA included enough reforms for the moment.

It’s important to note RISAA was just a reauthorization of this mass surveillance program with a long history of abuse. Prior to the 2024 reauthorization, Section 702 was already misused to run improper queries on peaceful protesters, federal and state lawmakers, Congressional staff, thousands of campaign donors, journalists, and a judge reporting civil rights violations by local police. RISAA further expanded the government’s authority by allowing it to compel a much larger group of people and providers into assisting with this surveillance. As we said when it passed, overall, RISAA is a travesty for Americans who deserve basic constitutional rights and privacy whether they are communicating with people and services inside or outside of the US.

Section 702 should not be reauthorized without any additional safeguards or oversight. Fortunately, there are currently three reform bills for Congress to consider: SAFE, PLEWSA, and GSRA. While none of these bills are perfect, they are all significantly better than the status quo, and should be considered instead of a bill that attempts no reform at all.

Mass spying—accessing a massive amount of communications by and with Americans first and sorting out targets second and secretly—has always been a problem for our rights. It was a problem at first when President George W. Bush authorized it in secret without Congressional or court oversight. And it remained a problem even after the passage of Section 702 in 2008 created the possibility of some oversight. Congress was right that this surveillance is dangerous, and that’s why it set Section 702 up for regular reconsideration. That reconsideration has not occurred, even as the leadership of the NSA, Justice Department, and FBI has radically changed. Reform is long overdue, and now it’s urgent.

Republished from the EFF’s Deeplinks blog.

Filed Under: congress, fisa, jim jordan, mass surveillance, section 702, surveillance

What is vibe coding? AI coding with Claude, Codex, and Gemini, explained

Just over a year ago, OpenAI co-founder Andrej Karpathy coined the term “vibe coding,” and it’s exactly what it sounds like. In a post on X, he wrote that it’s where “you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”

Since then, coders from all backgrounds — and folks with zero experience — have tapped into their vibes to make apps and websites. Vibe coding platforms, powered by AI models like Claude, Codex, and Gemini, have gained traction as a way to give normies a toolset to build whatever they want without writing a single line of code.

Tech behemoths like Amazon and bustling Silicon Valley startups even have their coders using it. It’s doing the grunt work for now, but they say it’s opening up a whole new world of possibilities. One possibility: It takes their job. But it’s a trade-off that some of them are willing to make.

Clive Thompson wrote a book about this and spent time with over 70 vibe coders to understand how the technology is upending the industry and if this is the end of computer programming as we know it. On Today, Explained, co-host Sean Rameswaram dug into these questions and even vibe coded a simple website while doing it.

Below is an excerpt of the conversation, edited for length and clarity. There’s much more in the full podcast, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.

You spent a lot of time hanging out with coders who were vibe coding. And from what I could tell from reading your piece in the New York Times Magazine, they’re not vibe coding the same way that I was vibe coding.

No, they’re doing something that’s a lot more aggressive and ambitious. What they’re doing is they are using multiple agents, kind of swarms of agents at the same time. If they’re using Claude Code or Codex or Gemini, they will have it wired into their laptops. Those agents can create files, destroy files. They can take code that’s been written, they can push it live into production in the world.

And they will also work in little teams. So when they want to create a piece of software, sometimes they’ll write, like, a spec, like a page saying, “Here’s what I want to do.” Or sometimes they’ll just talk to the agent. But they’ll be kind of talking to the lead agent that’s going to be the head of the team and they’ll talk to it and say, “Here’s what I want you to do. What do you think? Give me your ideas.” And they’ll sort of go back and forth generating a plan. And when they’re confident that this top agent understands what is to be done, they’ll say, “All right. Go do it.”

And that one will spawn off several subagents. It will have one agent that’s writing code, another one that is testing the code. It’s quite wild to watch them do this. And sometimes if it does something wrong, they’ll have to yell at it. They’ll be like, “This is unacceptable.” Or they’ll say things like, you know, “This is embarrassing. You’re humiliating me.”

And I said to him, “What’s up with that? Does that language improve the sort of output of these agents?” And he was like, “I couldn’t prove it. But generally we find that when we sort of reprimand them a little bit, they become a little more reliable.”
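The plan-then-delegate workflow described above can be sketched in a few lines. This is a minimal illustration, not any real tool’s API; `call_model` is a hypothetical stand-in for an actual LLM call in Claude Code, Codex, or Gemini:

```python
# Minimal sketch of the plan-then-delegate agent pattern: a lead agent drafts a
# plan, then spawns subagents that write and test code. Everything here is
# illustrative; `call_model` is a placeholder for a real LLM API call.

def call_model(role, prompt):
    # Placeholder: a real implementation would call an LLM API here.
    return f"[{role}] response to: {prompt[:40]}..."

def lead_agent(spec):
    # The lead agent turns the spec into a plan, then delegates:
    # one subagent writes the code, another tests what was written.
    plan = call_model("planner", f"Draft an implementation plan for: {spec}")
    code = call_model("coder", f"Write code following this plan: {plan}")
    review = call_model("tester", f"Write and run tests against: {code}")
    return {"plan": plan, "code": code, "review": review}

result = lead_agent("a CLI tool that deduplicates lines in a file")
print(result["review"])
```

The real setups go further, running subagents concurrently and feeding test failures back to the coder, but the plan/code/test division of labor is the core of it.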

Can you help us understand just how much time, money, human labor is being saved by vibe coding at the level that you observed?

Yeah, it can be really significant. They’re most significant when someone is building something new from scratch. The startup founders, one- or two-person, three-person shops, they’re like, “I need to get to market fast. There might be 10 other people with this idea. I got to beat them.” It’s dizzying. Some of those people were telling me that they were working 20 times faster than they would on their own. Stuff that would normally have taken them a day now takes half an hour.

But at a very large and mature company like Amazon or Google, you’ve got billions of lines of existing code and if one little part of it stops working, that could cascade through everything. So those folks are definitely using the agents, but they are less likely to be pushing stuff rapidly out. They’re more likely to be looking carefully at it and putting it through what’s known as code review, where multiple humans look at it and go, “Oh, okay, does that work?” So for them, basically it’s like a 10 percent improvement in terms of the velocity of productivity of the engineers, how fast they go from having an idea to making it happen.

And what’s really interesting, and you may have discovered this too, in your vibe coding: a lot of engineers told me that it was even less about speed than about the ability to experiment with a bunch of ideas and see which one might really work.

In the before times, you’d have an idea for a feature. Are you really going to spend six weeks developing it just to discover that it’s not really what you thought it was going to be?

Now, well, let’s just do 10 different versions of that over the next week and let’s look at all of them and then we can pick the one we want. You might not necessarily have gone faster, but the feature that you’ve got is exactly the one you wanted and you know because you held it in your hands.

A lot of tech layoffs in the past few years, and now we’re talking about how vibe coding has dramatically overturned the norms in engineering. How are developers feeling about that?

Well, here’s the thing. So there is definitely a civil war, insofar as the majority of people that I spoke to are on one side of it — and I reached out to a very wide array; I talked to 75 developers.

And I actively wanted to talk to ones that didn’t like AI because I wanted to know their feelings. It’s a minority of people that are really hotly opposed, but they’re very, very strongly opposed. They don’t like the fact that these are trained on stolen materials. They don’t like the fact that it uses tons of energy. They don’t like the fact that they think it’s going to de-skill [people].

Why do you think they’re not the majority, when this is so clearly going to replace so many of them and bypass all of their ethical, moral concerns and objections?

I think it’s because for a lot of developers it’s just such a delightful experience in the short term of going from everything being a slow slog to it being like, “Oh my God, all these ideas and things I wanted to do, I can now try them and do them.”

Because it’s fun, basically.

It’s enormously fun. The pleasure of coding used to be that there were a lot of these little wins when you got something working. Those little wins have gone away because you’re not doing that bug fixing, you’re not doing that line writing.

So the big wins are just coming in avalanches and it’s very intoxicating. Also, there are ones who essentially don’t think that those bad labor outcomes are going to obtain. They think there’s a potential that more [jobs] will get created in areas where they previously couldn’t be.

Give it five years for us. Does this herald the end of computer programming as we know it?

No, I would not go so far as to say that it ends in five years. I do think it becomes something very different potentially. I still think — everyone told me, and I believe — that you still need some understanding of the way a code base works to do the complicated things.

Weirdly, what you might see is something a little different, which is the explosion of code in areas where there is currently none. There’s a bazillion people out there that are code-adjacent. You work in accounting, you’re a wizard at Excel, and now, if you’re given the ability to have an agent, you can just say, “Okay, could you bring more data in?”

There is going to be this really weird world where there’s a lot of customized software for an audience of two, three people. We have thought of software historically as something that only exists if 10,000 people or a million people want it because it costs a lot of money to make it.

But if you can now start making it for next to nothing, you can start using it the way that we use Post-it notes. Put it all over the place. I need to jot this idea down. I’m going to make this happen. And maybe this software solves one problem for this afternoon and we never use it again. Software starts becoming almost disposable.

Orico HS500 MetaBox Pro 5 NAS review – Good hardware, badly let down by software

The Orico HS500 MetaBox Pro is a five-bay NAS with good hardware, but unless you like taking apart the hardware to install third-party software, there are better options for Apple owners.

Orico HS500 MetaBox Pro

Network-attached storage (NAS) is about more than its hardware and the number of bays it has. Nobody buys a NAS without checking what it can do beyond plain storage, and with AI becoming a hot topic in tech, AI features are increasingly part of the onboard software.
