
Tech

What skills drive a senior software engineer’s work at Yahoo?


Graham Bartley discusses his role in the software engineering landscape and the evolution of his job over recent years.

“No two days are exactly the same, which is part of what keeps the job engaging,” Yahoo senior software engineer Graham Bartley told SiliconRepublic.com. 

“On days when I go into the Dublin office, I typically spend the first hour of the morning enjoying tea and catching up with my colleagues face-to-face,” he said.

“I always enjoy hearing their stories, learning a little about what they’re working on and spotting opportunities for collaboration.


“On remote days, I usually start by checking in on our squad’s Jira board and any overnight Slack discussions. I lead a squad called Optimus, which is Yahoo Demand Side Platform (DSP)’s core generative AI (GenAI) backend team, so there’s often a thread to catch up on from my squad members or a design question from one of the other squads we support.”

From there, Bartley’s day is a mix of writing code, reviewing pull requests, design work, managing Jira tickets and taking calls with squad members. On a coding day, he might find himself building out a new API endpoint for Yahoo’s troubleshooting agent, writing integration tests or debugging a production issue end-to-end across multiple services.

“There are usually quite a few meetings, including our daily squad standup, individual calls with each squad member, cross-team syncs with other squads, architecture meetings and office-hour sessions where engineers present designs for feedback.

“I try to protect blocks of focused time for deep work, but being a squad lead means always being available when someone needs support.”

As a senior software engineer, how does your role fit into the wider software industry?

I work on Yahoo’s DSP, which is the technology behind how advertisers plan, target and optimise digital advertising campaigns across channels like mobile, connected TV, desktop and audio. It’s a big part of what’s called programmatic advertising and it’s a space that keeps growing.

Within that, my current focus is on applying GenAI to make the platform smarter and more intuitive. I lead a squad that built Yahoo DSP’s first GenAI-powered feature, an AI troubleshooting agent that helps advertisers understand why their campaigns might be underdelivering and what they can do about it.

What skills do you use on a daily basis and how have they evolved over time?

Day to day, I write Python and Java, design APIs and distributed systems, review code and work across the full stack from database schemas through backend services to UI components, although recently my focus has been entirely on the backend as that is my squad’s responsibility. I also design technical architectures, write design documents, and present to both technical and non-technical audiences.

The breadth of what I do has grown a lot over time. Earlier in my career, I was heads-down writing code and delivering features within a couple of codebases based on the Jira tickets I was assigned. Now I spend as much time on system design, cross-team coordination and setting technical direction as I do writing code.


The biggest recent shift has been AI, both in what I build and how I build it. On the product side, I’ve gone deep on large language models, agent frameworks, model context protocol, vector databases, knowledge bases and evaluation pipelines for non-deterministic AI outputs. Two years ago, none of that was part of my daily vocabulary. 

What are the hardest parts of your working day and how do you navigate them? 

Context-switching is perhaps the biggest challenge. As a squad lead and code-owner across many repositories, I might go from a deep debugging session to a design review to a cross-team Slack discussion about authentication architecture, all within a couple of hours. Each context requires full attention and the transitions are costly. I navigate that by being disciplined about time-blocking. I protect mornings for deep technical work wherever possible and cluster meetings in the afternoons. I also use structured notes and documentation heavily. If I’m interrupted mid-task, I want to be able to pick up exactly where I left off. 

AI tools actually help here, too. Being able to drop back into a Claude Code session and say ‘here’s where I was, here’s what I was trying to do’ and have it reconstruct the context is genuinely useful.

Working remotely from Ireland with teams distributed globally can present some challenges, such as working after-hours to interface with colleagues abroad. Managing this is definitely a skill in itself and can be difficult because it can feel very productive to work late when US colleagues are available, for example.


I try to compensate for late hours worked to keep a good work-life balance, but it’s definitely challenging to balance the feeling of productivity with the risk of burnout.

Has your role changed as the sector has grown and evolved?

Significantly. When I started, engineering work in the DSP was largely about building features in reliable CRUD applications and optimising serving workflows. The biggest evolution since then has been the arrival of GenAI. In late 2024, I was part of a small working group exploring how GenAI could be applied to our product. Within months, that exploratory work became a full strategic initiative. I went from leading a commerce media team to founding and leading Yahoo DSP’s first GenAI squad.

The way we build software is changing too. A year ago, our team wrote code the traditional way. Now we’re using AI coding assistants daily. It’s still early days and there are real learning curves. We’ve found that the AI works brilliantly for well-defined, bounded tasks but struggles with cross-cutting concerns that span multiple services and large contexts. Knowing when to lean on it and when to step back and think for yourself is a skill in itself.

The organisational model has changed too. We’ve moved from traditional team structures to a ‘squads, tribes, chapters, and guilds’ model, which puts more emphasis on cross-functional collaboration and autonomous delivery. My role has shifted from primarily writing code to being a technical leader who sets direction, defines standards and enables other teams to build on the foundations we create together.

Is there anything you know now about working in software that you wish you knew starting out?

I wish I’d known more about work-life balance and recognising the signs of burnout. Early in my career, I had a tendency to just push through and it took me a while to realise that’s not sustainable. Now I’m much more intentional about managing my time, protecting my energy and knowing when to step away from a problem. You actually come back sharper when you do.

I also wish I’d challenged assumptions more. When you’re starting out, it’s natural to take requirements and existing designs at face value. But as I’ve become more senior and taken on more design work, I’ve found that some of the biggest improvements come from questioning why something is the way it is. The best solutions often come from pushing back on the premise, not just optimising within the constraints you’re given.

What advice do you have for others looking to follow in your professional footsteps?

Don’t be afraid to say ‘I don’t know’ and then go learn. This is crucial. The best engineers I’ve worked with aren’t the ones who know everything. They’re the ones who learn quickly and aren’t embarrassed to be a beginner at something new. Being a mentor also means being a mentee and I’ve learned a lot from the engineers I’ve mentored. A year ago, I had never built an AI agent. Now I lead a team that builds them.




Electrical Current Might Be the Key To a Better Cup of Coffee


An anonymous reader quotes a report from Ars Technica: University of Oregon chemist Christopher Hendon loves his coffee — so much so that studying all the factors that go into creating the perfect cuppa constitutes a significant area of research for him. His latest project: discovering a novel means of measuring the flavor profile of coffee simply by sending an electrical current through a sample beverage. The results appear in a new paper published in the journal Nature Communications.

[…] The coffee industry typically uses a method for measuring the refractive index of coffee — i.e., how light bends as it travels through the liquid — to determine strength, but it doesn’t capture the contribution of roast color to the overall flavor profile. So for this latest study, Hendon decided to focus on roast color and beverage strength, the two variables most likely to affect the sensory profile of the final cuppa. His solution turned out to be quite simple: he repurposed an electrochemical tool called a potentiostat, typically used to test battery and fuel cell performance, to measure how electricity interacted with the liquid, and found that this provided a better measurement of the flavor profile. He even tested it on four different samples of coffee beans and successfully identified the distinctive signature of a batch that had failed the roaster’s quality-control process.

Granted, one’s taste in coffee is fairly subjective, so Hendon’s goal was not to achieve a “perfect” cup but to give baristas a simple tool to consistently reproduce flavor profiles more tailored to a given customer’s taste. “It’s an objective way to make a statement about what people like in a cup of coffee,” said Hendon. “The reason you have an enjoyable cup of coffee is almost certainly that you have selected a coffee of a particular roast color and extracted it to a desired strength. Until now, we haven’t been able to separate those variables. Now we can diagnose what gives rise to that delicious cup.” Outside of his latest electrical-current experiment, Christopher Hendon’s coffee research has shown that espresso can be made more consistently by modeling extraction yield — how much coffee dissolves into the final drink — and controlling water flow and pressure.

He also found that static electricity from grinding causes fine coffee particles to clump, which disrupts brewing. The solution: adding a small squirt of water to beans before grinding (known as the Ross droplet technique) to reduce that static, cut clumping and waste, and lead to a stronger, more consistent espresso.



Albert King and Eddie Kirkland Vinyl and Digital Reissues Announced by Craft Recordings and Bluesville Records: Try Not to Miss Them Again


Blues doesn’t always get the same glossy reissue treatment or attention as jazz, but that says more about the market than the music. The truth is, blues is just as essential to the American story, and for a lot of listeners, it hits harder and feels more direct. You don’t have to decode it. You feel it. Audiences just proved that again with Sinners. They showed up for the rawness, the history, and the emotional truth that blues has always carried without apology.

And if we’re being honest, some of us connect to that more than jazz, which can feel repetitive and a little too polite when what you really want is some level of truth as you take another sip of your drink and second guess not calling her back or remember exactly why you loved or hated her in the first place.

Now Craft Recordings and its Bluesville Records imprint are doubling down on that legacy with two reissues that don’t need a sales pitch, just a turntable. Albert King’s I’ll Play the Blues for You from 1972 and Eddie Kirkland’s It’s the Blues Man! from 1962 arrive June 12, 2026 with AAA remastering from the original analog tapes by Matthew Lutthans at The Mastering Lab. Both titles are pressed on 180 gram vinyl at Quality Record Pressings in partnership with Acoustic Sounds, housed in tip-on jackets, and include obi strips with new notes by Scott Billington.

These are not museum pieces or background music for suburban wine tastings with a fleet of Range Rover-driving Karens pretending they “get the blues.” Their idea of the blues is the Starbucks app going down mid-order, the Ozempic shot disappearing between the seats, or someone cutting them off at H-E-B for the last bag of jerky meant for a hypoallergenic dog.


Albert King’s title track still cuts deep, and Eddie Kirkland’s “Saturday Night Stomp,” featuring King Curtis, has more life than most modern recordings that spend thousands trying to fake it. The reissues will also be available across digital platforms in hi-res and standard formats, with both key tracks already streaming.

Albert King’s I’ll Play the Blues for You Returns: Stax Era Fire, Memphis Muscle, No Apologies


Often lumped in with the other “Kings of the Blues” like Freddie King and B.B. King, Albert King didn’t just belong in that conversation, he helped define it. Born in Mississippi in 1923, self-taught, and eventually landing in Memphis after stops in Gary and St. Louis, King built a sound that didn’t ask for permission. His Gibson Flying V didn’t whisper, it testified. The voice was just as unmistakable. Deep, worn, and completely uninterested in sounding pretty.

Signing with Stax Records changed everything. Backed by one of the tightest in-house crews in the business, King hit a run that most artists never get close to. “Laundromat Blues,” “Crosscut Saw,” and “Born Under a Bad Sign” were not just hits, they were statements. Blues that moved, grooves that hit, and songs that actually meant something.

By the time I’ll Play the Blues for You landed in 1972, King wasn’t chasing relevance. He already had it. Produced by Allen Jones, the album leans into a funkier, more modern feel without losing the grit. The Bar-Kays and The Movement handle the rhythm section with zero wasted motion, while The Memphis Horns bring the kind of punch that makes everything feel bigger without turning it into a circus.

The tracks stretch out because they need to. “I’ll Play the Blues for You” and “Breaking Up Somebody’s Home” both push past seven minutes, not for show, but because that’s how long it takes to say something real. The latter cracked the Hot 100 and landed in the R&B Top 40, but chart positions almost feel beside the point here. In 2017, the title track was inducted into the Blues Hall of Fame, which is nice, but anyone who’s actually heard it already knew.


Where to pre-order: $37 at Amazon (Available June 12, 2026)


Eddie Kirkland’s It’s the Blues Man! Returns: Raw, Road-Tested, and Cut at Van Gelder’s Peak


Eddie Kirkland didn’t come up through conservatories or polite circles. Born in Jamaica in 1923 and raised in Alabama, he learned everything the hard way and then took it on the road. A lot. As a teenager, he headed to Detroit and spent more than a decade playing alongside John Lee Hooker, which is about as real an education as it gets. In the early ’60s, he linked up with Otis Redding, serving as guitarist and bandleader. Not exactly a side gig you stumble into.


Somewhere between all that mileage, Kirkland managed to carve out his own lane. In 1962, he dropped It’s the Blues Man! on Tru Sound, a short-lived offshoot of Prestige Records. It didn’t come with hype or a marketing machine. It came with intent.

The session was engineered by Rudy Van Gelder, which tells you everything about the sound before the needle even drops. It’s immediate, punchy, and alive. Kirkland is backed by King Curtis and his band, and Curtis keeps things locked in with that unmistakable blend of R&B swing and street-level grit. No excess, no filler.


Kirkland moves easily between styles because he actually lived them. “Train Done Gone” hits with purpose, “Man of Stone” locks you into its groove, and yeah, John Mayall covered it later on Crusade, which should tell you something. Then he pivots into slower cuts like “I’m Gonna Forget You” and “Have Mercy on Me” and reminds you that restraint can hit just as hard when it’s done right.

Where to pre-order: $37 at Amazon (Available June 12, 2026)



App Store policy must change as stay reversed


Apple will have to comply with previous mandates as it takes its fight with Epic Games back to the Supreme Court, so expect App Store changes soon.

The Apple vs Epic saga is years long and could easily fill a book at this point, but it hasn’t ended yet. The latest update comes after Apple won a stay against enforcing App Store changes as it appealed to the Supreme Court.

That stay was short-lived: Epic immediately appealed it and, as 9to5Mac reported, won. The US Ninth Circuit Court has reversed the stay it placed on enforcing a mandate that would require Apple to change how it charges developers for external purchases.

Basically, Apple won on every count in the Epic lawsuit except one. It was ordered to end its anti-steering rules and allow external purchases.


Apple complied, but its new setup for external commissions was constructed in a way that wouldn’t make it worthwhile for developers to adopt. Apple was found in contempt of the order, and an injunction was filed to force Apple to allow external purchases with zero commission.

The injunction was appealed again and again, and eventually an agreement was reached that Apple should be allowed to charge a commission, just not 27%. A later ruling said that Apple and Epic must decide on what would be acceptable, but that hasn’t happened yet.

Apple was taking the case to the Supreme Court again and requested that the negotiations over a new fee and more App Store changes be stayed. It argued that there would be no need for lower court involvement until the Supreme Court appeal was done, and that stay was granted.

Epic appealed that stay order, and that’s where we are today. Even as Apple appeals to the Supreme Court, it will have to go back to the lower courts and work out the new commission structure.


Epic Games CEO Tim Sweeney took to social media to celebrate.

That’s quite the fall from wanting free and open access to the App Store user base. Even as Epic “wins,” Apple still gets to collect its dues.

Apple has the power to end this

Given that the case was refused at the Supreme Court already, it doesn’t seem like things will go Apple’s way. The company may not be able to charge as much as it wants, but at least the courts have agreed it is owed something.


All of these regulatory cases around the world can’t be avoided when you’re as big as Apple. However, I fully believe that Apple could reduce the pain if it wanted to.

It is well within Apple’s power and resources to come up with a new App Store commission system that would still earn it plenty of money, that governments would approve of, and only a few developers might sneer at. Epic will never be satisfied short of running Apple’s App Store itself for all of the profit, but others would be happy with more revenue.

This ongoing epic began in 2020 with Epic purposefully violating App Store policy so it could goad Apple into a lawsuit. The entire campaign was pitched as Epic taking on big bad Apple and even came with a 1984-style advert.

Like with Spotify and other giants that take on Apple, it’s about maximizing their bottom line while leeching off of Apple’s user base. Users might benefit in the long run, but Epic has paid more than a billion dollars for what could be considered rather small victories.



Truveta launches AI research tool to get quick insights from big database of U.S. clinical data


Screenshots of Truveta Intelligence, showing the query interface and sample results analyzing GLP-1 drug uptake trends. (Truveta Images)

Truveta wants to give healthcare researchers something that has long eluded them: the ability to ask a question about real-world patient data and get an answer in minutes, not months.

The Bellevue, Wash.-based health data company on Tuesday announced Truveta Intelligence, a new AI-powered tool that lets researchers and healthcare leaders ask natural language questions about patient populations, treatments, and outcomes.

The answers from the tool are drawn from continuously updated, de-identified clinical records covering more than 130 million patients. Truveta has access to the data through its network of 30 health system partners, representing more than 18% of daily clinical care in the U.S.

The key distinction from other AI tools in healthcare, the company says, is that Truveta Intelligence isn’t summarizing published research. Instead, it’s querying live clinical data.

The tool could give drug makers faster insight into how new therapies are performing, help health systems spot variation in patient outcomes, and let public health researchers detect emerging trends earlier based on information that is much closer to real-time.

Truveta was founded in 2020 with roots at Providence, the large health system based in Washington state. Terry Myerson, a former Microsoft executive vice president who led the company’s Windows and Devices Group, has led the company as CEO since its inception.


The company raised $320 million in January 2025 in a round involving Regeneron, Illumina, and 17 health systems, pushing its valuation above $1 billion. The company, ranked No. 3 on the GeekWire 200, has raised $515 million to date and has more than 400 employees.

Truveta Intelligence is available now to existing Truveta Data subscribers, which include life sciences companies and health systems.



At his OpenAI trial, Musk relitigates an old friendship


One of the most interesting parts of Elon Musk’s testimony Tuesday in his lawsuit against OpenAI wasn’t the charity he claims was stolen from him (we all knew that was coming). It was about an old friend.

Musk testified that one of his core motivations for co-founding OpenAI was a falling-out with Google’s Larry Page over AI safety — specifically, a conversation in which Musk raised the prospect of AI wiping out humanity and Page shrugged it off as “fine,” so long as AI itself survived. Page called Musk a “speciesist” for being “pro human.” Musk called the attitude “insane.”

That’s mostly notable given how close the two once were. Fortune included them on its 2016 list of business leaders who are secretly best friends; Musk was so comfortable with Page that he regularly crashed at his Palo Alto home. Page once told Charlie Rose that he’d rather give his money to Musk than to charity.

The friendship didn’t survive OpenAI. When Musk recruited Google AI star Ilya Sutskever to help launch the company in 2015, Page felt personally betrayed and cut off contact.


It’s a story Musk has told before — including to author Walter Isaacson for his bestselling biography of Musk — but Tuesday was the first time he said it under oath. Page hasn’t commented, and it’s worth remembering everything that Musk said was in service of a lawsuit. Still, as recently as 2023 he told tech podcaster Lex Fridman he wanted to patch things up: “We were friends for a very long time.”



South Africa withdraws national AI policy after at least 6 of 67 academic citations found to be AI-generated hallucinations


TL;DR

South Africa’s Communications Minister Solly Malatsi withdrew the country’s draft national AI policy after News24 discovered that at least 6 of its 67 academic citations were AI-generated hallucinations: fake articles attributed to real journals. The policy had been approved by Cabinet in March and published for public comment. Malatsi called it an “unacceptable lapse” and promised consequence management. The scandal leaves South Africa without an AI governance framework and raises questions about institutional capacity to regulate the technology.

South Africa’s Department of Communications and Digital Technologies spent months drafting a national artificial intelligence policy. It proposed a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson, a National AI Safety Institute, and an AI Insurance Superfund. It outlined five pillars of AI governance: skills capacity, responsible governance, ethical and inclusive AI, cultural preservation, and human-centred deployment. It adopted a risk-based approach modelled on the EU AI Act. Cabinet approved the draft on 25 March. The Government Gazette published it on 10 April for public comment. And then News24, the South African news outlet, checked the bibliography and discovered that at least six of the document’s 67 academic citations did not exist. The journals were real. The articles were not. The authors credited with foundational research on AI governance had never written the papers attributed to them. Editors at the South African Journal of Philosophy, AI & Society, and the Journal of Ethics and Social Philosophy independently confirmed to News24 that the cited articles had never been published in their pages. The most plausible explanation, according to Communications Minister Solly Malatsi, is that the drafters used a generative AI tool and published the output without verifying a single reference. A government policy designed to govern artificial intelligence was undermined by the artificial intelligence it failed to govern.


The withdrawal

Malatsi announced the withdrawal on 27 April, calling the fictitious citations an “unacceptable lapse” that “compromised the integrity and credibility of the draft policy.” He said consequence management would follow for those responsible for drafting and quality assurance. “This failure is not a mere technical issue,” the minister said. The parliamentary portfolio committee chair offered a more concise assessment, suggesting the department “skip using ChatGPT this time” when redrafting. The document will be revised before being reissued for public comment, but no timeline has been given. South Africa is now without a formal AI governance framework at a time when governments worldwide are grappling with how to regulate AI, and the country’s credibility as a serious participant in that conversation has taken a blow that will outlast the policy revision.

The scandal is not simply that fake citations appeared in a government document. It is that they appeared in a government document about artificial intelligence, written by the department responsible for the country’s digital technology strategy, during the exact period when the world’s most consequential AI governance debates are being fought in Brussels, Washington, and Beijing. The EU AI Act, the most ambitious regulatory framework for artificial intelligence, is grappling with delayed standards and an implementation timeline that has been pushed back to 2027 for high-risk systems. The United States has no federal AI legislation and is watching states legislate independently while the White House attempts to preempt their efforts. China has enacted AI regulations but applies them selectively. Into this landscape, South Africa offered a policy that could not survive a bibliography check.

The pattern

South Africa’s hallucinated citations are an extreme case of a problem that is quietly spreading across institutions that use generative AI for research and drafting. A study published in Nature found that 2.6 per cent of academic papers published in 2025 contained at least one potentially hallucinated citation, up from 0.3 per cent in 2024. If that rate holds across the roughly seven million scholarly publications from 2025, more than 110,000 papers contain invalid references. GPTZero, a Canadian detection startup, analysed more than 4,000 research papers accepted at NeurIPS 2025, one of the world’s premier AI conferences, and found over 100 hallucinated citations across at least 53 papers. In a separate multi-model study, only 26.5 per cent of AI-generated bibliographic references were entirely correct. The problem is structural: large language models generate citations through probabilistic token prediction rather than information retrieval. They do not look up papers. They predict what a citation should look like based on the patterns in their training data, and when the prediction is confident enough, they produce a reference that reads as authoritative but points to nothing.

The South African case is distinctive not because the technology hallucinated, which is a well-documented and inherent limitation of generative AI, but because the hallucinations were published in an official government policy document that passed through Cabinet approval without anyone verifying the references. The drafting process included civil servants, subject matter consultations, and ministerial review. Dumisani Sondlo, the department’s AI policy lead, had previously described the policy development as “an act of acknowledging that we don’t know enough.” That acknowledgment did not extend to acknowledging that the tool being used to help draft the policy was itself unreliable. The six fake citations that News24 identified are the ones that were caught. Whether additional citations in the document’s 67 references are genuine has not been publicly confirmed. The entire bibliography is now under suspicion, and by extension, so is the analytical foundation on which the policy’s proposals were built.

The implications

The immediate consequence is that South Africa’s AI governance timeline has been reset. The draft policy, which was intended to position the country as a leader in responsible AI adoption on the African continent, will need to be redrafted, reconsulted, and resubmitted. The institutional credibility damage extends beyond the policy itself. If the department responsible for governing AI cannot verify whether the sources in its own policy document are real, the question becomes whether it has the capacity to evaluate the AI systems it proposes to regulate. The policy envisioned a multi-regulator model in which AI governance and human oversight would be embedded within existing supervisory frameworks rather than centralised under a single authority. That model requires each participating regulator to have sufficient technical understanding to assess AI systems in their sector. The hallucination scandal does not inspire confidence that the coordinating department meets that threshold.


The broader lesson is not that governments should avoid using AI in policy development. It is that the failure mode of AI is not dramatic. It does not crash. It does not display an error message. It produces fluent, formatted, confident text that looks exactly like the output of a competent researcher. The fake citations in South Africa’s AI policy were not obviously wrong. They were plausible. They cited real journals. They attributed work to real people. They followed the formatting conventions of academic references. The only way to catch them was to check whether each one actually existed, a task that requires exactly the kind of methodical human verification that AI is supposed to make unnecessary. Growing public distrust of AI is not irrational. It is a response to a technology that is simultaneously powerful enough to draft a national policy and unreliable enough to fabricate the evidence that policy rests on. South Africa’s embarrassment is singular, but the underlying failure, using AI without the capacity to verify its output, is not. It is happening in universities, law firms, newsrooms, and government departments around the world. South Africa is simply the first government to publish the receipts. The challenges of implementing AI regulation are real, but they begin with a prerequisite that South Africa’s department did not meet: understanding what the technology does before trying to write the rules for it.
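That kind of methodical check is mechanical enough to be partly automated. As an illustration only (not how News24 or the department worked), here is a minimal Python sketch that flags suspect references by asking the public Crossref REST API whether a closely matching title is actually indexed; the helper names (`crossref_lookup`, `looks_real`) and the 0.9 similarity threshold are our own assumptions, and Crossref only covers DOI-registered works, so a "not found" result still means "verify by hand," not "fake."

```python
import json
import urllib.parse
import urllib.request
from difflib import SequenceMatcher

def crossref_lookup(title):
    """Ask Crossref for the best bibliographic match to a cited title.
    Returns the top result's title, or None if nothing comes back."""
    url = ("https://api.crossref.org/works?rows=1&query.bibliographic="
           + urllib.parse.quote(title))
    with urllib.request.urlopen(url, timeout=10) as resp:
        items = json.load(resp)["message"]["items"]
    return items[0]["title"][0] if items and items[0].get("title") else None

def looks_real(cited_title, lookup=crossref_lookup, threshold=0.9):
    """Treat a citation as plausible only if an indexed record's title
    closely matches the cited one; anything else goes to a human."""
    found = lookup(cited_title)
    if found is None:
        return False
    ratio = SequenceMatcher(None, cited_title.lower(), found.lower()).ratio()
    return ratio >= threshold
```

A script like this reduces a 67-entry bibliography to a short list of entries needing manual follow-up; the `lookup` parameter is injected so the matching logic can be tested without network access.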


Tech

The Secretary Of Health & Human Services Doesn’t Believe In The Foundation Of Modern Medicine

from the medicine-101 dept

We discussed RFK Jr.’s recent appearance before Congress, where he bravely declared that the current measles outbreak in America has absolutely nothing to do with him, despite that definitely not being true. But, unsurprisingly, that wasn’t the only craziness that Kennedy put on display in the hearing.

The Secretary of HHS doesn’t believe in the foundational theory that powers modern medicine.

Read that again. It’s an insane sentence, the sort that should be fiction. What we’re talking about here is the germ theory of disease, which is the accepted science on how infectious diseases are caused and spread by pathogens. We mentioned in a post last year, which was chiefly about how Kennedy decided to take his grandkids swimming in a creek filled with poop, that he had also written in a 2021 book that he doesn’t believe in germ theory, and instead believes in what he incorrectly labels “miasma theory”.

It’s one thing to write something in a book while we were mired in a global pandemic. But Kennedy both admitted that he doesn’t believe in germ theory and defended that belief before Congress.

In Wednesday’s hearing, Sanders called attention to Kennedy’s denial of germ theory and raised one of Kennedy’s shakiest arguments for debunking. In his opening statement, Sanders warned Kennedy that he wanted to question the “things that you have written which call in doubt the very existence of the germ theory.”

Sanders pointed out a 2024 study led by the World Health Organization and published in The Lancet that found that since 1974, vaccines had saved an estimated 154 million lives, including 146 million children under the age of 5—or, as WHO put it, vaccines saved the equivalent of six lives every minute of every year over the past 50 years.

“My question is a simple one,” Sanders said, “do you still believe that one of the central tenets of the germ theory, that vaccines sharply reduce infant mortality, is quote-unquote simply untrue?”

Kennedy first did what he always does: try to tell you that the experts and studies have no idea what they’re talking about, or are hopelessly corrupted tools of industry. He does this so often that you can set your watch by it. If a study agrees with him, it’s a good study. If it doesn’t, it’s bad. He’s more like Trump than any of us realized.

Then he launched into his own justification, offering up a 2000 study that he claimed demonstrated that improved nutrition and sanitation, and explicitly not medicines like vaccines, were what reduced childhood deaths over the course of the 20th century. Unfortunately for Kennedy, Bill Cassidy piped up with a, oh, let’s call it a minor correction.

That study, led by Guyer, notes that sanitation, among other public health strategies introduced in the first half of the 20th century, drove major declines in mortality. But, as Cassidy noted during the hearing, that’s not all the study found. Cassidy looked up the studies Kennedy raised and read through them during the hearing.

The Guyer study highlighted that vaccination did not become widely used until after the middle of the century, and thus cannot account for mortality declines before that point. But it concluded, as Cassidy read aloud at the hearing:

The reductions in vaccine-preventable diseases, however, are impressive. In the early 1920s, diphtheria accounted for about 175,000 cases annually and pertussis for nearly 150,000 cases; measles accounted for about half a million annual cases before the introduction of vaccine in the 1960s. Deaths from these diseases have been virtually eliminated, as have deaths from Haemophilus influenzae, tetanus, and poliomyelitis.

Kennedy tried again with another study, but Cassidy pointed out that it had the same issue as the first: it measured data from the beginning of the century to the early 1970s. Many of the vaccines Kennedy rails against had barely been released during the period the study analyzed, or in many cases hadn’t come out at all. Speaking specifically about the measles vaccine, released in 1963, Cassidy said:

“There’s 3.5 million cases of measles per year before the vaccine came along and about 550 deaths, and then the vaccine took those to less than 100 [cases] and like zero deaths,” Cassidy said. “So a tremendous impact of the vaccination.”
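Taking Cassidy’s numbers at face value, and treating “less than 100” and “like zero” as 100 and 0 for the sake of the calculation, the scale of the drop is simple arithmetic:

```python
# Cassidy's figures for US measles, before vs. after the 1963 vaccine.
cases_before, cases_after = 3_500_000, 100   # annual cases ("less than 100" after)
deaths_before, deaths_after = 550, 0         # annual deaths ("like zero" after)

case_reduction = 1 - cases_after / cases_before
death_reduction = 1 - deaths_after / deaths_before

print(f"case reduction:  {case_reduction:.4%}")   # → 99.9971%
print(f"death reduction: {death_reduction:.0%}")  # → 100%
```

Even granting generous rounding on the post-vaccine numbers, the reduction in cases is better than 99.99 percent, which is what “a tremendous impact of the vaccination” looks like in figures.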

The problem with Cassidy is that he’s acting like he’s trying to convince Kennedy to change his mind on this. He’s not going to. Not ever. He’s made that clear.

So impeach him or convince Trump to make Kennedy his next cabinet firing. That’s all that’s left to do. Because we certainly cannot continue having someone run HHS who doesn’t believe in the very baseline theory for medicine.

Filed Under: bill cassidy, germ theory, health & human services, rfk jr.


Tech

Can You 3D Print A Pinball Machine That’s Fun To Play?

It seems fair to say that pinball machines are among the most universally loved gaming systems around, yet full-sized ones are both very expensive and very large, while even good-quality table-sized ones tend to be on the pricey side. That raises the question of whether a fully 3D-printed pinball machine could actually be fun to play, rather than just feeling like a cheap toy. A recent video by [Steven] of [3D Printer Academy] on YouTube makes a compelling argument that it might be worth considering.

In addition to being fully modular and customizable, the most compelling element is probably that the design supports two- and four-player multiplayer. In this mode, the metal balls leave at the rear of one machine and enter the playing field of another player’s machine, which can presumably get pretty chaotic.

Unfortunately this is part of a Kickstarter campaign, so you’ll have to either shell out some cash for the print files or DIY your own version. We’d also be remiss not to address the durability concerns of a 100% plastic pinball machine like this, plus the lack of serious heft to withstand more enthusiastic playing styles.

If you are more into traditional DIY pinball machines, we have covered these as well, along with small screen-based machines, and their miniature brethren for when space is really at a premium.


Tech

Musk Testifies OpenAI Was Created As Nonprofit To Counter Google

Elon Musk testified on day two of his trial against OpenAI, saying he helped create the company as a nonprofit counterweight to Google and would not have backed it if the goal had been private profit. CNBC reports: Musk on Tuesday was the first witness called to testify in the trial. He spoke about his upbringing, his many companies, his role in founding OpenAI and his understanding of its structure. Musk said in his testimony that he was not opposed to the creation of a small for-profit subsidiary, “as long as the tail didn’t wag the dog.” Musk said he was motivated to start OpenAI to serve as a counterweight to Google. He got the idea after an argument he had with Google co-founder Larry Page, who called Musk a “speciesist for being pro-human,” he testified. “I could have started it as a for profit and I chose not to,” Musk said on the stand.

Earlier, attorneys for Musk and OpenAI presented their opening arguments to the jury. Musk’s lead trial lawyer, Steven Molo, delivered the opening statement for the Tesla and SpaceX CEO. OpenAI lawyer William Savitt gave the opening statement for the AI company, Altman and Brockman. OpenAI has characterized Musk’s lawsuit as a baseless “harassment campaign.” The company said Monday in a post on X that it “can’t wait to make our case in court where both the truth and the law are on our side.”

During his testimony on Tuesday, Musk added that he was concerned Page was not taking AI safety seriously, which is why he wanted there to be a nonprofit, open-source alternative to Google. Further reading: Elon Musk and OpenAI CEO Sam Altman Head To Court


Tech

Apple is finally building the AI Photo editor that Google and Samsung have had for years

Google’s Photos app has, for years, been doing things that Apple’s Photos app couldn’t, and the iPhone-maker has noticed. Bloomberg’s Mark Gurman, in his latest report, claims that iOS 27, iPadOS 27, and macOS 27 will come with a dedicated “Apple Intelligence Tools” section inside the Photos editing interface.

The “Apple Intelligence Tools” section will include three new AI-powered photo-editing features: Extend, Enhance, and Reframe. Before we get into what the features actually do, note that all of them will reportedly run entirely on-device and, in typical Apple fashion, complete their edits in seconds.

What will the new Apple Intelligence photo editing tools do?

Extend, as the name suggests, extends a picture’s boundaries by generating new imagery and seamlessly stitching it to the existing shot. You should be able to use the feature to add some surroundings to close-up shots or add negative space to either side of a subject.

Enhance, on the other hand, works as a one-tap enhancement button that immediately adjusts the color, lighting, and overall image quality, without making you go through different editing options and fiddle with various sliders.

Reframe is designed primarily for spatial photos captured for the Vision Pro headset. It lets users shift the perspective of a 3D image after it’s already been taken, allowing you to move from a front-facing to a side-facing view. 

Is Apple actually ready to release all three features?

Not at the moment, no. Per Gurman, both the Extend and Reframe features are producing inconsistent results in internal testing. If the underlying AI models don’t adapt, or the results don’t improve significantly, before the September launch event, Apple might delay the features or scale them back.

While I’m a big fan of the Apple Photos app myself, it currently offers only one AI-based editing feature, Clean Up, and it doesn’t work as well as comparable features on other smartphones like the Galaxy S or Pixel flagship series.

I remember when Google released its Magic Editor in 2023, and Samsung’s Galaxy AI followed quite aggressively in the years since. In response, the best Apple could come up with was Clean Up. In my opinion, Apple genuinely needs the Extend, Enhance, and Reframe features to work, and work in time for a showcase at WWDC 2026 and a public release in September.


Copyright © 2025