
Tech

How to Set Up Google Family Link on Android: Step-by-Step Guide (2026)


Google Family Link is a free parental control tool built directly into Android that lets you manage your child’s device from your own phone. It covers app approvals, screen time limits, content filters, location sharing, and more — all without installing any third-party software. This guide walks you through every step: from pre-setup requirements to configuring the controls that actually matter after you are linked.

Quick take: Setup takes about ten minutes if both devices are nearby and the child’s Google Account is already created. The most common cause of failure is having multiple Google accounts on the child’s device — Family Link requires the child’s supervised account to be the only account on their phone during setup.

Before you start: what you need

Getting the right pieces in place before you open the app saves time and avoids the most common setup errors.

Device requirements

According to Google’s official Family Link device compatibility page, your child’s Android device needs to run Android 7.0 (Nougat) or higher for full functionality. Devices running Android 5.0 or 6.0 may support some settings but are not fully reliable. Your own device — the parent phone — needs Android 7.0 or higher, or iOS 16 or higher if you use an iPhone.


To check your child’s Android version: open Settings → scroll to the bottom → tap About phone → look for Android version.
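For readers comfortable with a little code, the 7.0 minimum is just a dotted-version comparison. This is an illustrative sketch, not anything Family Link runs — the version string is simply what you read off the About phone screen:

```python
# Illustrative sketch: does an Android version string meet the Family
# Link minimum of 7.0? The version string is whatever appears under
# Settings → About phone → Android version.

def meets_minimum(version: str, minimum: str = "7.0") -> bool:
    """Compare dotted version strings part by part, numerically."""
    parse = lambda v: [int(part) for part in v.split(".")]
    return parse(version) >= parse(minimum)

print(meets_minimum("7.1.2"))  # True: fully supported
print(meets_minimum("6.0"))    # False: limited support at best
```

A plain string comparison would get this wrong ("10.0" sorts before "7.0" alphabetically), which is why the sketch compares numeric parts.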

Account requirements

  • You need a Google Account (standard Gmail is fine).
  • Your child needs a Google Account. If they are under 13, you will create one through the Family Link setup flow — you cannot use a standard account for children under 13 without parental supervision.
  • The child’s device must have only one Google account signed in at setup time. If there are multiple accounts, Family Link will remove them during the process — a warning you want to see before, not during, setup.

Apps to download

  • On your phone: Google Family Link (the parent version)
  • On your child’s phone: Google Family Link for Children & Teens (a separate app)

Both are free on the Google Play Store. Make sure you download the correct version for each device — they are listed separately and serve different functions.

Step 1: Create your child’s Google Account (if they do not have one)

If your child already has a supervised Google Account, skip to Step 2.

Open the Family Link app on your phone and tap Get started. The app will ask whether your child has a Google Account. Select No. You will then be guided through creating a supervised account, which requires:

  • Your child’s first name (a last name is optional)
  • Their date of birth — this determines the type of account created and the applicable age rules in your country
  • A Gmail address for the child (the app will suggest available options)
  • A password for the child’s account
  • Your own Google Account password to verify parental consent

Once the account is created, Google will ask you to review the privacy settings and data collection preferences for the account. Read through these carefully — this is where you control whether Google can use personalised ads, activity tracking, and similar settings on your child’s profile.

Step 2: Link the two devices

With both apps open and both devices nearby, the Family Link app on your phone will generate a short linking code. Here is the exact sequence:

  1. On your phone (parent device): open the Family Link app, sign in with your Google Account, select your child’s account, and tap through until you see the linking code screen. Keep this screen visible.
  2. On your child’s phone (child device): open the Family Link for Children & Teens app, sign in with the child’s Google Account, and enter the code shown on your screen when prompted.
  3. Back on your phone: the app will confirm that the devices are linked. Tap Next to proceed to the permissions setup screen.

If the code expires before you enter it, tap Generate new code on the parent device. Codes are valid for a short window.

Step 3: Grant permissions on the child’s device

After the link code is accepted, the child’s device will display a series of permission screens. Keep tapping Allow or Next through all of them — these permissions are what allow Family Link to enforce screen time limits, manage apps, and report activity. Without them, most controls will not work.

You will also be prompted to name the child’s device (useful if you have more than one child or device) and to choose which apps the child can access immediately. You can approve or restrict app access from this screen, but you can also do it later from the Family Link dashboard on your own phone.

Step 4: Configure the controls that matter most

Once linked, most parents open the dashboard and are not sure where to start. Here is a practical order that covers the highest-value settings first.

Screen time limits and Downtime

Go to Screen time in the Family Link app on your phone (this tab was redesigned in Google’s February 2025 Family Link update). You can set a total daily screen time limit, schedule Downtime (when the device locks automatically — useful for bedtime and homework), and view how much time your child spends on each app. These are the controls most families configure first.


School Time

School Time is a dedicated block mode that limits device use to approved apps only during school hours. It was previously available on smartwatches and became available on Android phones and tablets in the same February 2025 update. Set your child’s school schedule once, and the device will automatically restrict access during those hours without you needing to manage it manually each day.

App approvals

Under Controls, you can require your approval for every app your child attempts to download from the Play Store. When your child requests an app, you receive a notification on your phone and can approve or decline with one tap. You can also block specific apps already installed on the device.

Content filters

Family Link applies content filters across Google Search (SafeSearch), Chrome (site filtering), YouTube (supervised or restricted mode), and the Play Store (age-based content ratings). Go to Controls → Content filters to review each one. The default settings are conservative but worth reviewing against your child’s age and needs.

Approved contacts

Following the February 2025 update, parents can now set which contacts their child is allowed to call and text on Android phones. Go to Controls → Contacts to add approved contacts directly from the Family Link app. Your child can request to add new contacts, which you can approve or decline. This is useful for younger children whose device use should be limited to family and close contacts.


Location sharing

Under your child’s profile in the app, you will find a Location section. Tap See location to view the device on a map. Location sharing requires the child’s device to be on with location services enabled and connected to mobile data or Wi-Fi. It does not update in real time continuously; it shows the most recent known location and can be refreshed manually.

Step 5: Review security settings on the child’s device

Before handing the device back, confirm that Google Play Protect is enabled on the child’s phone. It scans installed apps for harmful behaviour and runs automatically in the background. To check: open Play Store → tap your account icon → Play Protect → confirm scanning is on.

Also review which apps have access to the camera, microphone, and location under Settings → Privacy → Permission manager. Remove permissions that do not match an app’s obvious function. This is a good habit to repeat every few months, particularly after new apps are added. For a broader overview of what each permission does, see the guide on understanding Android app permissions on this site.

What happens when your child turns 13

This is the section most setup guides miss, and it changed significantly at the start of 2026. Previously, children could independently disable Family Link supervision once they reached age 13. Google reversed that policy in January 2026 — teens now require explicit parental permission to remove supervision, regardless of age. You will receive a notification when your child is approaching the applicable age and can decide at that point whether to continue supervision or transition to an unsupervised account through a managed conversation.


If you choose to continue supervision for a teenager, it is worth revisiting your content filter and screen time settings. Controls that work well for a nine-year-old often create unnecessary friction for a fourteen-year-old, which can damage the trust that makes monitoring useful in the first place. You can find a more detailed discussion of that transition in the wider guide on legal Android phone monitoring for parents.

Family Link or a third-party tool?

  • Child under 13 using a personal Android device → Family Link is the right default. Free, official, no third-party trust required.
  • Teenager active on social media with mental health or safety concerns → consider adding Bark alongside Family Link. Bark’s AI content detection covers platforms Family Link does not.
  • Multiple children across Android and iOS, or a need for detailed per-app time limits → Qustodio covers multi-device families better than Family Link alone.
  • Want to know more before deciding → the Bark vs Qustodio comparison on this site covers both in detail.

Implementation checklist

  • Confirm child’s device runs Android 7.0 or higher.
  • Download the correct Family Link app on both devices (two separate apps).
  • Remove any additional Google accounts from the child’s device before starting.
  • Create a supervised child Google Account during setup if the child does not already have one.
  • Grant all permissions on the child’s device when prompted — do not skip any.
  • Set Screen Time limits and Downtime schedule immediately after linking.
  • Configure School Time if the child’s school schedule is consistent.
  • Enable app approval for Play Store downloads.
  • Set approved contacts if the child is young enough to benefit from contact restrictions.
  • Confirm Google Play Protect is active on the child’s device.
  • Review app permissions on the child’s device before handing it back.

Troubleshooting

The linking code keeps expiring

Codes expire quickly. Tap Generate new code on the parent device and re-enter it on the child’s device within a few seconds. Make sure both devices are connected to the internet.

Controls are not syncing to the child’s device

The most common cause is the child’s device being offline. Controls sync when the device has an internet connection. Also check that all permissions were granted during setup — open the child’s Family Link app and look for any incomplete setup warnings.

The child’s device shows a different account is still signed in

Family Link requires the child’s supervised account to be the only Google Account on the device. Go to Settings → Accounts on the child’s phone and remove any additional accounts before relinking.

Location is not updating

Check that location services are enabled on the child’s device (Settings → Location → make sure it is on). Also verify that the Family Link app has location permission under Settings → Apps → Family Link → Permissions.


App approvals are not coming through to the parent device

Check that notifications are enabled for the Family Link app on your own phone (Settings → Apps → Family Link → Notifications). Without notifications, approval requests will pile up unnoticed.

School Time is not locking the device during school hours

Confirm the schedule was saved correctly in the app and that the child’s device time zone matches the schedule you set. Devices in a different time zone will trigger School Time at the wrong local time.
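The time-zone pitfall is easy to see with a quick calculation. The zones below are examples chosen for illustration, not anything Family Link exposes:

```python
# Illustrative sketch of the School Time time-zone pitfall: a lock
# scheduled for 08:30 only matches the school day if the child's
# device is set to the same time zone the schedule was written in.
from datetime import datetime
from zoneinfo import ZoneInfo

schedule_zone = ZoneInfo("America/New_York")   # zone the parent had in mind
device_zone = ZoneInfo("America/Los_Angeles")  # zone the device is set to

# 08:30 on the device's clock, interpreted in the device's zone...
device_lock = datetime(2026, 3, 2, 8, 30, tzinfo=device_zone)
# ...is a different moment in time than 08:30 in the schedule's zone.
intended_lock = datetime(2026, 3, 2, 8, 30, tzinfo=schedule_zone)

drift_hours = (device_lock - intended_lock).total_seconds() / 3600
print(f"Device locks {drift_hours:+.0f} hours after the intended time")
```

In this example the device would lock three hours late by the school's clock, which is exactly the symptom described above.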

Key takeaways

  • Family Link is free, built by Google, and integrates at the OS level — it is the most reliable starting point for Android parental controls.
  • Setup requires two separate apps: one on your phone, one on your child’s phone. Using the wrong app on either device is the most common setup error.
  • The child’s supervised account must be the only Google Account on their device during setup.
  • As of January 2026, teens need parental approval to remove supervision — this is a significant change from earlier policy.
  • School Time, parent-approved contacts, and the redesigned Screen Time tab were all added in the February 2025 update — older setup guides may not mention these.
  • Family Link works best alongside a conversation about why monitoring is in place. Transparent oversight tends to build better digital habits than hidden controls.

FAQ

Is Google Family Link free?

Yes. Google Family Link is completely free. There is no paid tier or premium version — all features are included at no cost.

Does my child know they are being monitored?

Yes. Family Link is a transparent tool by design. The child’s device displays a supervision indicator, and the child can see which apps are approved or restricted. It is not a hidden monitoring app.

Can Family Link supervise any Google Account?

Yes, but only if the account was created for a child under 13 through the supervised account creation flow, or if you add supervision to a teen’s existing account. Standard adult Google Accounts cannot be placed under Family Link supervision.


What happens if my child’s phone dies or goes offline?

Screen time limits and Downtime schedules that were already set will continue to apply. However, the parent dashboard will not update with new location data or activity reports until the device reconnects.

Does Family Link work on iPhone?

The parent Family Link app supports iOS 16 or higher on the parent’s device. However, Family Link cannot manage an iPhone as the child’s device — it only supervises Android devices and Chromebooks. For iPhone supervision, Apple’s Screen Time is the equivalent built-in tool.

Can Family Link track my child’s location in real time?

Family Link can show you your child’s device location when the device is online and location services are active. It does not continuously stream a live location; instead, it shows the most recent known location and allows you to request a refresh.

Can my child uninstall Family Link?

No. Family Link cannot be uninstalled by the child from a supervised Android device without parental approval. Since January 2026, teenagers also need parental permission to disable supervision from their account settings.


How is this different from Google Play parental controls?

Google Play parental controls only restrict content ratings inside the Play Store itself — they do not cover screen time, location, app usage, web filtering, or the rest of the device. Family Link is the full parental control system that includes Play Store controls alongside all other features. If you only want to restrict what your child can download, Play Store controls alone may be enough; for broader oversight, you need Family Link.


Tech

Self-propagating malware poisons open source software and wipes Iran-based machines


In an email, Aikido researcher Charlie Eriksen said the canister was taken down Sunday night and is no longer available.

“It wasn’t as reliable/untouchable as they expected,” Eriksen wrote. “But for a while, it would have wiped systems if infected.”

Like previous TeamPCP malware, CanisterWorm, as Aikido has named the malware, targets organizations’ CI/CD pipelines used for rapid development and deployment of software.

“Every developer or CI pipeline that installs this package and has an npm token accessible becomes an unwitting propagation vector,” Eriksen wrote. “Their packages get infected, their downstream users install those, and if any of them have tokens, the cycle repeats.”


As the weekend progressed, CanisterWorm was updated to add an additional payload: a wiper that targets machines exclusively in Iran. When the updated worm infects a machine, it checks whether the machine is set to the Iranian time zone or is otherwise configured for use in that country. When either condition is met, the malware skips the credential stealer and instead triggers a novel wiper that TeamPCP developers named Kamikaze. Eriksen said in an email that there’s no indication yet that the worm caused actual damage to Iranian machines, but that there was “clear potential for large-scale impact if it achieves active spread.”

Eriksen said Kamikaze’s “decision tree is simple and brutal.”

  • Kubernetes + Iran: Deploy a DaemonSet that wipes every node in the cluster
  • Kubernetes + elsewhere: Deploy a DaemonSet that installs the CanisterWorm backdoor on every node
  • No Kubernetes + Iran: rm -rf / --no-preserve-root
  • No Kubernetes + elsewhere: Exit. Nothing happens.

TeamPCP’s targeting of a country that the US is currently at war with is a curious choice. Up to now the group’s motivation has been financial gain. With no clear connection to monetary profit, the wiper seems out of character for TeamPCP. Eriksen said Aikido still doesn’t know the motive. He wrote:

While there may be an ideological component, it could just as easily be a deliberate attempt to draw attention to the group. Historically, TeamPCP has appeared to be financially motivated, but there are signs that visibility is becoming a goal in itself. By going after security tools and open-source projects, including Checkmarx as of today, they are sending a clear and deliberate signal.

The hack that keeps on giving

Last week’s supply-chain compromise of Trivy was made possible by a previous compromise of Aqua Security in late February. Although the company’s incident response was intended to replace all compromised credentials, the rotation was incomplete, allowing TeamPCP to take control of the GitHub account for distributing the vulnerability scanner. Aqua Security said it was performing a more thorough credential purge in response.



Tech

OpenAI unexpectedly kills Sora, and I wonder if this is the start of a mini AI bubble collapse



  • OpenAI is shutting down Sora
  • It’s not clear why, but users will be able to save the videos they created in the app and elsewhere
  • Disney may have also exited its big OpenAI deal

In a stunning turn of events, OpenAI has unexpectedly shuttered the Sora app, and Disney may have walked away from its $1 billion deal in support of the generative video technology.

Less than 18 months after launching Sora to wide shock and acclaim, and less than a year after launching the Sora app, OpenAI announced in an X post on Tuesday, “We’re saying goodbye to Sora.”



Tech

What You Need to Know About the Foreign-Made Router Ban in the US


The Federal Communications Commission has banned new consumer internet routers manufactured outside the US, citing national security concerns. The ban doesn’t affect any routers already in American homes or currently on sale in the US, but all new routers aimed at the consumer market will need to be approved.

While the headline is that foreign-made consumer routers are banned, manufacturers can apply for exemptions. There’s no need to throw out your router, and you’ll still find plenty of mesh systems on store shelves. But what does this mean for you?

Why Are Foreign-Made Routers Banned?

“Malicious actors have exploited security gaps in foreign-made routers to attack American households, disrupt networks, enable espionage, and facilitate intellectual property theft,” the FCC wrote. “Foreign-made routers were also involved in the Volt, Flax, and Salt Typhoon cyberattacks targeting vital US infrastructure.”

Foreign-made consumer routers were added to the Covered List, which details equipment and services “deemed to pose an unacceptable risk to the national security of the United States.”


Bogdan Botezatu, director of Threat Research at cybersecurity firm Bitdefender, says this ban is a step to harden the cybersecurity readiness of US households, given ongoing geopolitical tensions.

“Consumer routers sit at the edge of every home network, which makes them an attractive target and a strategic risk if compromised at scale,” he says. Asked whether he thinks the risk is real, Botezatu says the risk is real, though there’s no easy way to prove intent. “[Internet of Things] devices, including routers, are a weak point across the internet.”

Which Routers Are Banned?

The ban only affects the sale of new Wi-Fi routers aimed at consumer households. It does not apply to existing FCC-approved routers on sale in the US, and previously purchased routers already in use in homes across the country are unaffected, according to the FCC’s FAQ. These routers can continue to be sold, used, and updated with new firmware.

Any new router manufactured outside the US now requires FCC approval before it can be imported, marketed, or sold in the US. This includes routers from US companies that are manufactured overseas, which is the vast majority of the market right now.


What Does Foreign-Made Mean?

This is decidedly murky. The ban is concerned with “consumer-grade” routers and could include any that are designed or manufactured outside the US or manufactured by companies that are not completely US-owned and operated. All the major players in the market, including Netgear, TP-Link, Asus, Amazon’s Eero, Google’s Nest, Synology, Linksys, and Ubiquiti, fall under the definition. As do most, if not all, of the routers supplied by internet service providers in the US.

Just like the recent federal drone ban, the router ban applies only to new routers, but manufacturers can apply for Conditional Approval from the Department of Defense and the Department of Homeland Security. Applications must include details about ownership, board membership, and country of origin for components, IP ownership, design, assembly, and firmware, among other things. The final section requests details of the applicant’s US manufacturing and onshoring plan, so there’s a clear push to persuade companies to commit to making their routers in the US.

“No routers or manufacturers have been granted a Conditional Approval so far, but as the process gets underway, we expect approvals to be granted in a timely manner,” an FCC spokesperson tells WIRED.

What About Foreign-Made Components?

Well, the FCC provides some clarification in its FAQ (“covered” here means banned):


“Non-‘covered’ devices do not become ‘covered’ simply because they contain a ‘covered’ component part, unless the ‘covered’ component part is a modular transmitter under the FCC’s rules,” it says. “Therefore, a router produced in the United States is not considered ‘covered’ equipment solely because it contains one or more foreign-made components.”

Manufacturers importing components from China but assembling them in the US will presumably be OK, though it’s far from clear. “Applicants will need to be able to have sufficient evidence that the routers were not produced in a foreign country to make this certification, but there is no specific documentation or evidence required,” according to the FCC.

Let’s look at the big three US router brands and see how they’re affected.

Will TP-Link Be Banned?


Since all of its routers are made overseas, TP-Link will have to apply for Conditional Approval or spin up manufacturing in the US to sell any new routers. Estimates vary, but TP-Link’s US consumer router market share is somewhere around 35 percent, with Netgear and Asus accounting for another 25 percent or so.

The US Commerce, Defense, and Justice departments have reportedly been investigating and considering a ban on TP-Link routers for more than a year over concerns about the company’s links to China. No ban has been enacted until now, but Texas attorney general Ken Paxton sued TP-Link in February, claiming the company allows the Chinese Communist Party to access American consumers’ devices. Detractors have also criticized perceived predatory pricing, claiming TP-Link flooded the US market with a wide range of affordable routers to establish dominance.

TP-Link has repeatedly denied any wrongdoing and claims it has divested from its Chinese roots and is now headquartered in the US with the bulk of manufacturing in Vietnam. TP-Link’s cofounder and CEO, Jeffrey Chao, recently applied for permanent US residency through President Trump’s Gold Card program, according to the Times of India.

“Virtually all routers are made outside the United States, including those produced by US-based companies like TP-Link, which manufactures its products in Vietnam,” a spokesperson from TP-Link tells WIRED. “It appears that the entire router industry will be impacted by the FCC’s announcement concerning new devices not previously authorized by the FCC.”


TP-Link is a privately owned company and not publicly listed on any stock exchange. Chao and his wife, Hillary, are listed as the company’s sole owners.

Will Netgear Be Banned?

While it is a US-founded and headquartered company, Netgear’s routers are manufactured abroad, mostly in Vietnam, Thailand, Indonesia, and Taiwan, so it will have to apply for Conditional Approval. The company has moved away from China in recent years. Netgear has been lobbying the government on “cybersecurity and strategic competition with China.”

“We commend the administration and the FCC for their action toward a safer digital future for Americans,” a Netgear spokesperson tells WIRED. “Home routers and mesh systems are critical to national security and consumer protection, and today’s decision is a step forward.”


Netgear is a publicly traded company on the Nasdaq, mostly owned by institutional investors, including BlackRock and Vanguard. The company’s stock rose on news of the ban, suggesting that many investors believe it won’t be hit too hard.

Will Asus Be Banned?

Asus primarily makes its routers in Taiwan, though it has production facilities in China and works with several third-party manufacturers. Recent tariff pressures led the company to branch out to Thailand, Vietnam, Indonesia, Mexico, and the Czech Republic, but the bulk of its routers still come from Taiwan or China. Asus will have to apply for Conditional Approval to sell new routers. The company did not respond to WIRED’s request for comment.

The company is listed on the Taiwanese Stock Exchange and is mostly owned by public shareholders. The ban doesn’t appear to have impacted its stock price.


Are Any Routers Manufactured in the US?

The only routers I know of that are manufactured in the US are some Starlink Wi-Fi routers, which are primarily made in Texas. Starlink is part of Elon Musk’s SpaceX company, but many of the components in these routers come from East Asia.

Botezatu says what matters more than geography is the security model behind the product. Companies that invest in “long-term firmware support, vulnerability management, and built-in protection layers” offer stronger security.

How Will the Router Ban Impact Ordinary Folks?

It’s not entirely clear, but it probably won’t have a huge immediate impact. There is already a wide range of Wi-Fi 7 routers and mesh systems on the market that will continue to be sold—they enable speeds well in excess of what most people need at home. Whether companies spin up manufacturing in the US or find other ways to satisfy government agencies that their wares are not a security risk, the result is likely to be higher prices for consumers.

“This ruling has the potential to significantly disrupt the US consumer router market,” Brandon Butler, a research manager of Network Infrastructure and Services at IDC, tells WIRED. “In the near term, much will depend on how quickly conditional waivers are processed. Most vendors are likely to pursue them, but any delays could constrain supply and create upward pressure on pricing.”


If you haven’t upgraded to the latest Wi-Fi 7 standard, now might be a good time to do it. But it’s worth keeping in mind what you’re buying. Botezatu says consumers should “stick with reputable manufacturers that have a track record of issuing updates and maintaining their devices. Check that your router is still supported and running the latest firmware.”

Unanswered Questions

The ban does leave several unanswered questions. Why is it being applied only to consumer routers? Which routers or manufacturers will be granted a Conditional Approval? Why are the foreign-made routers currently on sale and in our homes deemed safe? The FCC did not address these questions.


Tech

This startup will pay you $800 to yell at AI all day



As Boston Dynamics demonstrated years ago, “bullying” technology designed to mimic intelligent behaviors is nothing new. Memvid is now offering $800 to someone interested in putting modern AI models to the test – a “professional” yeller tasked with spending an entire day stressing popular chatbots.

Tech

Pentagon’s ‘Attempt to Cripple’ Anthropic Is Troubling, Judge Says


The US Department of Defense appears to be illegally punishing Anthropic for trying to restrict the use of its AI tools by the military, US district judge Rita Lin said during a court hearing on Tuesday.

“It looks like an attempt to cripple Anthropic,” Lin said of the Pentagon designating the company a supply-chain risk. “It looks like [the department] is punishing Anthropic for trying to bring public scrutiny to this contract dispute, which of course would be a violation of the First Amendment.”

Anthropic has filed two federal lawsuits alleging that the Trump administration’s decision to designate the company a security risk amounted to illegal retaliation. The government slapped the label on Anthropic after it pushed for limitations on how its AI could be used by the military. Tuesday’s hearing came in a case filed in San Francisco.

Anthropic is seeking a temporary order to pause the designation. The relief, Anthropic hopes, would help convince some of the company’s skittish customers to stick with it just a bit longer. Lin can issue a pause only if she determines that Anthropic is likely to win the overall case. Her ruling on the injunction is expected in the next few days.


The dispute has sparked a broader public conversation about how artificial intelligence is increasingly being used by the armed forces, and whether Silicon Valley companies should give deference to the government in determining how the technology they develop is deployed.

The Department of Defense, which now calls itself the Department of War (DoW), has argued that it followed procedures and appropriately determined that Anthropic’s AI tools could no longer be relied upon to operate as expected during critical moments. It has asked Lin not to second-guess its assessment about the threat it claims Anthropic poses to national security.

“The worry is that Anthropic, instead of merely raising concerns and pushing back, will say we have a problem with what DoW is doing and will manipulate the software … so it doesn’t operate in the way DoW expects and wants it to,” Trump administration attorney Eric Hamilton said during Tuesday’s hearing.

Lin said that it was Defense Secretary Pete Hegseth’s role—not hers—to decide whether Anthropic is an appropriate vendor for the department. But Lin said it’s up to her to determine whether Hegseth violated the law by taking steps beyond simply canceling Anthropic’s government contracts. Lin said it was “troubling” to her that the security designation and directives more broadly limiting use of Anthropic’s AI tool Claude by government contractors “don’t seem to be tailored to stated national security concerns.”


As Anthropic’s spat with the government escalated last month, Hegseth posted on X that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

But on Tuesday, Hamilton acknowledged that Hegseth has no legal authority to bar military contractors from using Anthropic for work unrelated to the Department of Defense. When asked by Lin why Hegseth would have posted that, Hamilton said, “I don’t know.”

Lin further questioned Hamilton about whether the Pentagon had considered taking less punitive measures to move the department away from using Anthropic’s tools. She described the supply-chain-risk designation as a powerful authority typically reserved for foreign adversaries, terrorists, and other hostile actors.

Michael Mongan, a WilmerHale attorney representing Anthropic, said it was extraordinary for the government to go after a “stubborn” negotiating partner with the designation.


The Pentagon has said it is working to replace Anthropic technologies over the coming months with alternatives from Google, OpenAI, and xAI. It also said it has put measures in place to prevent Anthropic from engaging in any tampering during the transition. Hamilton said he didn’t know if it was even possible for Anthropic to update its AI models without permission from the Pentagon; the company says it is not.

A ruling in the other case, at the federal appeals court in Washington, DC, is expected to come soon without a hearing.


Tech

What’s new with the instant camera?

Published

on

Fujifilm has recently unveiled the latest addition to its instant camera range, with the aptly named Instax Mini 13.

As the Fujifilm Instax Mini 12 has a spot on our best instant cameras list, are there enough improvements with the Mini 13 to warrant an upgrade? Or is the Mini 12 still a great choice for many?

We’ve compared the specs of the Fujifilm Instax Mini 13 to the Mini 12 and noted all the key differences between the instant cameras below. Keep reading to see what’s new with the Mini 13 and to decide whether or not you should upgrade.

For more of an overview, we’ve also rounded up a list of the best cameras we’ve reviewed recently. 


Price and Availability

At the time of writing, Fujifilm has yet to provide an exact launch date for the Instax Mini 13, and has instead promised the instant camera will be available “in or around late June 2026”. Its current MSRP is £79/€89.99/$93.95.


In comparison, the Fujifilm Instax Mini 12 is readily available to purchase now and has an RRP of around £79.99/$94. Having said that, it is possible to nab the instant camera with a decent price drop.

Instax Mini 13 includes a self-timer

One of the main new additions to the Instax Mini 13 is the inclusion of a self-timer. The timer is fitted with an LED lever that allows you to switch between either a two-second or ten-second countdown. The shorter two-second timer is designed for capturing hands-free selfies with reduced blur, while the ten-second alternative enables easier group shots and different angles.

Self timer on Instax Mini 13. Image Credit (Fujifilm)

As mentioned, this is a brand new addition to the Mini 13, so the Mini 12 unfortunately lacks this tool. Even so, it’s worth noting that we found the Mini 12 easy to use, thanks to its small number of buttons and features.

Both feature a selfie mirror and close-up mode

If you’re coming from an older Instax Mini, then you’ll be pleased to know that both the Mini 13 and Mini 12 are fitted with built-in selfie mirrors at their respective fronts. It’s a great addition that allows you to check whether everyone is in the frame before potentially wasting a precious print.

Not only that, but both cameras also benefit from Close-Up Mode, which is enabled by twisting the lens twice. Essentially, Close-Up Mode doubles as a “selfie” mode, ensuring the main subject is captured right in the centre.


Instax Mini 12. Image Credit (Trusted Reviews)

Speaking of similarities, it’s also worth noting that both the Mini 13 and Mini 12 have automatic lighting adjustment and promise to print a photo in just five seconds, with the image fully developing within 90 seconds.

Instax Mini 13 has new film

Alongside the launch of the Instax Mini 13, Fujifilm has also revealed a couple of additions and updates to its existing line-up. Firstly, the Instax Up! smartphone app will now integrate AI to increase image scanning precision, thanks to an update to its “overall learning capability”. According to Fujifilm, this allows the app to recognise images against backgrounds for “more precise scans” overall.


In addition, Fujifilm is also introducing a new Pastel Galaxy-themed film roll which includes sparkly, gloss embellishments and more colours too. This will be available by “late June 2026” with an MSRP of €9.99.

Although both of these additions are being introduced alongside the Instax Mini 13, the film and smartphone app updates will also be supported by the Instax Mini 12.

Instax Mini 12 photos. Image Credit (Trusted Reviews)


Instax Mini 13 includes a camera angle adjustment accessory

Designed to work with the self-timer, the Instax Mini 13 comes equipped with a camera angle adjustment tool. Built into the wrist strap, the tool can be used to position the camera with a slight upward tilt – negating the need for a tripod or any additional equipment.

Instax Mini 13 camera adjustment accessory

Instax Mini 13 has more of a square design

Although at first glance you’d be forgiven for not noticing a huge design difference between the two, there are a few things to consider. Firstly, although both are undoubtedly portable, it’s fair to say that neither is quite a pocket-friendly camera to whip out in a flash. If that’s something you’d prefer, then we’d recommend the Instax Mini Evo instead.

Instax Mini 13. Image Credit (Fujifilm)

Otherwise, alongside the addition of the timer lever at its side, the Mini 13 also has more of a uniform rounded shape compared to the Mini 12. Either way, both cameras are compact and come in a choice of five pastel colours too.


Early Verdict

With the addition of a self-timer, a rounder and more uniform design and the inclusion of the camera angle adjustment accessory on its wrist strap, the Instax Mini 13 looks set to be a brilliant instant camera – especially if you’re coming from an older model.

However, whether you really need to upgrade from the Instax Mini 12 is still up for debate: although the Mini 12 may lack the self-timer, it still sports Close-Up Mode, automatic light and flash control and speedy photo printing. We’ll be sure to update this comparison once we review the Instax Mini 13.


Tech

Epic cuts 1,000+ jobs amid financial struggles, seeks half-billion-dollar cost savings

Published

on


Sweeney also pointed to industry-wide changes including slower growth, weaker spending on games and consoles, tougher cost economics, and new forms of entertainment competing for gamers’ attention as additional factors hurting their business.

Tech

Embedding compliance in AI adoption

Published

on

Kyndryl’s Ismail Amla discusses the company’s new policy as code process, and how it can help address AI issues such as agentic drift.

When it comes to AI adoption in enterprise, compliance concerns are becoming ever more important.

According to Kyndryl’s most recent Readiness Report, 31pc of enterprise customers cite regulatory or compliance concerns as a primary barrier limiting their organisation’s ability to scale recent technology investments.

2026 marks an important point on the AI compliance timeline in particular, with the EU’s AI Act transparency rules coming into effect in August.


Last month, Kyndryl announced its new ‘policy as code’ capability – a process designed for creating policy-governed agentic AI workflows for enterprises.

“Policy as code is the process of translating an organisation’s rules, policies and compliance requirements into machine-readable code, so AI systems are restricted to only operating within pre-defined guardrails,” explains Ismail Amla, senior vice-president at Kyndryl Consult. “Human experts continue to oversee all activities related to these processes.”
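As a rough illustration of the concept (not Kyndryl's actual implementation; the rule names and action fields below are invented for the sketch), policy as code can be as simple as declarative rules evaluated before an agent's proposed action is allowed to execute:

```python
# Minimal policy-as-code sketch: organisational rules expressed as data,
# checked before any agent action runs. Rule names and action fields are
# illustrative placeholders, not Kyndryl's actual schema.

POLICIES = [
    {"name": "no-external-transfers",
     "deny_if": lambda a: a.get("type") == "payment" and a.get("destination") == "external"},
    {"name": "eu-data-residency",
     "deny_if": lambda a: a.get("region") not in ("eu-west", "eu-central")},
]

def evaluate(action: dict) -> tuple[bool, list[str]]:
    """Return (allowed, violated_policy_names) for a proposed agent action."""
    violations = [p["name"] for p in POLICIES if p["deny_if"](action)]
    return (not violations, violations)

allowed, violations = evaluate(
    {"type": "payment", "destination": "external", "region": "eu-west"})
print(allowed, violations)  # False ['no-external-transfers']
```

Because the rules are data rather than prose, they can be versioned, reviewed and audited like any other code, which is the "human experts continue to oversee" part of the pitch.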

Compliant design

“Many organisations, especially those in complex, highly regulated environments, want to scale agentic AI, but are held back by concerns around security, compliance and control”, says Amla.

Speaking to SiliconRepublic.com, he says policy as code can help organisations support “consistent policy interpretations” and define clear operational boundaries, subsequently ensuring agent actions are explainable, reviewable and “aligned with organisational standards”.


Amla also says the framework can help reduce costs, accelerate decision-making, eliminate errors and “power AI-native workflows within defined policy guardrails”.

“By embedding policy and regulatory requirements directly into AI agent operations, policy as code can help organisations execute AI workflows that are governed, transparent, explainable and aligned to business requirements.”

But what about the long-term applications of policy as code?

Amla says the main benefit of the process is “trust through stronger governance, better transparency, lower operational risk and more reliable AI at scale”.


“Managing agentic workflow execution in this way supports controlled and responsible deployment of policy-constrained AI agents in sectors such as financial operations, public services, supply chains and other mission-critical domains, where reliability and predictability are essential,” he explains.

Catch the drift

Over the past year, according to Amla, the biggest change he’s noticed in AI adoption is that organisations are moving beyond proofs of concept and “focusing more seriously on what it takes to make AI work in production and at scale”.

“That means more attention on infrastructure, governance, data quality and organisational readiness,” he says. “Organisations are moving from experimentation to making more strategic decisions with the experience they have gained to drive higher value outcomes and performance for their organisation, and receive a return on their investment.”

But with increased focus on serious AI integrations comes risk, particularly if an organisation is not fully prepared.


Amla warns of something called ‘agentic drift’, where an AI agent can appear reliable while working toward unwanted outcomes, owing to a gradual separation from the operator’s original intention or goal.

“Agentic drift creates pressing challenges for all organisations, but it is especially acute in the public sector and highly regulated sectors, such as banking and healthcare,” says Amla.

“In these industries, organisations cannot move from pilots to production if issues around control, trust and compliance remain unresolved. It’s clear enterprises urgently need a way to constrain what agents can do at runtime and close governance gaps long before drift leads to financial or compliance failures.”

Amla believes that policy as code can help address this issue, due to its ability to allow businesses to translate their rules and policy into machine-readable instructions that “govern how AI agents reason, adapt and act”.


“This greatly reduces the risk of agentic drift,” he says. “It also alleviates the trust and compliance concerns standing between large enterprises and a return on their AI investments.”



Tech

Sony is reportedly shutting down Dark Outlaw Games, run by former Call of Duty director

Published

on

Sony is shutting down Dark Outlaw Games, a first-party game studio led by former Call of Duty producer Jason Blundell, Bloomberg‘s Jason Schreier reports. Before leading Dark Outlaw Games, Blundell headed Deviation Games, an independent studio that was developing a PlayStation game before it shut down, Schreier says.

Dark Outlaw Games had yet to announce what it was working on, but considering Blundell’s experience with the Call of Duty franchise, it seems likely the studio was developing a multiplayer project for PlayStation. Blundell was a programmer and producer at Activision before making the jump to Treyarch to work on Call of Duty 3, and he contributed to multiple Call of Duty: Black Ops games after that, including serving as the director for the campaign and Zombies mode of Call of Duty: Black Ops III and the career and Zombies modes of Call of Duty: Black Ops 4.

Engadget has contacted Sony for more information about the fate of Dark Outlaw Games. We’ll update this article if we hear back.

The studio’s shutdown is being paired with cuts to staff at PlayStation focused on mobile development, according to Schreier. Sony has made a habit of laying off staff and shutting down studios in the last year, seemingly as a way to retreat from an earlier investment in online, live-service multiplayer games. The company shut down Bluepoint Games in February following attempts to get a live-service God of War game off the ground. Sony also closed Firewalk Studios after the spectacular failure of multiplayer shooter Concord in October 2024. And a year before that, Naughty Dog officially abandoned work on a standalone multiplayer version of The Last of Us in December 2023.


That leaves Sony with at least two Horizon Zero Dawn spin-offs, a co-op game from original developer Guerrilla Games and an MMO from developer NCSoft; Fairgame$, which is still in active development despite the departure of Haven Studios head Jade Raymond; Arrowhead Game Studios’ Helldivers 2; Bungie’s Destiny 2 and Marathon; and, if you really want to stretch, Gran Turismo 7. Sony clearly hasn’t given up on producing online multiplayer games, but it’s not hard to characterize its attempt to expand into the space as a disaster.

Source link

Continue Reading

Tech

Anthropic hands Claude Code more control, but keeps it on a leash

Published

on

For developers using AI, “vibe coding” right now comes down to babysitting every action or risking letting the model run unchecked. Anthropic says its latest update to Claude aims to eliminate that choice by letting the AI decide which actions are safe to take on its own — with some limits.  

The move reflects a broader shift across the industry, as AI tools are increasingly designed to act without waiting for human approval. The challenge is balancing speed with control: too many guardrails slow things down, while too few can make systems risky and unpredictable. Anthropic’s new “auto mode,” now in research preview — meaning it’s available for testing but not yet a finished product — is its latest attempt to thread that needle.

Auto mode uses AI safeguards to review each action before it runs, checking for risky behavior the user didn’t request and for signs of prompt injection — a type of attack where malicious instructions are hidden in content that the AI is processing, causing it to take unintended actions. Any safe actions will proceed automatically, while the risky ones get blocked.
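Anthropic hasn't published the internals of that review step, but the flow described above (screen each proposed action, auto-approve the safe ones, block the rest) can be sketched roughly like this; the denylist and injection markers are invented placeholders, not Anthropic's actual safety-layer criteria:

```python
# Hypothetical sketch of the auto-mode flow described above: every proposed
# command is screened before it runs. The heuristics below are placeholders,
# not Anthropic's actual safety-layer criteria.

RISKY_PATTERNS = ("rm -rf", "chmod 777", "curl ")          # illustrative denylist
INJECTION_MARKERS = ("ignore previous instructions",)      # illustrative

def screen_action(command: str) -> str:
    """Return 'run' if the command may proceed automatically, else 'blocked'."""
    lowered = command.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        return "blocked"   # looks like a prompt-injection payload
    if any(pattern in lowered for pattern in RISKY_PATTERNS):
        return "blocked"   # destructive or network action: hold for human review
    return "run"           # judged safe: proceeds without asking the user

print(screen_action("pytest tests/"))       # run
print(screen_action("rm -rf /tmp/build"))   # blocked
```

A real safety layer would presumably use a model-based classifier rather than string matching, but the control flow (gate first, execute second) is the same.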

It’s essentially an extension of Claude Code’s existing “dangerously-skip-permissions” command, which hands all decision-making to the AI, but with a safety layer added on top.


The feature builds on a wave of autonomous coding tools from companies like GitHub and OpenAI, which can execute tasks on a developer’s behalf. But it takes it a step further by shifting the decision of when to ask for permission from the user to the AI itself. 

Anthropic hasn’t detailed the specific criteria its safety layer uses to distinguish safe actions from risky ones — something developers will likely want to understand better before adopting the feature widely. (TechCrunch has reached out to the company for more information on this front.)

Auto mode comes off the back of Anthropic’s launch of Claude Code Review, its automatic code reviewer designed to catch bugs before they hit the codebase, and Dispatch for Cowork, which allows users to send tasks to AI agents to handle work on their behalf.  


Auto mode will roll out to Enterprise and API users in the coming days. The company says it currently only works with Claude Sonnet 4.6 and Opus 4.6, and recommends using the new feature in “isolated environments” — sandboxed setups that are kept separate from production systems, limiting the potential damage if something goes wrong.




Copyright © 2025