
How to Set Up Google Family Link on Android: Step-by-Step Guide (2026)


Google Family Link is a free parental control tool built directly into Android that lets you manage your child’s device from your own phone. It covers app approvals, screen time limits, content filters, location sharing, and more — all without installing any third-party software. This guide walks you through every step: from pre-setup requirements to configuring the controls that actually matter after you are linked.

Quick take: Setup takes about ten minutes if both devices are nearby and the child’s Google Account is already created. The most common cause of failure is having multiple Google accounts on the child’s device — Family Link requires the child’s supervised account to be the only account on their phone during setup.

Before you start: what you need

Getting the right pieces in place before you open the app saves time and avoids the most common setup errors.

Device requirements

According to Google’s official Family Link device compatibility page, your child’s Android device needs to run Android 7.0 (Nougat) or higher for full functionality. Devices running Android 5.0 or 6.0 may support some settings but are not fully reliable. Your own device — the parent phone — needs Android 7.0 or higher, or iOS 16 or higher if you use an iPhone.


To check your child’s Android version: open Settings → scroll to the bottom → tap About phone → look for Android version.
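If you are comfortable with a command line, the same value can be read over USB with adb (Android Debug Bridge). This is an optional sketch, not part of Google's setup flow — it assumes adb is installed on your computer and USB debugging is enabled on the child's device, and falls back to the on-screen route otherwise:

```python
# Optional: read the Android version over USB with adb (Android Debug Bridge).
# Assumes adb is installed and USB debugging is enabled on the child's device;
# prints a reminder to use the Settings menu if adb or the device is missing.
import shutil
import subprocess

def check_android_version() -> str:
    """Return the device's Android version string, or a fallback hint."""
    fallback = "Check Settings > About phone > Android version instead"
    if shutil.which("adb") is None:
        return fallback
    try:
        # ro.build.version.release holds the Android version, e.g. "13"
        result = subprocess.run(
            ["adb", "shell", "getprop", "ro.build.version.release"],
            capture_output=True, text=True, timeout=10,
        )
        version = result.stdout.strip()
        return version if result.returncode == 0 and version else fallback
    except (subprocess.SubprocessError, OSError):
        return fallback

print(check_android_version())
```

Family Link's full feature set needs Android 7.0 or higher, so any value of 7 or above here means the device qualifies.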

Account requirements

  • You need a Google Account (standard Gmail is fine).
  • Your child needs a Google Account. If they are under 13, you will create one through the Family Link setup flow — you cannot use a standard account for children under 13 without parental supervision.
  • The child’s device must have only one Google account signed in at setup time. If there are multiple accounts, Family Link will remove them during the process — a warning you want to see before, not during, setup.

Apps to download

  • On your phone: Google Family Link (the parent version)
  • On your child’s phone: Google Family Link for Children & Teens (a separate app)

Both are free on the Google Play Store. Make sure you download the correct version for each device — they are listed separately and serve different functions.

Step 1: Create your child’s Google Account (if they do not have one)

If your child already has a supervised Google Account, skip to Step 2.

Open the Family Link app on your phone and tap Get started. The app will ask whether your child has a Google Account. Select No. You will then be guided through creating a supervised account, which requires:

  • Your child’s first name (a last name is optional)
  • Their date of birth — this determines the type of account created and the applicable age rules in your country
  • A Gmail address for the child (the app will suggest available options)
  • A password for the child’s account
  • Your own Google Account password to verify parental consent

Once the account is created, Google will ask you to review the privacy settings and data collection preferences for the account. Read through these carefully — this is where you control whether Google can use personalised ads, activity tracking, and similar settings on your child’s profile.

Step 2: Link the devices

With both apps open and both devices nearby, the Family Link app on your phone will generate a short linking code. Here is the exact sequence:

  1. On your phone (parent device): open the Family Link app, sign in with your Google Account, select your child’s account, and tap through until you see the linking code screen. Keep this screen visible.
  2. On your child’s phone (child device): open the Family Link for Children & Teens app, sign in with the child’s Google Account, and enter the code shown on your screen when prompted.
  3. Back on your phone: the app will confirm that the devices are linked. Tap Next to proceed to the permissions setup screen.

If the code expires before you enter it, tap Generate new code on the parent device. Codes are valid for a short window.

Step 3: Grant permissions on the child’s device

After the link code is accepted, the child’s device will display a series of permission screens. Keep tapping Allow or Next through all of them — these permissions are what allow Family Link to enforce screen time limits, manage apps, and report activity. Without them, most controls will not work.

You will also be prompted to name the child’s device (useful if you have more than one child or device) and to choose which apps the child can access immediately. You can approve or restrict app access from this screen, but you can also do it later from the Family Link dashboard on your own phone.

Step 4: Configure the controls that matter most

Once linked, most parents open the dashboard and are not sure where to start. Here is a practical order that covers the highest-value settings first.

Screen time limits and Downtime

Go to Screen time in the Family Link app on your phone (this tab was redesigned in Google’s February 2025 Family Link update). You can set a total daily screen time limit, schedule Downtime (when the device locks automatically — useful for bedtime and homework), and view how much time your child spends on each app. These are the controls most families configure first.


School Time

School Time is a dedicated block mode that limits device use to approved apps only during school hours. It was previously available on smartwatches and became available on Android phones and tablets in the same February 2025 update. Set your child’s school schedule once, and the device will automatically restrict access during those hours without you needing to manage it manually each day.

App approvals

Under Controls, you can require your approval for every app your child attempts to download from the Play Store. When your child requests an app, you receive a notification on your phone and can approve or decline with one tap. You can also block specific apps already installed on the device.

Content filters

Family Link applies content filters across Google Search (SafeSearch), Chrome (site filtering), YouTube (supervised or restricted mode), and the Play Store (age-based content ratings). Go to Controls → Content filters to review each one. The default settings are conservative but worth reviewing against your child’s age and needs.

Approved contacts

Following the February 2025 update, parents can now set which contacts their child is allowed to call and text on Android phones. Go to Controls → Contacts to add approved contacts directly from the Family Link app. Your child can request to add new contacts, which you can approve or decline. This is useful for younger children whose device use should be limited to family and close contacts.


Location sharing

Under your child’s profile in the app, you will find a Location section. Tap See location to view the device on a map. Location sharing requires the child’s device to be on with location services enabled and connected to mobile data or Wi-Fi. It does not update in real time continuously; it shows the most recent known location and can be refreshed manually.

Step 5: Review security settings on the child’s device

Before handing the device back, confirm that Google Play Protect is enabled on the child’s phone. It scans installed apps for harmful behaviour and runs automatically in the background. To check: open Play Store → tap your account icon → Play Protect → confirm scanning is on.

Also review which apps have access to the camera, microphone, and location under Settings → Privacy → Permission manager. Remove permissions that do not match an app’s obvious function. This is a good habit to repeat every few months, particularly after new apps are added. For a broader overview of what each permission does, see the guide on understanding Android app permissions on this site.

What happens when your child turns 13

This is the section most setup guides miss, and it changed significantly at the start of 2026. Previously, children could independently disable Family Link supervision once they reached age 13. Google reversed that policy in January 2026 — teens now require explicit parental permission to remove supervision, regardless of age. You will receive a notification when your child is approaching the applicable age and can decide at that point whether to continue supervision or transition them to an unsupervised account.


If you choose to continue supervision for a teenager, it is worth revisiting your content filter and screen time settings. Controls that work well for a nine-year-old often create unnecessary friction for a fourteen-year-old, which can damage the trust that makes monitoring useful in the first place. You can find a more detailed discussion of that transition in the wider guide on legal Android phone monitoring for parents.

Family Link alone, or with a third-party tool?

  • Child under 13 using a personal Android device → Family Link is the right default. Free, official, no third-party trust required.
  • Teenager active on social media with mental health or safety concerns → consider adding Bark alongside Family Link. Bark’s AI content detection covers platforms Family Link does not.
  • Multiple children across Android and iOS, or a need for detailed per-app time limits → Qustodio covers multi-device families better than Family Link alone.
  • Want to know more before deciding → the Bark vs Qustodio comparison on this site covers both in detail.

Implementation checklist

  • Confirm child’s device runs Android 7.0 or higher.
  • Download the correct Family Link app on both devices (two separate apps).
  • Remove any additional Google accounts from the child’s device before starting.
  • Create a supervised child Google Account during setup if the child does not already have one.
  • Grant all permissions on the child’s device when prompted — do not skip any.
  • Set Screen Time limits and Downtime schedule immediately after linking.
  • Configure School Time if the child’s school schedule is consistent.
  • Enable app approval for Play Store downloads.
  • Set approved contacts if the child is young enough to benefit from contact restrictions.
  • Confirm Google Play Protect is active on the child’s device.
  • Review app permissions on the child’s device before handing it back.

Troubleshooting

The linking code keeps expiring

Codes expire quickly. Tap Generate new code on the parent device and enter it on the child’s device straight away. Make sure both devices are connected to the internet.

Controls are not applying on the child’s device

The most common cause is the child’s device being offline. Controls sync when the device has an internet connection. Also check that all permissions were granted during setup — open the child’s Family Link app and look for any incomplete setup warnings.

The child’s device shows a different account is still signed in

Family Link requires the child’s supervised account to be the only Google Account on the device. Go to Settings → Accounts on the child’s phone and remove any additional accounts before relinking.

Location is not updating

Check that location services are enabled on the child’s device (Settings → Location → make sure it is on). Also verify that the Family Link app has location permission under Settings → Apps → Family Link → Permissions.


App approvals are not coming through to the parent device

Check that notifications are enabled for the Family Link app on your own phone (Settings → Apps → Family Link → Notifications). Without notifications, approval requests will pile up unnoticed.

School Time is not locking the device during school hours

Confirm the schedule was saved correctly in the app and that the child’s device time zone matches the schedule you set. Devices in a different time zone will trigger School Time at the wrong local time.
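The time zone mismatch above can also be confirmed from a computer if you have adb set up. This is an optional sketch under the same assumptions as before (adb installed, USB debugging enabled on the child's device); otherwise just check Settings → System → Date & time on the device:

```python
# Optional: read the child's device time zone over adb to confirm it matches
# the School Time schedule. Assumes adb is installed and USB debugging is
# enabled; prints a fallback hint otherwise.
import shutil
import subprocess

def get_device_timezone() -> str:
    """Return the device's IANA time zone (e.g. 'Europe/Dublin') or a hint."""
    fallback = "Check Settings > System > Date & time on the device instead"
    if shutil.which("adb") is None:
        return fallback
    try:
        # persist.sys.timezone holds the device's current time zone
        result = subprocess.run(
            ["adb", "shell", "getprop", "persist.sys.timezone"],
            capture_output=True, text=True, timeout=10,
        )
        tz = result.stdout.strip()
        return tz if result.returncode == 0 and tz else fallback
    except (subprocess.SubprocessError, OSError):
        return fallback

print(get_device_timezone())
```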

Key takeaways

  • Family Link is free, built by Google, and integrates at the OS level — it is the most reliable starting point for Android parental controls.
  • Setup requires two separate apps: one on your phone, one on your child’s phone. Using the wrong app on either device is the most common setup error.
  • The child’s supervised account must be the only Google Account on their device during setup.
  • As of January 2026, teens need parental approval to remove supervision — this is a significant change from earlier policy.
  • School Time, parent-approved contacts, and the redesigned Screen Time tab were all added in the February 2025 update — older setup guides may not mention these.
  • Family Link works best alongside a conversation about why monitoring is in place. Transparent oversight tends to build better digital habits than hidden controls.

FAQ

Is Google Family Link free?

Yes. Google Family Link is completely free. There is no paid tier or premium version — all features are included at no cost.

Does my child know they are being monitored?

Yes. Family Link is a transparent tool by design. The child’s device displays a supervision indicator, and the child can see which apps are approved or restricted. It is not a hidden monitoring app.

Can Family Link supervise an existing Google Account?

Yes, but only if the account was created for a child under 13 through the supervised account creation flow, or if you add supervision to a teen’s existing account. Standard adult Google Accounts cannot be placed under Family Link supervision.


What happens if my child’s phone dies or goes offline?

Screen time limits and Downtime schedules that were already set will continue to apply. However, the parent dashboard will not update with new location data or activity reports until the device reconnects.

Can I use Family Link if I have an iPhone?

The parent Family Link app supports iOS 16 or higher on the parent’s device. However, Family Link cannot manage an iPhone as the child’s device — it only supervises Android devices and Chromebooks. For iPhone supervision, Apple’s Screen Time is the equivalent built-in tool.

Does Family Link track my child’s location in real time?

Family Link can show you your child’s device location when the device is online and location services are active. It does not continuously stream a live location; instead, it shows the most recent known location and allows you to request a refresh.

Can my child uninstall Family Link?

No. Family Link cannot be uninstalled by the child from a supervised Android device without parental approval. Since January 2026, teenagers also need parental permission to disable supervision from their account settings.


Is Family Link the same as Google Play parental controls?

No. Google Play parental controls only restrict content ratings inside the Play Store itself — they do not cover screen time, location, app usage, web filtering, or the rest of the device. Family Link is the full parental control system that includes Play Store controls alongside all other features. If you only want to restrict what your child can download, Play Store controls alone may be enough; for broader oversight, you need Family Link.


Epic cuts 1,000+ jobs amid financial struggles, seeks half-billion-dollar cost savings



Sweeney also pointed to industry-wide changes including slower growth, weaker spending on games and consoles, tougher cost economics, and new forms of entertainment competing for gamers’ attention as additional factors hurting their business.

Embedding compliance in AI adoption


Kyndryl’s Ismail Amla discusses the company’s new policy as code process, and how it can help address AI issues such as agentic drift.

When it comes to AI adoption in enterprise, compliance concerns are becoming ever more important.

According to Kyndryl’s most recent Readiness Report, 31pc of enterprise customers cite regulatory or compliance concerns as a primary barrier limiting their organisation’s ability to scale recent technology investments.

2026 marks an important point on the AI compliance timeline in particular, with the EU’s AI Act transparency rules coming into effect in August.


Last month, Kyndryl announced its new ‘policy as code’ capability – a process for creating policy-governed agentic AI workflows for enterprises.

“Policy as code is the process of translating an organisation’s rules, policies and compliance requirements into machine-readable code, so AI systems are restricted to only operating within pre-defined guardrails,” explains Ismail Amla, senior vice-president at Kyndryl Consult. “Human experts continue to oversee all activities related to these processes.”
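Kyndryl has not published implementation details, but the general shape of policy as code can be sketched in a few lines. Everything below is a hypothetical illustration — the rule names, the Action type, and the evaluation logic are assumptions for demonstration, not Kyndryl's API:

```python
# Illustrative sketch of policy as code: organisational rules are expressed
# as machine-readable data, and every proposed agent action is evaluated
# against them before it runs. All names and rules here are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str                      # e.g. "payment", "email", "db_write"
    amount: float = 0.0
    approved_by_human: bool = False

# Each policy rule is a named predicate the action must satisfy.
POLICY = {
    "payments_require_approval": lambda a: a.kind != "payment" or a.approved_by_human,
    "payment_limit_10k":         lambda a: a.kind != "payment" or a.amount <= 10_000,
    "no_db_writes":              lambda a: a.kind != "db_write",
}

def evaluate(action: Action) -> list:
    """Return the names of the rules the action violates (empty = allowed)."""
    return [name for name, rule in POLICY.items() if not rule(action)]

print(evaluate(Action("payment", amount=500, approved_by_human=True)))  # []
print(evaluate(Action("db_write")))  # ['no_db_writes']
```

In this shape, "human experts continue to oversee all activities" simply means a violation list triggers review rather than silent execution.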

Compliant design

“Many organisations, especially those in complex, highly regulated environments, want to scale agentic AI, but are held back by concerns around security, compliance and control,” says Amla.

Speaking to SiliconRepublic.com, he says policy as code can help organisations support “consistent policy interpretations” and define clear operational boundaries, subsequently ensuring agent actions are explainable, reviewable and “aligned with organisational standards”.


Amla also says the framework can help reduce costs, accelerate decision-making, eliminate errors and “power AI-native workflows within defined policy guardrails”.

“By embedding policy and regulatory requirements directly into AI agent operations, policy as code can help organisations execute AI workflows that are governed, transparent, explainable and aligned to business requirements.”

But what about the long-term applications of policy as code?

Amla says the main benefit of the process is “trust through stronger governance, better transparency, lower operational risk and more reliable AI at scale”.


“Managing agentic workflow execution in this way supports controlled and responsible deployment of policy-constrained AI agents in sectors such as financial operations, public services, supply chains and other mission-critical domains, where reliability and predictability are essential,” he explains.

Catch the drift

Over the past year, according to Amla, the biggest change he’s noticed in AI adoption is that organisations are moving beyond proofs of concept and “focusing more seriously on what it takes to make AI work in production and at scale”.

“That means more attention on infrastructure, governance, data quality and organisational readiness,” he says. “Organisations are moving from experimentation to making more strategic decisions with the experience they have gained to drive higher value outcomes and performance for their organisation, and receive a return on their investment.”

But with increased focus on serious AI integrations comes risk, particularly if an organisation is not fully prepared.


Amla warns of something called ‘agentic drift’, which refers to when an AI agent can appear reliable while working toward unwanted outcomes due to a gradual separation from the agent operator’s original intention or goal.

“Agentic drift creates pressing challenges for all organisations, but it is especially acute in the public sector and highly regulated sectors, such as banking and healthcare,” says Amla.

“In these industries, organisations cannot move from pilots to production if issues around control, trust and compliance remain unresolved. It’s clear enterprises urgently need a way to constrain what agents can do at runtime and close governance gaps long before drift leads to financial or compliance failures.”

Amla believes that policy as code can help address this issue, due to its ability to allow businesses to translate their rules and policy into machine-readable instructions that “govern how AI agents reason, adapt and act”.


“This greatly reduces the risk of agentic drift,” he says. “It also alleviates the trust and compliance concerns standing between large enterprises and a return on their AI investments.”



Sony is reportedly shutting down Dark Outlaw Games, run by former Call of Duty director


Sony is shutting down Dark Outlaw Games, a first-party game studio led by former Call of Duty producer Jason Blundell, Bloomberg’s Jason Schreier reports. Before leading Dark Outlaw Games, Blundell headed Deviation Games, an independent studio that was developing a PlayStation game before it shut down, Schreier says.

Dark Outlaw Games had yet to announce what it was working on, but considering Blundell’s experience with the Call of Duty franchise, it seems likely the studio was developing a multiplayer project for PlayStation. Blundell was a programmer and producer at Activision before making the jump to Treyarch to work on Call of Duty 3, and he contributed to multiple Call of Duty: Black Ops games after that, including serving as the director for the campaign and Zombies mode of Call of Duty: Black Ops III and the career and Zombies modes of Call of Duty: Black Ops 4.

Engadget has contacted Sony for more information about the fate of Dark Outlaw Games. We’ll update this article if we hear back.

The studio’s shutdown is being paired with cuts to staff at PlayStation focused on mobile development, according to Schreier. Sony has made a habit of laying off staff and shutting down studios in the last year, seemingly as a way to retreat from an earlier investment in online, live-service multiplayer games. The company shut down Bluepoint Games in February following attempts to get a live-service God of War game off the ground. Sony also closed Firewalk Studios after the spectacular failure of multiplayer shooter Concord in October 2024. And a year before that, Naughty Dog officially abandoned work on a standalone multiplayer version of The Last of Us in December 2023.


That leaves Sony with at least two Horizon Zero Dawn spin-offs, a co-op game from original developer Guerrilla Games and an MMO from developer NCSoft; Fairgame$, which is still in active development despite the departure of Haven Studios head Jade Raymond; Arrowhead Game Studios’ Helldivers 2; Bungie’s Destiny 2 and Marathon; and, if you really want to stretch, Gran Turismo 7. Sony clearly hasn’t given up on producing online multiplayer games, but it’s not hard to characterize its attempt to expand into the space as a disaster.


Anthropic hands Claude Code more control, but keeps it on a leash


For developers using AI, “vibe coding” right now comes down to babysitting every action or risking letting the model run unchecked. Anthropic says its latest update to Claude aims to eliminate that choice by letting the AI decide which actions are safe to take on its own — with some limits.  

The move reflects a broader shift across the industry, as AI tools are increasingly designed to act without waiting for human approval. The challenge is balancing speed with control: too many guardrails slow things down, while too few can make systems risky and unpredictable. Anthropic’s new “auto mode,” now in research preview — meaning it’s available for testing but not yet a finished product — is its latest attempt to thread that needle.

Auto mode uses AI safeguards to review each action before it runs, checking for risky behavior the user didn’t request and for signs of prompt injection — a type of attack where malicious instructions are hidden in content that the AI is processing, causing it to take unintended actions. Any safe actions will proceed automatically, while the risky ones get blocked.

It’s essentially an extension of Claude Code’s existing “dangerously-skip-permissions” command, which hands all decision-making to the AI, but with a safety layer added on top.
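Anthropic has not disclosed how its safeguards classify actions, but the control flow it describes — review each proposed action, let safe ones through, block risky ones — can be sketched generically. The marker list and function names below are toy assumptions for illustration, not Anthropic's implementation:

```python
# Hypothetical sketch of a per-action review gate in the spirit of "auto mode":
# each action an agent proposes is classified before it runs; safe actions
# proceed, risky ones are held back. The classifier here is a toy rule list —
# the real product uses AI safeguards whose criteria are not public.
RISKY_MARKERS = ("rm -rf", "curl | sh", "DROP TABLE", "git push --force")

def review(action: str) -> str:
    """Return 'allow' or 'block' for a proposed shell action."""
    if any(marker in action for marker in RISKY_MARKERS):
        return "block"
    return "allow"

def run_with_gate(actions: list) -> list:
    """Record a decision for each action; 'allow' would execute it for real."""
    log = []
    for action in actions:
        decision = review(action)
        # In a real agent, decision == "allow" would run the command here.
        log.append((action, decision))
    return log

print(run_with_gate(["ls src/", "rm -rf /tmp/cache"]))
# [('ls src/', 'allow'), ('rm -rf /tmp/cache', 'block')]
```

The interesting design question, which this toy version sidesteps, is detecting prompt injection — malicious instructions hidden in the content the model is processing rather than in the command text itself.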


The feature builds on a wave of autonomous coding tools from companies like GitHub and OpenAI, which can execute tasks on a developer’s behalf. But it takes it a step further by shifting the decision of when to ask for permission from the user to the AI itself. 

Anthropic hasn’t detailed the specific criteria its safety layer uses to distinguish safe actions from risky ones — something developers will likely want to understand better before adopting the feature widely. (TechCrunch has reached out to the company for more information on this front.)

Auto mode comes off the back of Anthropic’s launch of Claude Code Review, its automatic code reviewer designed to catch bugs before they hit the codebase, and Dispatch for Cowork, which allows users to send tasks to AI agents to handle work on their behalf.  


Auto mode will roll out to Enterprise and API users in the coming days. The company says it currently only works with Claude Sonnet 4.6 and Opus 4.6, and recommends using the new feature in “isolated environments” — sandboxed setups that are kept separate from production systems, limiting the potential damage if something goes wrong.


OpenAI Discontinues Sora Video Platform App


OpenAI is shutting down Sora, the generative-AI video creation platform it launched in December 2024. “The move is one of a number of steps OpenAI is taking to refocus on business and coding functions ahead of a potential initial public offering as soon as the fourth quarter of this year,” reports the Wall Street Journal.

CEO Sam Altman announced the changes to staff on Tuesday. “We’re saying goodbye to Sora,” the Sora Team said in a post on X. “To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. We’ll share more soon, including timelines for the app and API and details on preserving your work.”

Last week, OpenAI announced plans to combine its Atlas web browser, ChatGPT app, and Codex coding app into a singular desktop “superapp.” “We realized we were spreading our efforts across too many apps and stacks, and that we need to simplify our efforts,” said CEO of Applications, Fidji Simo. “That fragmentation has been slowing us down and making it harder to hit the quality bar we want.” This could be behind the decision to kill Sora, as the company redirects its resources and top talent towards productivity tools that benefit both enterprises and individual users.


This version of the Kindle Scribe Colorsoft is quite hard to get hold of


A few months after its initial launch, Amazon has unveiled the Kindle Scribe Colorsoft in a fetching new Fig shade that has proved especially popular.

In fact, the Fig-colour Kindle Scribe Colorsoft is so popular that it is becoming increasingly difficult to get hold of, with shipping delays stretching well beyond the typical delivery windows we’d expect from Amazon.

At the time of writing, new orders for the Fig iteration in the US are expected to arrive anywhere between mid-April and mid-May. However, the standard Graphite finish is still in stock in the US, which suggests the issue only affects the newer colour option rather than the entire product line.

Such differences in availability often point to supply constraints or production adjustments, particularly when a new finish launches after the initial release and demand shifts toward the latest variant.

[Image: Kindle Scribe Colorsoft in Fig. Credit: Amazon]

It’s worth noting that at the time of writing, neither the Fig nor the Graphite Kindle Scribe Colorsoft has officially launched in the UK. Neither iteration is available to pre-order; the product page simply states the e-reader is “coming soon”. Instead, you can opt in to an email notification for when the product becomes available to buy.

Delays highlight uneven availability

The Kindle Scribe Colorsoft was initially only available in a Graphite option until Amazon recently introduced the new Fig finish, which appears to have drawn considerably higher demand than anticipated. Either that, or the Fig shade has encountered production challenges soon after release.

However, delays tied to a specific colour variant are not uncommon, as sometimes manufacturing complexity or material sourcing can affect certain finishes differently than standard models.

In addition, the extended wait times also suggest that supply has not yet caught up with demand, especially as colour e-paper devices remain a relatively new category with more limited production scale compared to traditional e-readers.

[Image: Kindle Scribe Colorsoft in Graphite. Credit: Amazon]

Essentially, customers are left choosing between faster delivery by opting for the Graphite version, or waiting considerably longer to nab the Fig iteration instead.


Same hardware, different buying experience

Both versions of the Kindle Scribe Colorsoft share the same core hardware, including an 11-inch colour e-paper display based on Kaleido 3 technology, which combines standard black-and-white clarity with lower-resolution colour output.

The device also integrates a redesigned front-light system and a textured display surface that improves writing feel, placing it closer to digital notebooks than traditional e-readers focused only on reading.


Storage options and connectivity remain consistent across variants, with support for Wi-Fi, Bluetooth audio, and bundled stylus input, which reinforces that the delay relates to availability rather than product capability.

Amazon has not provided a detailed explanation for the extended shipping times on the Fig model, but current delivery estimates suggest that availability may stabilise later in the spring.


If you are exploring other options, our Best Kindle 2026 roundup highlights the top-performing e-readers available today.


Arm Unveils New AGI CPU With Meta As Debut Customer


Arm unveiled its first self-developed data center chip, the AGI CPU, designed for handling agentic AI workloads. The new chip was built in partnership with Meta and manufactured by TSMC. Other customers for the new chip include OpenAI, Cloudflare, SAP, and SK Telecom. Reuters reports: The new chip, called the AGI CPU, will address data-crunching needed for a specific type of AI that is able to act on behalf of users with minimal oversight, instead of responding to queries as part of a chatbot. For years, Arm, majority-owned by Japan’s SoftBank Group, has relied only on intellectual property for revenue, licensing its designs to companies such as Qualcomm and Nvidia and then collecting a royalty payment based on the number of units sold.

“It’s a very pivotal moment for the company,” CEO Rene Haas said in an interview with Reuters. The new chip will be overseen by Mohamed Awad, head of the company’s cloud AI business, and Arm has additional designs in the works that it plans to release at 12- to 18-month intervals. TSMC is fabricating the device on its 3-nanometer technology; the chip is made from two distinct pieces of silicon that operate as a single unit. Arm plans to put it into volume production in the second half of this year and has already received test chips that function as expected. In addition to the chip itself, Arm is working with server makers such as Lenovo and Quanta Computer to offer complete systems.

Tech

I Wish More Movies Made 3D-Printable Models Like Project Hail Mary

Published

on

If you haven’t watched Project Hail Mary yet, you should. Try to watch it on the largest screen possible. It’s beautiful, heart-warming and fun for any audience. I’ve been obsessed with it since I listened to the audiobook narrated by Ray Porter, and the cinema version doesn’t disappoint.

Movies like this help inspire people to be scientists and explorers, and to look for the good in people. It shows that no matter who you are, you can change the world.

That’s my mini review, but not the real reason for this article. Project Hail Mary has done something that makes me, a 3D-printing maker, happy, happy, happy. If you visit the Project Hail Mary website and look in the bottom-right corner, you can download a 3D model of a stylized spaceman used in the movie.

I’ll try not to spoil anything, but the little spaceman is given to the main character, Ryland Grace, to help him visualize the ideas that his companion is trying to portray. It’s a beautiful little model, and not the first time a company has done something like this to promote a movie.

Horror popcorn bucket

James Bricknell/CNET

Many years ago, Paramount released a 3D model from Transformers: Rise of the Beasts. This year, Markiplier created a haunted 3D-printed popcorn bucket that you could actually take to the theater and get a free popcorn order. It was gross and cool at the same time, but unique enough that a lot of people enjoyed making it.

My hope is that more movie studios will realize how well these files are received by the maker community and keep giving us more. A lot of the models VFX designers create can be converted into 3D-printable models with ease, and in the case of Project Hail Mary, this file was almost certainly a 3D-printed prop anyway. They have the file, so why not share it with the world?

While we didn’t have any Xenonite around to 3D print with, we did have some lovely silver silk PLA to make this fancy little spaceman. Printing it on the fantastic Bambu Lab H2D was a breeze with some supports as needed. The pattern of the model makes it look so surreal and gives it an alien quality that really makes it stand out. Print time was around four hours using PLA.

My next project after this is to print the same model in iron-filled filament from Protopasta and let it rust to really make it feel otherworldly.

Project Hail Mary is something of a cultural phenomenon right now, and rightfully so. Offering downloadable 3D models directly from the studio has earned the film a little extra advertising from a group of people who are very likely to love a deep sci-fi movie and share what they’ve made with the world. Let’s hope more movie studios see how successful this is and jump on the idea, too.


Tech

iOS 18.7.7, macOS 15.7.5 updates fix kernel memory leaks & WebKit flaws

Published

on

Apple pushed out a coordinated round of security updates on March 24, covering older versions of iOS, iPadOS, and macOS that are still widely used and still need protecting.

[Image: iPad Pro]

The updates include iOS 18.7.7, iPadOS 18.7.7, macOS Sequoia 15.7.5, and macOS Sonoma 14.8.5. They close a long list of vulnerabilities across core parts of the system, from networking to the kernel.

On iPhone and iPad, the fixes cut across everything from low-level system components to user-facing frameworks. Some bugs could let an app access sensitive user data, while others could crash processes or expose internal system state.


Tech

ALPR Tech Now Preventing Parents From Enrolling Their Kids In School

Published

on

from the making-being-awful-more-efficient dept

All the people who have always brushed off concerns about surveillance tech, please come get your kids. And then let someone else raise them.

Lots of people are fine with mass surveillance because they believe the horseshit spewed by the immediate beneficiaries of this tech: law enforcement agencies that claim every encroachment on your rights might (MIGHT!) lead to the arrest of a dangerous criminal.

Running neck and neck with government surveillance state enthusiasts are extremely wealthy Americans. When they’re not adding new levels of surveillance to the businesses they own, they’re scattering cameras all around their gated communities and giving cops unfettered access to any images these cameras record.

Here’s how it plays out at the ground level: parents can’t get their kids enrolled in the nearest school because of surveillance tech. In one recent case, license plate reader data was used to deny enrollment because the data collected claimed the parent didn’t actually reside in the school district.

Just over a year ago, Thalía Sánchez became the proud owner of a home in Alsip. She decided to leave the bustle of the city for a quiet neighborhood setting and the best possible education for her daughter.

However, to this day, despite providing all required paperwork including her driver’s license, utility bills, vehicle registration, and mortgage statement, the Alsip Hazelgreen Oak Lawn School District 126 has repeatedly denied her daughter’s enrollment.

Why would the district do this? Well, it’s apparently because it has decided to trust the determinations made by its surveillance tech partner, rather than documents actually seen in person by the people making these determinations.

According to the school district, her daughter’s new student enrollment form was denied due to “license plate recognition software showing only Chicago addresses overnight” in July and August. In an email sent to Sánchez in August, the school district told her, “Although you are the owner on record of a house in our district boundaries, your license plate recognition shows that is not the place where you reside.”   

But that’s obviously not true. Sánchez says the only reason plate reader data would have shown her car as “staying” in Chicago was that she lent it to a relative during that time period. The school insists this data is enough to overturn the documents she’s provided because… well, it doesn’t really say. It just claims it “relies” on this information gathering to determine residency for students.
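To make the failure mode concrete, here is a minimal, hypothetical Python sketch of how an ALPR "pattern of life" residency check might work. The function name, the 10 pm–6 am overnight window, and the sample sightings are my own illustrative assumptions, not Thomson Reuters Clear's actual logic.

```python
# Hypothetical sketch of an overnight-residency inference from license
# plate reader sightings. Illustrative only; not Clear's real algorithm.
from collections import Counter
from datetime import datetime

def overnight_city(sightings, night_start=22, night_end=6):
    """Return the city where a plate is most often seen overnight.

    sightings: list of (ISO timestamp, city) tuples from plate readers.
    Returns None if there are no overnight sightings at all.
    """
    counts = Counter()
    for ts, city in sightings:
        hour = datetime.fromisoformat(ts).hour
        # Count only sightings in the overnight window (10 pm - 6 am).
        if hour >= night_start or hour < night_end:
            counts[city] += 1
    return counts.most_common(1)[0][0] if counts else None

# A car lent to a relative in Chicago dominates the overnight record,
# even though the owner actually lives in Alsip.
sightings = [
    ("2025-07-03T23:10:00", "Chicago"),
    ("2025-07-10T01:45:00", "Chicago"),
    ("2025-07-18T14:20:00", "Alsip"),   # daytime sighting, ignored
    ("2025-08-02T23:55:00", "Chicago"),
]
print(overnight_city(sightings))  # → Chicago
```

The sketch shows why a single confounder, like a lent car, flips the inferred "residence" entirely: the one daytime sighting at the real home carries no weight, which is exactly the mistake the district made here.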

All of this can be traced back to Thomson Reuters, which apparently has branched out into the AI-assisted, ALPR-enabled business of denying enrollment to students based on assumptions made by its software.

Here’s what little there is of additional information, as obtained by The Register while reporting on this case:

Thomson Reuters Clear, which more broadly is an AI-assisted records investigation tool, has a page dedicated to its application for school districts. It sells Clear as a tool for residency verification, claiming that it can “automate” such tasks with “enhanced reliability,” and can take care of them “in minutes, not months.” 

One of the particular things the Clear page notes is its ability to access license plate data “and develop pattern of life information” that helps identify whether those who are claiming they’re residents for the sake of getting a kid enrolled in school are lying or not. 

Thomson Reuters does not specify where it gets its license plate reader data and did not respond to questions.

We’ll get to the highlighted sentence in a moment, but let’s just take a beat and consider how creepy and weird this Thomson Reuters promotional pitch is:

The text reads:

Gain deeper insights into mismatched data to support meaningful conversations with families and ensure students are where they need to be. Identify where cars have been seen to establish pattern of life information.

No one expects a law enforcement agency to do this (at least without a warrant or reasonable suspicion), much less a school district. Government agencies shouldn’t have unfettered access to “pattern of life” information just because. It’s not like the people being surveilled here are engaged in criminal activity. They’re just trying to make sure their kids receive an education. And while there will always be people who game the system to get their kids into better schools, that’s hardly justification for subjecting every enrolling student’s family to expansive surveillance-enabled background checks.

And while Thomson Reuters (and the district itself) has refused to comment on the source of its plate reader data, it can safely be assumed that it’s Flock Safety. Flock Safety has never shown any concern about who accesses the data it compiles, much less why they choose to do it. Flock is swiftly becoming the leading provider of ALPR cameras, and given its complete lack of internal or external oversight, it’s more than likely that it’s feeding this data to third parties like Thomson Reuters that are willing to pay a premium for data that simply can’t be had elsewhere.

We’re not catching criminals with this tech. Sure, it may happen now and then. But the real value is repeated resale of “pattern of life” data to whoever is willing to purchase it. That’s a massive problem that’s only going to get worse… full stop.

Filed Under: alpr, chicago, license plate readers, surveillance, wtaf

Companies: flock, flock safety, thomson reuters

