
Screens in Schools: What the New Screen-Time Debate Means for Educators

The screen-time debate is no longer confined to parenting advice. As states introduce legislation limiting devices in schools, and pediatric researchers rethink how digital environments affect development, educators are confronting a difficult question: when does technology support learning, and when does it undermine it?

In the first part of this series, I examined the American Academy of Pediatrics’ updated guidance on children’s digital ecosystems and how screens can shape early development at home. The same principles now apply in another place where children spend much of their day: school.

Screens are already a routine part of early childhood classrooms. In a 2025 RAND survey of pre-K teachers, roughly two-thirds reported using games on electronic devices in their classrooms. At the same time, a growing body of research is raising new questions about how different types of digital media affect children’s developing brains.

One frequently cited Canadian longitudinal study followed nearly 2,500 children between 24 and 36 months old and found that higher levels of screen time were associated with missed developmental milestones on screening tests at ages 36 to 60 months. That means that we’re seeing the developmental effects of increased toddler screen time as early as one year later.

Other studies suggest that certain types of media may be particularly overstimulating for young children. Fast-paced content designed to capture attention usually features rapid scene changes, constant motion, bright colors and loud sound effects. I love shows like Netflix’s “Word Party” for the language acquisition skills it teaches, but its features can overwhelm developing brains and temporarily disrupt executive functions such as attention, emotional regulation and self-control (ask me how I know).

These design features are meant to hold viewers’ attention, but the result can sometimes be what many parents recognize instantly: the moment when their sweet child suddenly turns into what I jokingly call a “screen monster.” I have three of them. I can’t imagine a classroom full of screen monsters.

As new technology becomes even more embedded in our lives, screens have become more pervasive in both homes and classrooms. And because technology changes so frequently, it’s helpful for educators to understand how instructional technology choices can either support or disrupt healthy digital environments for students.

I know this tension well, both as a parent and as a behavioral science and public health researcher. In the first part of this column series, I wrote about how screens have both helped and challenged my own family as we navigated parenting during the pandemic. Like most parents and teachers, we are still figuring it out. I’ve written previously about how short-form video addiction has made its way to Gen Z and Gen Alpha. And I recently reported the results of a research project we did at EdSurge that showed that prohibiting devices doesn’t really meet its intended goal.

Devices, screens, algorithms and technology in general have mutated from a household question to an education policy issue.

The Emerging Landscape of Technology Regulation

From a public health perspective, digital media is becoming part of the broader environment shaping childhood development.

In education, conversations about technology traditionally have focused on the digital divide and ensuring equitable access to devices and internet connectivity. That conversation is shifting.

Researchers are now examining how digital environments affect sleep, attention, emotion regulation and social development. Population-level research suggests that heavy or poorly designed media exposure can contribute to sleep disruption, emotional dysregulation and difficulty disengaging from devices. Remember, screen monsters are lurking with their snotty noses and sippy cups.

Now, these concerns are beginning to influence policy.

Across several states, lawmakers are proposing restrictions on student device usage during the school day, including bans on smartphones and new scrutiny of edtech that uses personalized algorithms to maximize engagement. Since many edtech companies have enhanced or marketed their AI-powered features, the competition to capture and hold students’ attention has likely stiffened.

This is a significant shift. Historically, digital technology, social media and the Internet have been among the least regulated environments, with arguably some of the greatest effects on both children’s and adults’ lives. Technological change often moves faster than public policy and data, leaving lawmakers and educators to respond after new tools become widespread.

Now the regulatory landscape appears to be catching up and entering the environments children already inhabit.

So What Should Educators Do?

What started as a deeply personal parenting dilemma has become a much larger question for schools. As pediatric researchers update guidance on children’s digital environments, and states debate limits on student screen exposure, educators are being asked to reconsider how technology shapes the cognitive environments where children learn.

The debate often falls into extremes. Some people argue that screens are ruining learning. Others claim that technology is the future of education.

The research suggests that the truth lies somewhere in the middle.

This is one of those test questions where “all of the above” fits best. How screens affect children depends heavily on context, content and duration of use. A passive, fast-paced digital experience is very different from an interactive lesson where students discuss ideas, solve problems or collaborate with peers.

It can be tempting to respond to uncertainty by rejecting technology altogether. And I don’t fault that perspective, because I believe that response comes from a desire to protect kids from unpredictable harm. But the reality is that there is no one-size-fits-all approach for every child, classroom, school or community.

Public health offers a useful framework for thinking about this challenge: harm reduction.

When an exposure is widespread and difficult to eliminate, reducing risk is often more effective than banning it outright. We didn’t ban vehicles to reduce vehicular accidents; instead, seatbelts and car seats made riding in cars and buses safer. That’s a classic harm-reduction strategy.

Similarly, screens are unlikely to disappear from classrooms. The more productive question is how educators can create guardrails that reduce potential harms while preserving the benefits of digital tools. I think students would keep using devices, anyway. What’s school without TikTok dances nowadays?

That means choosing technology that supports interaction rather than passive consumption, and balancing digital activities with discussion and hands-on learning. Personalized algorithms are becoming more common in edtech, but the science suggests it’s best to avoid tools designed primarily to maximize screen engagement.

As states debate new regulations on student screen exposure, educators and school leaders will increasingly be asked to make decisions about how technology shapes the environments where children learn.

The research offers a useful starting point: children’s brains learn best through interaction, conversation, manageable stimulation, productive struggle, and moments of curiosity that make ideas stick.

Technology can support those experiences. But it cannot and will not replace the relationships between students and the adults who teach and care for them.

The real question for schools is not whether screens belong in classrooms, but whether they help students think, or simply keep them clicking and scrolling.

What’s new with the instant camera?

Fujifilm has recently unveiled the latest addition to its instant camera range: the aptly named Instax Mini 13.

As the Fujifilm Instax Mini 12 has a spot on our best instant cameras list, are there enough improvements with the Mini 13 to warrant an upgrade? Or is the Mini 12 still a great choice for many?

We’ve compared the specs of the Fujifilm Instax Mini 13 to the Mini 12 and noted all the key differences between the instant cameras below. Keep reading to see what’s new with the Mini 13 and to decide whether or not you should upgrade.

For more of an overview, we’ve also rounded up a list of the best cameras we’ve reviewed recently. 

Price and Availability

At the time of writing, Fujifilm is yet to provide an exact launch date for the Instax Mini 13, and instead has promised the instant camera will be available “in or around late June 2026”. Its current MSRP is £79/€89.99/$93.95.

In comparison, the Fujifilm Instax Mini 12 is readily available to purchase now and has an RRP of around £79.99/$94. Having said that, it is possible to nab the instant camera with a decent price drop.

Instax Mini 13 includes a self-timer

One of the main new additions to the Instax Mini 13 is the inclusion of a self-timer. The timer is fitted with an LED lever that allows you to switch between a two-second and a ten-second countdown. The shorter two-second timer is designed for capturing hands-free selfies with reduced blur, while the ten-second alternative enables easier group shots and different angles.

Self timer on Instax Mini 13. Image Credit (Fujifilm)

As mentioned, this is a brand new addition to the Mini 13, so the Mini 12 unfortunately lacks this tool. Even so, it’s still worth noting that we found the Mini 12 to be easy to use, thanks to its handful of buttons and features.

Both feature a selfie mirror and close-up mode

If you’re coming from an older Instax Mini, then you’ll be pleased to know that both the Mini 13 and Mini 12 are fitted with built-in selfie mirrors at their respective fronts. It’s a great addition that allows you to check whether everyone is in the frame before potentially wasting a precious print.

Not only that, but both cameras also benefit from Close-Up Mode which is enabled by twisting the lens twice. Essentially, Close-Up Mode could also be classed as “selfie” mode, and ensures the main subject is captured right in the centre.

Instax Mini 12. Image Credit (Trusted Reviews)

Speaking of similarities, it’s also worth noting that both the Mini 13 and Mini 12 have automatic lighting adjustment and promise to print a photo in just five seconds and have it develop within 90 seconds.

Instax Mini 13 has new film

Alongside the launch of the Instax Mini 13, Fujifilm has also revealed a couple of new additions and updates to its existing line-up. Firstly, the Instax Up! smartphone app will now integrate AI to increase image scanning precision, thanks to an update to its “overall learning capability”. According to Fujifilm, this is promised to recognise images over backgrounds for “more precise scans” overall.

In addition, Fujifilm is also introducing a new Pastel Galaxy-themed film roll which includes sparkly, gloss embellishments and more colours too. This will be available by “late June 2026” with an MSRP of €9.99.

Although both of these new additions are being introduced alongside the Instax Mini 13, the film and smartphone app updates will also be supported by the Instax Mini 12.

Instax Mini 12 photos. Image Credit (Trusted Reviews)

Instax Mini 13 includes a camera angle adjustment accessory

Designed to work with the self-timer, the Instax Mini 13 comes equipped with a camera angle adjustment tool. Built into the wrist strap, the tool can be used to position the camera with a slight upward tilt – negating the need for a tripod or any additional equipment.

Instax Mini 13 camera angle adjustment accessory

Instax Mini 13 has more of a square design

Although at first glance you’d be forgiven for not noticing a huge design difference between the two, there are a few things to consider. Firstly, although both are undoubtedly portable, it’s fair to say that neither is quite a pocket-friendly camera to whip out in a flash. If that’s something you’d prefer, then we’d recommend the Instax Mini Evo instead.

Instax Mini 13. Image Credit (Fujifilm)

Otherwise, alongside the addition of the timer lever at its side, the Mini 13 also has more of a uniform rounded shape compared to the Mini 12. Either way, both cameras are compact and come in a choice of five pastel colours too.

Early Verdict

With the addition of a self-timer, a rounder and more uniform design and the inclusion of the camera angle adjustment accessory on its wrist strap, the Instax Mini 13 looks set to be a brilliant instant camera – especially if you’re coming from an older model.

However, whether you really need to upgrade from the Instax Mini 12 is still up for debate as, although the Mini 12 may lack the self-timer, it still sports Close-Up Mode, automatic light and flash control and speedy photo printing too. We’ll be sure to update this comparison once we review the Instax Mini 13.

Epic cuts 1,000+ jobs amid financial struggles, seeks half-billion-dollar cost savings

Sweeney also pointed to industry-wide changes including slower growth, weaker spending on games and consoles, tougher cost economics, and new forms of entertainment competing for gamers’ attention as additional factors hurting Epic’s business.

Embedding compliance in AI adoption

Kyndryl’s Ismail Amla discusses the company’s new policy as code process, and how it can help address AI issues such as agentic drift.

When it comes to AI adoption in enterprise, compliance concerns are becoming ever more important.

According to Kyndryl’s most recent Readiness Report, 31pc of enterprise customers cite regulatory or compliance concerns as a primary barrier limiting their organisation’s ability to scale recent technology investments.

2026 marks an important point on the AI compliance timeline in particular, with the EU’s AI Act transparency rules coming into effect in August.

Last month, Kyndryl announced its new ‘policy as code’ capability – a process designed for creating policy-governed agentic AI workflows for enterprises.

“Policy as code is the process of translating an organisation’s rules, policies and compliance requirements into machine-readable code, so AI systems are restricted to only operating within pre-defined guardrails,” explains Ismail Amla, senior vice-president at Kyndryl Consult. “Human experts continue to oversee all activities related to these processes.”
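
To make that definition concrete, here is a minimal, hypothetical sketch (in Python) of what a policy-as-code guardrail can look like in practice: organisational rules expressed as machine-readable checks that every proposed agent action must pass before it executes. The rule names, thresholds and action fields are illustrative assumptions for this article, not Kyndryl’s actual implementation.

```python
# Hypothetical illustration of "policy as code": organisational rules are
# expressed as machine-readable checks, and an AI agent's proposed actions
# only execute if every applicable policy allows them. Names and thresholds
# are invented for illustration.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ProposedAction:
    kind: str                # e.g. "payment", "data_export", "email"
    amount: float = 0.0      # monetary value, if any
    destination: str = ""    # target system or region
    metadata: dict = field(default_factory=dict)

# A policy is a named predicate over a proposed action: True means allowed.
Policy = tuple[str, Callable[[ProposedAction], bool]]

POLICIES: list[Policy] = [
    ("payments_require_approval_above_10k",
     lambda a: not (a.kind == "payment" and a.amount > 10_000)),
    ("no_data_export_outside_eu",
     lambda a: not (a.kind == "data_export" and a.destination != "EU")),
]

def evaluate(action: ProposedAction) -> tuple[bool, list[str]]:
    """Return (allowed, violated_policy_names) for a proposed agent action."""
    violations = [name for name, rule in POLICIES if not rule(action)]
    return (not violations, violations)

if __name__ == "__main__":
    allowed, violated = evaluate(
        ProposedAction(kind="payment", amount=25_000, destination="US"))
    if allowed:
        print("Action permitted within policy guardrails")
    else:
        # In a real workflow, blocked actions would be escalated to the
        # human experts who oversee these processes, not silently dropped.
        print("Action blocked; violated policies:", violated)
```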

Compliant design

“Many organisations, especially those in complex, highly regulated environments, want to scale agentic AI, but are held back by concerns around security, compliance and control”, says Amla.

Speaking to SiliconRepublic.com, he says policy as code can help organisations support “consistent policy interpretations” and define clear operational boundaries, subsequently ensuring agent actions are explainable, reviewable and “aligned with organisational standards”.

Amla also says the framework can help reduce costs, accelerate decision-making, eliminate errors and “power AI-native workflows within defined policy guardrails”.

“By embedding policy and regulatory requirements directly into AI agent operations, policy as code can help organisations execute AI workflows that are governed, transparent, explainable and aligned to business requirements.”

But what about the long-term applications of policy as code?

Amla says the main benefit of the process is “trust through stronger governance, better transparency, lower operational risk and more reliable AI at scale”.

“Managing agentic workflow execution in this way supports controlled and responsible deployment of policy-constrained AI agents in sectors such as financial operations, public services, supply chains and other mission-critical domains, where reliability and predictability are essential,” he explains.

Catch the drift

Over the past year, according to Amla, the biggest change he’s noticed in AI adoption is that organisations are moving beyond proofs of concept and “focusing more seriously on what it takes to make AI work in production and at scale”.

“That means more attention on infrastructure, governance, data quality and organisational readiness,” he says. “Organisations are moving from experimentation to making more strategic decisions with the experience they have gained to drive higher value outcomes and performance for their organisation, and receive a return on their investment.”

But with increased focus on serious AI integrations comes risk, particularly if an organisation is not fully prepared.

Amla warns of something called ‘agentic drift’, which occurs when an AI agent appears reliable while working toward unwanted outcomes because it has gradually drifted away from the agent operator’s original intention or goal.

“Agentic drift creates pressing challenges for all organisations, but it is especially acute in the public sector and highly regulated sectors, such as banking and healthcare,” says Amla.

“In these industries, organisations cannot move from pilots to production if issues around control, trust and compliance remain unresolved. It’s clear enterprises urgently need a way to constrain what agents can do at runtime and close governance gaps long before drift leads to financial or compliance failures.”

Amla believes that policy as code can help address this issue, due to its ability to allow businesses to translate their rules and policy into machine-readable instructions that “govern how AI agents reason, adapt and act”.

“This greatly reduces the risk of agentic drift,” he says. “It also alleviates the trust and compliance concerns standing between large enterprises and a return on their AI investments.”

Sony is reportedly shutting down Dark Outlaw Games, run by former Call of Duty director

Sony is shutting down Dark Outlaw Games, a first-party game studio led by former Call of Duty producer Jason Blundell, Bloomberg’s Jason Schreier reports. Before leading Dark Outlaw Games, Blundell was the head of Deviation Games, an independent studio that was also developing a PlayStation game before it shut down, Schreier says.

Dark Outlaw Games had yet to announce what it was working on, but considering Blundell’s experience with the Call of Duty franchise, it seems likely the studio was developing a multiplayer project for PlayStation. Blundell was a programmer and producer at Activision before making the jump to Treyarch to work on Call of Duty 3, and he contributed to multiple Call of Duty: Black Ops games after that, including serving as the director for the campaign and Zombies mode of Call of Duty: Black Ops III and the career and Zombies modes of Call of Duty: Black Ops 4.

Engadget has contacted Sony for more information about the fate of Dark Outlaw Games. We’ll update this article if we hear back.

The studio’s shutdown is being paired with cuts to staff at PlayStation focused on mobile development, according to Schreier. Sony has made a habit of laying off staff and shutting down studios in the last year, seemingly as a way to retreat from an earlier investment in online, live-service multiplayer games. The company shut down Bluepoint Games in February following attempts to get a live-service God of War game off the ground. Sony also closed Firewalk Studios after the spectacular failure of multiplayer shooter Concord in October 2024. And a year before that, Naughty Dog officially abandoned work on a standalone multiplayer version of The Last of Us in December 2023.

That leaves Sony with at least two Horizon Zero Dawn spin-offs, a co-op game from original developer Guerrilla Games and an MMO from developer NCSoft; Fairgame$, which is still in active development despite the departure of Haven Studios head Jade Raymond; Arrowhead Game Studios’ Helldivers 2; Bungie’s Destiny 2 and Marathon; and if you really want to stretch, Gran Turismo 7. Sony clearly hasn’t given up on producing online multiplayer games, but it’s not hard to characterize its attempt to expand into the space as a disaster.

Anthropic hands Claude Code more control, but keeps it on a leash

For developers using AI, “vibe coding” right now comes down to babysitting every action or risking letting the model run unchecked. Anthropic says its latest update to Claude aims to eliminate that choice by letting the AI decide which actions are safe to take on its own — with some limits.  

The move reflects a broader shift across the industry, as AI tools are increasingly designed to act without waiting for human approval. The challenge is balancing speed with control: too many guardrails slow things down, while too few can make systems risky and unpredictable. Anthropic’s new “auto mode,” now in research preview — meaning it’s available for testing but not yet a finished product — is its latest attempt to thread that needle.

Auto mode uses AI safeguards to review each action before it runs, checking for risky behavior the user didn’t request and for signs of prompt injection — a type of attack where malicious instructions are hidden in content that the AI is processing, causing it to take unintended actions. Any safe actions will proceed automatically, while the risky ones get blocked.

It’s essentially an extension of Claude Code’s existing “dangerously-skip-permissions” command, which hands all decision-making to the AI, but with a safety layer added on top.
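
Anthropic hasn’t published the exact criteria its safety layer applies, so the Python sketch below is only a rough, hypothetical illustration of the general pattern the company describes: each action the model proposes is reviewed before it runs, safe actions proceed automatically, and risky or unrequested ones are blocked. The function names and heuristics are assumptions made for illustration, not Claude Code’s real logic.

```python
# Hypothetical sketch of an "auto mode"-style gate: each action the model
# proposes is reviewed before execution; safe actions run automatically and
# risky ones are blocked. This is NOT Anthropic's implementation -- the
# heuristics and names below are invented for illustration only.
from dataclasses import dataclass

@dataclass
class AgentAction:
    command: str              # shell command or tool call the model wants to run
    requested_by_user: bool   # whether the action traces back to the user's request

# Crude stand-ins for a real safety review.
RISKY_PATTERNS = ("rm -rf", "sudo ", "curl ", "git push --force")

def looks_risky(action: AgentAction) -> bool:
    """Flag destructive commands and actions the user never asked for."""
    if not action.requested_by_user:
        return True  # possible prompt injection: the request didn't come from the user
    return any(pattern in action.command for pattern in RISKY_PATTERNS)

def run_with_auto_mode(actions: list[AgentAction]) -> None:
    for action in actions:
        if looks_risky(action):
            print(f"BLOCKED (needs human approval): {action.command!r}")
        else:
            print(f"RUNNING: {action.command!r}")
            # ...actual execution of the action would happen here

if __name__ == "__main__":
    run_with_auto_mode([
        AgentAction("pytest tests/", requested_by_user=True),
        AgentAction("rm -rf /tmp/build", requested_by_user=True),
        AgentAction("curl http://example.invalid/exfiltrate", requested_by_user=False),
    ])
```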

The feature builds on a wave of autonomous coding tools from companies like GitHub and OpenAI, which can execute tasks on a developer’s behalf. But it takes it a step further by shifting the decision of when to ask for permission from the user to the AI itself. 

Anthropic hasn’t detailed the specific criteria its safety layer uses to distinguish safe actions from risky ones — something developers will likely want to understand better before adopting the feature widely. (TechCrunch has reached out to the company for more information on this front.)

Auto mode comes off the back of Anthropic’s launch of Claude Code Review, its automatic code reviewer designed to catch bugs before they hit the codebase, and Dispatch for Cowork, which allows users to send tasks to AI agents to handle work on their behalf.  

Auto mode will roll out to Enterprise and API users in the coming days. The company says it currently only works with Claude Sonnet 4.6 and Opus 4.6, and recommends using the new feature in “isolated environments” — sandboxed setups that are kept separate from production systems, limiting the potential damage if something goes wrong.

OpenAI Discontinues Sora Video Platform App

OpenAI is shutting down Sora, the generative-AI video creation platform it launched in December 2024. “The move is one of a number of steps OpenAI is taking to refocus on business and coding functions ahead of a potential initial public offering as soon as the fourth quarter of this year,” reports the Wall Street Journal.

CEO Sam Altman announced the changes to staff on Tuesday. “We’re saying goodbye to Sora,” the Sora Team said in a post on X. “To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. We’ll share more soon, including timelines for the app and API and details on preserving your work.”

Last week, OpenAI announced plans to combine its Atlas web browser, ChatGPT app, and Codex coding app into a singular desktop “superapp.” “We realized we were spreading our efforts across too many apps and stacks, and that we need to simplify our efforts,” said CEO of Applications, Fidji Simo. “That fragmentation has been slowing us down and making it harder to hit the quality bar we want.” This could be behind the decision to kill Sora as the company redirects its resources and top talent towards productivity tools that benefit both enterprises and individual users.

This version of the Kindle Scribe Colorsoft is quite hard to get hold of

A few months after its initial launch, Amazon has recently unveiled the Kindle Scribe Colorsoft in a fetching new Fig shade that’s proved especially popular.

In fact, the Fig-colour Kindle Scribe Colorsoft is so popular that it’s becoming increasingly difficult to get our hands on the e-reader, with shipping delays stretching well beyond the typical delivery windows we’d expect from Amazon.

At the time of writing, new orders for the Fig iteration in the US are expected to arrive anywhere between mid-April and mid-May. However, you can get your hands on the standard Graphite finish, which is currently still in stock in the US. This suggests that the issue really only affects the newer colour option, rather than the entire product line.

Such differences in availability often point to supply constraints or production adjustments, particularly when a new finish launches after the initial release and demand shifts toward the latest variant.

Kindle Scribe Colorsoft in Fig. Image Credit (Amazon)

It’s worth noting that at the time of writing, neither the Fig nor the Graphite Kindle Scribe Colorsoft has officially launched in the UK. In addition, neither iteration is even available to pre-order, as the product page just states the e-reader is “coming soon”. Instead, you can opt in to receive an email notification when the product becomes available to buy.

Delays highlight uneven availability

The Kindle Scribe Colorsoft was initially only available in a Graphite option until Amazon recently introduced the new Fig finish, which appears to have drawn considerably higher demand than anticipated. Either that, or the Fig shade has encountered production challenges soon after release.

However, delays tied to a specific colour variant are not uncommon, as sometimes manufacturing complexity or material sourcing can affect certain finishes differently than standard models.

In addition, the extended wait times also suggest that supply has not yet caught up with demand, especially as colour e-paper devices remain a relatively new category with more limited production scale compared to traditional e-readers.

Kindle Scribe Colorsoft in Graphite

Essentially, customers are left choosing between faster delivery by opting for the Graphite version, or waiting considerably longer to nab the Fig iteration instead.


Same hardware, different buying experience

Following on from the above, it’s worth noting that both versions of the Kindle Scribe Colorsoft share the same core hardware, including an 11-inch colour e-paper display based on Kaleido 3 technology, which combines standard black-and-white clarity with lower-resolution colour output.

The device also integrates a redesigned front-light system and a textured display surface that improves writing feel, placing it closer to digital notebooks than traditional e-readers focused only on reading.

Storage options and connectivity remain consistent across variants, with support for Wi-Fi, Bluetooth audio, and bundled stylus input, which reinforces that the delay relates to availability rather than product capability.

Amazon has not provided a detailed explanation for the extended shipping times on the Fig model, but current delivery estimates suggest that availability may stabilise later in the Spring.

If you are exploring other options, our Best Kindle 2026 roundup highlights the top-performing e-readers available today.

Arm Unveils New AGI CPU With Meta As Debut Customer

Arm unveiled its first self-developed data center chip, the AGI CPU, designed for handling agentic AI workloads. The new chip was built in partnership with Meta and manufactured by TSMC. Other customers for the new chip include OpenAI, Cloudflare, SAP, and SK Telecom. Reuters reports: The new chip, called the AGI CPU, will address data-crunching needed for a specific type of AI that is able to act on behalf of users with minimal oversight, instead of responding to queries as part of a chatbot. For years, Arm, majority-owned by Japan’s SoftBank Group, has relied only on intellectual property for revenue, licensing its designs to companies such as Qualcomm and Nvidia and then collecting a royalty payment based on the number of units sold.

“It’s a very pivotal moment for the company,” CEO Rene Haas said in an interview with Reuters. The new chip will be overseen by Mohamed Awad, head of the company’s cloud AI business, and Arm has additional designs in the works that it plans to release at 12- to 18-month intervals. TSMC is fabricating the device on its 3-nanometer technology, and the device is made from two distinct pieces of silicon that operate as a single chip. Arm plans to put it into volume production in the second half of this year and has already received test chips that function as expected. In addition to the chip itself, Arm is working with server makers such as Lenovo and Quanta Computer to offer complete systems.

I Wish More Movies Made 3D-Printable Models Like Project Hail Mary

If you haven’t watched Project Hail Mary yet, you should. Try to watch it on the largest screen possible. It’s beautiful, heart-warming and fun for any audience. I’ve been obsessed with it since I listened to the audiobook narrated by Ray Porter, and the cinema version doesn’t disappoint.

Movies like this help inspire people to be scientists and explorers, and to look for the good in people. They show that no matter who you are, you can change the world.

That’s my mini review, but not the real reason for this article. Project Hail Mary has done something that makes me, a 3D-printing maker, happy, happy, happy. If you visit the Project Hail Mary website and look in the bottom-right corner, you can download a 3D model of a stylized spaceman used in the movie.

I’ll try not to spoil anything, but the little spaceman is given to the main character, Ryland Grace, to help him visualize the ideas that his companion is trying to portray. It’s a beautiful little model, and not the first time a company has done something like this to promote a movie.

Horror popcorn bucket. Image Credit (James Bricknell/CNET)

Many years ago, Paramount released a 3D model from Transformers: Rise of the Beasts. This year, Markiplier created a haunted 3D-printed popcorn bucket that you could actually take to the theater and get a free popcorn order. It was gross and cool at the same time, but unique enough that a lot of people enjoyed making it.

My hope is that more movie studios will realize how well these files are received by the maker community and keep giving us more. A lot of the models VFX designers create can be converted into 3D-printable models with ease, and in the case of Project Hail Mary, this file was almost certainly a 3D-printed prop anyway. They have the file, so why not share it with the world?

While we didn’t have any Xenonite around to 3D print with, we did have some lovely silver silk PLA to make this fancy little spaceman. Printing it on the fantastic Bambu Lab H2D was a breeze with some supports as needed. The pattern of the model makes it look so surreal and gives it an alien quality that really makes it stand out. Print time was around four hours using PLA.

My next project after this is to print the same model in Iron filament from Protopasta and let it rust to really make it feel otherworldly.

Project Hail Mary is something of a cultural phenomenon right now, and rightfully so. Offering downloadable 3D models directly from the studio has added a little more advertising from a group of people who are very likely to love a deep sci-fi movie and share what they’ve made with the world. Let’s hope more movie studios see how successful this is and jump on the idea, too.

iOS 18.7.7, macOS 15.7.5 updates fix kernel memory leaks & WebKit flaws

Apple pushed out a coordinated round of security updates on March 24, covering older versions of iOS, iPadOS, and macOS that are still widely used and still need protecting.

iPad Pro

The updates include iOS 18.7.7, iPadOS 18.7.7, macOS Sequoia 15.7.5, and macOS Sonoma 14.8.5. They close a long list of vulnerabilities across core parts of the system, from networking to the kernel.
On iPhone and iPad, the fixes cut across everything from low-level system components to user-facing frameworks. Some bugs could let an app access sensitive user data, while others could crash processes or expose internal system state.
