Thermoforming is the process of softening a material enough that it can be coaxed into a new shape, with the source of the thermal energy not being particularly relevant. Accordingly, after [Zion Brock]’s recent video on his journey into thermoforming PLA with a mold and a heat gun, he got many comments suggesting that he should use hot water instead.
We covered his previous video as well, in which he goes through the design steps of making these grilles for a retro-styled, 3D printed radio. The thermoforming method enables him to shape the curvy grille with a heat gun and a two-piece mold in a matter of minutes, rather than spending many more hours printing and removing supports.
In theory, using hot water instead of hot air would provide a more even application of heat, but putting your hands into 70°C water does require some extra precautions. There’s also the issue that PLA is very hygroscopic, so the part requires drying afterwards to prevent accelerated hydrolysis. Due to the more even heating, the edge of the PLA that was clamped into the mold also softened significantly, causing it to pop out of the mold and requiring a small design modification to prevent this.
Basically, aqua-thermoforming like this has many advantages, as it’s slower and more consistent, but it’s less straightforward to use than hot air. This makes both useful tools to have when you’re looking at doing some thermoforming.
Well, almost anyone. Not every person owns a phone that can support live translation, or has the time or bandwidth to install an app (and maybe commit to a subscription).
T-Mobile wants to remove any obstacles that stand in the way of you talking to someone on a phone call. It has introduced Live Translation, an upcoming calling feature that begins testing in the spring and puts language translation at the network level. So even if you own a basic dumb phone, you can talk with someone who speaks one of over 50 languages with the help of T-Mobile’s network AI agent.
Registration is now open for a beta of Live Translation to subscribers of any post-paid T-Mobile plan, such as the Essentials, Experience More, Experience Beyond and Better Value plans.
“We want to make voice cool again,” said John Saw, T-Mobile president of technology and chief technology officer, citing that its customers make 6 billion international calls per year, and 40% of those people travel internationally. “Live translation is a real breakthrough in innovation by introducing the latest AI models into our voice network.”
Just as it did during the beta of what became the T-Satellite service, T-Mobile has not yet decided which plans will include the live translation calling feature. It also hasn’t decided what, if any, cost there will be. T-Satellite is currently included in the Experience Beyond and Better Value plans and available on other plans as a $10 add-on. It’s also open to customers of other providers for $10 a month.
I haven’t tried T-Mobile’s live translation but I look forward to testing it soon.
How live translation will work
You have to dial *87* to turn on T-Mobile’s live translation calling tool.
Kevin Heinz/CNET
To turn on live translation during a call, the T-Mobile subscriber presses *87* (star-eight-seven-star), which activates the AI agent. Only one participant on the call needs to be a T-Mobile subscriber, and it will also work when the customer is roaming.
T-Mobile says there’s no setup, no voice training and no need to specify which languages to translate. The AI agent detects which languages are being spoken in real time and speaks the translation when a person stops speaking.
The AI agent will also detect whether you’re calling from another country and select a language for the translation. If you call someone in Brazil, it might choose Portuguese, for example. If the person speaks a different language, such as Spanish instead of Brazilian Portuguese, the agent will switch immediately.
Also, the spoken translation will not sound like a robotic voice. “Our AI model can actually clone your voice in another language and preserve the intonation, the emotions and the rhythm as well,” all picked up on the fly, said Saw. He attributes the performance to the low latency inherent in T-Mobile’s 5G Advanced network.
Once activated, the feature doesn’t need to be turned off. If both speakers switch to the same language, the AI agent just stops working as the go-between.
The true test will be the quality of the translations. “We have done a lot of benchmarks for AI-powered translations,” Saw said, “and it matches the accuracy of all the established services.” He said the model is compliant with FCC 2027 captioning guidelines and meets all ADA accessibility standards.
When I asked Saw whether conversations are recorded, even during the beta period, he said that kind of fine-tuning is being done using millions of internal-only test calls. “We don’t listen to customers’ calls, and [the AI models] are not trained on customers’ data,” said Saw, noting that the service meets all FCC guidelines for privacy.
Exactly which AI translation models are being used, or which partner companies are providing them, is something Saw declined to share. He did confirm that T-Mobile is working with several AI companies, but “we’re not going to name them because we love them all the same.”
Saw noted that because T-Mobile’s network is designed as a platform, the company can plug in updated AI translation models, run an upgrade overnight, and make the result available to hundreds of millions of phones.
Live translation is just the first T-Mobile agentic AI feature
Without pointing to specific upcoming strategies, Saw named a few other tasks that AI agents could handle in the future, such as an AI receptionist or AI concierge. Centering the AI technology in the network opens up those possibilities.
So why is the company choosing live translation as the first entry for AI-based, customer-facing network features?
“Live translation is not an easier solution to do,” Saw replied, “but it’s the right pain point to be solving today.”
Meta has secured a patent describing an AI system that could continue the social media presence of deceased or long-term inactive users. Let’s just say the reaction online has been swift.
Originally filed in 2023 and granted in late 2025, the patent outlines a system that would use a large language model to analyse a person’s past posts, messages, comments and interactions. From there, it could replicate their tone, writing style and communication patterns. Consequently, this would allow the account to continue posting and responding in a way designed to feel authentic.
The proposal doesn’t stop at text. According to the filing, the system could also simulate voice, video and even phone calls. Thus, it would effectively create a digital avatar capable of interacting independently.
While the patent covers scenarios involving extended inactivity — such as influencers taking a break but wanting to maintain audience engagement — much of the backlash has centred on its potential use after death.
Meta says it has no current plans to implement the system. As with many patents, the filing appears to be about protecting future possibilities rather than signalling an imminent product launch. Still, the idea has reignited concerns around digital identity and consent.
Key questions remain unresolved. Who would control AI-generated posts after someone’s death? How would personality rights be protected? And what are the psychological implications of interacting with a digital version of someone who has passed away?
On Reddit, users were quick to criticise the concept, describing it as “dystopian” and “immoral,” with several drawing comparisons to the Black Mirror episode Be Right Back, which explored a similar premise years ago. What once felt speculative now appears technically feasible. Nonetheless, it’s far from becoming mainstream.
Whether Meta ever turns this into a real feature is unclear. What is clear is that the conversation around digital legacy, AI replication and online identity isn’t slowing down anytime soon.
The Department of Homeland Security struck a $1 billion purchasing agreement with Palantir last week, further reinforcing the software company’s role in the federal agency that oversees the nation’s immigration enforcement.
According to contracting documents published last week, the blanket purchase agreement (BPA) awarded “is to provide Palantir commercial software licenses, maintenance, and implementation services department wide.” The agreement simplifies how DHS buys software from Palantir, allowing DHS agencies like Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) to essentially skip the competitive bidding process for new purchases of up to $1 billion in products and services from the company.
Palantir did not immediately respond to a request for comment.
Palantir announced the agreement internally on Friday. It comes as the company is struggling to address growing tensions among staff over its relationship with DHS and ICE. After Minneapolis nurse Alex Pretti was shot and killed in January, Palantir staffers flooded company Slack channels demanding information on how the tech they build empowers US immigration enforcement. Since then, the company has updated its internal wiki, offering few unreported details about its work with ICE, and Palantir CEO Alex Karp recorded a video for employees where he attempted to justify the company’s immigration work, as WIRED reported last week. Throughout a nearly hourlong conversation with Courtney Bowman, Palantir’s global director of privacy and civil liberties engineering, Karp failed to address direct questions about how the company’s tech powers ICE. Instead, he said workers could sign nondisclosure agreements for more detailed information.
Akash Jain, Palantir’s chief technology officer and president of Palantir US Government Partners, which works with US government agencies, acknowledged these concerns in the email announcing the company’s new agreement with DHS. “I recognize that this comes at a time of increased concern, both externally and internally, around our existing work with ICE,” Jain wrote. “While we don’t normally send out updates on new contract vehicles, in this moment it felt especially important to provide context to help inform your understanding of what this means—and what it doesn’t. There will be opportunities we run toward, and others we decline—that discipline is part of what has earned us DHS’s trust.”
In the Friday email, Jain suggests that the five-year agreement could allow the company to expand its reach across DHS into agencies like the US Secret Service (USSS), the Federal Emergency Management Agency (FEMA), the Transportation Security Administration (TSA), and the Cybersecurity and Infrastructure Security Agency (CISA).
Jain also argued that Palantir’s software could strengthen protections for US citizens. “These protections help enable accountability through strict controls and auditing capabilities, and support adherence to constitutional protections, especially the Fourth Amendment,” Jain wrote. (Palantir’s critics have argued that the company’s tools create a massive surveillance dragnet, which could ultimately harm civil liberties.)
Over the last year, Palantir’s work with ICE has grown tremendously. Last April, WIRED reported that ICE paid Palantir $30 million to build “ImmigrationOS,” which would provide “near real-time visibility” on immigrants self-deporting from the US. Since then, it’s been reported that the company has also developed a new tool called Enhanced Leads Identification & Targeting for Enforcement (ELITE) which creates maps of potential deportation targets, pulling data from DHS and the Department of Health and Human Services (HHS).
Closing his Friday email to staff, Jain suggested that staffers curious about the new DHS agreement come work on it themselves. “As Palantirians, the best way to understand the work is to engage on the work directly. If you are interested in helping shape and deliver the next chapter of Palantir’s work across DHS, please reach out,” Jain wrote to employees, who are sometimes referred to internally as fictional creatures from The Lord of the Rings. “There will be a massive need for committed hobbits to turn this momentum into mission outcomes.”
Ring CEO Jamie Siminoff has indicated that the company’s controversial Search Party feature might not always be just for lost dogs. A creepy surveillance tool being used to surveil? Who could have seen that coming?
“I believe that the foundation we created with Search Party, first for finding dogs, will end up becoming one of the most important pieces of tech and innovation to truly unlock the impact of our mission,” Siminoff wrote in an email to staffers. “You can now see a future where we are able to zero out crime in neighborhoods. So many things to do to get there but for the first time ever we have the chance to fully complete what we started.”
The words “zero out crime in neighborhoods” are particularly troubling. It is, however, worth noting that this is just an email and doesn’t necessarily indicate a plan by the company. Siminoff wrote the email back in October, months before it surfaced in this reporting. He did end the thread by noting he couldn’t “wait to show everyone else all the exciting things we are building over the years to come.”
One of those things could be the recently-launched “Familiar Faces” tool, which uses facial recognition to identify people that wander into the frame of a Ring camera. It seems to me that a combination of the Search Party tech, which uses the combined might of connected Ring cameras, with the Familiar Faces tech could make for a very powerful surveillance tool that excels at finding specific individuals.
Siminoff also suggested in an earlier email to staffers that Ring technology could have been used to catch Charlie Kirk’s killer by leveraging the company’s Community Requests feature. This is a tool that allows cops to ask camera owners for footage, thanks to a partnership with the police tech company Axon.
Ring had also planned an integration with a surveillance company called Flock Safety. The companies scrapped that plan after a Super Bowl ad spotlighting the Search Party tool triggered public outcry. Ring didn’t cite public sentiment for this decision, rather saying the integration would require “significantly more time and resources than anticipated.”
Ring has responded to 404 Media’s reporting, saying in an email that Search Party “does not process human biometrics or track people” and that “sharing has always been the camera owner’s choice.” This response did not provide any information as to what the future will hold for the company’s toolset.
The organization has been pointed in this direction for a long time. “Our mission to reduce crime in neighborhoods has been at the core of everything we do at Ring,” founder Jamie Siminoff said when Amazon acquired the company back in 2018.
YouTube’s “Ask” button is making its way to the living room. The Gemini-powered feature is now rolling out as an experiment on smart TVs, gaming consoles and streaming devices. 9to5Google first spotted a Google support page announcing the change.
Like on mobile devices and desktop, the feature is essentially a Gemini chatbot trained on each video’s content. Selecting that “Ask” button will bring up a series of canned prompts related to the content. Alternatively, you can use your microphone to ask questions about it in your own words.
The “Ask about this video” feature on desktop (YouTube)
Google says your TV remote’s microphone button (if it has one) will also activate the “Ask” feature. The company listed sample questions in its announcement, such as “what ingredients are they using for this recipe?” and “what’s the story behind this song’s lyrics?”
The conversational AI tool is only launching for “a small group of users” at first. Google promises that it will “keep everyone up to speed on any future expansions.”
The media is full of breathless reports that AI can now code and human programmers are going to be put out to pasture. We aren’t convinced. In fact, we think the “AI revolution” is just a natural evolution that we’ve seen before. Consider, for example, radios. Early on, if you wanted to have a radio, you had to build it. You may have even had to fabricate some or all of the parts. Even today, winding custom coils for a radio isn’t that unusual.
But radios became more common. You can buy the parts you need. You can even buy entire radios on an IC. You can go to the store and buy a radio that is probably better than anything you’d cobble together yourself. Even with store-bought equipment, tuning a ham radio used to be a technically challenging task. Now, you punch a few numbers in on a keypad.
The Human Element
What this misses, though, is that there’s still a human somewhere in the process. Just not as many of them. Someone has to design that IC. Someone has to conceive of it to start with. We doubt, say, the ENIAC or EDSAC was hand-wired by its designers. They figured out what they wanted, and an army of technicians probably did the work. Few, if any, of those technicians could have envisioned the machine, but they could build it.
Does that make the designers less? No. If you write your code with a C compiler, should assembly programmers look down on you as inferior? Of course, they probably do, but should they?
If you have ever done any programming for most parts of the government and certain large companies, you probably know that system engineering is extremely important in those environments. An architect or system engineer collects requirements that have very formal meanings. Those requirements are decomposed through several levels. At the end, any competent programmer should be able to write code to meet the requirements. The requirements also provide a good way to test the end product.
A good requirement will look like this: “The system shall…” That means that it must comply with the rest of the sentence. For example, “The system shall process at least 50 records per minute.” This is testable.
Bad requirements might be something like “The system shall process many records per minute.” Or, “The system shall not present numeric errors.” A classic bad example is “The system shall use aesthetically pleasing cabinets.”
The first bad example is too hazy. One person might think “many” is at least 1,000. Someone else might be happy with 50. Requirements shouldn’t be negative since it is difficult to prove a negative. You could rewrite it as “The system shall present errors in a human-readable form that explains the error cause in English.” The last one, of course, is completely subjective.
You usually want to have each requirement handle one thing to simplify testing. So “The system shall present errors in human-readable form that explain the error cause in English and keep a log for at least three days of all errors.” This should be two requirements or, at least, have two parts to it that can be tested separately.
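That one-requirement-per-test idea is easy to make concrete. As a minimal sketch (not from the article), here is how the “at least 50 records per minute” requirement above might become an automated pytest check; process_records() is a hypothetical stand-in for whatever the real system does:

```python
import time

def process_records(records):
    """Hypothetical stand-in for the real system under test."""
    return [str(r).strip().upper() for r in records]

def test_req_001_throughput():
    """REQ-001: The system shall process at least 50 records per minute."""
    records = [f"record-{i}" for i in range(50)]
    start = time.monotonic()
    process_records(records)
    elapsed = time.monotonic() - start
    # 50 records must complete within 60 seconds to satisfy REQ-001.
    assert elapsed <= 60.0, f"50 records took {elapsed:.1f} s; REQ-001 allows 60 s"

# Note there is no equivalent test for "shall process many records per minute":
# with nothing quantified, there is nothing to assert against.
```

The well-formed requirement hands you the assertion almost verbatim, while the vague version leaves nothing measurable to check.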
In general, requirements shouldn’t tell you how to do something. “The system shall use a bubble sort,” is probably a poor requirement. However, it should also be feasible. “The system shall detect lifeforms” doesn’t tell you how to make that work, but it is suspicious because it isn’t clear how that could work. “The system shall operate forever with no external power” is calling for a perpetual motion machine, so even if that’s what you wish for, it is still a bad requirement.
A portion of a typical NASA SRS requirements document
You sometimes see sentences with “should” instead of shall. These mark goals, and those are important, but not held to the same standard of rigor. For example, you might have “The system should work for as long as possible in the absence of external power.” That communicates the desire to work with no external power to the level that it is practical. If you actually want it to work at least for a certain period of time, then you are back to a solid and testable requirement, assuming such a time period is feasible.
You can find many NASA requirements documents, like this SRS (software requirements specification), for example. Note the table provides a unique ID for each requirement, a rationale, and notes about testing the requirement.
Requirement Decomposition
High-level requirements trace down to lower-level requirements and vice versa. For example, your top-level requirement might be: “The system shall allow underwater research at location X, which is 600 feet underwater.” This might decompose to: “The system shall support 8 researchers,” and “The system shall sustain the crew for up to three months without resupply.”
The next level might levy requirements based on what structure is needed to operate at 600 feet, how much oxygen, fresh water, food, power, and living space are required. Then an even lower level might break that down to even more detail.
Of course, a lower-level document for structures will be different from a lower-level document for, say, water management. In general, there will be more lower-level requirements than upper-level ones. But you get the idea. There may be many requirement documents at each level and, in general, the lower you go, the more specific the requirements.
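As an illustration only, here is a tiny Python sketch of how a decomposition like this might be recorded and sanity-checked. The requirement IDs, wording, and the water figure are invented, loosely following the underwater-base example above; the check simply confirms that every lower-level requirement traces to an existing parent and that no top-level requirement is left undecomposed:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Requirement:
    req_id: str
    text: str
    parent: Optional[str] = None  # ID of the higher-level requirement this traces to

# Invented example, loosely following the underwater-base decomposition above.
reqs = [
    Requirement("SYS-1", "The system shall allow underwater research at 600 feet."),
    Requirement("SYS-1.1", "The system shall support 8 researchers.", parent="SYS-1"),
    Requirement("SYS-1.2", "The system shall sustain the crew for three months without resupply.", parent="SYS-1"),
    Requirement("SYS-1.2.1", "The system shall store at least 2,000 liters of fresh water.", parent="SYS-1.2"),
]

def check_traceability(requirements: List[Requirement]) -> Tuple[List[str], List[str]]:
    """Flag children that point at missing parents and top-level items with no children."""
    ids = {r.req_id for r in requirements}
    parents_in_use = {r.parent for r in requirements if r.parent}
    orphans = [r.req_id for r in requirements if r.parent and r.parent not in ids]
    undecomposed = [r.req_id for r in requirements
                    if r.parent is None and r.req_id not in parents_in_use]
    return orphans, undecomposed

orphans, undecomposed = check_traceability(reqs)
print("Children with missing parents:", orphans or "none")
print("Top-level requirements never decomposed:", undecomposed or "none")
```

Real requirements-management tools (and the NASA SRS linked above) carry far more per entry, including rationale and verification notes, but the up-and-down tracing works on the same principle.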
And AI?
We suspect that if you could leap ahead a decade, a programmer’s life might look more like that of today’s system architect. Your value isn’t in understanding printf or Python decorators. It is in visualizing useful solutions that can actually be done by a computer.
Then you generate requirements. Sure, AI might help improve your requirements, trace them, and catalog them. Eventually, AI can take the requirements and actually write code, or do mechanical design, or whatever. It could even help produce test plans.
The real question is, when can you stop and let the machine take over? If you can simply say “Design an underwater base,” then you would really have something. But the truth is, a human is probably more likely to understand exactly what all the unspoken assumptions are. Of course, an AI, or even a human expert, may ask clarifying questions: “How many people?” or “What’s the maximum depth?” But, in general, we think humans will retain an edge in both making assumptions and making creative design choices for the foreseeable future.
The End Result
There is more to teaching practical mathematics than drilling multiplication tables into students. You want them to learn how to attack complex problems and develop intuition from the underlying math. Perhaps programming isn’t about writing for loops any more than mathematics is about how to take a square root without a calculator. Sure, you should probably know how things work, but it is secondary to the real tools: creativity, reasoning, intuition, and the ability to pick from a bewildering number of alternatives to get a workable solution.
Our experience is that normal people are terrible about unambiguously expressing what they want a computer to do. In fact, many people don’t even understand what they want the computer to do beyond some fuzzy handwaving goal. It seems unlikely that the CEO of the future will simply tell an AI what it wants and a fully developed system will pop out.
What do you think? Is AI coding a fad? The new normal? Or is it just a stepping stone to making human programmers obsolete? Let us know in the comments. Although they have improved, we still think the current crop of AI is around the level of a bad summer intern.
Even though it’s a quiet time of the year when it comes to sales events, there is always a significant number of laptop deals available each week. So, I’m here to use my years of bargain-hunting experience and uncover the top offers that are worth buying right now, with prices starting at $199.
I’ve selected a range of options to suit different budgets and needs: whether you need a budget-friendly device for light use, a great value option for everyday use and work, or a performance powerhouse for more demanding creative or productivity tasks.
You can find my 8 top picks below, including laptops available at Amazon, Best Buy, and Dell — three retailers where I often find the best deals. Just a heads up: some are leftovers from the recent Presidents’ Day sales, so they may not be around much longer.
While the United States has fielded many warships, the best known are likely those named “Enterprise.” The U.S. Navy has been commissioning ships with that name since the first Enterprise defended American supply routes from the British in May 1775. The trend continues with the new USS Enterprise (CVN-80), the third Gerald R. Ford-class nuclear-powered aircraft carrier and the ninth ship to bear the name. The USS Enterprise was expected to launch in 2025, but it was delayed and is now expected to launch in 2030.
Because of the ship’s name and the fact that Gerald R. Ford-class aircraft carriers are the largest warships ever constructed, there’s a great deal of public fascination surrounding it. The aircraft carrier is massive, and once it launches, the Enterprise will be able to carry a variety of aircraft, including fighter jets, helicopters, and various drones. In terms of combat aircraft capacity, the Enterprise should be able to accommodate between 70 and 90 aircraft, but this depends on a variety of factors, including the type of fighter jets it employs.
The USS Gerald R. Ford (CVN-78) carries 75 aircraft, operated by Carrier Air Wing Eight. It consists of three squadrons of F/A-18E Super Hornets. These are being replaced over time as more F-35C Lightning IIs (the U.S. Navy carrier variant) are brought onboard. Because the Enterprise is the same class of ship and because it won’t be ready until 2030, it’s likely that its carrier air wing will consist of the same number of squadrons, though with more F-35Cs than its predecessors.
The aircraft of the USS Enterprise (CVN-80)
The number of fighter jets that a U.S. aircraft carrier can hold is dependent on several factors. Nimitz-class carriers, which are being replaced by Gerald R. Ford-class ships, carry around 56 fighter jets and other aircraft. The USS Enterprise will likely have the capacity to hold 90 aircraft, though not all of them will be fighter jets. In addition to the F-35C, the Enterprise could also be home to several F/A-18A/C and F/A-18E/F Super Hornets, which are highly capable multirole fighters employed throughout the U.S. Navy’s 11 active aircraft carriers.
When the USS Gerald R. Ford was launched, it paved the way for the vessels that followed, but there were issues with several of its systems. The lessons learned from the Ford helped guide changes to the Enterprise’s design, accommodating the F-35C and its upgraded Enterprise Air Surveillance Radar. As a result, the two ships are different in some aspects, though they’re fundamentally similar. The F-35C differs from the A and B models, as it’s designed to utilize the Enterprise’s Catapult-Assisted Take-Off Barrier-Arrested Recovery (CATOBAR) system.
The F-35C features larger wings with foldable wingtips. This makes it possible for the Enterprise to hold more F-35Cs than other models, as foldable wingtips allow for a smaller storage footprint. While the U.S. Navy isn’t in the habit of detailing the specifics of its Carrier Air Wing composition and capacities, given the size and attributes of the USS Enterprise and the F-35C, it will likely carry a minimum of 75 fighter jets. Some estimates indicate the number could be as high as 90, but this is unlikely, as room must be made for non-combat aircraft as well.
Behind the investor interest is a highly specialized component: electrostatic chucks, or ESCs, used in semiconductor etch tools to hold wafers flat, clean, and thermally stable while they are bombarded with plasma.
One of my favorite parts about my job is my ability to test new things and push the limits of my comfort zone with a new piece of tech. When I take on a review, I need to do my best to integrate that piece of tech into my workflow, and I never really know how that’s going to shake out.
I never pictured myself being a digital calendar guy. I mean, sure, I live and die by my Google Calendar, but it never occurred to me to get a dedicated screen whose sole purpose is to show me what’s coming up on a given day. I had the opportunity to test Skylight’s first calendar around this time last year (SlashGear has a Skylight Calendar (2025) review as well).
Since then, I’ve tested a number of alternatives to the Skylight calendar. At the end of it all, I kept coming back to the Skylight. From a feature-completeness standpoint, it has just the right amount of features — without trying to be something that it’s not.
One of the pitfalls of a new platform is the temptation to try to do too much, and I tested a few calendars that did just that. But Skylight comfortably stays in its lane with some obvious low-hanging fruit, and some not-so-obvious fruit as well. I’ve been using a Skylight Calendar 2 review sample provided by Skylight for around two weeks, and this is my full review.
Two steps forward and a step back
Adam Doud/SlashGear
Skylight actually has two calendars on the market right this minute — the Calendar 2 and the Calendar Max. The latter resides on my kitchen wall, and I’ll discuss that a bit as well, but the subject of this review is the former. The Calendar 2 replaces the previous version with a few upgrades, and a small step back from its predecessor. From a hardware standpoint, the screen on the Calendar 2 is faster, brighter, and more responsive than the previous generation.
If you’re using the calendar on a tabletop, the stand is mostly improved. The last calendar’s stand was solid metal, very heavy, and, to be honest, way overengineered. The new stand is lighter but still sturdy; however, it only works in landscape orientation…for some reason. This is actually the step back I referenced. I used the previous version of the calendar in portrait orientation on a shelf between my desktop computer and another shelving unit. The new calendar wouldn’t fit there in landscape, though, so it had to move.
I’m not sure why Skylight made this call. I would imagine it was hard to simplify the stand and have it still work in both orientations. Whatever the case, I’m not thrilled, but that’s about the only thing I’m not happy about. By the way, you can wall mount the calendar in either orientation, so all is not lost.
One other improvement is that the frame around the calendar is now magnetic and interchangeable, though the other frames won’t be available for another month or so. Mine came with black, which works just fine, but if you want more personality for your calendar, you’ll need to give it another 30 days or so.
Using a digital calendar
Adam Doud/SlashGear
The idea of a digital calendar never really appealed to me, until I had one. My family uses Google Calendar for all of our family activities — work schedules, school schedules, meetings, and the like. You would think it’s easy enough to pull out your phone or open a new browser tab and see everything that way — and it is. But having a digital calendar on the wall that just shows you what’s coming up all the time is extremely convenient — much like an old-school paper calendar.
You can glance up at the wall and see whatever is coming up. If you want to add an event, you can tap on a plus symbol to add an event or reminder. This version of the calendar is much more responsive than last year’s which is a great upgrade. When you open the new event box, you get a virtual keyboard that betrays the fact that this is an Android build, but one that is highly customized.
When an event is coming up, you get a little tone with an “OK” button, which is handy except when you can’t reach the button to dismiss it. I’d like to see a timer on the OK button to auto-dismiss it, but overall, the calendar is just there when you need it.
It’s so much more
Adam Doud/SlashGear
Beyond the calendar functionality, Skylight has built in some very smart features. Most of these are controlled with an app that I installed on my iPhone 17 Pro Max. In addition to the calendar (which syncs with Google, Outlook, Apple, Yahoo, and a few others) you can also set up reminders, repeating tasks or chores, and then there’s the one that’s most surprising and useful — meal planning, but it’s a little more than that.
At the risk of sounding like an infomercial: “Don’t you just hate it when you get home from a hard day and you have to figure out what to eat? Well, fortunately, Skylight can do all that for you!” If you’re picturing a black-and-white image of someone standing in their kitchen, looking disheveled and shrugging in a very exaggerated way, you’re not alone. Ready for the super cringey part? Skylight does it with AI! But wait, don’t go anywhere. The AI is actually… kind of… good?
Your personal Sidekick
Adam Doud/SlashGear
Skylight’s AI feature is called Sidekick and it’s a handy little tool you can use to perform certain tasks. The obvious one is using Sidekick to capture an event from a poster or something like that. It can find the date, time, and event title and create an event for you. That’s quickly becoming table stakes for AI, but what about creating table steaks?
Sidekick can create a comprehensive meal plan for you and your family. In my case, I typed something to the effect of “My family has one diabetic person, so we have a real focus on protein and reduced sugars. We typically have ground beef and chicken on hand, but we’re not opposed to salads and other vegetables.” From that prompt, Sidekick whipped up a dinner plan for my family for seven days, included the recipes for the meals and added the ingredients to our shopping list.
The limitation here is that it added the ingredients to Skylight’s Lists feature, which is fine, except my family already has an app we’ve bought into for shopping lists and whatnot. I would love to see some kind of integration with AnyList, an iPhone app not enough people know about, but if you do that, you open a can of worms with customers asking, “Well, you support XYZ, why not ABC?” I get it, and I respect Skylight for evading that nightmare scenario.
It’s not perfect, but it’s close
Adam Doud/SlashGear
I did not go into the digital calendar experience expecting it to be life-changing, but it really has been. Paper calendars are fine, but once you go digital, it’s hard to go back. Having your life digitized and hanging on the wall at all times is just so useful. Plus, compared to other digital calendars I’ve used, I can promise you Skylight is absolutely killing it. There is some room for improvement, and some caveats to be aware of.
The first is a big note — things like meal planning and magic import are behind a paywall called the Plus Plan. It’s $79 for a year, which isn’t too bad, but when everything in life is a subscription, it grates a little.
A feature that the Plus Plan unlocks lets you use your calendar as a digital frame when it’s not in use. This tracks because Skylight got its start in the digital photo frame industry. However, I never used the digital frame feature because to me, it defeated the purpose of having a digital calendar. The only way to turn off the frame is to tap on the screen, but that kills the ability to glance at the calendar and see what’s coming up.
Price, Availability, and Verdict
Adam Doud/SlashGear
The Skylight Calendar 2 is available from Skylight’s website for $299. It will be coming to other retailers in the future as well.
That price is a tad on the high side. I love the digital calendar, don’t get me wrong, but that’s a pretty high barrier for entry. Personally, I think it’s worth it, especially if you spring for the Plus Plan (which also gives you an extra $20 off the purchase price) and really buy into the Skylight ecosystem. Skylight offers free returns for up to four months, so you can really test drive it to make sure it’s for you.
All that being said, the convenience of putting your digital life on the wall at all times is pretty intoxicating. I suspect you’ll forget about the purchase price soon enough once you get it set up and realize how helpful it can be.