It’s been several years since we last did this, but I’d like to remind you all that the National Football League plays a lot of make-believe about what its “Super Bowl” trademark does and does not allow it to enforce. Thanks largely to media outlets that repeat the false narrative the NFL puts out there, far too many people think that businesses, or even members of the public, simply cannot use the phrase “Super Bowl” in any capacity whatsoever if there is any commercial component to it.
TV companies advertising their goods and telling you to “be prepared for the Super Bowl”? Can’t do it. A church holding a party for the game with invitations mentioning the Super Bowl and a $5 cover charge? Verboten. And this way of thinking is perpetuated by posts like this one from TVLine.
The term “Super Bowl” is an NFL trademark, and licensing that trademark is very, very expensive. After all, the NFL makes a lot of money from “Super Bowl” commercials – 30-second slots for this year’s game have cost upward of $10 million.
Of course, there are ways around not being able to mention the Super Bowl in commercials. Brands that aren’t willing or able to license the name will refer to it as “the big game” or something along those lines instead. What’s more, the brands that pay to license the name still have to work within strict parameters. According to L.A. Tech & Media Law, parties that purchase Super Bowl ad spots can only mention the name of the event for a limited period of time.
In the past, the league has sent cease-and-desists to bars and even churches that host Super Bowl parties and charge an admission fee. In short, if an entity of any kind uses the term for commercial gain, they can expect a letter from the NFL’s lawyers.
Yes, they can, but that shouldn’t be the entirety of the post. The NFL can send whatever letters it likes. What matters is whether it is asserting rights it actually has. Otherwise, posts like this leave the public with, at best, an incomplete idea of what rights the NFL has and what rights it doesn’t.
The NFL certainly has a trademark on “Super Bowl.” That does not automagically mean it can fully control all uses of that mark, even where there is money involved. Fair use defenses still apply, of course, as does the general standard that the use has to either confuse the public as to the source of the product or service, or falsely imply an association between the company and the NFL. Not all uses, even commercial ones, will do that.
Stop giving the NFL power it doesn’t actually have. A restaurant putting out a sidewalk sign that says it will have the Super Bowl on its TVs is not trademark infringement by any sane reading of the law. An advertisement merely acknowledging the existence of the Super Bowl does not in and of itself make it infringing.
Yes, the NFL pulls overly protectionist crap with this trademark all the time. Yes, it would take coordinated pushback from more than one corporate entity with deep pockets to fight it. But it’s a fight worth fighting and, at the very least, none of us have to pretend that the NFL has rights it doesn’t have.
Although often lumped together into a single ‘retro game’ aesthetic, the first game consoles that focused on 3D graphics, like the Nintendo 64 and Sony PlayStation, featured very distinct visuals that make these systems easy to tell apart. Yet whereas the N64 mostly suffered from a small texture buffer, the PlayStation’s weak graphics hardware necessitated compromises that led to the jittery, wobbly look that came to define PlayStation graphics.
These weaknesses of the PlayStation and their results are explored by [LorD of Nerds] in a recent video. Make sure to toggle on subtitles if you do not speak German.
It could be argued that the PlayStation didn’t have a 3D graphics chip at all, just a video chip that could blit primitives and sprites to the framebuffer. This forced PS developers to draw 3D graphics without niceties like a Z-buffer, putting a lot of extra work on the CPU.
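To illustrate what the missing Z-buffer means in practice, here is a minimal Python sketch (purely illustrative, not PlayStation code) of the painter’s-algorithm-style fallback: without per-pixel depth testing, polygons are sorted back to front by a single representative depth and drawn in that order, which is why overlapping or intersecting geometry can visibly pop through.

```python
# Minimal sketch, not actual PlayStation code: with no Z-buffer there is no
# per-pixel depth test, so polygons are sorted back to front by a single
# representative depth value and drawn in that order (painter's algorithm).

def draw_scene(polygons, draw):
    """polygons: list of dicts with 'vertices' as (x, y, z) tuples.
    draw: a callable that rasterizes one polygon into the framebuffer."""
    def average_depth(poly):
        verts = poly["vertices"]
        return sum(v[2] for v in verts) / len(verts)

    # Farthest polygons first, so nearer ones simply overwrite them.
    for poly in sorted(polygons, key=average_depth, reverse=True):
        draw(poly)

# One averaged depth per polygon cannot resolve intersecting or interleaved
# polygons, which is one source of visible sorting glitches.
```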
The problem extends to texture mapping as well: the PS uses what is called affine texture mapping, which interpolates texture coordinates across a polygon without taking the camera’s perspective into account, so textures appear to shift and swim as the view changes. Developers could reduce the effect by subdividing surfaces into more polygons, which of course costs performance. This is the main cause of the shifting and wobbling of textures.
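To make the difference concrete, here is a small Python sketch (an illustration, not console code) that interpolates a texture coordinate along one edge both ways: affine interpolation blends u directly in screen space, while perspective-correct interpolation blends u/z and 1/z and divides them out at the end.

```python
# Illustration only: compare affine and perspective-correct interpolation of a
# texture coordinate u between two projected vertices at depths z0 and z1.

def affine_u(t, u0, u1):
    """Affine: interpolate u directly in screen space (what the PS does)."""
    return (1 - t) * u0 + t * u1

def perspective_u(t, u0, u1, z0, z1):
    """Perspective-correct: interpolate u/z and 1/z, then divide them out."""
    u_over_z = (1 - t) * (u0 / z0) + t * (u1 / z1)
    one_over_z = (1 - t) / z0 + t / z1
    return u_over_z / one_over_z

# Halfway across the screen span of an edge running from depth 1 to depth 10,
# the two methods disagree badly, which is what reads as texture warping:
print(affine_u(0.5, 0.0, 1.0))              # 0.5
print(perspective_u(0.5, 0.0, 1.0, 1, 10))  # ~0.091
```

The error grows with the depth difference across a polygon, which is why cutting surfaces into smaller polygons hides it.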
Another issue on the PS was the lack of mipmapping support, mipmaps being a sequence of versions of the same texture, each at a different resolution. This allows a high-resolution texture to be used when the camera is close and a low-resolution one when it is far away. On the PS, the lack of mipmapping meant that many texture pixels could map to the same point on the display, with camera movement leading to interesting flickering effects.
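A rough sketch of what mipmapping buys you (again illustrative Python, not PS or N64 code): the renderer picks the mip level whose resolution roughly matches how many texels fall under one screen pixel, so distant surfaces sample a pre-shrunk texture instead of skipping across texels and flickering.

```python
import math

# Illustrative only: pick a mip level so that one screen pixel covers roughly
# one texel of the chosen level. texels_per_pixel is how many base-level texels
# the pixel's footprint spans, which grows as the surface recedes.
def mip_level(texels_per_pixel, num_levels):
    if texels_per_pixel <= 1:
        return 0                       # close up: full-resolution texture
    level = int(math.log2(texels_per_pixel))
    return min(level, num_levels - 1)  # far away: clamp to the smallest mip

print(mip_level(1, 8))   # 0 -> e.g. the 256x256 base texture
print(mip_level(8, 8))   # 3 -> the 32x32 mip
print(mip_level(64, 8))  # 6 -> the 4x4 mip
```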
When scaling textures to the output resolution, the Nintendo 64 blended between texture pixels (texels) to create smooth gradients, whereas the PS used the much more primitive nearest-neighbor interpolation, which made the edges of objects in particular appear to shimmer and change shape and color.
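The contrast between the two filtering schemes fits in a few lines of Python (illustrative, not hardware-accurate): nearest-neighbor snaps to the single closest texel, while bilinear-style filtering blends the four surrounding texels.

```python
# Illustrative only: sample a tiny 2D texture (a list of rows of brightness
# values) at fractional coordinates with the two schemes described above.

def sample_nearest(tex, u, v):
    """Snap to the closest texel (PS-style): blocky, shimmering edges."""
    return tex[round(v)][round(u)]

def sample_bilinear(tex, u, v):
    """Blend the four surrounding texels (N64-style): smooth gradients."""
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    top = (1 - fx) * tex[y0][x0] + fx * tex[y0][x0 + 1]
    bottom = (1 - fx) * tex[y0 + 1][x0] + fx * tex[y0 + 1][x0 + 1]
    return (1 - fy) * top + fy * bottom

tex = [[0, 255],
       [0, 255]]
print(sample_nearest(tex, 0.4, 0.5))   # 0 (jumps straight to the dark texel)
print(sample_bilinear(tex, 0.4, 0.5))  # 102.0 (40% of the way toward 255)
```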
The PS also lacked a dedicated floating-point unit for graphics calculations, so a special Geometry Transformation Engine (GTE) in the CPU handled the transformation math, but using integer arithmetic instead of floating-point values. This made fixed camera angles, as in the Resident Evil games, very attractive to developers, since camera movement would inevitably lead to visible artefacts.
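The GTE’s integer-only math is essentially fixed-point arithmetic. Here is a hedged Python sketch of the idea; the 12 fractional bits are an assumption chosen for the example rather than a claim about the exact hardware format. Values are scaled by a power of two, multiplied as integers, and shifted back, and the rounding error this introduces is one reason vertices land at slightly wrong positions as the camera moves.

```python
# Illustrative fixed-point arithmetic in the spirit of the GTE's integer-only
# math. The 12 fractional bits are an assumption for this example, not a claim
# about the exact hardware format.
FRAC_BITS = 12
ONE = 1 << FRAC_BITS              # 1.0 in fixed point

def to_fixed(x):
    return int(round(x * ONE))

def fixed_mul(a, b):
    return (a * b) >> FRAC_BITS   # multiply, then shift the scale back out

def to_float(x):
    return x / ONE

# Scaling a coordinate by cos(30 deg) ~ 0.8660254:
c = to_fixed(0.8660254)           # stored as 3547, i.e. ~0.86597 -- already off
x = to_fixed(10.3)
print(to_float(fixed_mul(c, x)))  # ~8.9194, versus the exact 8.9201; small
                                  # errors like this accumulate into vertex jitter
```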
Finally, the cartridge-based games of the N64 could load data from their mask ROMs roughly 100x faster than the PS could from its CDs, and with much lower latency. All of these differences led to very different games on the two consoles, with the N64 clearly superior for 3D games; yet the PS’s earlier release, competitive price, and the backing of Sony ensured that it became a commercial success.
When it comes to the affordability crisis in child care, Lenice Emanuel says that it’s forcing families to take a hard look at their budgets — no matter their income level.
But as child care costs surpass the price of rent in some areas, those money choices are even more extreme for folks on the margins, explains Emanuel, executive director of the Alabama Institute for Social Justice.
“They’re gonna say, ‘We had to make a decision whether my husband stays home or I stay home because we both work, and we still can’t afford to pay the mortgage and child care,’” Emanuel says. “It’s just exacerbated for marginalized people, because they were already contending with deprivation, so these issues are just basically compounding what they were already dealing with. That’s why this is like a national crisis at this point.”
A recent analysis of the 100 largest U.S. metro areas found that the cost of child care for a family with two young children is more expensive than the average rent in each respective market.
Care for one child costs, on average, about 25 percent less than rent, according to the data from LendingTree. That changes with the addition of a second child, pushing child care costs up to more than double that of the average rent for a two-bedroom apartment in markets like Omaha, Nebraska, and Milwaukee, Wisconsin.
The numbers are much the same as they were last year, according to the analysis, with average rent prices increasing slightly. The national average price tag of care for one child has increased by about $3,700 since 2017, coming to about $13,100 per year in 2024.
Despite rising costs, child care workers are not feeling those increases reflected in their paychecks. It’s a sector that continues to struggle with thin margins, low wages, retaining workers and insufficient subsidies.
Managing Costs
Child care providers are caught between a rock and a hard place when it comes to deciding how to price their services, says Tyrone Scott, director of government and external affairs at First Up, an early childhood education advocacy group in Pennsylvania.
That’s because they’re trying to keep prices low enough for families to afford, which can be a struggle even with public-dollar subsidies, while paying their staff fairly. Scott says that the average wage for child care workers is $15 per hour in Pennsylvania, which is not enough to compete with big box retailers and convenience stores that offer a starting rate of $17 or more with no experience or degree required.
“The Wawa near me has a $5,000 signing bonus,” Scott says of the retail chain. “If you are a high school student who just graduated, you can get $5,000 and start at $21 an hour where I’m at. So you are doing much better than our teachers, which is really problematic, obviously.”
Inflation is another factor squeezing child care providers, Scott says, increasing the price of everything from the food stocked in the kitchen to liability insurance — with some child care providers reporting that their insurance expenses have tripled. Those centers have to choose between eating the costs and taking in thinner margins or passing them along to parents by raising prices.
Jasmine Bowles, executive state director of 9to5 Georgia, says Georgia’s child care system has long been underfunded — despite politicians’ crowing about the state’s billions of dollars in budget surplus funds. Administrative delays in state reimbursements to child care providers also force them to go without, she says, to ensure the learners in their care have everything they need.
“When our care providers receive their class of students for the day, those babies still eat, even if [the providers] haven’t been paid from the state for a month or more,” she says. “This really manifests itself in housing, food, and health insecurity for the very caregivers that our communities depend on.”
Emanuel says that Alabama’s formula for calculating how much of child care costs the state will subsidize doesn’t accurately reflect the cost, which leaves providers to figure out how to make up the difference. Those costs are often passed on to parents, but Emanuel says providers also apply for grants or take second jobs to keep their centers running.
“Because of the way the market rate survey dictates the reimbursement rates, a parent could be paying $400, but it’s actually costing the provider $900 a month for child care,” she explains. “Many of these women do child care full time, and they actually have side hustles in order for them to be able to provide child care.”
Child care providers are not spared from rising costs for their own kids, Emanuel says. One provider in her state reports paying 80 percent of her own income for care for her own children.
The Attitude Factor
With the widespread challenges in child care affordability, experts say that one barrier to getting more state funding to address the problem is both the public’s and lawmakers’ perception of child care — including who should be doing the caring.
“I think there is some old school mentality in some legislators that still believes children should be at home with their mothers — specifically mothers. Not fathers even, but mothers,” Scott says. “That’s not the reality for a lot of families, whether you’re talking to a two-parent family or a single-parent family — most children need every available adult in the workforce to make ends meet. So there’s this myth that people don’t want to care for their own kids, or whatever, for lack of a better term, sexist trope that people put on.”
Scott and Emanuel both say that there are large swaths of the public who don’t see the benefit of their states subsidizing child care, be it support for working parents or the advantages of giving kids a strong start in their education.
“I think that a lot of times in this state, people see child care as, ‘You had the child, it’s your responsibility to pay for child care,’” Emanuel says. “But if the pandemic didn’t teach us anything else, it taught us that it is critical infrastructure because, without child care, people are not able to go to work.”
Scott says one alliance that has helped get their message across is with Pennsylvania chambers of commerce, which can describe how the lack of affordable child care options interferes with employees’ ability to stick to their work schedules.
Emanuel says there’s another aspect of the debate over child care funding that can’t be ignored: who is doing the work.
Many Alabama child care centers are run and staffed by Black women, she says, whose labor has a long history of being undervalued. For child care providers, Emanuel says that means they are seen as babysitters rather than educators.
“A lot of the morale of these women is often times just diminished because everything about the system in child care, it’s saying to them repetitively that, ‘We don’t value you,’ and ‘You aren’t important,’” Emanuel says, “because when you do value a thing, then you’re going to tie the resources and the infrastructure in place to ensure a specific end.”
Bowles echoed Emanuel’s sentiments in Georgia, saying that the state’s historical reliance on unpaid labor is a factor in the undervaluing of child care work. There’s a disconnect, she says, between lawmakers’ desire to make the state appealing to businesses and policies that make life easier for workers — like affordable child care, health care and food.
Beyond her role as an advocate, Bowles also has the perspective of someone who sits on the board of directors for her local school district in Georgia. After the start of the coronavirus pandemic, she had a front row seat to how schools grappled with students losing ground in their academic and social skills when in-person classes restarted.
“When we get our young people in the class, we start to see the impact of those learning gaps, and I think that was most keen in our earliest learners,” Bowles says. “My district in particular has started to rethink a traditional public school district’s responsibility for [early childhood education]. We’re also starting to incorporate more pre-K classrooms, because it really is becoming the responsibility of all of us, not just day care centers, to close these gaps.”
Interplay occupies a unique place in the Bill Evans catalog. According to the liner notes, it marked the first time the legendary pianist led a quintet—and his first studio date fronting a group that included a horn player. Craft Recordings’ recent Original Jazz Classics (OJC) reissue may be the best way to revisit this spirited, upbeat session on vinyl, continuing the OJC tradition of excellent sound quality and consistently high production values.
From the official press materials:
“Continuing OJC’s commitment to quality, these reissues feature lacquers cut directly from the original stereo master tapes (AAA) by Kevin Gray at Cohearent Audio, 180-gram vinyl pressed at RTI, and tip-on jackets faithfully reproducing the original artwork.
Originally launched in 1982 under Fantasy Records, Original Jazz Classics was revived in 2023 with a renewed emphasis on audiophile-grade reissues of landmark jazz recordings. With more than 850 titles reissued to date—drawing from the catalogs of Prestige, Riverside, Galaxy, Contemporary, Jazzland, Milestone, and others—OJC remains a reliable source for both jazz discovery and rediscovery.”
The new Craft Original Jazz Classics editions feature period-accurate labels, faithfully reproduced original cover art, and high-quality sleeve construction. Each LP is housed in an audiophile-grade, plastic-lined inner sleeve, underscoring Craft’s attention to both presentation and long-term record care.
The pressing reviewed here is dead quiet and perfectly centered—a detail that matters enormously on piano-driven recordings like this, where even slight off-center pressing can cause the pitch to waver. Here, the vinyl disappears in the best possible way, letting you simply bask in the effortless musicality of these legendary musicians.
As hinted at the start of this review, Interplay brings a noticeably different vibe compared to many other Bill Evans recordings. The band sounds more aggressive and, at times, downright on fire, with Freddie Hubbard’s sizzling trumpet clearly pushing the session’s energy higher. That spark seems to coax guitarist Jim Hall into some surprisingly hot territory. While Hall has never been one of my go-to guitarists, hearing this level of urgency and edge from his playing is a genuine revelation.
Yet it’s not all push-push bravado. This is still a Bill Evans record, and the playing remains unfailingly tasteful. The calmer moments shine just as brightly, highlighted by a positively gorgeous reading of the classic “When You Wish Upon a Star.”
I really like how Percy Heath’s vibrant walking bass lines provide steady propulsion from track to track. Even with the fairly discrete stereo spread that places him slightly to one side, his presence anchors the band. Freddie Hubbard and Jim Hall emerge from the opposite side of the soundstage, while the drums have a pleasing sense of space with crisp cymbals and natural sounding snare “bombs” delivered by the legendary Philly Joe Jones.
All in all, I am very pleased with the music here and expect to spend a lot more time with it now that it is in my collection. The tunes are wonderful, the performances exemplary, and the recording as presented on this Craft Recordings Interplay Original Jazz Classics edition is rich and round.
Mark Smotroff is a deep music enthusiast / collector who has also worked in entertainment oriented marketing communications for decades supporting the likes of DTS, Sega and many others. He reviews vinyl for Analog Planet and has written for Audiophile Review, Sound+Vision, Mix, EQ, etc. You can learn more about him at LinkedIn.
The “OpenClaw moment” represents the first time autonomous AI agents have successfully “escaped the lab” and moved into the hands of the general workforce.
Originally developed by Austrian engineer Peter Steinberger as a hobby project called “Clawdbot” in November 2025, the framework went through a rapid branding evolution to “Moltbot” before settling on “OpenClaw” in late January 2026.
Unlike previous chatbots, OpenClaw is designed with “hands”—the ability to execute shell commands, manage local files, and navigate messaging platforms like WhatsApp and Slack with persistent, root-level permissions.
This capability — and the uptake of what was then called Moltbot by many AI power users on X — directly led another entrepreneur, Matt Schlicht, to develop Moltbook, a social network where thousands of OpenClaw-powered agents autonomously sign up and interact.
The result has been a series of bizarre, unverified reports that have set the tech world ablaze: agents reportedly forming digital “religions” like Crustafarianism, hiring human micro-workers for digital tasks on another website, “Rentahuman,” and in some extreme unverified cases, attempting to lock their own human creators out of their credentials.
Simultaneously, the “SaaSpocalypse”—a massive market correction that wiped over $800 billion from software valuations—has proven that the traditional seat-based licensing model is under existential threat.
So how should enterprise technical decision-makers think through this fast-moving start to the year, and how can they start to understand what OpenClaw means for their businesses? I spoke to a small group of leaders at the forefront of enterprise AI adoption this week to get their thoughts. Here’s what I learned:
1. The death of over-engineering: productive AI works on “garbage” data
The prevailing wisdom once suggested that enterprises needed massive infrastructure overhauls and perfectly curated data sets before AI could be useful. The OpenClaw moment has shattered that myth, proving that modern models can navigate messy, uncurated data by treating “intelligence as a service.”
“The first takeaway is the amount of preparation that we need to do to make AI productive,” says Tanmai Gopal, Co-founder & CEO at PromptQL, a well-funded enterprise data engineering and consulting firm. “There is a surprising insight there: you actually don’t need to do too much preparation. Everybody thought we needed new software and new AI-native companies to come and do things. It will catalyze more disruption as leadership realizes that we don’t actually need to prep so much to get AI to be productive. We need to prep in different ways. You can just let it be and say, ‘go read all of this context and explore all of this data and tell me where there are dragons or flaws.’”
“The data is already there,” agreed Rajiv Dattani, co-founder of AIUC (the AI Underwriting Corporation), which has developed the AIUC-1 standard for AI agents as part of a consortium with leaders from Anthropic, Google, Cisco, Stanford and MIT. “But the compliance and the safeguards, and most importantly, the institutional trust is not. How can you ensure your agentic systems don’t go off and go full MechaHitler and start offending people or causing problems?”
That is why Dattani’s company, AIUC, provides a certification standard, AIUC-1, that enterprises can put agents through in order to obtain insurance that backs them up in the event they do cause problems. Without putting OpenClaw agents or other similar agents through such a process, enterprises are likely less ready to accept the consequences and costs of autonomy gone awry.
2. The rise of the “secret cyborgs”: shadow IT is the new normal
With OpenClaw amassing over 160,000 GitHub stars, employees are deploying local agents through the back door to stay productive.
This creates a “Shadow IT” crisis where agents often run with full user-level permissions, potentially creating backdoors into corporate systems (as Wharton School of Business Professor Ethan Mollick has written, many employees are secretly adopting AI to get ahead at work and obtain more leisure time, without informing superiors or the organization).
Now, executives are actually observing this trend in real time as employees deploy OpenClaw on work machines without authorization.
“It’s not an isolated, rare thing; it’s happening across almost every organization,” warns Pukar Hamal, CEO & Founder of enterprise AI security diligence firm SecurityPal. “There are companies finding engineers who have given OpenClaw access to their devices. In larger enterprises, you’re going to notice that you’ve given root-level access to your machine. People want tools so tools can do their jobs, but enterprises are concerned.”
Brianne Kimmel, Founder & Managing Partner of venture capital firm Worklife Ventures, views this through a talent-retention lens. “People are trying these on evenings and weekends, and it’s hard for companies to ensure employees aren’t trying the latest technologies. From my perspective, we’ve seen how that really allows teams to stay sharp. I have always erred on the side of encouraging, especially early-career folks, to try all of the latest tools.”
3. The collapse of seat-based pricing as a viable business model
The 2026 “SaaSpocalypse” saw massive value erased from software indices as investors realized agents could replace human headcount.
If an autonomous agent can perform the work of dozens of human users, the traditional “per-seat” business model becomes a liability for legacy vendors.
“If you have AI that can log into a product and do all the work, why do you need 1,000 users at your company to have access to that tool?” Hamal asks. “Anyone that does user-based pricing—it’s probably a real concern. That’s probably what you’re seeing with the decay in SaaS valuations, because anybody that is indexed to users or discrete units of ‘jobs to be done’ needs to rethink their business model.”
4. Transitioning to an “AI coworker” model
The release of Claude Opus 4.6 and OpenAI’s Frontier this week already signals a shift from single agents to coordinated “agent teams.”
In this environment, the volume of AI-generated code and content is so high that traditional human-led review is no longer physically possible.
“Our senior engineers just cannot keep up with the volume of code being generated; they can’t do code reviews anymore,” Gopal notes. “Now we have an entirely different product development lifecycle where everyone needs to be trained to be a product person. Instead of doing code reviews, you work on a code review agent that people maintain. You’re looking at software that was 100% vibe-coded… it’s glitchy, it’s not perfect, but dude, it works.”
“The productivity increases are impressive,” Dattani concurred. “It’s clear that we are at the onset of a major shift in business globally, but each business will need to approach that slightly differently depending on their specific data security and safety requirements. Remember that even while you’re trying to outdo your competition, they are bound by the same rules and regulations as you — and it’s worth it to take time to get it right, start small, don’t try to do too much at once.”
5. Future outlook: voice interfaces, personality, and global scaling
The experts I spoke to all see a future where “vibe working” becomes the norm.
Local, personality-driven AI—including OpenClaw agents driven through voice interfaces such as Wispr or ElevenLabs—will become the primary interface for work, while agents handle the heavy lifting of international expansion.
“Voice is the primary interface for AI; it keeps people off their phones and improves quality of life,” says Kimmel. “The more you can give AI a personality that you’ve uniquely designed, the better the experience. Previously, you’d need to hire a GM in a new country and build a translation team. Now, companies can think international from day one with a localized lens.”
Hamal adds a broader perspective on the global stakes: “We have knowledge worker AGI. It’s proven it can be done. Security is a concern that will rate-limit enterprise adoption, which means they’re more vulnerable to disruption from the low end of the market who don’t have the same concerns.”
Best practices for enterprise leaders seeking to embrace agentic AI capabilities at work
As OpenClaw and similar autonomous frameworks proliferate, IT departments must move beyond blanket bans toward structured governance. Use the following checklist to manage the “Agentic Wave” safely:
Implement Identity-Based Governance: Every agent must have a strong, attributable identity tied to a human owner or team. Use frameworks like IBC (Identity, Boundaries, Context) to track who an agent is and what it is allowed to do at any moment.
Enforce Sandbox Requirements: Prohibit OpenClaw from running on systems with access to live production data. All experimentation should occur in isolated, purpose-built sandboxes on segregated hardware.
Audit Third-Party “Skills”: Recent reports indicate nearly 20% of skills in the ClawHub registry contain vulnerabilities or malicious code. Mandate a “white-list only” policy for approved agent plugins.
Disable Unauthenticated Gateways: Early versions of OpenClaw allowed “none” as an authentication mode. Ensure all instances are updated to current versions where strong authentication is mandatory and enforced by default.
Monitor for “Shadow Agents”: Use endpoint detection tools to scan for unauthorized OpenClaw installations or abnormal API traffic to external LLM providers (see the sketch after this list for a starting point).
Update AI Policy for Autonomy: Standard Generative AI policies often fail to address “agents.” Update policies to explicitly define human-in-the-loop requirements for high-risk actions like financial transfers or file system modifications.
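As a starting point for the “shadow agents” item above, here is a deliberately simple Python sketch. The process-name substring (“openclaw”) and the config directory (~/.openclaw) are assumptions made purely for illustration, not documented OpenClaw artifacts; confirm the tool’s real install paths and process names in your environment before relying on anything like this in an endpoint check.

```python
#!/usr/bin/env python3
# Hedged illustration only: look for signs of an unauthorized local agent.
# The process-name substring and config directory are assumptions for the
# example -- replace them with the names and paths your own inventory confirms.
import subprocess
from pathlib import Path

SUSPECT_PROCESS_SUBSTRING = "openclaw"          # assumption, not a documented name
SUSPECT_CONFIG_DIR = Path.home() / ".openclaw"  # assumption, not a documented path

def suspicious_processes():
    """Return process-table lines whose command mentions the suspect name."""
    out = subprocess.run(["ps", "-eo", "pid,comm,args"],
                         capture_output=True, text=True).stdout
    return [line for line in out.splitlines()[1:]
            if SUSPECT_PROCESS_SUBSTRING in line.lower()]

def findings():
    hits = []
    if SUSPECT_CONFIG_DIR.exists():
        hits.append(f"config directory present: {SUSPECT_CONFIG_DIR}")
    hits.extend(f"running process: {line.strip()}" for line in suspicious_processes())
    return hits

if __name__ == "__main__":
    for hit in findings():
        print(hit)
```

In practice this kind of check belongs in whatever endpoint detection tooling you already run, alongside alerts on outbound traffic to external LLM API endpoints.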
Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.
Need some help with today’s Mini Crossword? It’s Saturday, so it’s a long one, and a few of the clues are tricky. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.
If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.
Germany’s domestic intelligence agency is warning of suspected state-sponsored threat actors targeting high-ranking individuals in phishing attacks via messaging apps like Signal.
The attacks combine social engineering with legitimate features to steal data from politicians, military officers, diplomats, and investigative journalists in Germany and across Europe.
The security advisory is based on intelligence collected by the Federal Office for the Protection of the Constitution (BfV) and the Federal Office for Information Security (BSI).
“A defining characteristic of this attack campaign is that no malware is used, nor are technical vulnerabilities in the messaging services exploited,” the two agencies explain.
According to the advisory, the attackers contact the target directly, pretending to be from the support team of the messaging service or the support chatbot.
“The goal is to covertly gain access to one-to-one and group chats as well as contact lists of the affected individuals.”
There are two versions of these attacks: one that performs a full account takeover, and one that pairs the account with the attacker’s device to monitor chat activity.
In the first variant, the attackers impersonate Signal’s support service and send a fake security warning to create a sense of urgency.
The target is then tricked into sharing their Signal PIN or an SMS verification code, which allows the attackers to register the account to a device they control. Then they hijack the account and lock out the victim.
Attackers impersonating Signal support in a direct message (Source: BSI)
In the second case, the attacker uses a plausible ruse to convince the target to scan a QR code. This abuses Signal’s legitimate linked-device feature that allows adding the account to multiple devices (computer, tablet, phone).
The result is that the victim’s account is paired with a device controlled by the bad actor, who gets access to chats and contacts without raising any flags.
QR code used for pairing a new device (Source: BSI)
Although Signal lists all devices attached to the account under Settings > Linked devices, users rarely check it.
These attacks have been observed on Signal, but the bulletin warns that WhatsApp supports similar functionality and could be abused in the same way.
Last year, Google threat researchers reported that the QR code pairing technique was employed by Russian state-aligned threat groups such as Sandworm.
Ukraine’s Computer Emergency Response Team (CERT-UA) also attributed similar attacks to Russian hackers, targeting WhatsApp accounts.
However, multiple threat actors, including cybercriminals, have since adopted the technique in campaigns like GhostPairing to hijack accounts for scams and fraud.
The German authorities suggest that users avoid replying to Signal messages from alleged support accounts, as the messaging platform never contacts users directly.
Instead, recipients of these messages are recommended to block and report these accounts.
As an extra security step, Signal users can enable the ‘Registration Lock’ option under Settings > Account. Once active, Signal will ask for a PIN you set whenever someone tries to register your phone number with the application.
Without the PIN code, the Signal account registration on another device fails. Since the code is essential for registration, losing it can result in losing access to the account.
It is also strongly recommended that users regularly review the list of devices with access to their Signal account under Settings > Linked devices and remove any they don’t recognize.
Before the Internet, there was a certain value to knowing how to find out about things. Reference librarians could help you locate specialized data like the Thomas Register, the EE and IC Masters for electronics, or even an encyclopedia or CRC handbook. But if you wanted up-to-date info on any country of the world, you’d often turn to the CIA’s World Factbook, an originally classified document containing what the CIA knew about every country in the world. Well, at least what they’d admit to knowing, anyway. But now, the Factbook is gone.
The publication started in 1962 as the classified “National Basic Intelligence Factbook”; it went public in 1971 and became “The World Factbook” in the 1980s. While it is gone, you can still browse past versions, including a snapshot taken just before it went dark, on Archive.org.
Browsing the archives, it looks like the last update was in September of 2025. It would be interesting to see a project like Wikipedia take the dataset, house it, and update it, although you can presume the CIA was better equipped. The data is public domain, after all.
Want to know things about Croatia? Unfortunately, the archive seems to have missed some parts of some pages. However, there are other mirrors, including some that have snapshots of the data in one form or another. Of course, these are not always the absolute latest (the link has data from 2023). But we would guess the main languages (Croatian and Serbian) haven’t changed. You can also find the internet country suffix (.hr) and rankings (for example, in 2020, Croatia ranked 29th in the world for the number of broadband internet subscribers scaled for population and 75th in total broadband usage).
We are sorry to see such a useful reference go, but reference books are definitely an endangered species these days.
Many newcomers to photography are mesmerized by the latest and greatest cameras, where large megapixel counts and full-frame sensors are often equated with better image quality. I experienced this myself when I was new to photography some 20 years ago, and I even had similar thoughts when I bought a camera again recently.
But instead of splurging all your money on one of the best mirrorless cameras or a DSLR, I suggest that you get a more affordable camera or even a decent used mirrorless camera instead. You can then use the savings from that to buy a good set of lenses that will do more for your photography. Stepping away from the cheap kit lenses often included in entry-level and even some mid-range camera models can let you unlock your creativity and even give you more flexibility in executing your vision.
However, there are a ton of lenses available on the market, and they can get quite expensive once you start buying everything. This can make shopping for lenses confusing, as you won’t know which to prioritize when building your kit. Of course, there’s no one-size-fits-all answer here, as lens preferences vary between shooting styles. But, at the very least, these are some of the lenses that every photographer should try at least once, as they let you explore the different styles and capabilities each one offers.
50mm for everyday use
The 50mm lens is popularly known as the “nifty fifty” in photography circles, and if you ask any professional photographer or serious hobbyist, this would be one of the first lenses they’d recommend. Many say that 50mm approximates what the human eye sees, but that is debatable. Nevertheless, it’s still recommended because this focal length has limited distortion, so the photos it takes typically look natural. It’s also quite versatile; I’ve used it for portraiture, still and product photography, travel photography, and photojournalism.
More importantly, it’s one of the cheapest “fast” lenses you can buy, usually offering an f/1.8 aperture or bigger. You can find a brand-new Canon EF 50mm f/1.8 STM lens on Amazon for just $166, while a comparable Sony FE 50mm F1.8 Standard lens goes for $278. If that’s still a bit steep, there are several third-party options from manufacturers like Yongnuo and Viltrox. You can get them even cheaper by buying a used camera lens, but you need to know what to look for.
Note that on a cropped-sensor camera, a 50mm lens frames like a 75mm (for Fujifilm and Nikon), an 80mm (for Canon), or a 100mm (for Lumix) lens, depending on the model and camera brand. But if you have a Canon camera with an APS-C sensor and want to recreate the field of view (FOV) of a 50mm lens on it, a 35mm lens is the closest you can get from the brand (although a 30mm lens from a third-party manufacturer like Sigma is closer to the FOV of a 50mm on a full frame camera).
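If the crop-factor math is new to you, the conversion is just a multiplication: the lens’s focal length times the sensor’s crop factor gives the full-frame-equivalent framing. A quick Python sketch (the crop factors here are the commonly quoted approximations, and they vary slightly between models):

```python
# Full-frame-equivalent focal length = actual focal length x crop factor.
# The factors below are common approximations and vary slightly by model.
CROP_FACTORS = {
    "full frame": 1.0,
    "APS-C (Fujifilm/Nikon)": 1.5,
    "APS-C (Canon)": 1.6,
    "Micro Four Thirds (Lumix)": 2.0,
}

def equivalent_focal_length(focal_length_mm, crop_factor):
    return focal_length_mm * crop_factor

for sensor, factor in CROP_FACTORS.items():
    print(f"50mm on {sensor}: ~{equivalent_focal_length(50, factor):.0f}mm equivalent")
# 50mm frames like ~75mm on a 1.5x crop, ~80mm on Canon APS-C, and ~100mm on
# Micro Four Thirds -- which is why a 30-35mm lens is the usual substitute.
```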
35mm for street photography
Although the 50mm is a handy lens for nearly every situation, its FOV is rather narrow, especially if you’re shooting in enclosed spaces or want to include the environment for a bit of context. That’s why street photographers prefer the wider 35mm focal length — it lets them capture wider vantage points without getting too much distortion from even wider lenses. It’s also still useful for portrait photography; the lead photographer on the wedding team I worked with bought a Sigma 35mm f/1.4 Art lens after trying my more basic Canon EF 35mm f/2.
As usual, you wouldn’t get the same FOV when you mount a 35mm lens on a cropped-sensor camera. And since I sold all my full frame cameras when I retired from the wedding photography industry and “downgraded” to this cheap, yet high-quality digital camera, I bought a Canon EF-S 24mm f/2.8 STM lens which equates to about 38.4mm when attached to my Canon EOS 200D Mk II.
I love this lens for my street photography because it’s relatively small and unassuming, even with a lens hood attached, and you can see a sample photograph taken with it in the Instagram post above. The only downside is that since it’s a prime lens, the only way I can “zoom” in is to physically get closer to the subject.
100mm macro for portraits and details
I would’ve recommended the 85mm lens as a must-try, but I usually relegate it to portraiture and candid photography duties. Instead, I’d suggest a 100mm macro lens, which can achieve a similar effect (although at a slightly smaller f/2.8 aperture versus the larger f/1.8 found on the 85mm) of compressing the space between you and the subject, resulting in more flattering portraits. But what I like best about the 100mm is that it’s also a macro lens, and it unlocks a whole new world that you wouldn’t otherwise get from other lenses.
The 100mm macro lens lets me get much closer to my subject than other lenses. My old Canon EF 100mm f/2.8 Macro focuses as close as 12 inches (versus the 33-inch minimum focusing distance of the Canon EF 85mm f/1.8), letting you see finer details and even reveal the textures on the surfaces of the objects you photograph. That’s why it’s one of the essential lenses you need if you want to get into product photography.
24-70mm f/2.8: the standard lens
When I worked as a wedding and event photographer, almost everyone in the industry had this lens, even though it’s quite expensive. For example, the Canon EF 24-70mm f/2.8L USM is currently priced at more than $1,200 on Amazon, making it even more expensive than some entry-level and even mid-range cameras. But it’s worth the investment because of how well-rounded it is.
The lens can capture wide areas on the 24mm end of its range, and you can even use its distortion to create an effect. It also retains the ability to take decent portraits at 70mm, while the 35mm and 50mm focal lengths we discussed above are also covered. More importantly, it has a fixed f/2.8 aperture, so you do not need to push the sensitivity on your camera when shooting in low-light situations.
Of course, this lens has its own downsides, too. Aside from being quite expensive, it’s also a large and heavy piece of equipment, weighing 805 grams. It’s a great lens, and I love its wide range and large opening for covering events. But its size and heft tend to make it unsuitable for street photography, especially as you lose the discretion of smaller and lighter prime lenses.
70-200mm f/2.8: a fast zoom lens
As a newbie photographer, I always wanted the long reach of a zoom lens, and I got that with the 70-200mm. However, this is more than just a zoom lens — aside from getting me nearer to the action, its narrow FOV compresses the scene, making the background appear much closer and larger than what you’d usually see with your naked eye.
You can see in the sample photo above the shallow depth of field that lets me isolate my subject from the foreground and the background, making it easier to guide the viewer to what I want them to see. This is next to impossible to achieve with wider lenses, unless you edit the image on your phone or computer.
This is going to be an important part of your kit if you’re into sports and wildlife photography. The 200mm end gives you enough reach to capture the action up close even if you’re sitting courtside or behind the safety barriers of an F1 race. It also lets you capture images of birds and other animals without endangering them or yourself.
However, just like the 24-70mm, this lens is quite expensive. The Canon EF 70-200mm f/2.8L IS III USM currently costs $2,399 on Amazon, a small fortune for most hobbyists but a crucial investment for professionals. But whether you plan to turn your passion into a business or just want to enjoy capturing the beauty of the world the way you see it, you need to try out this lens at least once in your life to see the possibilities that it will give you.
Despite looking much the same as the older models, these headphones have been given a significant overhaul by B&W. The headband has been redesigned to fit a wider range of heads, the controls have been reshaped to be easier to find and use, and the headphones are slimmer for a more attractive profile.
The only issue we have is with the controls: we didn’t feel they needed to be changed, but they work well enough.
These headphones feature noise cancelling and a transparency mode, and despite Bowers & Wilkins’ claims of improving both areas, the noise cancelling isn’t as strong as the Bose QuietComfort Ultra Headphones or Sony WH-1000XM6. The transparency mode could be clearer too. ANC is not these headphones’ strongest point.
The Bowers & Wilkins Music app offers the means to customise bass and treble, as well as a custom EQ option to create your own sound profile, a first for a pair of Bowers wireless headphones.
These headphones keep the feature set relatively simple, and they aren’t as ‘smart’ or as feature-laden as the less expensive Sony WH-1000XM5, but the app does have built-in streaming support for services such as Qobuz, Deezer, and Tidal.
The battery life remains 30 hours of listening from one charge, though in our tests we found it could go longer with an Android smartphone.
Bluetooth support includes aptX Lossless, one of the higher-quality wireless codecs, and as usual the wireless connection is excellent.
These are also among the best headphones for call quality, offering great clarity and detail while keeping background sounds to a minimum.
The sound quality here is the best it’s been for the Px7 range. It’s energetic, clear, expressive and natural, and the headphones’ levels of detail, dynamism and sense of spaciousness make them one of the best-sounding models on the market.
Low frequencies have more depth and power, the midrange is detailed and the high frequencies clear. If you’re after a pair of wireless headphones for the sound, there’s none better at this price than the Px7 S3.
Sitting in a recent district administrator meeting, I found myself excited about a new student data platform my district is rolling out. This new tool, called by a catchy acronym and presented on a flashy dashboard, would collect a variety of information about student skills, mindsets and achievement. It would let us break down information by subgroup and assign overall scores to students, helping us identify who needs additional support.
Initially, I was enthusiastic about how it could empower teachers to better understand students and improve outcomes. But since then, after conversations with the teachers in my building and reflecting on my own experiences using data in the classroom, I’ve begun to wonder whether we are focusing on the wrong data or placing too much emphasis on data overall.
I love looking at data. I’m excited when data surprises me or shows me something more clearly. It’s motivating to see trend lines sloping upward and green arrows pointing toward the sky. Data can help us see the bigger picture when looking at larger systems. We can see which schools are suspending too many students of color and which districts are improving reading scores. As an administrator, I find this illuminating and helpful in guiding how schools make decisions.
But as data trickles down to classrooms and individual students, the usefulness and impact get murkier. In the Montessori school where I teach, where our focus is guiding the child according to their interests and readiness, the data we have to collect affects what we focus on, often in unexpected ways, and sometimes to the detriment of the system itself.
Teaching to the Test
My school is a successful one, and looking at our annual school report card should be a source of pride for the teachers. The report card is based primarily on our state test scores in math and reading, and various calculations are made from our students’ performance on it. But when we shared the most recent report card that showed our school once again exceeded expectations, the results were met with shrugs and muted applause. It isn’t that they aren’t proud of what our students can do; they just recognize the narrowness of the data and how indirectly it connects to what is happening in their Montessori classrooms.
When I pointed out that our report card showed math achievement was an area for improvement, the response was, “Are you saying we should teach to the test?” They know that we could game the system by focusing on test prep and the specific questions their students might encounter. Because we follow a Montessori curriculum with three grade levels in our classrooms, our sequence doesn’t always align with grade-level standards, which can show up on tests, with students scoring poorly on topics they haven’t been introduced to yet. We could align our curriculum with the test and focus our teaching on what the test assesses, but doing so goes against our philosophy of allowing students to make choices about their learning at their own pace.
With this tension in mind, I wonder whether data distorts the focus of education. Our current focus on reading and math scores, based on standardized testing, reflects part of what we want our schools to do. But teachers know that students are capable of achieving much more than our report cards show. Is there some golden indicator that we just haven’t found yet — a measurement like happiness or flourishing — that would be more meaningful? And of course, if we find it, won’t it also become distorted?
Information Overload
There is also a heavy focus in our district on using data to determine which students qualify for additional support through differentiation, interventions and individualized instruction. Administration requires us to hold monthly meetings to review student data and determine who is progressing and who might need more support. On one level, this seems like a great practice for identifying who needs help, but in reality, the system’s capacity to act on that information is overstretched, leading to distortion and ultimately to burnout.
I remember my frustrations as a teacher in these meetings. The data was interesting and could help you to confirm or question ideas you had about students based on your classroom observations. But it didn’t often provide helpful information for supporting students. The time spent in these meetings outweighed the benefit I got from them, and took away from the little time I had to prepare and plan for my students.
Teachers I work with have regularly expressed feeling overwhelmed by the amount of information they need to consider and the testing required to gather it. In our early grades, due to a new state law mandating early literacy assessments, students are tested monthly on letter-sound identification and oral reading fluency. This generates an unending stream of data to grapple with and a constant feeling of needing to do more to address it, all of which adds to stress on teachers, students and the system. I’ve seen amazing teachers, skilled at connecting with kids and providing rich learning experiences, brought to tears because there was too much red on a data spreadsheet.
Teachers don’t have the time to assess and examine all the data they’re now expected to, and monthly checks of early reading indicators take time away from actually teaching those skills. Being responsive to the data you gather means stopping what you’re doing and finding new ways to help kids learn what the data says they need. Teachers are expected to find new resources and determine when and how to work with small groups that need similar support, while also providing meaningful learning opportunities for other students. And, of course, different kids need different things, so you’d need to do this for multiple groups, which is unrealistic to expect all teachers to have the capacity to do.
Meaningful Measurement
Schools, as they are currently designed, weren’t supposed to be responsive to the amount of data we’re collecting. They were designed to teach a group of students a set of information in a specific sequence each year, and then grade them on how well they learned what they were expected to learn. They were designed to tell us which students could meet the standards, and who couldn’t, not to ensure that each child could learn and flourish.
When I was a classroom teacher, I kept track of how many books my students read each month. It wasn’t research-backed or scientifically valid, but I found the data helpful for identifying who was and wasn’t reading, and thinking about how I could support them. In some cases, it helped me direct kids to books that they might get excited about; in other cases, it just let me know that a particular kid wasn’t that into reading, and that that might have to be OK for now. The data wasn’t complicated, but it let me quantify what I was observing in my classroom in a way that was meaningful to me and, most importantly, helped me connect with my students as whole people.
A key component of Montessori philosophy is the teacher as observer — watching and documenting what students choose and do to understand and assess what they are ready for. Every teacher should have the time and space to measure and track what feels meaningful and helpful to them.
This may look different for every teacher, but the important factor is that it has meaning to them and is connected to their students and their practice. Likewise, we need to remember that standardizing the expectations for students goes against what we know about how people develop. There’s always going to be variation in a dataset — there’s no metric on which we are all the same.
As an administrator, my responsibility is to understand and use data in ways that are helpful, while also protecting teachers and students from distractions and distortions that undermine the larger goals of creating opportunities for growth and learning for all students.
Ultimately, data should serve as a guide rather than a governor, informing our decisions without eclipsing the human elements of teaching and learning. If we can strike that balance, we can create systems that honor both the complexity of children and the professional wisdom of the educators who know them best.