
5 Lenses Every Photographer Needs To Try At Least Once







Newcomers to photography are often mesmerized by the latest and greatest cameras, where large megapixel counts and full-frame sensors get equated with better image quality. I experienced this myself when I was new to photography some 20 years ago, and I had similar thoughts when I bought a camera again recently.

But instead of splurging on one of the best mirrorless cameras or a DSLR, I suggest getting a more affordable model or even a decent used mirrorless camera. You can then put the savings toward a good set of lenses, which will do far more for your photography. Stepping away from the cheap kit lenses bundled with entry-level and even some mid-range camera models can unlock your creativity and give you more flexibility in executing your vision.


However, there are a ton of lenses on the market, and they can get quite expensive once you start buying everything. That can make shopping for lenses confusing, since it's hard to know what to prioritize when you're building your kit. There's no one-size-fits-all answer, as lens preferences vary between shooting styles. But at the very least, these are lenses every photographer should try at least once, because each one opens up a different style and set of capabilities.


50mm for everyday use

The 50mm lens is popularly known as the "nifty fifty" in photography circles, and if you ask any professional photographer or serious hobbyist, it would be one of the first lenses they'd recommend. Many say that 50mm approximates what the human eye sees, though that's debatable. Either way, it's recommended because this focal length shows little distortion, so the photos it takes typically look natural. It's also quite versatile: I've used it for portraiture, still-life and product photography, travel photography, and photojournalism.

More importantly, it's one of the cheapest "fast" lenses you can buy, usually offering an f/1.8 aperture or wider. You can find a brand-new Canon EF 50mm f/1.8 STM lens on Amazon for just $166, while a comparable Sony FE 50mm F1.8 Standard lens goes for $278. If that's still a bit steep, there are several third-party options from manufacturers like Yongnuo and Viltrox. You can go even cheaper by buying a used camera lens, but you need to know what to look for.

Note that on a crop-sensor camera, a 50mm lens behaves like a 75mm (for Fujifilm and Nikon), an 80mm (for Canon), or a 100mm (for Lumix) lens, depending on the camera brand. If you have a Canon camera with an APS-C sensor and want to recreate the field of view (FOV) of a 50mm lens on full frame, a 35mm lens is the closest the brand offers, although a 30mm lens from a third-party manufacturer like Sigma comes closer still.
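
If you want to check the math for your own camera, the multiplication is simple. Here's a quick Python sketch using the commonly cited crop factors for each system (my own illustration, not from any manufacturer's spec sheet):

```python
# Full-frame equivalent focal length = actual focal length x crop factor.
# Crop factors below are the commonly cited values for each system.
CROP_FACTORS = {
    "Fujifilm/Nikon APS-C": 1.5,
    "Canon APS-C": 1.6,
    "Micro Four Thirds (Lumix)": 2.0,
}

for system, crop in CROP_FACTORS.items():
    print(f"50mm on {system}: ~{50 * crop:.0f}mm equivalent")

# The reverse calculation explains the lens choices above: for a 50mm-like
# FOV on a 1.6x Canon body you'd ideally want 50 / 1.6 = 31.25mm, which is
# why a 30mm third-party prime gets closer than Canon's own 35mm.
print(f"Ideal focal length for a 50mm FOV on Canon APS-C: {50 / 1.6:.2f}mm")
```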


35mm for street photography

Although the 50mm is a handy lens for nearly every situation, its FOV is rather narrow, especially if you're shooting in enclosed spaces or want to include the environment for a bit of context. That's why street photographers prefer the wider 35mm focal length: it captures a wider vantage point without the heavier distortion of even wider lenses. It's still useful for portrait photography, too. The lead photographer on the wedding team I worked with bought a Sigma 35mm f/1.4 Art lens after trying my more basic Canon EF 35mm f/2.

As usual, you won't get the same FOV when you mount a 35mm lens on a crop-sensor camera. And since I sold all my full-frame cameras when I retired from the wedding photography industry and "downgraded" to this cheap yet high-quality digital camera, I bought a Canon EF-S 24mm f/2.8 STM lens, which equates to about 38.4mm when attached to my Canon EOS 200D Mk II.

I love this lens for street photography because it's relatively small and unassuming, even with a lens hood attached. The only downside is that since it's a prime lens, the only way to "zoom" is to physically move closer to the subject.


100mm macro for portraits and details

I would've recommended the 85mm as a must-try lens, but I tend to relegate it to portraiture and candid photography duties. Instead, I'd suggest a 100mm macro lens, which achieves a similar effect of compressing the space between you and the subject (although at a slightly smaller f/2.8 aperture versus the larger f/1.8 found on the 85mm), resulting in more flattering portraits. But what I like best about the 100mm is that it's also a macro lens, and it unlocks a whole new world that you wouldn't otherwise get from other lenses.

The 100mm macro lets me get much closer to my subject than my other lenses do. My old Canon EF 100mm f/2.8 Macro focuses as close as 12 inches from the subject (versus the 33-inch minimum focusing distance of the Canon EF 85mm f/1.8), revealing finer details and even the textures on the surfaces of the objects I photograph. That's why it's one of the essential lenses you need if you want to get into product photography.


24-70mm f/2.8: the standard lens

When I worked as a wedding and event photographer, almost everyone in the industry had this lens, even though it's quite expensive. For example, the Canon EF 24-70mm f/2.8L USM is currently priced at more than $1,200 on Amazon, which makes it pricier than some entry-level and even mid-range cameras. But it's worth the investment because of how well-rounded it is.

The lens can capture wide areas at the 24mm end of its range, and you can even use its distortion for creative effect. It also takes decent portraits at 70mm, while covering the 35mm and 50mm focal lengths discussed above. More importantly, it has a constant f/2.8 aperture, so you don't have to push your camera's ISO as hard when shooting in low-light situations.

Of course, this lens has its own downsides, too. Aside from being quite expensive, it's also a large and heavy piece of equipment, weighing 805 grams. It's a great lens, and I love its wide range and bright aperture for covering events. But its size and heft make it ill-suited to street photography, where you lose the discretion of smaller, lighter prime lenses.


70-200mm f/2.8: a fast zoom lens

As a newbie photographer, I always wanted the long reach of a zoom lens, and the 70-200mm gave me exactly that. But it's more than just a zoom lens: aside from getting me nearer to the action, its narrow FOV compresses the scene, pulling the background in so it looks much bigger than it would to your naked eye.

The resulting shallow depth of field lets me isolate my subject from the foreground and the background, making it easier to guide the viewer's eye to what I want them to see. That's next to impossible to achieve with wider lenses, unless you edit the image on your phone or computer.


This is going to be an important part of your kit if you're into sports or wildlife photography. The 200mm end gives you enough reach to capture the action up close, even from courtside seats or behind the safety barriers of an F1 race. It also lets you photograph birds and other animals without endangering them or yourself.

However, just like the 24-70mm, this lens is quite expensive. The Canon EF 70-200mm f/2.8L IS III USM currently costs $2,399 on Amazon, a small fortune for most hobbyists but a crucial investment for professionals. But whether you plan to turn your passion into a business or just want to enjoy capturing the beauty of the world the way you see it, you need to try out this lens at least once in your life to see the possibilities that it will give you.






Sixteen Claude AI agents working together created a new C compiler


Amid a push toward AI agents, with both Anthropic and OpenAI shipping multi-agent tools this week, Anthropic is more than ready to show off some of its more daring AI coding experiments. But as usual with claims of AI-related achievement, you’ll find some key caveats ahead.

On Thursday, Anthropic researcher Nicholas Carlini published a blog post describing how he set 16 instances of the company’s Claude Opus 4.6 AI model loose on a shared codebase with minimal supervision, tasking them with building a C compiler from scratch.

Over two weeks and nearly 2,000 Claude Code sessions costing about $20,000 in API fees, the AI model agents reportedly produced a 100,000-line Rust-based compiler capable of building a bootable Linux 6.9 kernel on x86, ARM, and RISC-V architectures.

Carlini, a research scientist on Anthropic’s Safeguards team who previously spent seven years at Google Brain and DeepMind, used a new feature launched with Claude Opus 4.6 called “agent teams.” In practice, each Claude instance ran inside its own Docker container, cloning a shared Git repository, claiming tasks by writing lock files, then pushing completed code back upstream. No orchestration agent directed traffic. Each instance independently identified whatever problem seemed most obvious to work on next and started solving it. When merge conflicts arose, the AI model instances resolved them on their own.
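
Carlini's post doesn't publish the coordination code, but the pattern he describes (claim a task by creating a lock file in the shared repo, publish the claim, and back off on conflict) can be sketched in a few lines of Python. Everything below, from the paths to the try_claim helper, is illustrative rather than Anthropic's actual implementation:

```python
import os
import subprocess

LOCKS_DIR = "locks"  # assumed directory tracked in the shared repository

def try_claim(task_name: str, agent_id: str) -> bool:
    """Atomically create locks/<task>.lock; fail if another agent got there first."""
    os.makedirs(LOCKS_DIR, exist_ok=True)
    lock_path = os.path.join(LOCKS_DIR, f"{task_name}.lock")
    try:
        # O_CREAT | O_EXCL makes creation atomic on the local filesystem.
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    with os.fdopen(fd, "w") as f:
        f.write(agent_id + "\n")
    # Pushing the lock file is what makes the claim visible to the other
    # agents; a rejected push means someone else claimed the task first.
    subprocess.run(["git", "add", lock_path], check=True)
    subprocess.run(["git", "commit", "-m", f"claim {task_name} ({agent_id})"], check=True)
    return subprocess.run(["git", "push"]).returncode == 0

if __name__ == "__main__":
    if try_claim("parser-switch-statements", agent_id="agent-07"):
        print("claimed; start working")
    else:
        print("already taken; pick another task")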


The resulting compiler, which Anthropic has released on GitHub, can compile a range of major open source projects, including PostgreSQL, SQLite, Redis, FFmpeg, and QEMU. It achieved a 99 percent pass rate on the GCC torture test suite and, in what Carlini called “the developer’s ultimate litmus test,” compiled and ran Doom.

It’s worth noting that a C compiler is a near-ideal task for semi-autonomous AI model coding: The specification is decades old and well-defined, comprehensive test suites already exist, and there’s a known-good reference compiler to check against. Most real-world software projects have none of these advantages. The hard part of most development isn’t writing code that passes tests; it’s figuring out what the tests should be in the first place.
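
That last advantage is easy to underestimate. With a known-good reference compiler available, every test program becomes a differential test: compile it with both compilers, run both binaries, and compare the results. A minimal sketch of that harness might look like this (the compiler name mycc is a stand-in, not the project's actual binary):

```python
import subprocess
import sys

def compile_and_run(compiler: str, source: str, out: str) -> tuple[int, bytes]:
    """Compile one C source file and return (exit code, stdout) of the binary."""
    subprocess.run([compiler, source, "-o", out], check=True)
    result = subprocess.run([f"./{out}"], capture_output=True, timeout=10)
    return result.returncode, result.stdout

def differential_test(source: str) -> bool:
    """The candidate compiler passes if its binary behaves like gcc's."""
    reference = compile_and_run("gcc", source, "ref_bin")
    candidate = compile_and_run("mycc", source, "test_bin")
    return reference == candidate

if __name__ == "__main__":
    print("MATCH" if differential_test(sys.argv[1]) else "MISMATCH")
```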




Alpine Skiing at Winter Olympics 2026 Free Streams


Alpine skiing live streams at the 2026 Winter Olympics will see Austria attempt to continue their historic dominance of this event, with challenges expected from Switzerland and France.




Ski Jumping at Winter Olympics 2026 Free Streams


Ski jumping at the Winter Olympics should deliver plenty of drama and entertainment at Milano Cortina 2026.




Why these startup founders are leaving Seattle for San Francisco


Nour Gajial (left), CEO of MathGPT, and Avi Agola, co-founder of Talunt, recently left the Seattle region for San Francisco. (Photos courtesy of Gajial and Agola)

Seattle’s startup ecosystem has its strengths, and the city is a global AI hub. But for some tech entrepreneurs, the gravity of San Francisco is hard to resist — especially in the middle of an AI boom.

We caught up with early stage startup founders who recently relocated from Seattle to San Francisco — a move that echoes earlier eras when entrepreneurs with local roots ultimately built valuable companies elsewhere.

This time, founders say the pull is about being inside the “world’s AI capital” as a way to supercharge their startups.

“I knew that moving to SF — where the largest concentration of startups are — would be the best move for maximizing our success,” said Avi Agola, co-founder of recruiting platform Talunt.

Before he arrived at the University of Washington this past fall, Agola immersed himself in Seattle’s startup scene as a teenager. He worked out of Seattle founder hub Foundations, launched his own company, and sold it last year to a fellow Seattle startup.


Agola credits Seattle’s startup community with helping him develop credibility and understand what it takes to run a company.

But as he got Talunt off the ground, Agola packed his bags for San Francisco. Part of the decision was practical: Investors encouraged the move, and many of Talunt’s early customers are in the Bay Area.

Aviel Ginzburg, a Seattle venture capitalist who runs Foundations, said he understands the strategy.

“I think that anyone in their 20s who wants to build in startups should be living down there right now, simply for building a network to get lucky,” he said.


That was part of the reason Nour Gajial, CEO of MathGPT, also moved from Seattle to San Francisco.

After dropping out of Cornell to pursue her AI education startup full time, Gajial returned home to the Seattle area. She found a supportive, tight-knit tech community and a comfortable place to build.

But as MathGPT gained traction, Gajial and her co-founder started making trips to San Francisco. They noticed more startup events, younger founders, and more frequent in-person interactions with people building and funding AI companies.

“There’s always some new AI research that’s going on, or some event that will open your eyes about something,” Gajial said. “I don’t see that energy as much in Seattle.”


Gajial said she’s grateful to have met “some really cool founders” in Seattle. MathGPT co-founder Yanni Kouloumbis lauded the region’s talent pool. But they felt that being in Silicon Valley gives them better odds at making it big.

“We just want to put ourselves in the best possible situation for these spontaneously good things to happen to us,” Kouloumbis said.

Nistha Mitra. (Photo courtesy of Mitra)

Nistha Mitra spent three years in Seattle, where she worked at Oracle. She later launched Neuramill, an early stage company developing software for manufacturing, and noticed a clear divide between Seattle’s corporate tech culture and startup life.

“I don’t think my community in the Big Tech world had any awareness of startups and how startups work,” Mitra said.

Mitra moved to San Francisco six months ago. “In SF, everyone knows what’s going on, no matter who they are,” she said.


She described a hard-charging atmosphere where working 15-hour days on your startup is normal. Being in that environment "really changes how you perform," Mitra said.

When she worked long days in Seattle, friends worried about her. “I feel like in SF, it’s kind of normalized, that kind of lifestyle,” she said.

The same calculus is playing out for more experienced techies.

Vik Korrapati, a Seattle-based founder who spent nearly a decade at AWS, recently announced that his AI startup Moondream is moving from Seattle to San Francisco. He framed the decision around the scale and urgency of the current AI moment.


Artificial intelligence, Korrapati wrote in an online post, is “the biggest platform shift we’re going to see in our working lives,” and relocating was about being “in the right place, with the right people” as his company builds high-performance vision models.

Korrapati said the move wasn’t driven by a lack of talent in Seattle, but by differences in risk tolerance and default behavior. “The issue isn’t ability. It’s default settings,” he wrote, describing a culture where many engineers optimize for stability and incremental progress rather than the uncertainty of early-stage startup work.

Ethan Byrd. (LinkedIn Photo)

In San Francisco, he said, he found more people who had already left Big Tech roles and were willing to make the startup leap. “Seattle has been good to me,” Korrapati said. “I learned how large systems work here. I got the space to spin up Moondream here. I’m not leaving angry.”

Ethan Byrd, a former engineer at AWS, Google, Meta, and Microsoft, helped launch Seattle software startup Actual AI in 2024. Now he’s working on a new startup called MyMX — and is strongly considering a move.

Seattle isn’t a bad place to build a startup, Byrd said, and he loves the city. But San Francisco is just on a different level when it comes to entrepreneurship.


“Everything is easier: hiring, talking to customers, raising money, hosting events,” he said. At the end of the day, as he tries to grow his new startup, Byrd said moving to Silicon Valley “just seems unavoidable.”

But not all Seattle founders are headed south.

“There’s a really good pool of talent right now, especially with the layoffs unfortunately happening,” said Ankit Dhawan, CEO of Seattle-based marketing startup BluePill. “We don’t feel any need to move out of here.”

Silicon Valley is great for fundraising and making connections. “But there comes a moment when it’s too much noise,” said Alejandro Castellano, CEO of Seattle AI startup Caddi. “You just need a place to actually focus on work.”


And when a trip to the Bay Area is needed — some of Caddi’s investors are based there — it’s a short flight away. “You can come back the same day,” Castellano said.

Sunil Nagaraj (left), founder of Silicon Valley venture capital firm Ubiquity Ventures, interviews Auth0 co-founder Eugenio Pace at an event at AI House last week. Nagaraj traveled to Seattle to host the event and visit with Seattle-area startups in Ubiquity’s portfolio. (GeekWire Photo / Taylor Soper)

Many Silicon Valley investors also make trips up to Seattle. Earlier this week, Sunil Nagaraj, managing partner of Palo Alto-based Ubiquity Ventures, hosted a startup event at Seattle's AI House. During his fireside chat with Auth0 co-founder Eugenio Pace, he called out the various Seattle-based founders in the crowd whom he's backed. "Ubiquity Ventures ❤️ Seattle!!" Nagaraj wrote on LinkedIn.

Yifan Zhang, founder of AI House and managing director at the AI2 Incubator, said she wants to get more out-of-town investors connected to the Seattle region.

Zhang built her first startup in San Francisco. For certain types of founders, she said, Silicon Valley is a better place to create serendipitous relationships that can lead to a funding round or a large customer.

Yifan Zhang. (LinkedIn Photo)

“But it’s also easy to get lost in the mix, or get distracted by the hype,” Zhang noted. “It really depends on who you are, but no matter where you’re based, founders still need to do the hard work of selling and building an incredible product and scaling it.”

Seattle is still drawing in many founders from out of town. Real estate startup RentSpree moved here from Los Angeles last year, attracted to the tech talent base and concentration of other real estate and proptech companies.


“Seattle is really great for talent that balances both an aggressive growth perspective, but also building sustainable companies over time,” RentSpree CEO and co-founder Michael Lucarelli told GeekWire in December.

Vijaye Raji, founder and CEO of Seattle-area startup Statsig (acquired by OpenAI last year for $1.1 billion), has called it a “quiet talent” that may be under-appreciated.

Drone startup Brinc is another transplant that landed from Las Vegas. The company, now ranked No. 7 on the GeekWire 200, raised $75 million last year and employs more than 100 people. CEO Blake Resnick has cited the engineering and tech talent pool in Seattle for his decision to relocate.

The city's technology anchors — including Microsoft, Amazon, the University of Washington, and the local engineering centers of Silicon Valley companies — also help import workers who go on to launch companies. Overland AI CEO Byron Boots came to the UW's computer science school in 2019 as an associate professor, and later helped launch the Seattle-based autonomous driving startup that just raised $100 million.


Caleb John, an investor and engineer at Seattle startup studio Pioneer Square Labs, previously worked in San Francisco. He noted that founders in Seattle “are not as deep in the rat race” relative to entrepreneurs elsewhere.

“Your thinking is not as clouded by the hype train,” he said in an interview with Foundations. He also cited a “really strong community of younger people who work in startups” across the Seattle region. “People just don’t know there are startup people here,” John said, noting that the startup scene has grown since he arrived in 2021.

Ginzburg said even as some founders move to San Francisco, it’s important to keep building community in Seattle. He noted that Agola, for example, still remains tethered to Seattle through the Foundations network.

Agola said he’d consider returning to Seattle at some point as his new startup grows.


“I don’t think the Bay is the best for long-term startup growth when it comes to post-series B,” he said. “Moving to Seattle would be the best play to keep the best talent flow while minimizing overhead costs.”

RELATED: ‘The hustle factor is real’: Why this fast-growing Seattle startup is packing its bags for Palo Alto



Investors worried after Big Tech plans $650bn spend in 2026


Big Tech capital expenditure for this year is predicted to rise 60pc from $410bn in 2025.

Meta, Google, Amazon and Microsoft are signalling a collective 2026 capital expenditure package of around $650bn, with AI, cloud and data centres as the unsurprising high-ticket items. Investors, however, are wary, and the Financial Times reported that Amazon, Google and Microsoft are set to lose a combined $900bn in market cap.

Big Tech capital expenditure predictions would mark a rise of 60pc from the $410bn spent in 2025 and 165pc from the $245bn spent the year before.

The four competitors see the race to provide AI compute as “the next winner-take-all or winner-takes-most market”, Gil Luria, an analyst at DA Davidson, told Bloomberg. None of the companies are “willing to lose”, he added.


Amazon shares fell by 11pc after the company’s earnings call yesterday (5 February), in which president and CEO Andy Jassy announced a $200bn capital expenditure (capex) plan for the year, up more than 50pc from last year.

He reasoned that 24pc revenue growth in Amazon’s cloud offerings and 22pc growth in advertising are evidence that the heavy spending is paying off. This year’s spending will focus on AI, chips, robotics and low-Earth orbit satellites, Jassy said.

Meanwhile, Microsoft announced a $37.5bn quarterly capital expenditure bill on 28 January, slightly more than analyst estimates. The company was, for a while, the worst hit among the four, dropping 18pc since the announcement.

The company also disclosed, for the first time, the true scale of its close economic relationship with OpenAI: roughly 45pc of its $625bn in expected future cloud contracts comes from the start-up, prompting investor wariness about over-reliance on a single customer.


Google parent Alphabet initially dropped 4pc in share price after reporting earnings on Wednesday (4 February), but has since climbed back to sit just below 0.5pc down as of yesterday. Sales and earnings per share grew by 18pc and 31pc respectively during Q4, beating analyst expectations, while Alphabet’s cloud backlog grew by 55pc quarter-over-quarter to $240bn.

The company announced capex of between $175bn and $185bn for the year, roughly doubling its spending to meet customer demand and capitalise on the growth of its AI offerings. Despite fears of heavy spending, Gemini Enterprise is selling 8m seats and the Gemini app now has more than 750m monthly active users, which, Motley Fool reported, is keeping investors relatively content.

Lastly, Meta announced total expenses for 2026 in the range of $115bn to $135bn. The growth, it said, is driven by increased investment to support its Meta Superintelligence Labs efforts as well as its core business.

While the stock rose 10pc after the earnings announcement, the Financial Times reports that it has since lost those gains, as broader investor fear pushed the tech-heavy Nasdaq down 4pc over the week.





Teaching a Generation That Questions Everything


I’ve been teaching long enough to recognize when something fundamental is shifting in the classroom. Lately, that shift sounds like a single word echoing through my courses: why.

Why are we doing this? Why does it matter? Why should I care?

At first, it can sound like pushback, the kind of challenge that might once have been mistaken for defiance. But I don’t see it that way. When Gen Z students ask “why,” they’re not questioning authority; they’re questioning meaning. They’re trying to understand whether what they’re being asked to learn aligns with a world that already feels crowded with information, competition and contradiction.

And they’re right to ask.

Jeff LeBlanc

Gen Z has grown up surrounded by constant messaging — some genuine, some hollow. They’ve seen companies preach purpose while chasing profit, influencers claim authenticity while filtering reality, and institutions talk about mental health while rewarding burnout. So when they step into a classroom, they’re not looking for performance. They’re looking for proof.

In many ways, “why” has replaced the old-fashioned raise of the hand. It’s the new signal for engagement, not disengagement. These students aren’t rebellious for sport; they’re searching for relevance. When they ask “why,” they’re asking us to show them the thread between knowledge and purpose.

For educators, that’s both thrilling and challenging. The old classroom contract may no longer be enough. Gen Z expects transparency in exchange for trust. They want to know not only what they’re learning but how it connects to who they’re becoming. That expectation is reshaping how many of us teach.

I’ve noticed that when I take the time to explain why we’re doing something — even briefly — engagement rises. It doesn’t need to be a speech or a slide titled “Why It Matters.” It can be a few sentences woven into the moment: “You’ll use this when you’re leading a team someday,” or “This will help you understand how strategy actually plays out in a business setting.” Framing purpose in passing often lands more effectively than any formal statement could. It tells students that there’s intention behind what they’re being asked to do.

And when the connection isn’t obvious, I try to make the learning process itself transparent. I’ll tell them why I’ve designed a particular project or changed an assignment from last semester. I explain my reasoning the way I’d want a mentor to explain theirs — not to justify, but to include. Once they see the care that goes into the design, their tone shifts from skeptical to curious.


New Perspective

That shift has changed my own mindset as an instructor. I’ve started to see my role less as delivering content and more as modeling thoughtfulness — the same kind I’m asking of them. I don’t have to declare that an assignment matters; I can show that it does by connecting it to a broader purpose, by caring about it visibly.

When things don’t go perfectly, I’ve learned to acknowledge that too. I used to think admitting uncertainty would weaken credibility. It turns out it does the opposite. When I tell students, “I’m still experimenting with how to teach this,” they don’t lose confidence — they lean in. They respect honesty because it mirrors their own experience of figuring things out.

That’s the real undercurrent here: authenticity has replaced authority as the key driver of credibility. Gen Z doesn’t automatically trust titles or experience; they trust consistency between what we say and what we do. They’ve been burned too many times by institutions that preached one set of values and practiced another. In the classroom, they want something simpler — teachers who mean what they say.

This doesn’t mean lowering standards or catering to comfort. If anything, it’s raised expectations. When students believe something has meaning, they work harder. I’ve seen it when my students analyze real company challenges instead of hypothetical ones, or when they present their findings to local business leaders rather than just to me. They’re sharper, more invested, and more willing to push themselves when the stakes feel real.


Even small acts of transparency build trust. Explaining why feedback is framed a certain way, or why participation matters, helps students see that the structure exists for a reason. They might not always agree, but they rarely tune out.

Overcoming Defensiveness

Of course, this approach can be draining. There are days when the “whys” feel relentless — when every question seems to demand another explanation, and you wonder if they’ll ever just take your word for it. But over time, I’ve come to see their skepticism not as defiance but as discernment. They’re not trying to tear down the system; they’re trying to make it make sense.

When a student asks, “Why are we doing this?” they’re really saying, “Help me to see the point.” That’s not cynicism. You might call it curiosity with higher benchmarks. And if we can meet that question with openness instead of defensiveness, the classroom becomes a space of shared inquiry rather than guarded authority.

There’s an irony in all this. The very generation accused of being distracted is, in many ways, the most focused — just not on what older models of education assumed mattered. They’re focused on meaning. They want clarity, fairness and consistency, but they also want a sense of humanity behind it all. They crave professors who teach like people, not policies.


Maybe that’s the lesson for us, too. If Gen Z is asking “why,” perhaps we should start asking it of ourselves — not as a challenge, but as reflection. Why do we teach the way we do? Why do we grade like this? Why do we define learning in these terms?

Teaching a generation that questions everything isn’t easy. But it’s not resistance, it’s renewal. Their “why” invites us to rediscover our own.



Europe’s social media age shift: Will tougher rules change how teens use the internet?



It is just the beginning of 2026, and things are happening even faster than last year. Not only in technology, but also in regulations, laws, and in how we deal with all the information around us. As a person born in the 90s, social media was once an unknown land for me, a place that felt genuine in the beginning. It still had dangers, but it seemed less risky, or maybe our parents’ rules were stricter. I don’t want to go down the psychological path here, but I want to look at where we are headed with so much risk,…
This story continues at The Next Web



UiPath pushes deeper into financial services with WorkFusion acquisition



UiPath, the Romanian unicorn, has agreed to buy WorkFusion, bringing a specialist in AI agents for financial-crime compliance into its fold as part of a broader push into agentic automation for the banking sector. The deal closed in UiPath’s first quarter of fiscal 2027; financial terms were not disclosed. WorkFusion’s software focuses on repetitive and resource-intensive parts of compliance work, from customer screening and anti-money-laundering (AML) checks to know-your-customer (KYC) investigations. “Financial institutions need intelligent solutions to combat sophisticated financial crimes and navigate evolving compliance requirements,” said Daniel Dines, CEO of UiPath. Those capabilities now sit alongside UiPath’s existing automation…
This story continues at The Next Web



Adaptive Power in iOS 26 Is Boosting Your iPhone Battery in the Background


Other Apple Intelligence features get all the attention, but in iOS 26, one of my favorite tools does its thing quietly in the background. On iPhone models capable of running Apple's AI tech, the Adaptive Power setting works behind the scenes to stretch battery life, even on many older iPhones.

Currently, the iPhone uses as much power as it needs to perform its tasks. You can extend battery life by taking a few simple steps, such as reducing screen brightness and disabling the always-on display. Or, if your battery is running low, you can turn on Low Power Mode, which limits background activity, like fetching mail and downloading data, and dims the screen to help extend battery life. Low Power Mode also kicks in automatically when the battery level reaches 20%.

If Low Power Mode is the hammer that knocks down power consumption, Adaptive Power is the scalpel that intelligently trims energy use here and there as needed. Based on Apple's description that accompanies the control, the savings will be felt mostly in power-hungry situations such as recording videos, editing photos or playing games.


Apple says Adaptive Power takes about a week to analyze your usage behavior before it begins actively working. And it works in the background without needing any management on your part. 

Here’s how Apple describes it in the iPhone user guide: “It uses on-device intelligence to predict when you’ll need extra battery power based on your recent usage patterns, then makes performance adjustments to help your battery last longer.”


Which iPhone models can use Adaptive Power?

The feature uses AI to monitor and choose when its power-saving measures should be activated, which means only phones compatible with Apple Intelligence get the feature. These are the models that have the option:


• iPhone 17
• iPhone 17 Pro and iPhone 17 Pro Max
• iPhone Air
• iPhone 16 and iPhone 16 Plus
• iPhone 16 Pro and iPhone 16 Pro Max
• iPhone 16e
• iPhone 15 Pro and iPhone 15 Pro Max

Although some iPad and Mac models support Apple Intelligence, the feature is only available on iPhones.

How to turn Adaptive Power on

Adaptive Power is on by default on the iPhone 17, iPhone 17 Pro, iPhone 17 Pro Max and iPhone Air. For other models, you must opt in to use it. In iOS 26, you’ll find the Adaptive Power toggle in Settings > Battery > Power Mode. To be alerted when the feature is active, turn on the Adaptive Power Notifications option.

In iOS 26, turn on the Adaptive Power setting to help extend battery life.

Screenshots by Jeff Carlson/CNET

Adaptive Power sounds like an outgrowth of Game Mode, introduced in iOS 18, which routes all available processing and graphics power to the frontmost app and pauses other processes in order to deliver the best experience possible — at the notable expense of battery life.


When the iPhone is using Adaptive Power, a notification appears.

Screenshot by Jeff Carlson/CNET

What does this mean for your charging habits?

Although we all want as much battery life as possible all the time, judging by the description, Adaptive Power's optimizations won't always be active, even if you leave the feature on. "When your battery usage is higher than usual" could cover only a limited number of situations. Still, considering that, according to a CNET survey, 61% of people upgrade their phones because of battery life, a feature like Adaptive Power could extend the longevity of their phones just by updating to iOS 26.

I also wondered whether slightly adjusting display brightness could be disruptive, but in my experience so far, it hasn't been noticeable. Because the feature also selectively de-prioritizes processing tasks, the outward effects seem minimal. When it activated on my iPhone 16 Pro, the only indication was the Adaptive Power alert that appeared.

We’ll get a better idea about how well Adaptive Power works as more people adopt iOS 26 and start buying new iPhone models. Also, remember that shortly after installing a major software update, it’s common to experience worse battery life as the system optimizes data in the background; Apple went so far as to remind customers that it’s a temporary side effect.




Why PlayStation Graphics Wobble, Flicker And Twitch


Although often lumped together under a single ‘retro game’ aesthetic, the first game consoles focused on 3D graphics, like the Nintendo 64 and Sony PlayStation, featured very distinct visuals that make these systems easy to distinguish. Whereas the N64 mostly suffered from a small texture buffer, the PS’s weak graphics hardware forced compromises that produced the console’s defining jittery, wobbly graphics.

These weaknesses of the PlayStation and their results are explored by [LorD of Nerds] in a recent video. Make sure to toggle on subtitles if you do not speak German.

It could be argued that the PlayStation didn’t have a 3D graphics chip at all, just a video chip that could blit primitives and sprites to the framebuffer. This forced PS developers to draw 3D graphics without niceties like a Z-buffer, putting a lot of extra work on the CPU.

The problem extends to texture mapping as well: the PS performs affine texture mapping, which interpolates texture coordinates linearly across the polygon in screen space, without taking the camera’s perspective into account. The result is textures that constantly shift and swim as the camera moves. Developers could improve matters by subdividing surfaces into more polygons, but that of course reduces performance. This affine mapping is the main cause of the shifting and wobbling of textures.
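
To see why perspective matters, compare the two interpolation schemes side by side. This little Python sketch (our illustration, not from the video) interpolates a texture coordinate u along a polygon edge that recedes from z = 1 to z = 4; the affine version lands on the wrong texels everywhere except the endpoints:

```python
def affine_u(u0, u1, t):
    """PlayStation-style: interpolate u linearly in screen space."""
    return u0 + (u1 - u0) * t

def perspective_u(u0, z0, u1, z1, t):
    """Perspective-correct: interpolate u/z and 1/z, then divide."""
    u_over_z = u0 / z0 + (u1 / z1 - u0 / z0) * t
    one_over_z = 1 / z0 + (1 / z1 - 1 / z0) * t
    return u_over_z / one_over_z

# Edge from (u=0, z=1) near the camera to (u=1, z=4) far away:
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"t={t:4.2f}  affine u={affine_u(0, 1, t):.3f}  "
          f"correct u={perspective_u(0, 1, 1, 4, t):.3f}")
```

At the halfway point on screen, the correct coordinate is u = 0.2, not 0.5: the far half of the polygon should cover far more of the texture than the near half, and ignoring that is exactly what makes PS textures slide around as the viewing angle changes.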


Another issue on the PS was its lack of mipmapping support. A mipmap is a sequence of copies of the same texture at progressively lower resolutions, so a high-resolution version can be used when the camera is close and a low-resolution one when it is far away. Without mipmaps, many texture pixels end up being rendered to the same point on the display, and camera movement leads to interesting flickering effects.
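
The usual selection rule is simple: pick the mip level from how many texels fall into a single screen pixel. A rough sketch of that rule (a common textbook formulation, not something the PS hardware actually did):

```python
import math

def mip_level(texels_per_pixel: float, num_levels: int) -> int:
    """Pick roughly log2(texels covering one pixel), clamped to valid levels."""
    level = max(0.0, math.log2(max(texels_per_pixel, 1e-6)))
    return min(int(level), num_levels - 1)

# A 256x256 texture has 9 mip levels (256, 128, ..., 1). As a surface
# recedes and more texels crowd into each pixel, a coarser level is used,
# so detail is averaged ahead of time instead of flickering frame to frame.
for footprint in (0.5, 1, 2, 4, 8, 16):
    print(f"{footprint:4.1f} texels/pixel -> level {mip_level(footprint, 9)}")
```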

When rendering to the output resolution, the Nintendo 64 blended neighboring texture pixels (texels) into smooth gradients to fit them to the display, whereas the PS used the much more primitive nearest-neighbor interpolation, which made object edges in particular look like they shimmered and changed shape and color.
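
In one dimension, the difference between the two sampling strategies comes down to a few lines. This is an illustrative sketch, with the N64’s bilinear filtering reduced to its 1D linear case:

```python
def nearest(texels, u):
    """PS-style: snap the coordinate to the single closest texel."""
    i = int(u * len(texels))
    return texels[min(i, len(texels) - 1)]

def linear(texels, u):
    """N64-style (1D case of bilinear): blend the two nearest texels."""
    x = u * len(texels) - 0.5
    x = min(max(x, 0.0), len(texels) - 1.0)   # clamp to the texture edge
    i = int(x)
    j = min(i + 1, len(texels) - 1)
    frac = x - i
    return texels[i] * (1 - frac) + texels[j] * frac

row = [0, 255, 0, 255]  # alternating dark/bright texels
for u in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"u={u}: nearest={nearest(row, u)}  linear={linear(row, u):.0f}")
```

Nearest-neighbor jumps abruptly between 0 and 255 as u moves, which is exactly the shimmering seen at PS object edges; the linear version returns intermediate values that smooth those transitions out.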

The PS also lacked a dedicated floating-point unit for graphics calculations. Instead, a special Geometry Transformation Engine (GTE) built into the CPU handled the transformation math, but in fixed-point integer arithmetic rather than floating point. The limited precision meant that camera movement would inevitably produce visible artefacts, which made fixed camera angles, as in the Resident Evil games, very attractive for developers.
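
Fixed-point math stores a number as an integer with an implied binary point, so precision is capped and final vertex positions snap to the pixel grid. Here is a toy sketch of the effect, with an illustrative format and made-up coefficients rather than the GTE’s actual registers:

```python
FRAC_BITS = 12           # 12 fractional bits, in the spirit of the GTE's formats
ONE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def fixed_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC_BITS   # product of two fixed-point numbers

scale = to_fixed(0.33)           # some rotation/projection coefficient
for frame in range(8):
    xf = 100.0 + frame * 0.8                 # smooth sub-pixel motion
    pixel = fixed_mul(to_fixed(xf), scale) >> FRAC_BITS
    print(f"frame {frame}: exact {xf * 0.33:7.3f} -> pixel {pixel}")
```

The vertex glides smoothly in the exact math but sits still for several frames and then jumps a whole pixel in the fixed-point version, which is precisely the vertex jitter PlayStation players remember.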

Finally, the cartridge-based games of the N64 could load data from their mask ROMs about 100x faster than the PS could from its CDs, and with much lower latency. All of these differences led to quite different games on the two consoles. The N64 was clearly superior for 3D games, yet the PS launched long before the N64 at a competitive price, and with Sony’s backing it became a commercial success all the same.


