A team of 10 students from Amity International School at Pushp Vihar in New Delhi has won the International Space Settlement Design Competition held at the National Aeronautics and Space Administration (NASA).
The competition was held at the Kennedy Space Centre of the US space agency in Florida from July 26-29.
The Balderol space settlement designed by the winning team aims to establish a large-scale, sustainable community on the moon, providing a residential and working environment for 12,000 full-time residents, the Amity school said in a release.
Designed to support data centres and associated industries, the settlement will also accommodate up to 1,500 transient visitors and 4,500 annual rotational workers, including engineers and technicians, it said.
The team comprises Samaya Chauhan, Akshita Bhandari, Dhruv Bhandari, Aaditya Raj Verma, Namya Jain, Yash Wadhwa, Avneet Kaur Virdi, Taarush Goswami, Daksh Dhull and Arsh Arora.
They participated in the international event after winning national and Asian rounds.
Arsh Arora, a class 12 student, was also awarded the ‘Dick Edwards Leadership Award’ for the leadership skills he demonstrated among over 60 students from different parts of the world.
“We, at Amity, are committed to the holistic development of the students and nurture their skills and talent so that they are ready for all championships, at national as well as international level,” said Dr Amita Chauhan, Chairperson, Amity International Schools.
Dr Ameeta Mohan, Principal, Amity International School Pushp Vihar, stated, “Our students are prepared to showcase their talent at various platforms, and proper guidance and training is provided to every student to enable them to develop their skill sets.”
With Android 15 now available for all Pixel-eligible devices and other brands sharing their rollout calendars, Google is already working on Android 16, the next major update to the OS. It’s still too early to know all the improvements the company is working on. However, recent findings suggest that Android 16 will revamp the classic “Do Not Disturb” with new customizable “Modes.”
Google could bring back the classic “Profiles” to mobile phones, in its own way
Android 16’s new Modes seem like an advanced version of the “profiles” we had on older mobile phones. If you’re not aware, the “profiles” option allowed you to set different combinations of ringtones, volume, vibration, etc. You could name each profile whatever you wanted. The option was quite useful for quickly setting an ideal configuration for each occasion. For instance, you could muffle ringtones and notifications on a profile named “meeting”.
Interestingly, smartphones gained countless features but lost the profile settings. Developers replaced them with preset options like “Silent” or “Do Not Disturb,” whose customization possibilities are limited. However, Google may change this in the next big Android update with the new “Modes.”
Android 16’s “Modes” seems highly inspired by the “Profiles” option
As spotted by Mishaal Rahman, the “Modes” option seems destined to debut in Android 16. The source spotted the feature in the latest Android 15 QPR1 Beta 3. It’s noteworthy that “Modes” appeared in a previous beta, albeit under the name “Priority Modes.” Just like the “profiles” on old mobile phones, “Modes” allows you to set different combinations of settings to suit different situations.
Within each Mode, users will be able to customize settings such as the mode name, trigger, display settings, notification behavior, and even the icon. There are over 40 icons to choose from, so you can easily differentiate between all your Modes. The “trigger” setting is especially interesting as it suggests that there are Modes that will automatically activate under certain conditions. However, there are no further details on what conditions you can set.
If you enable a Mode, the icon will be present in the status bar. You can access all your modes from the Settings menu or the Quick Settings panel. The feature is quite promising, and many will surely find it useful. Let’s hope Google really plans to implement it in Android 16.
NASA spent the last two weeks hoisting a 103-ton component onto a simulator and installing it to help prepare for the next Moon missions. Crews fitted the interstage simulator component onto the Thad Cochran Test Stand at Stennis Space Center near Bay St. Louis, Mississippi. The connecting section mimics the same SLS (Space Launch System) part that will help protect the rocket’s upper stage, which will propel the Orion spacecraft on its planned Artemis launches.
The Thad Cochran Test Stand is where NASA sets up the SLS components and conducts thorough testing to ensure they’ll be safe and operating as intended on the versions that fly into space. The new section was installed onto the B-2 position of the testing center and is now fitted with all the necessary piping, tubing and electrical systems for future test runs.
The interstage section will protect electrical and propulsion systems and support the SLS’s EUS (Exploration Upper Stage) in the rocket’s latest design iteration, Block 1B. It will replace the current Block 1 version and offer a 40 percent bigger payload. The EUS will support 38 tons of cargo with a crew or 42 tons without a crew, compared to 27 tons of crew and cargo in the Block 1 iteration. (Progress!) Four RL10 engines, made by contractor L3Harris, will power the new EUS.
The interstage simulator section NASA spent mid-October installing weighs 103 tons and measures 31 feet in diameter and 33 feet tall. The section’s top portion will absorb the EUS hot fire thrust, transferring it back to the test stand so the test stand doesn’t collapse under the four engines’ more than 97,000 pounds of thrust.
NASA installs the interstage simulator section
Various photos of NASA lifting and installing the SLS interstage simulator section at the Stennis Space Center.
NASA’s testing at Stennis Space Center will prepare the SLS for the Artemis IV mission, which will send four astronauts aboard the Orion spacecraft to the Lunar Gateway space station to install a new module. After that, they’ll descend to the Moon’s surface in the Starship HLS (Human Landing System) lunar lander.
You can catch some glimpses into NASA’s heavy lifting in the video below:
The enterprise world is rapidly growing its usage of open source large language models (LLMs), driven by companies gaining more sophistication around AI – seeking greater control, customization, and cost efficiency.
While closed models like OpenAI’s GPT-4 dominated early adoption, open source models have since closed the gap in quality, and are growing at least as quickly in the enterprise, according to multiple VentureBeat interviews with enterprise leaders.
This is a change from earlier this year, when I reported that while the promise of open source was undeniable, it was seeing relatively slow adoption. But Meta’s openly available models have now been downloaded more than 400 million times, the company told VentureBeat, at a rate 10 times higher than last year, with usage doubling from May through July 2024. This surge in adoption reflects a convergence of factors – from technical parity to trust considerations – that are pushing advanced enterprises toward open alternatives.
“Open always wins,” declares Jonathan Ross, CEO of Groq, a provider of specialized AI processing infrastructure that has seen massive uptake of customers using open models. “And most people are really worried about vendor lock-in.”
Even AWS, which made a $4 billion investment in closed-source provider Anthropic – its largest investment ever – acknowledges the momentum. “We are definitely seeing increased traction over the last number of months on publicly available models,” says Baskar Sridharan, AWS’ VP of AI & Infrastructure. AWS offers access to as many models as possible, both open and closed source, via its Bedrock service.
The platform shift by big app companies accelerates adoption
It’s true that among startups or individual developers, closed-source models like OpenAI’s still lead. But in the enterprise, things look very different. Unfortunately, there is no third-party source that tracks the open versus closed LLM race for the enterprise, in part because it’s near impossible to do: The enterprise world is too distributed, and companies are too private for this information to be public. An API company, Kong, surveyed more than 700 users in July. But the respondents included smaller companies as well as enterprises, so the sample was biased toward OpenAI, which without question still leads among startups looking for simple options. (The report also counted other AI services like Bedrock, which is not an LLM but a service that offers multiple LLMs, including open source ones — so it mixes apples and oranges.)
But anecdotally, the evidence is piling up. For one, each of the major business application providers has moved aggressively recently to integrate open source LLMs, fundamentally changing how enterprises can deploy these models. Salesforce led the latest wave by introducing Agentforce last month, recognizing that its customer relationship management customers needed more flexible AI options. The platform enables companies to plug in any LLM within Salesforce applications, effectively making open source models as easy to use as closed ones. Salesforce-owned Slack quickly followed suit.
“I think open models will ultimately win out,” says Oracle’s EVP of AI and Data Management Services, Greg Pavlik. The ability to modify models and experiment, especially in vertical domains, combined with favorable cost, is proving compelling for enterprise customers, he said.
A complex landscape of “open” models
While Meta’s Llama has emerged as a frontrunner, the open LLM ecosystem has evolved into a nuanced marketplace with different approaches to openness. For one, Meta’s Llama has more than 65,000 model derivatives in the market. Enterprise IT leaders must navigate these and other options, ranging from fully open weights and training data to hybrid models with commercial licensing.
Mistral AI, for example, has gained significant traction by offering high-performing models with flexible licensing terms that appeal to enterprises needing different levels of support and customization. Cohere has taken another approach, providing open model weights but requiring a license fee – a model that some enterprises prefer for its balance of transparency and commercial support.
This complexity in the open model landscape has become an advantage for sophisticated enterprises. Companies can choose models that match their specific requirements – whether that’s full control over model weights for heavy customization, or a supported open-weight model for faster deployment. The ability to inspect and modify these models provides a level of control impossible with fully closed alternatives, leaders say. Using open source models also often requires a more technically proficient team to fine-tune and manage the models effectively, another reason enterprise companies with more resources have an upper hand when using open source.
Meta’s rapid development of Llama exemplifies why enterprises are embracing the flexibility of open models. AT&T uses Llama-based models for customer service automation, DoorDash for helping answer questions from its software engineers, and Spotify for content recommendations. Goldman Sachs has deployed these models in heavily regulated financial services applications. Other Llama users include Niantic, Nomura, Shopify, Zoom, Accenture, Infosys, KPMG, Wells Fargo, IBM, and The Grammy Awards.
Meta has aggressively nurtured channel partners. All major cloud providers embrace Llama models now. “The amount of interest and deployments they’re starting to see for Llama with their enterprise customers has been skyrocketing,” reports Ragavan Srinivasan, VP of Product at Meta, “especially after Llama 3.1 and 3.2 have come out. The large 405B model in particular is seeing a lot of really strong traction because very sophisticated, mature enterprise customers see the value of being able to switch between multiple models.” He said customers can use a distillation service to create derivative models from Llama 405B, to be able to fine tune it based on their data. Distillation is the process of creating smaller, faster models while retaining core capabilities.
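That distillation step can be sketched in miniature: a small “student” model is trained to reproduce the outputs of a larger “teacher.” Everything below (a cubic teacher function, a linear student, the sample range) is an illustrative assumption, not Meta’s actual distillation service.

```python
# Toy knowledge distillation: fit a tiny "student" to mimic a larger
# "teacher". Illustrative only; real distillation trains a smaller
# neural network on a bigger model's output distributions.

def teacher(x):
    """Stand-in for a large model: an expensive cubic function."""
    return 0.5 * x**3 - x + 2.0

def distill(samples, lr=0.01, steps=2000):
    """Fit student y = a*x + b to the teacher's outputs by gradient descent."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for x in samples:
            err = (a * x + b) - teacher(x)
            ga += 2 * err * x / len(samples)
            gb += 2 * err / len(samples)
        a -= lr * ga
        b -= lr * gb
    return a, b

xs = [i / 10 for i in range(-10, 11)]  # inputs in [-1, 1]
a, b = distill(xs)
# On this narrow range the linear student tracks the cubic teacher closely,
# at a fraction of the evaluation cost.
```

The student is far cheaper to evaluate than the teacher; real distillation applies the same idea at scale, with the derivative model then fine-tuned on the customer’s own data.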
Indeed, Meta covers the landscape well with its other portfolio of models, including the Llama 90B model, which can be used as a workhorse for a majority of prompts, and 1B and 3B, which are small enough to be used on device. Today, Meta released “quantized” versions of those smaller models. Quantization is another process that makes a model smaller, allowing less power consumption and faster processing. What makes these latest versions special is that they were quantized during training, making them more efficient than other industry quantized knock-offs – four times faster at token generation than their originals, using a fourth of the power.
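Quantization itself is easy to illustrate: map floating-point weights onto small integers with a shared scale factor, trading a bounded rounding error for a roughly 4x size reduction. The symmetric per-tensor int8 scheme below is a textbook simplification, not Meta’s quantization-aware training recipe.

```python
# Toy symmetric int8 quantization: a simplified sketch of the idea,
# not Meta's actual quantized-during-training scheme.

def quantize(weights, bits=8):
    """Map floats to signed integers using one scale factor per tensor."""
    qmax = 2 ** (bits - 1) - 1          # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the integer codes."""
    return [v * scale for v in q]

weights = [0.82, -0.41, 0.055, -1.27, 0.0]
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# Each weight now fits in one byte instead of four, at the cost of a
# rounding error bounded by scale / 2.
```

Quantizing during training, as Meta describes, lets the model learn around this rounding error instead of absorbing it after the fact.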
The technical gap between open and closed models has essentially disappeared, but each shows distinct strengths that sophisticated enterprises are learning to leverage strategically. This has led to a more nuanced deployment approach, where companies combine different models based on specific task requirements.
“The large, proprietary models are phenomenal at advanced reasoning and breaking down ambiguous tasks,” explains Salesforce EVP of AI, Jayesh Govindarajan. But for tasks that are light on reasoning and heavy on crafting language, for example drafting emails, creating campaign content, researching companies, “open source models are at par and some are better,” he said. Moreover, even the high reasoning tasks can be broken into sub-tasks, many of which end up becoming language tasks where open source excels, he said.
Intuit, the owner of accounting software QuickBooks and tax software TurboTax, got started on its LLM journey a few years ago, making it a very early mover among Fortune 500 companies. Its implementation demonstrates a sophisticated approach. For customer-facing applications like transaction categorization in QuickBooks, the company found that its fine-tuned LLM built on Llama 3 demonstrated higher accuracy than closed alternatives. “What we find is that we can take some of these open source models and then actually trim them down and use them for domain-specific needs,” explains Ashok Srivastava, Intuit’s chief data officer. They “can be much smaller in size, much lower in latency and equal, if not greater, in accuracy.”
The banking sector illustrates the migration from closed to open LLMs. ANZ Bank, a bank that serves Australia and New Zealand, started out using OpenAI for rapid experimentation. But when it moved to deploy real applications, it dropped OpenAI in favor of fine-tuning its own Llama-based models, to accommodate its specific financial use cases, driven by needs for stability and data sovereignty. The bank published a blog about the experience, citing the flexibility provided by Llama’s multiple versions, flexible hosting, version control, and easier rollbacks. We know of another top-three U.S. bank that also recently moved away from OpenAI.
It’s examples like this, where companies want to leave OpenAI for open source, that have given rise to things like “switch kits” from companies like PostgresML that make it easy to exit OpenAI and embrace open source “in minutes.”
The path to deploying open source LLMs has been dramatically simplified. Meta’s Srinivasan outlines three key pathways that have emerged for enterprise adoption:
Cloud Partner Integration: Major cloud providers now offer streamlined deployment of open source models, with built-in security and scaling features.
Custom Stack Development: Companies with technical expertise can build their own infrastructure, either on-premises or in the cloud, maintaining complete control over their AI stack – and Meta is helping with its so-called Llama Stack.
API Access: For companies seeking simplicity, multiple providers now offer API access to open source models, making them as easy to use as closed alternatives. Groq, Fireworks, and Hugging Face are examples. All of them are able to provide you an inference API, a fine-tuning API, and basically anything that you would need or you would get from a proprietary provider.
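Many of these hosted providers expose an OpenAI-compatible chat-completions interface, which is what makes switching so easy: often only the base URL and model name change. A minimal sketch follows; the endpoint, model name, and API key are placeholders, not any specific provider’s real values.

```python
import json
import urllib.request

# Hypothetical values: the base URL, model name, and key are placeholders
# for illustration, not guaranteed endpoints of any specific provider.
BASE_URL = "https://api.example-provider.com/v1"
API_KEY = "YOUR_KEY_HERE"

def build_chat_request(prompt, model="llama-3.1-70b"):
    """Build an OpenAI-style chat-completions request without sending it."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return req, payload

req, payload = build_chat_request("Summarize open vs. closed LLM tradeoffs.")
# Actually sending it would be: urllib.request.urlopen(req) (omitted here).
```

Because the request shape is the same across providers, pointing an existing OpenAI integration at an open-model host is often a one-line configuration change.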
Safety and control advantages emerge
The open source approach has also – unexpectedly – emerged as a leader in model safety and control, particularly for enterprises requiring strict oversight of their AI systems. “Meta has been incredibly careful on the safety part, because they’re making it public,” notes Groq’s Ross. “They actually are being much more careful about it. Whereas with the others, you don’t really see what’s going on and you’re not able to test it as easily.”
This emphasis on safety is reflected in Meta’s organizational structure. Its team focused on Llama’s safety and compliance is large relative to its engineering team, Ross said, citing conversations with Meta a few months ago. (A Meta spokeswoman said the company does not comment on personnel information). The September release of Llama 3.2 introduced Llama Guard Vision, adding to safety tools released in July. These tools can:
Detect potentially problematic text and image inputs before they reach the model
Monitor and filter output responses for safety and compliance
Enterprise AI providers have built upon these foundational safety features. AWS’s Bedrock service, for example, allows companies to establish consistent safety guardrails across different models. “Once customers set those policies, they can choose to move from one publicly available model to another without actually having to rewrite the application,” explains AWS’ Sridharan. This standardization is crucial for enterprises managing multiple AI applications.
Databricks and Snowflake, the leading cloud data providers for enterprise, also vouch for Llama’s safety. Llama models maintain the “highest standards of security and reliability,” said Hanlin Tang, CTO of Neural Networks at Databricks.
Intuit’s implementation shows how enterprises can layer additional safety measures. The company’s GenSRF (security, risk and fraud assessment) system, part of its “GenOS” operating system, monitors about 100 dimensions of trust and safety. “We have a committee that reviews LLMs and makes sure its standards are consistent with the company’s principles,” Intuit’s Srivastava explains. However, he said these reviews of open models are no different than the ones the company makes for closed-sourced models.
Data provenance solved through synthetic training
A key concern around LLMs is the data they’ve been trained on. Lawsuits abound from publishers and other creators, charging LLM companies with copyright violation. Most LLM companies, open and closed, haven’t been fully transparent about where they get their data. Since much of it comes from the open web, it can be highly biased and contain personal information.
Many closed-source companies have offered users “indemnification,” or protection against legal claims and lawsuits arising from use of their LLMs. Open source providers usually do not provide such indemnification. But lately this concern around data provenance seems to have declined somewhat. Models can be grounded and filtered with fine-tuning, and Meta and others have created more alignment and other safety measures to counteract the concern. Data provenance is still an issue for some enterprise companies, especially those in highly regulated industries, such as banking or healthcare. But some experts suggest these data provenance concerns may be resolved soon through synthetic training data.
“Imagine I could take public, proprietary data and modify them in some algorithmic ways to create synthetic data that represents the real world,” explains Salesforce’s Govindarajan. “Then I don’t really need access to all that sort of internet data… The data provenance issue just sort of disappears.”
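Govindarajan’s idea can be sketched as a small transformation pipeline: drop identifying fields from real records, then algorithmically perturb the numeric values so the synthetic copy preserves the shape of the data without its provenance problems. The field names and noise scheme below are illustrative assumptions, not Salesforce’s actual method.

```python
import random

def synthesize(records, noise=0.05, seed=42):
    """Create synthetic records: drop identifiers, jitter numeric fields.

    A toy illustration of the approach described in the text, not a
    production synthetic-data pipeline.
    """
    rng = random.Random(seed)
    synthetic = []
    for rec in records:
        fake = {k: v for k, v in rec.items() if k != "customer_id"}
        for k, v in fake.items():
            if isinstance(v, (int, float)):
                fake[k] = round(v * (1 + rng.uniform(-noise, noise)), 2)
        synthetic.append(fake)
    return synthetic

real = [
    {"customer_id": "C001", "monthly_spend": 120.0, "tenure_months": 14},
    {"customer_id": "C002", "monthly_spend": 310.0, "tenure_months": 3},
]
fake = synthesize(real)
# Identifiers are gone; numeric values stay within +/-5% of the originals,
# so aggregate statistics remain representative.
```

Real synthetic-data generation is far more sophisticated (it must preserve correlations and edge cases), but the principle is the same: the model trains on data that represents the real world without being the real data.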
The adoption of open source LLMs shows distinct regional and industry-specific patterns. “In North America, the closed source models are certainly getting more production use than the open source models,” observes Oracle’s Pavlik. “On the other hand, in Latin America, we’re seeing a big uptick in the Llama models for production scenarios. It’s almost inverted.”
What is driving these regional variations isn’t clear, but they may reflect different priorities around cost and infrastructure. Pavlik describes a scenario playing out globally: “Some enterprise user goes out, they start doing some prototypes…using GPT-4. They get their first bill, and they’re like, ‘Oh my god.’ It’s a lot more expensive than they expected. And then they start looking for alternatives.”
Market dynamics point toward commoditization
The economics of LLM deployment are shifting dramatically in favor of open models. “The price per token of generated LLM output has dropped 100x in the last year,” notes venture capitalist Marc Andreessen, who questioned whether profits might be elusive for closed-source model providers. This potential “race to the bottom” creates particular pressure on companies that have raised billions for closed-model development, while favoring organizations that can sustain open source development through their core businesses.
“We know that the cost of these models is going to go to zero,” says Intuit’s Srivastava, warning that companies “over-capitalizing in these models could soon suffer the consequences.” This dynamic particularly benefits Meta, which can offer free models while gaining value from their application across its platforms and products.
A good analogy for the LLM competition, Groq’s Ross says, is the operating system wars. “Linux is probably the best analogy that you can use for LLMs.” While Windows dominated consumer computing, it was open source Linux that came to dominate enterprise systems and industrial computing. Intuit’s Srivastava sees the same pattern: “We have seen time and again: open source operating systems versus non open source. We see what happened in the browser wars,” when open source Chromium browsers beat closed models.
Walter Sun, SAP’s global head of AI, agrees: “I think that in a tie, people can leverage open source large language models just as well as the closed source ones, that gives people more flexibility.” He continues: “If you have a specific need, a specific use case… the best way to do it would be with open source.”
Some observers like Groq’s Ross believe Meta may be in a position to commit $100 billion to training its Llama models, which would exceed the combined commitments of proprietary model providers, he said. Meta has an incentive to do this, he said, because it is one of the biggest beneficiaries of LLMs. It needs them to improve intelligence in its core business, by serving up AI to users on Instagram, Facebook, and WhatsApp. Meta says its AI touches 185 million weekly active users, a scale matched by few others.
This suggests that open source LLMs won’t face the sustainability challenges that have plagued other open source initiatives. “Starting next year, we expect future Llama models to become the most advanced in the industry,” declared Meta CEO Mark Zuckerberg in his July letter of support for open source AI. “But even before that, Llama is already leading on openness, modifiability, and cost efficiency.”
Specialized models enrich the ecosystem
The open source LLM ecosystem is being further strengthened by the emergence of specialized industry solutions. IBM, for instance, has released its Granite models as fully open source, specifically trained for financial and legal applications. “The Granite models are our killer apps,” says Matt Candy, IBM’s global managing partner for generative AI. “These are the only models where there’s full explainability of the data sets that have gone into training and tuning. If you’re in a regulated industry, and are going to be putting your enterprise data together with that model, you want to be pretty sure what’s in there.”
IBM’s business benefits from open source, including from wrapping its Red Hat Enterprise Linux operating system into a hybrid cloud platform that includes usage of the Granite models and its InstructLab, a way to fine-tune and enhance LLMs. The AI business is already kicking in. “Take a look at the ticker price,” says Candy. “All-time high.”
Trust increasingly favors open source
Trust is shifting toward models that enterprises can own and control. Ted Shelton, COO of Inflection AI, a company that offers enterprises access to licensed source code and full application stacks as an alternative to both closed and open source models, explains the fundamental challenge with closed models: “Whether it’s OpenAI, it’s Anthropic, it’s Gemini, it’s Microsoft, they are willing to provide a so-called private compute environment for their enterprise customers. However, that compute environment is still managed by employees of the model provider, and the customer does not have access to the model.” This is because the LLM owners want to protect proprietary elements like source code, model weights, and hyperparameter training details, which can’t be hidden from customers who would have direct access to the models. Since much of this code is written in Python, not a compiled language, it remains exposed.
This creates an untenable situation for enterprises serious about AI deployment. “As soon as you say ‘Okay, well, OpenAI’s employees are going to actually control and manage the model, and they have access to all the company’s data,’ it becomes a vector for data leakage,” Shelton notes. “Companies that are actually really concerned about data security are like ‘No, we’re not doing that. We’re going to actually run our own model. And the only option available is open source.’”
The path forward
While closed-source models maintain a market share lead for simpler use cases, sophisticated enterprises increasingly recognize that their future competitiveness depends on having more control over their AI infrastructure. As Salesforce’s Govindarajan observes: “Once you start to see value, and you start to scale that out to all your users, all your customers, then you start to ask some interesting questions. Are there efficiencies to be had? Are there cost efficiencies to be had? Are there speed efficiencies to be had?”
The answers to these questions are pushing enterprises toward open models, even if the transition isn’t always straightforward. “I do think that there are a whole bunch of companies that are going to work really hard to try to make open source work,” says Inflection AI’s Shelton, “because they got nothing else. You either give in and say a couple of large tech companies own generative AI, or you take the lifeline that Mark Zuckerberg threw you. And you’re like: ‘Okay, let’s run with this.’”
Just 4 days left! Moscone West in San Francisco will be the epicenter of innovation as 10,000 startup and VC leaders gather for TechCrunch Disrupt 2024 from October 28-30. This incredible conference is designed to inspire, spark innovative ideas, and create meaningful connections.
Final days to save! You have until October 27 at 11:59 p.m. PT to save up to $400 on individual-type tickets or double up with two Expo+ Passes for half the price of one. Don’t wait — secure your low-rate ticket or the Expo+ 2-for-1 Pass.
Don’t miss Disrupt 2024
10,000+ startup and VC leaders
Experience the ultimate networking event at Disrupt 2024, bringing together 10,000 tech pioneers, startup founders, and VC leaders for unparalleled opportunities to connect and collaborate.
350+ startups showcasing their innovations
Step into the Expo Hall and witness cutting-edge innovations from more than 350 startups, giving you a preview of the future of tech from around the world.
250+ industry heavyweights
Gain invaluable insights from leading industry figures as they share exclusive perspectives across six dedicated stages, focusing on key sectors of the tech landscape: AI, startups, VCs, fintech, SaaS, and space.
Participate in interactive Q&A Breakout Sessions and Roundtable discussions with industry leaders, tackling pressing challenges in the fast-changing tech landscape. Discover these sessions in our expanding agenda.
Startup Battlefield 200
Watch 20 exceptional startups compete in the exhilarating Startup Battlefield 200 pitch competition at Disrupt 2024, all vying for a $100,000 equity-free prize and the esteemed Disrupt Cup, judged by leading VCs.
Unmatched networking opportunities
Take your networking to the next level with the Braindate app, where you can create or explore topics for more in-depth discussions. Connect in person at the Networking Lounge powered by Braindate on level 2 for 1:1 or small-group discussions.
60+ Side Events
Keep the spirit of Disrupt 2024 alive by participating in company-hosted Side Events around San Francisco during the week. With options ranging from workshops and cocktail parties to morning runs and meetups, there’s an event for everyone!
Grab your ticket before prices increase
Act now to save up to $400 on tickets! You can also take advantage of our Expo+ 2-for-1 offer — bring a guest for just half the price of a single Expo+ Pass. All offers end on October 27 at 11:59 p.m. PT. Prices will go up when we open the doors on October 28.
What’s got me pondering is the Camera Control ‘button.’ In some ways, it’s a cool new feature that uses haptics well. In other ways, it’s superfluous and not fully featured.
I’ve been trying out the iPhone 16 Pro Max for a couple of weeks now, and when it comes to capturing a photo, I try to use Camera Control as much as possible. As I’m 37 and a millennial, I still like snapping photos on my phone in landscape orientation, so having a physical button where my finger naturally sits is good for capturing a shot without messing up the framing by tapping on the screen or trying to hit the Action button – I have this mapped to trigger the ‘torch’ anyway, which is surprisingly helpful.
I also like flicking through zoom ranges with a swipe on the Camera Control without the need to tap on small icons. The exposure control is kind of cool, though swapping between the features Camera Control can adjust doesn’t quite feel intuitive to me yet, and often, my taps cause me to lose the precise composition of a scene.
So yeah, Camera Control is interesting. But…
Did anyone really ask for it? It feels like a feature for the sake of Apple’s mobile execs to have something new to talk about at the September Apple event. It’s just about a ‘nice to have’ feature, but it’s hardly a phone photography game changer.
Not my tempo
However, maybe I’ll warm to it over time. Yet the biggest issue is the lack of AI tools at launch for Camera Control. Apple actively touts AI features for Camera Control that can smartly identify things the cameras are pointed at and serve up all manner of information. That hasn’t happened yet; the rollout will come post-launch, when Apple Intelligence fully arrives. There’s a beta option, but I’m not willing to try that on my main phone.
I’ve yet to understand that. Sure, other phone makers have touted AI features that will come after their phones are released and may be limited to certain regions to begin with, but at least they launch with some of the promised AI suites. The iPhone 16 range launched without any Apple Intelligence features.
This is not what I expected from Apple, a company that famously doesn’t adopt new tech until it’s refined and ready for polished prime time. So, for it to launch smartphones without next-generation smarts is baffling to me. But it’s also the primary reason why I feel torn about Camera Control; if it had Google Lens-like abilities at launch, baked into a hardware format, I can see myself being a lot more positive about Camera Control.
Of course, Apple’s use of such a camera button will undoubtedly cause other phone makers to follow suit. I only hope they don’t skimp on features when their phones launch.
As for Camera Control in the here and now, I’ll keep an open mind and keep using it; I’ll just cross my fingers that it’ll become seriously handy once it gets its prescribed dose of AI smarts.