
Looking forward to high-level insights at GamesBeat Next 2024 | The DeanBeat


I’m looking forward to GamesBeat Next 2024 on Monday, October 28 and Tuesday, October 29 in San Francisco.

We’ve got a lot of speakers who can deliver high-level insights into the state of the game business, which has seen a seemingly contradictory mix of strong financial results, unpredictable game successes and 32,000 layoffs over the past three years.

In spite of the industry uncertainty, we’ve got more than 600 people signed up and are expecting it to be sold out. Among those folks are 150 CEOs and other top leaders of the industry. We have 97 speakers, and 65% of them come from diverse backgrounds. And 41% are women. We have 42 onstage sessions and roundtables.

We expect hundreds of women to come for the event, which includes our ninth Women in Gaming Breakfast. Thank you for supporting us, as we know everyone is feeling pressure and mental stress these days. That’s why we stay together.


You can see our updated final agenda here. Our event is virtually sold out.

Tim Sweeney of Epic Games (left) will be a virtual speaker at GamesBeat Next, while Matt Bromberg of Unity will speak in person.

Our event theme is all about getting back to growth, not just with growth in revenues and players, but with growth in jobs for game developers as well. We’ll talk about the shifting sands we see, from changes in game engine technology for developers to the rise of creators in game marketing.

I’m glad to have returning speakers who can mark a new milestone in gaming’s progress toward big goals like the metaverse and interoperability. Tim Sweeney, CEO of Epic Games, is coming back for a prerecorded virtual conversation that picks up where our 2021 talk, held in the midst of the pandemic, left off. We will cover the progress on the path to the open metaverse and the evolution of Unreal and Fortnite.

Speaking of the metaverse, another returning speaker is Neal Stephenson, who coined the word “metaverse” in Snow Crash, his sci-fi novel from more than 30 years ago. He will appear in a fireside chat on “The science fiction future that we want.”

GamesBeat Next 2024 happens on October 28-29 in SF.

Riz Virk, leader of the Center for Science and the Imagination at Arizona State University, is a simulation theory expert. He will join me on stage to quiz Stephenson about his views on the metaverse today and his thoughts for the future, especially as technology makes so much of sci-fi real. Stephenson has a historical sci-fi novel, Polostan, out now, and he is a cofounder of the startups Whenere, focused on AI and storytelling, and Lamina1, focused on blockchain solutions for creators.

We also have Shawn Layden, former chairman of Sony Interactive Studios, and Christina Macedo, CEO of Play, talking about how focusing on making good games is crucial, and what they’re doing to support them with Web3 technology. And while our event is about technology, their talk is the only session focused on blockchain, which is very different from what we’ve had before.


Our first day also includes a walk down memory lane with Peter Moore, a longtime gaming executive who launched the Dreamcast for Sega in the U.S. 25 years ago. Moore, who recently survived a brush with death thanks to his Apple Watch, will talk about lessons for today from a past that includes leadership roles at Microsoft Xbox, Electronic Arts and Unity.

GamesBeat Next 2024 will be at Convene in San Francisco.

And Matthew Bromberg, the newly minted CEO of Unity, will speak in a live virtual session at the close of day one, where we’ll talk about making decisions in the wake of Unity’s Runtime Fee controversy, in which the company introduced a price increase and then walked it back.

The kickoff session will happen at 1:20 p.m. (registration opens at noon) on Monday with Entertainment Software Association CEO Stanley Pierre-Louis and Laura Naviaux Sturr, general manager of operations at Amazon Games. They will talk about new vectors for growth and extending intellectual property to new generations of audiences.

We’ll have multiple sessions talking about AI, but our dedicated AI and games panel will focus on ethical use of AI in game development and user-generated content. It will feature Pany Haritatos, CEO of Series Entertainment; Kent Keirsey, CEO of Invoke AI; Andy Mauro, CEO of Storycraft; and moderator Hilary Mason, CEO of Hidden Door.

Day 2 happenings

GamesBeat Next 2024 features 97 speakers.

Our Women in Gaming Breakfast begins at 8:30 a.m. on Tuesday, the second day of the event, and it features a fireside chat between GamesBeat writer Rachel Kaser and Dametra Johnson-Marletti, corporate vice president of digital gaming within the Microsoft consumer sales organization. Johnson-Marletti has helped grow revenue in her division from $800 million to more than $7 billion.

They will talk about inspiring the next generation of leaders and luminaries in games and how culture and representation can play a role in attracting and nurturing the next generation of gamers and creatives wishing to work in the industry.


Johnson-Marletti plans to give some insight on building a career in gaming, and how major companies can foster and retain the talent that will become the next-generation leaders in the games industry. She’ll also cover diversity and inclusion, representation, and how both new workers and games companies can set the new wave of talent up for long-term success.

GamesBeat Next 2024 will feature our ninth Women in Gaming Breakfast.

We’ll kick off the sessions with leaders of Xsolla, Electronic Arts and SciPlay talking about best practices for mastering mobile monetization. Then we’ll drill deeper, breaking into three concurrent stages for talks on culture, technology, growth and the industry.

At lunch, we’ll gather for a panel on diversity in gaming, sponsored by Xsolla, where Xsolla vice president of marketing Bridget Stacy will lead a session on prioritizing inclusion during tough times with inspiring entrepreneurs including Sheloman Byrd, CEO of Open Ocean Games; Jessica Murrey, CEO of Wicked Saints Studios; and Jenny Xu, CEO of Talofa Games.

GamesBeat Next 2024 has 65% of its speakers from diverse backgrounds.

I’m sad I can’t mention everything, but we will hit important topics like alternative open source game engines such as Godot; mental health and games, including games that can be considered medical treatments; millennial and Gen Z gamers; pioneering VR concepts with leaders like Kerestell Smith of Gorilla Tag and pet game creator Bernard Yee; the future of game publishing; operating ethically in an ambiguous time; gaming M&A and funding; analyzing games; game creators and discovery; direct-to-consumer stores; and creating transmedia IP at places like Netflix, Exploding Kittens and Sharon Tal Yguado’s Astrid Entertainment.

GamesBeat Next 2024 is our third GamesBeat Next event.

Toward the end, we’ll gather to hear Amy Hennig, co-president of new media at Skydance Interactive, talk with other Skydance execs about welcoming people into your team. She has a big team making Marvel 1943: Rise of Hydra and is a veteran co-creator of the Uncharted series and more.

We also have a number of interesting roundtables. One features Shelby Moledina, who has created a dark comedy short film about raising money for games when you’re a woman. I highly recommend the roundtables for those who want a more intimate experience at the event.

Game Changers session

Game Changers is back for another round of the best game startups.

To close the conference, Lightspeed and GamesBeat will announce the 2025 Game Changers, an annual list to celebrate and accelerate extraordinary startups in gaming and interactive technology. Lightspeed’s Moritz Baier-Lentz and I will start the session with insights from judges and past winners including Lisha Li, founder and CEO of Rosebud AI (past winner); Kylan Gibbs, CEO of Inworld AI; and Mihir Vaidya, chief strategy officer at Electronic Arts.

Then we will unveil the winners in each of the five key categories, presented live on stage: 3D technology and infrastructure, generative AI, game studios and UGC, interactive media platforms, and extended reality (AR and VR). Last year, Lightspeed showed the names of the winners on the Nasdaq Tower in Times Square.


Our next events

GamesBeat Next 2024 is brought to you by the small but mighty staff at VentureBeat.

And please remember we have a new event coming on gaming and its intersections with Hollywood, on December 12 in LA, the same day as The Game Awards. It’s called GamesBeat Insider Series: Hollywood and Games. It features Brian Ward, CEO of Savvy Games Group; game adaptation filmmaker Ari Arad; industry seer Matthew Ball of Epyllion; Eunice Lee, Scopely COO; Dmitri Johnson of Story Kitchen, the man who has conspired to bring Sonic the Hedgehog and Lara Croft to film and TV; and Erika Ewing, a cross-media leader at Lionsgate.

And be sure to look out for our extended partnership with Xsolla on the GamesBeat Global Tour, where we hold dinners in cities around the globe. This past year, we held dinners in Los Angeles, Austin, São Paulo, Tokyo, and Seattle.

We’ve got a great crew of speakers at GamesBeat Next 2024.

We’ve also got GamesBeat Summit 2025 returning to Los Angeles on May 19-20, 2025. 

Lastly, remember to come out of the virtual world long enough to see what’s happening in the real world. Remember to vote in this year’s presidential election. You can even do this at the headquarters of Jam City in LA, which is an actual polling place. 

We’re proud to have returning sponsors including Xsolla, Fastspring, Modulate, the Entertainment Software Association and Lightspeed, as well as new sponsors such as Open World, Fastly, Ludeo, RapidFire and Play. If you’d like to request sponsorship information, you can fill out this form.

Our community partners include Women-Led Games, IGDA Foundation and Black in Gaming Foundation.


ISRO-Axiom Space collaboration to have a significant impact on global space exploration- The Week


The partnership between India’s space agency, ISRO, and Axiom Space for the Axiom-4 mission is a major achievement in India’s space exploration efforts. Set to launch in October 2024 to the International Space Station (ISS), the mission highlights the importance of global collaboration, advancing technology and the increasing involvement of private companies in space activities. Notably, the mission will carry a crew of four astronauts: Peggy Whitson (Axiom Space), Shubhanshu Shukla (ISRO), Sławosz Uznański (POLSA/ESA) and Tibor Kapu (HUNOR) from Hungary. It also has a backup crew, which includes Michael López-Alegría (Axiom Space), Prasanth Balakrishnan Nair (ISRO) and Gyula Cserényi (HUNOR) from Hungary.

Axiom Space, established in 2016 by Michael T. Suffredini and Kam Ghaffarian, is a major provider of human spaceflight services and a builder of human-rated space infrastructure. Headquartered in Houston, Texas, the company plans to create the first commercial space station, known as Axiom Station. Axiom Space conducts complete missions to the ISS while developing Axiom Station as its eventual replacement. Additionally, the company is designing next-generation spacesuits for use in low-Earth orbit (LEO), on the Moon and beyond.

Axiom-4 (Ax-4) is a private spaceflight heading to the ISS and is set to launch in October 2024. The mission will last around 14 days and will be managed by Axiom Space using a SpaceX Crew Dragon spacecraft, launching from the Kennedy Space Center in Florida on a Falcon 9 rocket. The astronauts will go through extensive training that covers scientific research, technology demonstrations and space outreach activities.

“NASA will provide essential support to the Axiom-4 mission, including key services, through a Special Order and a reimbursable Space Act Agreement. These services include supplying the crew, delivering cargo, offering storage and providing daily resources while in orbit. The astronauts, including those from India, will undergo training at NASA’s Johnson Space Center in Houston. This training is crucial to prepare them for the mission and ensure their safety and effectiveness in space. The agreement with NASA also includes up to seven extra days on the ISS in case of unexpected issues, allowing the mission to adapt as needed,” explained space expert Girish Linganna.


He explained that there is also a special order and space act agreement which includes a detailed plan that describes the services NASA will provide for the Axiom-4 mission. “Providing food, clothes and other necessary items for the astronauts. Transporting and storing the equipment and supplies needed for the mission and making sure that astronauts have access to such essentials as power, water and air while on the ISS.  Allowing up to seven extra days on the ISS in case of delays or emergencies,” added Linganna. 

The reimbursable Space Act Agreement is a financial deal under which Axiom Space pays NASA for the services it provides, including astronaut training using NASA’s facilities and expertise, access to NASA’s training centres and other facilities, and support for the launch and return of the mission.

Axiom-4 will also carry scientific experiments: ISRO has planned five for the mission. These experiments, created in India, will explore different scientific and technological areas. Some will be done with other space agencies, adding more scientific value to the mission. NASA will help carry out these experiments by providing the necessary resources and expertise, including setting up and operating scientific equipment on the ISS. Broadly, the experiments will focus on materials science, biology and Earth observation, using the unique microgravity environment of space to make new discoveries.

“One of the unique technical features of this mission is the integration of advanced life support systems and autonomous docking capabilities in the Crew Dragon spacecraft. These systems are designed to provide a safer and more efficient environment for the crew, reducing the need for manual intervention and allowing astronauts to focus on their scientific and operational tasks. Additionally, the spacecraft is equipped with state-of-the-art communication systems that enable real-time data transmission and high-definition video streaming back to Earth, enhancing mission control and public engagement,” said Srimathy Kesan, founder and CEO of Space Kidz India, which is into design, fabrication and launch of small satellites, spacecraft and ground systems. 


She added that the mission will also involve the deployment of cutting-edge scientific instruments and experiments. For instance, the crew will conduct research on the effects of microgravity on human physiology, which is crucial for long-duration space missions. They will also test new materials and technologies that could be used in future space habitats and vehicles. These experiments are designed to push the boundaries of our current knowledge and pave the way for future innovations in space exploration. 

“A particularly exciting aspect of this mission is the zero-gravity experience it offers. This environment allows for unique scientific experiments that cannot be conducted on Earth. For example, the behavior of fluids, combustion, and biological processes in microgravity can provide insights that are impossible to obtain under normal gravitational conditions. This experience is not only scientifically valuable but also crucial for preparing astronauts for long-duration missions to the Moon and Mars,” remarked Kesan. 

By partnering with Axiom Space, ISRO leverages private sector expertise and resources, significantly reducing costs compared to traditional government-led missions. This approach aligns with the global trend of commercial space ventures, making space exploration more economically sustainable. This partnership exemplifies the growing role of commercial entities in space exploration. Unlike traditional partnerships that are often government-to-government, this collaboration involves a private company, highlighting a shift towards more diverse and inclusive space missions. 

The mission is a critical step in India’s human spaceflight program, particularly in the context of the upcoming Gaganyaan mission. The experience and insights gained will be invaluable for ISRO as it prepares for its ambitious goal of sending Indian astronauts to space independently. The mission supports a variety of scientific experiments and technological tests in the unique microgravity environment of space. This focus on diverse scientific objectives underscores the mission’s critical role in advancing our understanding of space and its applications. 


ISRO’s collaboration with Axiom Space is a landmark event that combines technical innovation, economic feasibility and international cooperation. It sets a new standard for future space missions and highlights the evolving landscape of global space exploration.

Previous Axiom missions include Axiom Mission 1 (Ax-1), launched on April 8, 2022. That mission was the first fully privately funded and managed mission to send a crew of four astronauts to the ISS; during their 17-day stay, they carried out various scientific experiments and technology demonstrations. Axiom Mission 2 (Ax-2), also a private mission operated by Axiom Space, launched on May 21, 2023, on a SpaceX Falcon 9 rocket and docked with the ISS on May 22. After eight days on the ISS, the Dragon crew capsule, named Freedom, undocked and returned to Earth 12 hours later. The mission, which lasted 10 days, emphasised scientific research and educational outreach activities.

The Axiom Mission 3 (Ax-3) was also a privately funded space mission to the ISS, launched on January 18, 2024. The mission lasted 21 days and concluded with a successful splashdown in the Atlantic Ocean. The goal of this mission was further scientific research and promoting international collaboration in space. 


Android 15: everything you need to know


Google’s next major update for smartphones is here. Android 15 rolled out to Pixel devices on October 15 and will trickle down to countless other devices over the next several months. Android 15 eschews big visual updates, instead tidying up the interface and improving existing features. It also gets a number of under-the-hood improvements that you may toy with occasionally.

Android 15 packs a host of privacy-centric features, including the excellent new Private Space. Android 15 also brings a big boost to satellite communications, extending the functionality beyond the Pixel lineup. Let’s dive into more details about the availability and new features coming to your phone with Android 15.

Android 15 release date

Android 15 logo on a Google Pixel 8.
Joe Maring / Digital Trends

As a cheeky trick, Google released Android 15 on October 15 for supported Pixel phones and the Pixel Tablet. All Pixel phones from the Pixel 6 lineup and newer are eligible for the update. Since Pixels make up only a small chunk of the Android space, a large percentage of devices still await their respective Android 15 updates.

As with each year, manufacturers have been adapting their custom skins to Android 15, adding their own custom visuals and features on top. Besides Google, brands such as OnePlus, Oppo, Realme, and Samsung have already previewed their new Android 15-based interfaces. Meanwhile, some other brands such as Motorola, Nothing, Vivo, and Honor have initiated open beta programs for some of their devices where anyone can try the upcoming updates. Xiaomi is the sole big brand that has yet to make any announcement about its Android 15 update.

Phones that can download Android 15

The Google Pixel 9 Pro Fold, Pixel 9 Pro, and Pixel 9's cameras.
Google Pixel 9 (left), Pixel 9 Pro, and Pixel 9 Pro Fold Andy Boxall / Digital Trends

Of the phones that can already download the Android 15 update, Google’s Pixel phones top the list: every Pixel phone from the Pixel 6 lineup onward, plus the Pixel Tablet.

In addition, a small set of phones, including the Nothing Phone 2a, Vivo X100 and Vivo X Fold 3 Pro, have already received open beta updates based on Android 15. Motorola is also rolling out the beta update for the Motorola Edge 2024, but only in certain regions, where the phone is known as the Edge 50 Fusion.


Meanwhile, OnePlus has announced OxygenOS 15, its custom interface based on Android 15. Samsung, which is usually among the fastest to hop on the bandwagon, has delayed the One UI 7.0 update until January, so we expect it to coincide with the Galaxy S25 series launch.

We should have more details about other devices in the coming weeks. In the meantime, if you wish to see if your phone qualifies for Android 15, we have a comprehensive list of all the phones that will get Android 15.

Private Space is one of the biggest new features

Private Space on Android 15 running on a Google Pixel 6a held in hand.
Tushar Mehta / Digital Trends

With Android 15, Google offers a new way to hide away certain apps and files in a secure vault. Google advertises this as a feature to keep your work and office apps and files separate; it’s like running a phone within a phone — something that previously required specialized apps. Private Space can be an ideal space to tuck away your social media, banking, or dating apps.

Before you can use Private Space, you have to activate and then set it up on your Pixel phone from Settings > Security and privacy > Private space. Google recommends you use a separate email with Private Space. That’s because apps in the vault will exist in a sandboxed environment and can’t interact with the rest of the phone. It is also a good way to secure apps if you are nervous about certain apps stealing your data or abusing Android’s security permissions to access your files.

With Private Space, you can either use your phone’s existing biometrics or set up new ones (including a dedicated fingerprint). This will also be beneficial if you share the device with other people.


After it is set up, Private Space is accessible from the bottom of the app drawer in the Pixel Launcher, where you can add apps or privately access files. At the moment, Private Space is exclusive to Pixel phones and may not necessarily be available on other phones, since some Android manufacturers already offer some similar solutions. For instance, Samsung has a Secure Folder in One UI. Whether other manufacturers adopt the functionality is likely to become clear in the coming months.

Predictive Back updates the navigation experience

Predictive back on Android 15 running on a Google Pixel 6a held in hand.
Tushar Mehta / Digital Trends

Android 15 also brings Predictive Back, a feature that previews the screen that will load when you swipe from one edge for the back gesture. This is similar to the back gesture on iOS, and feels like revealing the card behind the top one in a deck. The idea is to show users the previous screen before they complete the back gesture so they can abandon it if needed. Google says it “lets the user decide whether to continue—in other words, to ‘commit’ to the back gesture—or stay in the current view.”

Unfortunately, Google’s implementation in its current form feels crude (especially compared to iOS) and only displays a small portion of the previous screen. Another disadvantage is that it currently works in only a very small set of apps; we could only spot it in the Settings app and the app drawer.

We would expect other apps to adopt the functionality, but unlike Apple, Google gives developers free rein over which features to implement. So, as with Material You and adaptive theming, developers may choose to overlook Predictive Back.
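For developers curious what adoption actually involves, Android 13 introduced a documented opt-in: a manifest flag plus a registration on the system back dispatcher. A minimal Kotlin sketch, assuming a plain Activity (the ExampleActivity name and the finish() handler are placeholders for illustration):

```kotlin
// In AndroidManifest.xml, the app opts in with:
//   <application android:enableOnBackInvokedCallback="true" ...>

import android.app.Activity
import android.os.Build
import android.os.Bundle
import android.window.OnBackInvokedDispatcher

class ExampleActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
            // Register what happens only when the user *commits* the back
            // gesture; the system animates the predictive preview before that.
            onBackInvokedDispatcher.registerOnBackInvokedCallback(
                OnBackInvokedDispatcher.PRIORITY_DEFAULT
            ) {
                finish() // placeholder back action: close this screen
            }
        }
    }
}
```

Apps that instead keep intercepting back the old way (overriding onBackPressed) don’t get the preview animation, which helps explain why adoption across apps has been uneven.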

Make sure to check out Partial Screen Recording

Partial screen recording on Android 15 running on a Google Pixel 6a held in hand.
Tushar Mehta / Digital Trends

Partial Screen Recording on Android 15 lets you record your screen selectively. When starting a screen recording, you will be prompted to choose whether you want to record a specific app or the entire screen. If you choose the former, the recording will only include the selected app and will black out anything shown outside it.

This will prevent you from inadvertently leaking any private information through the screen recordings.


Simultaneously, there’s another hidden feature that lets you bypass restrictions when specific apps, such as your banking app, prevent taking screenshots. You can skirt these restrictions by heading to Settings > System > Developer options > Disable screen share protections. If you haven’t used Developer options before, you may need to enable them from Settings > About phone by scrolling all the way to the bottom and tapping Build number seven times in quick succession.

Introducing Satellite Connectivity

Satellite connectivity features on Google Pixel 9 exclusively available in the U.S.
Google

With the Pixel 9 series that Google announced earlier this year, the company confirmed satellite connectivity as one of the features. Similar to the satellite SOS services on newer iPhone and Apple Watch models, the Pixel 9’s satellite connectivity lets you call emergency services or notify chosen contacts if you are ever stranded with no Wi-Fi or cellular service.

Google takes this a step further with Android 15, allowing phones beyond the Pixel 9 series to communicate directly with an orbiting satellite. In addition to contacting first responders or alerting chosen contacts, the feature also lets you send messages to just about any phone number.

Google elaborates that any phone with the “proper hardware” will be able to communicate via satellite when necessary. That presumably means phones with modems that support satellite communications, though it’s difficult to confirm without clarification from Google.

Google says the feature will depend on carriers, and could possibly work through special messaging apps that those telcos designate. Though privacy, encryption and interoperability on these apps are a different ball game altogether, we know the functionality will likely not be free. Long conversations routed through satellites will not be economical, so there may be limitations, but those details elude us for now.


Notably, T-Mobile is the only carrier to have activated satellite connectivity so far. It recently enabled satellite-based texting, in partnership with SpaceX, for all of its users in areas affected by Hurricanes Helene and Milton. However, this functionality reportedly worked irrespective of the operating system.

Whether it’s T-Mobile’s lead with the feature or Google promoting it in Android 15, we can expect satellite communications to get the attention it deserves.

App pairs are a helpful new tool

App Pairs on Android 15 running on a Google Pixel 6a held in hand.
Tushar Mehta / Digital Trends

Android has supported split-screen multitasking on palm-sized devices since Android 7.0 Nougat, launched in 2016. Over the years, split-screen functionality has become fairly useful thanks to larger-than-ever displays and hardware that can actually handle the workload of two apps running simultaneously.

With app pairs on Android 15, you can save sets of two apps that can be launched together in a split-screen view. App pairs can be saved on the home screen, and you can launch pairs directly by tapping the icon. Some Android tablets already support the feature, but it’s now headed to regular-sized phones.

To save an app pair, you first need to:


  1. Open two apps simultaneously in split screen.
  2. Open the Recent apps menu.
  3. Tap and hold the apps’ icons.
  4. Tap “Save app pair”.

These app pairs will appear on the home screen, where you can tap the icon to launch the two apps in split view again and again. App pairs are not saved in the app drawer, so be careful when purging excess icons from the home screen.

Notification Cooldown and Adaptive Vibration

Notification cooldown on Android 15 running on a Google Pixel 6a held in hand.
Tushar Mehta / Digital Trends

With Android 15, Google plans to reduce the pressure that the barrage of notifications puts upon us. For this, Google added a feature aptly named Notification Cooldown, with the objective of preventing notification bombardment.

If you receive a string of notifications, the feature will progressively reduce the volume of the alerts so they become less annoying. Continuous pings and dings should no longer interrupt your flow of thought while you’re trying to conjure up the perfect witty caption for a picture of your cats romping around their multistoried house.

Notification Cooldown currently only works if you keep the volume on for your ringtone and notification alerts. However, if you prefer to keep your phone in silent mode, Android 15 also adds Adaptive Vibration, which reduces the intensity of vibration when the phone is still (i.e., not being actively used) and facing upward.

This should ideally prevent you from getting distracted by a string of notifications, especially when the phone is set aside. Pixels also give you the option to place the phone face down to send it into Do Not Disturb mode.

Notably, Google has pared back the options for these features compared to when they initially appeared in the Android 15 developer preview. That could indicate the company is refining them further before promoting them more widely.


HQ webcam mode to the rescue

HQ webcam mode on Android 15 running on a Google Pixel 6a held in hand.

When the global pandemic hit, our webcams really found purpose again. For many of us who continue to work from home, webcams are vital. But the potato-quality cameras in most cheap webcams can impair our virtual interactions.

As a solution, Apple released Continuity Camera two years ago, allowing an iPhone to be used as your webcam. Google followed suit last year, enabling your Android phone to be used as a wired (not wireless, alas!) webcam with any Windows, macOS, Linux or even ChromeOS machine. With Android 15, the quality is getting a significant boost as Google adds a new “HQ” (high quality) mode for the webcam.

The HQ mode makes your images noticeably sharper without adding any latency to the video feed. You can also use your Android phone for camera-dependent activities, such as streaming, without explicitly relying on expensive hardware.

Similar functionality was previously available on Android through third-party apps. By adding it as a native feature, Google eliminates the need to pay for a high-quality, near-instant camera feed to your PC.

USB Lockdown adds an extra security layer

USB Lockdown Android 15 running on a Google Pixel 6a held in hand.
Tushar Mehta / Digital Trends

Android’s Lockdown feature adds an extra layer of security to your phone by disabling biometrics. So if a friend or family member tries to use your phone without your permission, they can’t unlock it by holding it up to your face or pressing your finger against the fingerprint scanner while you’re not looking. Even if you’ve never paid attention to Lockdown, it has been around since Android 9.

With Android 15, Google goes a step further and locks access to file storage while the phone is in Lockdown mode. That essentially means anyone who tries to access your files by connecting the phone to a computer without your permission won’t have any luck. More importantly, the feature guards against “juice jacking,” a technique in which public chargers are rigged with rogue cables that can covertly steal your data.

Unfortunately, it still doesn’t tie into the new anti-theft features Google recently announced for all devices running Android 10 and above. Anti-theft protection forces your phone to lock when it detects a sudden jerk (similar to the ominous scenario of your phone being yanked out of your hand), but it doesn’t fully trigger Lockdown mode.

Manual app archiving is another welcome touch

Manual app archive on Android 15 running on a Google Pixel 6a held in hand.

Unused apps can take up space on your phone for no reason, which is why last year Google — presumably with inspiration from iOS — added a feature that automatically archives apps you don’t use when the phone’s storage runs low. While it deletes the app package, all your data remains intact, so you can download the app again and pick up where you left off.

Android 15 augments the feature, now letting you manually archive apps you don’t use but aren’t ready to delete just yet. A new Archive button appears on each app’s info page. That’s another way Android raises the bar for iOS.







Technology

Android 16’s “Modes” may revive “Profiles” of old mobile phones


With Android 15 now available for all eligible Pixel devices and other brands sharing their rollout calendars, Google is already working on Android 16, the next major update to the OS. It’s still too early to know all the improvements the company is working on. However, recent findings suggest that Android 16 will revamp the classic “Do Not Disturb” with new customizable “Modes.”

Google may bring back the classic “Profiles” of old mobile phones, in its own way

Android 16’s new Modes seem like an advanced version of the “profiles” found on older mobile phones. If you’re not familiar, the profiles option let you set different combinations of ringtone, volume, vibration, and so on, and name each profile whatever you wanted. The option was quite useful for quickly applying an ideal configuration for each occasion. For instance, you could mute ringtones and notifications with a profile named “meeting.”

Interestingly, smartphones gained countless features over the years but lost profile settings. Developers replaced them with preset options like “Silent” or “Do Not Disturb,” which offer limited customization. Google may change this in the next big Android update with the new “Modes.”

Android 16’s “Modes” seem highly inspired by the “Profiles” option

As spotted by Mishaal Rahman, the “Modes” option seems destined to debut in Android 16. The source spotted the feature in the latest Android 15 QPR1 Beta 3. It’s noteworthy that “Modes” appeared in a previous beta, albeit under the name “Priority Modes.” Just like the “profiles” on old mobile phones, “Modes” allows you to set different combinations of settings to suit different situations.

android 16 modes leak

Within each Mode, users will be able to customize settings such as the mode name, trigger, display settings, notification behavior, and even the icon. There are over 40 icons to choose from, so you can easily differentiate between all your Modes. The “trigger” setting is especially interesting, as it suggests some Modes will activate automatically under certain conditions. However, there are no further details on what conditions you can set.

When you enable a Mode, its icon appears in the status bar. You can access all your Modes from the Settings menu or the Quick Settings panel. The feature is quite promising, and many will surely find it useful. Let’s hope Google really does implement it in Android 16.


Technology

NASA spent October hoisting a 103-ton simulator section onto a test stand to prep for the next Moon mission


NASA spent the last two weeks hoisting a 103-ton interstage simulator onto a test stand and installing it to help prepare for the next Moon missions. Crews fitted the component onto the Thad Cochran Test Stand at Stennis Space Center near Bay St. Louis, Mississippi. The connecting section mimics the SLS (Space Launch System) part that will help protect the rocket’s upper stage, which will propel the Orion spacecraft on its planned Artemis launches.

The Thad Cochran Test Stand is where NASA assembles SLS components and conducts thorough testing to ensure they’ll be safe and operate as intended on the versions that fly into space. The new section was installed in the B-2 position of the testing center and is now fitted with all the necessary piping, tubing, and electrical systems for future test runs.

Top-down view of the SLS interstage section installed at a test center.

NASA

The interstage section will protect electrical and propulsion systems and support the SLS’s EUS (Exploration Upper Stage) in the rocket’s latest design iteration, Block 1B. It will replace the current Block 1 version and offer 40 percent more payload capacity. The EUS will support 38 tons of cargo with a crew or 42 tons without one, compared to 27 tons of crew and cargo in the Block 1 iteration. (Progress!) Four RL10 engines, made by contractor L3Harris, will power the new EUS.

The interstage simulator section NASA spent mid-October installing weighs 103 tons and measures 31 feet in diameter and 33 feet tall. The section’s top portion will absorb the EUS hot-fire thrust, transferring the load into the test stand so the structure doesn’t collapse under the four engines’ more than 97,000 pounds of thrust.

NASA’s testing at Stennis Space Center will prepare the SLS for the Artemis IV mission, which will send four astronauts aboard the Orion spacecraft to the Lunar Gateway space station to install a new module. After that, they’ll descend to the Moon’s surface in the Starship HLS (Human Landing System) lunar lander.

You can catch some glimpses into NASA’s heavy lifting in the video below:


Technology

The enterprise verdict on AI models: Why open source will win



The enterprise world is rapidly growing its usage of open source large language models (LLMs), driven by companies gaining more sophistication around AI – seeking greater control, customization, and cost efficiency. 

While closed models like OpenAI’s GPT-4 dominated early adoption, open source models have since closed the gap in quality, and are growing at least as quickly in the enterprise, according to multiple VentureBeat interviews with enterprise leaders.

This is a change from earlier this year, when I reported that while the promise of open source was undeniable, it was seeing relatively slow adoption. But Meta’s openly available models have now been downloaded more than 400 million times, the company told VentureBeat, at a rate 10 times higher than last year, with usage doubling from May through July 2024. This surge in adoption reflects a convergence of factors – from technical parity to trust considerations – that are pushing advanced enterprises toward open alternatives.

“Open always wins,” declares Jonathan Ross, CEO of Groq, a provider of specialized AI processing infrastructure that has seen massive uptake of customers using open models. “And most people are really worried about vendor lock-in.”

Even AWS, which made a $4 billion investment in closed-source provider Anthropic – its largest investment ever – acknowledges the momentum. “We are definitely seeing increased traction over the last number of months on publicly available models,” says Baskar Sridharan, AWS’ VP of AI & Infrastructure, which offers access to as many models as possible, both open and closed source, via its Bedrock service. 

The platform shift by big app companies accelerates adoption

It’s true that among startups and individual developers, closed-source models like OpenAI’s still lead. But in the enterprise, things look very different. Unfortunately, no third-party source tracks the open-versus-closed LLM race for the enterprise, in part because it’s nearly impossible to do: the enterprise world is too distributed, and companies are too private for this information to be public. The API company Kong surveyed more than 700 users in July, but the respondents included smaller companies as well as enterprises, so the results skewed toward OpenAI, which without question still leads among startups looking for simple options. (The report also counted other AI services like Bedrock, which is not an LLM but a service offering multiple LLMs, including open source ones — so it mixes apples and oranges.)

Image from a report from the API company, Kong. Its July survey shows ChatGPT still winning, and open models Mistral, Llama and Cohere still behind.

But anecdotally, the evidence is piling up. For one, each of the major business application providers has moved aggressively recently to integrate open source LLMs, fundamentally changing how enterprises can deploy these models. Salesforce led the latest wave by introducing Agentforce last month, recognizing that its customer relationship management customers needed more flexible AI options. The platform enables companies to plug in any LLM within Salesforce applications, effectively making open source models as easy to use as closed ones. Salesforce-owned Slack quickly followed suit.

Oracle also last month expanded support for the latest Llama models across its enterprise suite, which includes the big enterprise apps of ERP, human resources, and supply chain. SAP, another business app giant, announced comprehensive open source LLM support through its Joule AI copilot, while ServiceNow enabled both open and closed LLM integration for workflow automation in areas like customer service and IT support.

“I think open models will ultimately win out,” says Oracle’s EVP of AI and Data Management Services, Greg Pavlik. The ability to modify models and experiment, especially in vertical domains, combined with favorable cost, is proving compelling for enterprise customers, he said.

A complex landscape of “open” models

While Meta’s Llama has emerged as a frontrunner, the open LLM ecosystem has evolved into a nuanced marketplace with different approaches to openness. For one, Meta’s Llama has more than 65,000 model derivatives in the market. Enterprise IT leaders must navigate these, and other options ranging from fully open weights and training data to hybrid models with commercial licensing.

Mistral AI, for example, has gained significant traction by offering high-performing models with flexible licensing terms that appeal to enterprises needing different levels of support and customization. Cohere has taken another approach, providing open model weights but requiring a license fee – a model that some enterprises prefer for its balance of transparency and commercial support.

This complexity in the open model landscape has become an advantage for sophisticated enterprises. Companies can choose models that match their specific requirements – whether that’s full control over model weights for heavy customization, or a supported open-weight model for faster deployment. The ability to inspect and modify these models provides a level of control impossible with fully closed alternatives, leaders say. Using open source models also often requires a more technically proficient team to fine-tune and manage the models effectively, another reason enterprise companies with more resources have an upper hand when using open source.

Meta’s rapid development of Llama exemplifies why enterprises are embracing the flexibility of open models. AT&T uses Llama-based models for customer service automation, DoorDash for helping answer questions from its software engineers, and Spotify for content recommendations. Goldman Sachs has deployed these models in heavily regulated financial services applications. Other Llama users include Niantic, Nomura, Shopify, Zoom, Accenture, Infosys, KPMG, Wells Fargo, IBM, and The Grammy Awards. 

Meta has aggressively nurtured channel partners. All major cloud providers embrace Llama models now. “The amount of interest and deployments they’re starting to see for Llama with their enterprise customers has been skyrocketing,” reports Ragavan Srinivasan, VP of Product at Meta, “especially after Llama 3.1 and 3.2 have come out. The large 405B model in particular is seeing a lot of really strong traction because very sophisticated, mature enterprise customers see the value of being able to switch between multiple models.” He said customers can use a distillation service to create derivative models from Llama 405B and fine-tune them on their own data. Distillation is the process of creating smaller, faster models while retaining core capabilities.
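
The core training signal behind distillation can be sketched in a few lines: a small "student" model is trained to match the temperature-softened output distribution of a large "teacher." The toy example below (plain Python, purely illustrative, not Meta's actual distillation service) shows that loss term:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into probabilities, softened by a temperature."""
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - peak) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the
    student's: the quantity a distilled model is trained to minimize."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher incurs ~zero loss; a drifted one doesn't.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, [2.0, 1.0, 0.1]))  # ~0.0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # positive: mismatch penalized
```

In a real pipeline this loss is computed over the teacher's outputs on large corpora, which is why a 405B teacher can produce much smaller derivatives that keep most of its behavior.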

Indeed, Meta covers the landscape well with its broader portfolio of models, including the Llama 90B model, which can serve as a workhorse for a majority of prompts, and the 1B and 3B models, which are small enough to run on device. Today, Meta released “quantized” versions of those smaller models. Quantization is another process that shrinks a model, allowing lower power consumption and faster processing. What makes these latest models special is that they were quantized during training, making them more efficient than other industry quantized knock-offs – four times faster at token generation than the originals, using a fourth of the power.
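
Quantization-aware training itself is far more involved, but the basic size-for-precision trade can be illustrated with a minimal post-training sketch: each float weight is mapped to a signed 8-bit integer plus one shared scale factor, so each weight takes 1 byte instead of 4.

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantization: map each float weight onto a signed
    integer in [-127, 127], sharing a single scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero input
    return [round(w / scale) for w in weights], scale

def dequantize(quants, scale):
    """Recover approximate float weights from the int8 representation."""
    return [q * scale for q in quants]

weights = [0.4, -1.27, 0.003, 0.9]
quants, scale = quantize_int8(weights)
restored = dequantize(quants, scale)

# All values fit in a signed byte, and the round trip stays within half a step.
assert all(-127 <= q <= 127 for q in quants)
assert all(abs(w - r) <= scale / 2 for w, r in zip(weights, restored))
```

Quantizing during training, as Meta describes, lets the model learn around this rounding error instead of absorbing it after the fact.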

Technical capabilities drive sophisticated deployments

The technical gap between open and closed models has essentially disappeared, but each shows distinct strengths that sophisticated enterprises are learning to leverage strategically. This has led to a more nuanced deployment approach, where companies combine different models based on specific task requirements.

“The large, proprietary models are phenomenal at advanced reasoning and breaking down ambiguous tasks,” explains Salesforce EVP of AI, Jayesh Govindarajan. But for tasks that are light on reasoning and heavy on crafting language, for example drafting emails, creating campaign content, researching companies, “open source models are at par and some are better,” he said. Moreover, even the high reasoning tasks can be broken into sub-tasks, many of which end up becoming language tasks where open source excels, he said. 

Intuit, the owner of accounting software QuickBooks and tax software TurboTax, started its LLM journey a few years ago, making it a very early mover among Fortune 500 companies. Its implementation demonstrates a sophisticated approach. For customer-facing applications like transaction categorization in QuickBooks, the company found that its fine-tuned LLM built on Llama 3 delivered higher accuracy than closed alternatives. “What we find is that we can take some of these open source models and then actually trim them down and use them for domain-specific needs,” explains Ashok Srivastava, Intuit’s chief data officer. They “can be much smaller in size, much lower in latency and equal, if not greater, in accuracy.”

The banking sector illustrates the migration from closed to open LLMs. ANZ Bank, a bank that serves Australia and New Zealand, started out using OpenAI for rapid experimentation. But when it moved to deploy real applications, it dropped OpenAI in favor of fine-tuning its own Llama-based models, to accommodate its specific financial use cases, driven by needs for stability and data sovereignty. The bank published a blog about the experience, citing the flexibility provided by Llama’s multiple versions, flexible hosting, version control, and easier rollbacks. We know of another top-three U.S. bank that also recently moved away from OpenAI.

It’s examples like this, where companies want to leave OpenAI for open source, that have given rise to things like “switch kits” from companies like PostgresML that make it easy to exit OpenAI and embrace open source “in minutes.”

Infrastructure evolution removes deployment barriers

The path to deploying open source LLMs has been dramatically simplified. Meta’s Srinivasan outlines three key pathways that have emerged for enterprise adoption:

  1. Cloud Partner Integration: Major cloud providers now offer streamlined deployment of open source models, with built-in security and scaling features.
  2. Custom Stack Development: Companies with technical expertise can build their own infrastructure, either on-premises or in the cloud, maintaining complete control over their AI stack – and Meta is helping with its so-called Llama Stack.
  3. API Access: For companies seeking simplicity, multiple providers now offer API access to open source models, making them as easy to use as closed alternatives. Groq, Fireworks, and Hugging Face are examples; all of them can provide an inference API, a fine-tuning API, and basically anything you would otherwise get from a proprietary provider.
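
To give a sense of how simple the API route is: many hosted-model providers accept OpenAI-style request bodies, so calling an open model reduces to a small JSON payload. The endpoint URL and model name below are hypothetical placeholders, and the actual HTTP call is left as a comment.

```python
import json

# Hypothetical values: substitute your provider's endpoint, key, and model id.
API_URL = "https://api.example-provider.com/v1/chat/completions"

def build_chat_request(model, prompt, max_tokens=256):
    """Assemble an OpenAI-style chat-completion payload, the request shape
    that OpenAI-compatible hosts of open models generally accept."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("llama-3.1-405b", "Summarize our support tickets.")
body = json.dumps(payload)

# Sending is one HTTP POST with any client, e.g.:
#   requests.post(API_URL, data=body,
#                 headers={"Authorization": f"Bearer {API_KEY}",
#                          "Content-Type": "application/json"})
print(body)
```

Because the request shape is shared, swapping providers (or moving from a closed model to an open one) is often a matter of changing the URL and model name.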

Safety and control advantages emerge

The open source approach has also – unexpectedly – emerged as a leader in model safety and control, particularly for enterprises requiring strict oversight of their AI systems. “Meta has been incredibly careful on the safety part, because they’re making it public,” notes Groq’s Ross. “They actually are being much more careful about it. Whereas with the others, you don’t really see what’s going on and you’re not able to test it as easily.”

This emphasis on safety is reflected in Meta’s organizational structure. The team focused on Llama’s safety and compliance is large relative to its engineering team, Ross said, citing conversations with Meta a few months ago. (A Meta spokeswoman said the company does not comment on personnel information.) The September release of Llama 3.2 introduced Llama Guard Vision, adding to safety tools released in July. These tools can:

  • Detect potentially problematic text and image inputs before they reach the model
  • Monitor and filter output responses for safety and compliance

Enterprise AI providers have built upon these foundational safety features. AWS’s Bedrock service, for example, allows companies to establish consistent safety guardrails across different models. “Once customers set those policies, they can choose to move from one publicly available model to another without actually having to rewrite the application,” explains AWS’ Sridharan. This standardization is crucial for enterprises managing multiple AI applications.

Databricks and Snowflake, the leading cloud data providers for the enterprise, also vouch for Llama’s safety. Llama models maintain “the highest standards of security and reliability,” said Hanlin Tang, Databricks’ CTO of Neural Networks.

Intuit’s implementation shows how enterprises can layer additional safety measures. The company’s GenSRF (security, risk and fraud assessment) system, part of its “GenOS” operating system, monitors about 100 dimensions of trust and safety. “We have a committee that reviews LLMs and makes sure its standards are consistent with the company’s principles,” Intuit’s Srivastava explains. However, he said these reviews of open models are no different than the ones the company makes for closed-sourced models.

Data provenance solved through synthetic training

A key concern around LLMs is the data they’ve been trained on. Lawsuits abound from publishers and other creators, charging LLM companies with copyright violations. Most LLM companies, open and closed, haven’t been fully transparent about where they get their data. Since much of it comes from the open web, it can be highly biased and contain personal information.

Many closed-source companies have offered users “indemnification,” or protection against legal claims or lawsuits arising from use of their LLMs. Open source providers usually do not provide such indemnification. But lately this concern around data provenance seems to have declined somewhat. Models can be grounded and filtered with fine-tuning, and Meta and others have created more alignment and other safety measures to counteract the concern. Data provenance is still an issue for some enterprise companies, especially those in highly regulated industries such as banking or healthcare. But some experts suggest these concerns may soon be resolved through synthetic training data.

“Imagine I could take public, proprietary data and modify them in some algorithmic ways to create synthetic data that represents the real world,” explains Salesforce’s Govindarajan. “Then I don’t really need access to all that sort of internet data… The data provenance issue just sort of disappears.”

Meta has embraced this trend, incorporating synthetic data training in Llama 3.2’s 1B and 3B models.

Regional patterns may reveal cost-driven adoption

The adoption of open source LLMs shows distinct regional and industry-specific patterns. “In North America, the closed source models are certainly getting more production use than the open source models,” observes Oracle’s Pavlik. “On the other hand, in Latin America, we’re seeing a big uptick in the Llama models for production scenarios. It’s almost inverted.”

What is driving these regional variations isn’t clear, but they may reflect different priorities around cost and infrastructure. Pavlik describes a scenario playing out globally: “Some enterprise user goes out, they start doing some prototypes…using GPT-4. They get their first bill, and they’re like, ‘Oh my god.’ It’s a lot more expensive than they expected. And then they start looking for alternatives.”

Market dynamics point toward commoditization

The economics of LLM deployment are shifting dramatically in favor of open models. “The price per token of generated LLM output has dropped 100x in the last year,” notes venture capitalist Marc Andreessen, who questioned whether profits might be elusive for closed-source model providers. This potential “race to the bottom” creates particular pressure on companies that have raised billions for closed-model development, while favoring organizations that can sustain open source development through their core businesses.

“We know that the cost of these models is going to go to zero,” says Intuit’s Srivastava, warning that companies “over-capitalizing in these models could soon suffer the consequences.” This dynamic particularly benefits Meta, which can offer free models while gaining value from their application across its platforms and products.

A good analogy for the LLM competition, Groq’s Ross says, is the operating system wars. “Linux is probably the best analogy that you can use for LLMs.” While Windows dominated consumer computing, it was open source Linux that came to dominate enterprise systems and industrial computing. Intuit’s Srivastava sees the same pattern: “We have seen time and again: open source operating systems versus non open source. We see what happened in the browser wars,” when open source Chromium browsers beat closed models.

Walter Sun, SAP’s global head of AI, agrees: “I think that in a tie, people can leverage open source large language models just as well as the closed source ones; that gives people more flexibility.” He continues: “If you have a specific need, a specific use case… the best way to do it would be with open source.”

Some observers, like Groq’s Ross, believe Meta may be in a position to commit $100 billion to training its Llama models, which would exceed the combined commitments of proprietary model providers, he said. Meta has an incentive to do this, Ross added, because it is one of the biggest beneficiaries of LLMs: it needs them to improve intelligence in its core business, serving up AI to users on Instagram, Facebook, and WhatsApp. Meta says its AI touches 185 million weekly active users, a scale matched by few others.

This suggests that open source LLMs won’t face the sustainability challenges that have plagued other open source initiatives. “Starting next year, we expect future Llama models to become the most advanced in the industry,” declared Meta CEO Mark Zuckerberg in his July letter of support for open source AI. “But even before that, Llama is already leading on openness, modifiability, and cost efficiency.”

Specialized models enrich the ecosystem

The open source LLM ecosystem is being further strengthened by the emergence of specialized industry solutions. IBM, for instance, has released its Granite models as fully open source, specifically trained for financial and legal applications. “The Granite models are our killer apps,” says Matt Candy, IBM’s global managing partner for generative AI. “These are the only models where there’s full explainability of the data sets that have gone into training and tuning. If you’re in a regulated industry, and are going to be putting your enterprise data together with that model, you want to be pretty sure what’s in there.”

IBM’s business benefits from open source, including from wrapping its Red Hat Enterprise Linux operating system into a hybrid cloud platform that includes usage of the Granite models and its InstructLab, a way to fine-tune and enhance LLMs. The AI business is already kicking in. “Take a look at the ticker price,” says Candy. “All-time high.”

Trust increasingly favors open source

Trust is shifting toward models that enterprises can own and control. Ted Shelton, COO of Inflection AI, a company that offers enterprises access to licensed source code and full application stacks as an alternative to both closed and open source models, explains the fundamental challenge with closed models: “Whether it’s OpenAI, it’s Anthropic, it’s Gemini, it’s Microsoft, they are willing to provide a so-called private compute environment for their enterprise customers. However, that compute environment is still managed by employees of the model provider, and the customer does not have access to the model.” This is because the LLM owners want to protect proprietary elements like source code, model weights, and hyperparameter training details, which can’t be hidden from customers who would have direct access to the models. Since much of this code is written in Python, not a compiled language, it remains exposed.

This creates an untenable situation for enterprises serious about AI deployment. “As soon as you say ‘Okay, well, OpenAI’s employees are going to actually control and manage the model, and they have access to all the company’s data,’ it becomes a vector for data leakage,” Shelton notes. “Companies that are actually really concerned about data security are like ‘No, we’re not doing that. We’re going to actually run our own model. And the only option available is open source.’”

The path forward

While closed-source models maintain a market share lead for simpler use cases, sophisticated enterprises increasingly recognize that their future competitiveness depends on having more control over their AI infrastructure. As Salesforce’s Govindarajan observes: “Once you start to see value, and you start to scale that out to all your users, all your customers, then you start to ask some interesting questions. Are there efficiencies to be had? Are there cost efficiencies to be had? Are there speed efficiencies to be had?”

The answers to these questions are pushing enterprises toward open models, even if the transition isn’t always straightforward. “I do think that there are a whole bunch of companies that are going to work really hard to try to make open source work,” says Inflection AI’s Shelton, “because they got nothing else. You either give in and say a couple of large tech companies own generative AI, or you take the lifeline that Mark Zuckerberg threw you. And you’re like: ‘Okay, let’s run with this.’”



Technology

4 days left: The doors to Disrupt 2024 open and ticket prices rise


Just 4 days left! Moscone West in San Francisco will be the epicenter of innovation as 10,000 startup and VC leaders gather for TechCrunch Disrupt 2024 from October 28-30. This incredible conference is designed to inspire, spark innovative ideas, and create meaningful connections.

Final days to save! You have until October 27 at 11:59 p.m. PT to save up to $400 on individual-type tickets or double up with two Expo+ Passes for half the price of one. Don’t wait — secure your low-rate ticket or the Expo+ 2-for-1 Pass.

Don’t miss Disrupt 2024

10,000+ startup and VC leaders

Experience the ultimate networking event at Disrupt 2024, bringing together 10,000 tech pioneers, startup founders, and VC leaders for unparalleled opportunities to connect and collaborate.

350+ startups showcasing their innovations

Step into the Expo Hall and witness cutting-edge innovations from more than 350 startups, giving you a preview of the future of tech from around the world.

250+ industry heavyweights

Gain invaluable insights from leading industry figures as they share exclusive insights across six dedicated stages, focusing on key sectors of the tech landscape: AI, startups, VCs, fintech, SaaS, and space.

200+ deep-dive sessions 

Participate in interactive Q&A Breakout Sessions and Roundtable discussions with industry leaders, tackling pressing challenges in the fast-changing tech landscape. Discover these sessions in our expanding agenda.

Startup Battlefield 200

Watch 20 exceptional startups compete in the exhilarating Startup Battlefield 200 pitch competition at Disrupt 2024, all vying for a $100,000 equity-free prize and the esteemed Disrupt Cup, judged by leading VCs.

Unmatched networking opportunities

Take your networking to the next level with the Braindate app, where you can create or explore topics for more in-depth discussions. Connect in person at the Networking Lounge powered by Braindate on level 2 for 1:1 or small-group discussions.

60+ Side Events

Keep the spirit of Disrupt 2024 alive by participating in company-hosted Side Events around San Francisco during the week. With options ranging from workshops and cocktail parties to morning runs and meetups, there’s an event for everyone!

Grab your ticket before prices increase

Act now to save up to $400 on tickets! You can also take advantage of our Expo+ 2-for-1 offer — bring a guest for just half the price of a single Expo+ Pass. All offers end on October 27 at 11:59 p.m. PT. Prices will go up when we open the doors on October 28.

Secure your ticket at a discounted rate today.



Copyright © 2024 WordupNews.com