Technology

Kevin Bacon, Julianne Moore, Thom Yorke, and 10K+ creators sign warning against AI use of their work

Kevin Bacon looking scared in the Guardians of the Galaxy Holiday Special

More than 10,000 professional actors, musicians, writers, and other creators have signed a petition urging that their work not be used without permission to train AI. British composer Ed Newton-Rex wrote the statement and organized the signature drive. The signatories include many famous names, ranging from Hollywood stars like Kevin Bacon and Julianne Moore to chart-topping musicians and composers like Radiohead’s Thom Yorke and Abba’s Björn Ulvaeus, to best-selling authors Harlan Coben and Ted Chiang. The statement itself is brief and to the point:

“The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.”



Microsoft & OpenAI are paying millions to outlets to implement AI


The tech industry seems to have decided that AI-powered development is the future, a conviction driven by major gains in efficiency and process automation. Part of the industry’s new path is to promote the adoption of AI in every possible segment. Currently, Microsoft and OpenAI are paying media outlets millions of dollars to implement AI tools.

Microsoft and OpenAI paying up to $10 million to media outlets to use AI tools

Artificial intelligence can be quite useful in journalism. It is especially efficient at tasks such as summarizing and transcribing content, to name a few examples. It can also handle certain proofreading tasks, improving the quality of writing. To that end, the Microsoft and OpenAI project aims to give newsrooms access to these capabilities while also providing them with funding.

Microsoft will carry out the project in rounds, selecting a few outlets in each one. The first round of the program will offer funding to Newsday, The Minnesota Star Tribune, The Philadelphia Inquirer, Chicago Public Media, and The Seattle Times. The $10 million is broken down into $2.5 million in cash and $2.5 million in “software and enterprise credits” from Microsoft and OpenAI each.

The program funds a two-year fellowship at each outlet. The fellow will deploy AI-powered tools in the newsroom using Microsoft Azure and OpenAI credits. The tools are not designed to replace writers or researchers; rather, the project seeks to develop tools that assist them and increase their efficiency.


“While nothing will replace the central role of reporters, we believe that AI technology can help in the research, investigation, distribution, and monetization of important journalism,” said Tom Rubin, the head of intellectual property and content at OpenAI. Alongside OpenAI and Microsoft, the Lenfest Institute for Journalism is a driving force behind the project.

Copyright lawsuits are still pending

Notably, Microsoft and OpenAI are currently facing lawsuits over copyrights on content used to train AI models. The companies have already reached agreements with some major platforms, such as Vox Media. However, there is still a list of big names seeking compensation they deem appropriate for using their content. The list includes “The New York Times, The Intercept, Raw Story, AlterNet, the Center for Investigative Reporting, and Alden Global Capital, the hedge fund behind the New York Daily News and the Chicago Tribune,” as reported by The Verge.


Amazon is reportedly working on a low-cost storefront to rival Temu


Amazon may be working on a secondary online sales platform that would compete with the absurdly low prices of Chinese retailer Temu. The Information reports that it has seen internal documents sent to Amazon merchants that detail some of the price caps for this new storefront.

The outlet claims the upper price limits are set at $8 for jewelry, $9 for bedding, $13 for guitars, and $20 for sofas shipped from Amazon’s fulfillment center in Guangdong, China, under this new “Low-Cost Store.” According to the site’s sources, orders from this storefront would have slower shipping timelines of nine to 11 days, but would also charge lower fulfillment fees to sellers. A seller would be charged between $1.77 and $2.05 to ship a 4-8 ounce item through the Low-Cost Store, compared with a $2.67 to $4.16 charge for an item of that weight shipped under Fulfillment by Amazon from a domestic warehouse, according to The Information.

Amazon has not set price limits on its eponymous online storefront, so this new platform would be a markedly different strategy from its usual approach. It’s more in line with the pricing policy followed by Temu, which launched in 2022. In just two years, the bargain-basement ecommerce platform has garnered a reputation for its ultra-cheap goods, as well as questions about its business practices.


Runway Act-One: AI motion capture with your smartphone camera




AI video has come incredibly far in the years since the first models debuted in late 2022, increasing in realism, resolution, fidelity, prompt adherence (how well outputs match the text prompt or description the user typed), and sheer number of available models.

But one area that remains a limitation to many AI video creators — myself included — is in depicting realistic facial expressions in AI generated characters. Most appear quite limited and difficult to control.

But no longer: today, Runway, the New York City-headquartered AI startup backed by Google and others, announced a new feature called Act-One. It allows users to record video of themselves or actors with any camera, even the one on a smartphone, and then transfer the subject’s facial expressions onto an AI-generated character with uncanny accuracy.


The free-to-use tool is rolling out gradually to users starting today, according to Runway’s blog post on the feature.

While anyone with a Runway account can access it, the feature is limited to those who have enough credits to generate new videos on Gen-3 Alpha, the video generation model the company introduced earlier this year. Gen-3 Alpha supports text-to-video, image-to-video, and video-to-video AI creation pipelines: the user can type in a scene description, upload an image or a video, or combine these inputs, and the model will use what it’s given to guide its generation of a new scene.

Despite the limited availability at the time of this posting, the burgeoning scene of AI video creators online is already applauding the new feature.

As Allen T. remarked on his X account, “This is a game changer!”

It also comes on the heels of Runway’s move into Hollywood film production last month, when it announced it had inked a deal with Lionsgate, the studio behind the John Wick and Hunger Games movie franchises, to create a custom AI video generation model based on the studio’s catalog of more than 20,000 titles.


Simplifying a traditionally complex and equipment-heavy creative process

Traditionally, facial animation requires extensive and often cumbersome processes, including motion-capture equipment, manual face rigging, and multiple sets of reference footage.

Anyone interested in filmmaking has likely glimpsed the intricacy and difficulty of this process on set or in behind-the-scenes footage of effects-heavy, motion-capture films such as The Lord of the Rings series, Avatar, or Rise of the Planet of the Apes, in which actors are covered in ping-pong-ball markers, their faces dotted with marks and blocked by head-mounted rigs.

Accurately modeling intricate facial expressions is what led David Fincher and his production team on The Curious Case of Benjamin Button to develop entirely new 3D modeling processes, work that ultimately won them an Academy Award, as VentureBeat previously reported.


Yet in the last few years, new software and AI-based startups such as Move have sought to reduce the equipment needed for accurate motion capture, though that company in particular has concentrated primarily on broader, full-body movements, whereas Runway’s Act-One focuses more on modeling facial expressions.

With Act-One, Runway aims to make this complex process far more accessible. The new tool allows creators to animate characters in a variety of styles and designs, without the need for motion-capture gear or character rigging.

Instead, users can rely on a simple driving video to transpose performances—including eye-lines, micro-expressions, and nuanced pacing—onto a generated character, or even multiple characters in different styles.

As Runway wrote on its X account: “Act-One is able to translate the performance from a single input video across countless different character designs and in many different styles.”


The feature is focused “mostly” on the face “for now,” according to Cristóbal Valenzuela, co-founder and CEO of Runway, who responded to VentureBeat’s questions via direct message on X.

Runway’s approach offers significant advantages for animators, game developers, and filmmakers alike. The model accurately captures the depth of an actor’s performance while remaining versatile across different character designs and proportions. This opens up exciting possibilities for creating unique characters that express genuine emotion and personality.

Cinematic realism across camera angles

One of Act-One’s key strengths lies in its ability to deliver cinematic-quality, realistic outputs from various camera angles and focal lengths.

This flexibility enhances creators’ ability to tell emotionally resonant stories through character performances that were previously hard to achieve without expensive equipment and multi-step workflows.


The tool faithfully captures the emotional depth and performance style of an actor, even in complex scenes.

This shift allows creators to bring their characters to life in new ways, unlocking the potential for richer storytelling across both live-action and animated formats.

Runway previously supported video-to-video AI conversion, which let users upload footage of themselves and have Gen-3 Alpha, or earlier Runway models such as Gen-2, “reskin” them with AI effects. The new Act-One feature, however, is optimized for facial mapping and effects.

As Valenzuela told VentureBeat via DM on X: “The consistency and performance is unmatched with Act-One.”


Enabling more expansive video storytelling

A single actor, using only a consumer-grade camera, can now perform multiple characters, with the model generating distinct outputs for each.

This capability is poised to transform narrative content creation, particularly in indie film production and digital media, where high-end production resources are often limited.

In a public post on X, Valenzuela noted a shift in how the industry approaches generative models. “We are now beyond the threshold of asking ourselves if generative models can generate consistent videos. A good model is now the new baseline. The difference lies in what you do with the model—how you think about its applications and use cases, and what you ultimately build,” Valenzuela wrote.

Safety and protection for public figure impersonations

As with all of Runway’s releases, Act-One comes equipped with a comprehensive suite of safety measures.


These include safeguards to detect and block attempts to generate content featuring public figures without authorization, as well as technical tools to verify voice usage rights.

Continuous monitoring also ensures that the platform is used responsibly, preventing potential misuse of the tool.

Runway’s commitment to ethical development aligns with its broader mission to expand creative possibilities while maintaining a strong focus on safety and content moderation.

Looking ahead

As Act-One gradually rolls out, Runway is eager to see how artists, filmmakers, and other creators will harness this new tool to bring their ideas to life.


With Act-One, complex animation techniques are now within reach for a broader audience of creators, enabling more people to explore new forms of storytelling and artistic expression.

By reducing the technical barriers traditionally associated with character animation, the company hopes to inspire new levels of creativity across the digital media landscape.

It also helps Runway stand out and differentiate its AI video creation platform from a growing field of competitors, including Luma AI from the U.S. and Hailuo and Kling from China, as well as open-source rivals such as Genmo’s Mochi 1, which also debuted today.



Feds clear way for EVTOL startups to bring flying vehicles to U.S. airspace

Joby Aviation's hydrogen eVTOL

Federal regulators have cleared the path for electric vertical takeoff and landing aircraft to share U.S. airspace with planes and helicopters — a win for the burgeoning industry and a timely decision for startups like Joby Aviation and Archer Aviation that are expected to launch air taxi networks commercially in 2025. 

The Federal Aviation Administration on Tuesday published its much-anticipated final rule on the integration of “powered-lift” vehicles, a category the FAA revived two years ago to accommodate eVTOLs; it describes aircraft that can take off and land like helicopters but then transition to forward flight like airplanes.

“Powered-lift aircraft are the first new category of aircraft in nearly 80 years and this historic rule will pave the way for accommodating wide-scale Advanced Air Mobility (AAM) operations in the future,” FAA Administrator Mike Whitaker said in a statement. Whitaker announced the rule during the NBAA-Business Aviation Convention & Exhibition in Las Vegas. 

The ruling also contains guidelines for pilot training and clarifies operating rules. For example, aside from a new type of powered-lift pilot certification, the ruling includes an expanded ability for operators to train and qualify pilots using flight simulation training devices. 


The operating rules are tailored specifically to powered-lift vehicles and, as such, allow eVTOLs the flexibility to switch between helicopter and airplane rules as needed.

Joby, Archer, Beta Technologies, and Wisk Aero — which are building aircraft for urban air taxi networks, defense, cargo, and medical logistics — have worked closely with the FAA since 2022 to develop this new set of rules for training, operations, and maintenance. 

“[The ruling] aligns with all the hopes that we had been designing for,” Greg Bowles, head of government affairs at Joby Aviation, told TechCrunch. “So the way that we’ve designed the operating system, the cockpit we’ve designed, the way we’ve designed for energy reserves, all align with the FAA rule.”

Bowles also noted that Joby will be able to begin commercial operations once it receives its type certification from the FAA, which means the design of the startup’s aircraft and other major aircraft components meet required safety and airworthiness standards. Joby is in the fourth of five stages of type certification, and recently received a $500 million capital injection from Toyota to help it get across the finish line.


Kevin Bacon, Kate McKinnon, and other creatives warn of ‘unjust’ AI threat


Thousands of creatives, including famous actors like Kevin Bacon and Kate McKinnon, along with authors and musicians, have signed a statement warning that the unpermitted use of copyrighted materials to train AI models threatens the people who made those creative works. So far, 11,500 names are on the list of signatories.

Here is the one-sentence statement:

“The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.”

The statement was published by Fairly Trained, a group advocating for fair training data use by AI companies. Fairly Trained CEO Ed Newton-Rex told The Guardian that generative AI companies need “people, compute, and data” to build their models, and while they spend “vast sums” on the former two, they “expect to take the third – training data – for free.” Newton-Rex founded Fairly Trained after he quit Stability AI, accusing generative AI of “exploiting creators.”

There are also some notable names not appearing among the signatories. Scarlett Johansson, who had a high-profile spat with OpenAI after accusations it modeled GPT-4o’s voice after her, isn’t on the list. Neither are actors like Dame Judi Dench and John Cena, who signed up to have Meta AI’s voice chat system replicate them.


Copyright © 2024 WordupNews.com