Technology

Midjourney launches AI image editor: how to use it



Midjourney, the hit AI image generation startup founded and run by former Magic Leap engineer David Holz, is wowing users with a new feature unveiled last night: AI image editing.

As a good portion of Midjourney’s 20 million+ users (including some of us at VentureBeat) likely know, Midjourney previously allowed users to upload their own images gathered outside of the service to its alpha web interface and/or Discord server to serve as references for its AI image generation diffusion models — the latest one being Midjourney 6.1. After receiving an uploaded reference image, the Midjourney AI model is able to generate new images based on the user’s provided file.

However, this reference feature didn’t actually make any alterations to the source image — merely using it as a kind of loose starting point.

Now, with Midjourney’s new “Edit” feature, users can upload any image of their choosing and actually edit sections of it with AI, or change the style and texture of it from the source to something totally different, such as turning a vintage photograph into anime — while preserving most of the image’s subjects and objects and spatial relationships.

It even works on doodles and hand drawings that the user submits, turning scribbles into full art pieces in seconds.

Midjourney posted a video demo showing how to use the new features.

VentureBeat uses Midjourney and other AI tools to create content for our website, social channels and other formats.

Note that despite its popularity, Midjourney is one of several AI companies being sued by a class action of human artists for alleged copyright infringement due to its scraping of human-created works without express permission, authorization, consent, or compensation to train its models. The case remains in court for now.

The Midjourney Image Editor appears to work only with the company’s latest AI model, Midjourney 6.1, which makes sense.

In a message to Midjourney’s Discord community, Holz wrote that: “All of these things are very new, and we want to give the community and human moderation staff time to ease into it gently…”

As a consequence, the new Midjourney Editor feature is for now restricted to users who have generated more than 10,000 images with the service, those with annual paid memberships, and those who have been a subscriber for a year or more.

However, if you fit those criteria, you can use the new Midjourney Image Editor by following the directions below.

How to find and start using Midjourney’s Image Editor

The new Midjourney Image Editor is only available on the alpha web interface, available at alpha.midjourney.com.

Once there and signed in, the qualifying user should see a new button along the left sidebar menu about halfway down with an icon showing a small pencil on a pad. Hovering over it will show that it reads “Edit” (or the text will automatically display on its own persistently if your browser window is wide enough).

Clicking on this should pull up the new Editor screen, which should prompt the user with two major options: “Edit from URL” and “Edit Uploaded Image.”

The latter requires the user to have a file saved on their machine, whereas the former can accept a wide range of images hosted on various websites such as Wikimedia Commons, if the user simply pastes in the correct link to the web-hosted image. For purposes of this article, I used a URL to an image of a concept car from Wikimedia Commons.

Once a copy of the file is uploaded to Midjourney via the URL or the user’s own file repository, the image should appear in the middle of the new editor screen.

You’ll note there are a wide variety of options and various buttons on the left inner sidebar menu that users can select to modify the image with Midjourney 6.1, including:

1. “Erase,” which allows the user to remove and paint over portions of the image with AI using a brush and a text prompt.

2. “Move/Resize,” which allows the user to move the image around the virtual canvas and extend its edges with new matching AI imagery.

3. “Restore,” the inverse of Erase, which lets the user retain any portions of the source image that they accidentally painted over with the Erase brush.

The user can control the brush size with a slider on the left sidebar as well as the “scale” of the image, zooming in or out, and the aspect ratio itself with more presets below that.

There’s also a “Suggest Prompt” button which, as Midjourney explains via helpful hover-over text, is designed to aid the user in generating a prompt describing the image they’ve just uploaded — in case they want to alter that prompt or use it to generate a whole new similar image. The suggested prompt text should automatically appear in the prompt entry box/bar at the top of the screen.

Looking at our concept car example, I went ahead and used the Erase brush tool on the driver and used the text prompt entry bar at the top of the Midjourney web interface to replace the driver with a “flaming skeleton driving.” After I typed my text prompt in the top entry bar/box, I hit the button marked “Submit Edit” or enter on my keyboard to apply the changes.

As with Midjourney’s raw image generator, the Editor creates four versions automatically for each text prompt — visible on the right sidebar under the “Submit” button.

Here is the best result from my experiment.

The user can then choose to keep making new changes to this resulting image, upscale it with Midjourney’s built-in upscaler via a button below, or download it as is.

Retexturing turns images into new adaptations in different styles

In addition, the discerning reader and Midjourney user will note there is another whole set of options for the Editor, found by clicking the tab marked “Retexture” on the left sidebar.

As Midjourney itself explains in the left sidebar after clicking this option: “Retexture will change the contents of the input image while trying to preserve the original structure. For good results, avoid using prompts that are incompatible with the general structure of the image.”

The Retexture screen has far less going on than the regular Edit screen. In fact, basically the only option is to use the prompt text entry bar/box at the top of the screen to spell out what kind of retexturing you want done to the source image you provided.

After entering this, the user can hit “Submit Retexture” and voilà, Midjourney will use AI to apply the new texture and adapt the image according to the user’s prompt, again generating four versions for them to choose from.

In my case, I tried a bunch of different styles including anime, cave painting, colored sand, grotesque ooze, and cyberpunk, among others. One cautionary note from my limited tests so far — the retexturing feature does appear to warp and remove some detail from the resulting image, as well as gender-swap the subjects and add extraneous new details. However, this is part of the fun of using Midjourney or other generative AI creative tools — seeing what the model spits out based on your guidance!

Warm reception among AI image creators on X

The AI image and art community on the social network X applauded Midjourney’s new editor — which had been rumored for several weeks. Already, some of the leading AI creators have tried it out and posted their examples, many of which are impressive.

If you’re a Midjourney user who meets the criteria outlined above, go ahead and log in and try it out! Let me know your thoughts: carl.franzen@venturebeat.com. Midjourney has also been open about its plans to launch a 3D or video editor, which may come later this year.



Some Samsung Exynos chips have a severe security flaw

Some Samsung smartphones, powered by the company’s Exynos chipsets, have a high-severity security flaw. The vulnerability can allow threat actors to gain elevated access privileges and embed malware.

Samsung smartphones with certain Exynos SoCs have a security flaw

Samsung designs and builds its own Exynos SoC (System on a Chip). These chipsets usually power entry-level and mid-range Android smartphones. Some Exynos chipsets are also embedded in wearable devices.

Cybersecurity researchers from Google’s Threat Analysis Group (TAG) have reportedly discovered a security flaw inside some of the Exynos chips. The advisory about the vulnerability mentions it is being tracked as CVE-2024-44068. It has a severity rating of 8.1, which translates to “high severity”.

Specifically, Samsung Exynos mobile processor versions 9820, 9825, 980, 990, 850, and W920 are impacted. Attempting to explain the security flaw, TAG stated: “This 0-day exploit is part of an EoP chain. The actor can execute arbitrary code in a privileged camera-server process. The exploit also renamed the process ‘[email protected],’ probably for anti-forensic purposes.”

How to stay safe from this security vulnerability

As stated by Google’s research team, the Samsung Exynos chipsets suffer from a “0-day” exploit. Moreover, the researchers have cautioned that the vulnerability is being exploited in the wild. If that’s not concerning enough, attackers may combine this flaw with other exploits.

The impacted Samsung Exynos chipsets are powering the Galaxy S10 series, the Galaxy Note 10 and 10+, the Galaxy S20 series, as well as the Samsung Galaxy A51 5G and Samsung Galaxy A71 5G smartphones. In the wearable space, the Exynos W920 is embedded inside a few Samsung Galaxy Watches.

Google’s TAG security team alerted Samsung about the vulnerability earlier this year. Samsung addressed the vulnerability on October 7 with a patch. The tech giant even issued a security advisory. To stay protected from this security flaw, Samsung Galaxy smartphone and Galaxy Watch users must install the latest security updates.

Bluesky’s upcoming premium plan won’t give paid users special treatment

Bluesky has revealed how it plans to start making money without necessarily having to rely on ads. The platform will remain free to use for everyone, though it’s working on a premium subscription that will provide access to profile customization tools (remember when Myspace offered that for free?) and higher-quality video uploads.

One thing that you won’t get as a paid user, though, is any preferential treatment. Unlike certain other social platforms, Bluesky won’t boost the visibility of premium members’ posts. Nor will they get any kind of blue check, according to chief operating officer Rose Wang.

In addition, Bluesky is planning a tip jar of sorts for creators. “We’re proud of our vibrant community of creators, including artists, writers, developers and more, and we want to establish a voluntary monetization path for them as well,” it said in a blog post. “Part of our plan includes building payment services for people to support their favorite creators and projects.” Bluesky will reveal more details down the line, though it’s not clear whether the platform plans to take a cut of any such payments.

Bluesky revealed its initial monetization plans in an announcement of its Series A funding round. It has raised $15 million from investors. Even though the lead investor in this round is Web3 VC company Blockchain Capital, Bluesky says it “will not hyperfinancialize the social experience (through tokens, crypto trading, NFTs, etc).”

“Bluesky is powered by a 20-person core team, moderators, and support agents,” Wang wrote. “Our biggest costs are team and infrastructure. Subscription revenue helps us improve the app, grow the developer ecosystem and gives us time to explore business models beyond traditional ads.”

The platform now has more than 13 million users, with many new users coming from X following that service’s temporary ban in Brazil. (Analysts at Appfigures estimate that 3.6 million Bluesky app downloads came from Brazil, around 36 percent of the total figure.) Others made the switch after X made certain changes to its platform, including a revamp of the block feature.

Meta just beat Google and Apple in the race to put powerful AI on phones



Meta Platforms has created smaller versions of its Llama artificial intelligence models that can run on smartphones and tablets, opening new possibilities for AI beyond data centers.

The company announced compressed versions of its Llama 3.2 1B and 3B models today that run up to four times faster while using less than half the memory of earlier versions. These smaller models perform nearly as well as their larger counterparts, according to Meta’s testing.

The advancement uses a compression technique called quantization, which simplifies the mathematical calculations that power AI models. Meta combined two methods: Quantization-Aware Training with LoRA adaptors (QLoRA) to maintain accuracy, and SpinQuant to improve portability.
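Meta’s actual QLoRA and SpinQuant pipelines are more involved, but the core idea of quantization can be illustrated with a minimal sketch: store weights as 8-bit integers plus a scale factor instead of 32-bit floats. The function names and the toy 4×4 weight matrix below are illustrative assumptions, not Meta’s code.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization: map float32 weights onto
    the int8 range [-127, 127] using a single scale factor."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 form."""
    return q.astype(np.float32) * scale

# Toy example: int8 storage is 4x smaller than float32.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
```

The rounding error per weight is bounded by half the scale factor, which is why aggressively quantized models can still perform close to their full-precision counterparts; quantization-aware training with LoRA adaptors then fine-tunes small added weights to recover much of the remaining accuracy.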

This technical achievement solves a key problem: running advanced AI without massive computing power. Until now, sophisticated AI models required data centers and specialized hardware.

Tests on OnePlus 12 Android phones showed the compressed models were 56% smaller and used 41% less memory while processing text more than twice as fast. The models can handle texts up to 8,000 characters, enough for most mobile apps.

Meta’s compressed AI models (SpinQuant and QLoRA) show dramatic improvements in speed and efficiency compared to standard versions when tested on Android phones. The smaller models run up to four times faster while using half the memory. (Credit: Meta)

Tech giants race to define AI’s mobile future

Meta’s release intensifies a strategic battle among tech giants to control how AI runs on mobile devices. While Google and Apple take careful, controlled approaches to mobile AI — keeping it tightly integrated with their operating systems — Meta’s strategy is markedly different.

By open-sourcing these compressed models and partnering with chip makers Qualcomm and MediaTek, Meta bypasses traditional platform gatekeepers. Developers can build AI applications without waiting for Google’s Android updates or Apple’s iOS features. This move echoes the early days of mobile apps, when open platforms dramatically accelerated innovation.

The partnerships with Qualcomm and MediaTek are particularly significant. These companies power most of the world’s Android phones, including devices in emerging markets where Meta sees growth potential. By optimizing its models for these widely-used processors, Meta ensures its AI can run efficiently on phones across different price points — not just premium devices.

The decision to distribute through both Meta’s Llama website and Hugging Face, the increasingly influential AI model hub, shows Meta’s commitment to reaching developers where they already work. This dual distribution strategy could help Meta’s compressed models become the de facto standard for mobile AI development, much as TensorFlow and PyTorch became standards for machine learning.

The future of AI in your pocket

Meta’s announcement today points to a larger shift in artificial intelligence: the move from centralized to personal computing. While cloud-based AI will continue to handle complex tasks, these new models suggest a future where phones can process sensitive information privately and quickly.

The timing is significant. Tech companies face mounting pressure over data collection and AI transparency. Meta’s approach — making these tools open and running them directly on phones — addresses both concerns. Your phone, not a distant server, could soon handle tasks like document summarization, text analysis, and creative writing.

This mirrors other pivotal shifts in computing. Just as processing power moved from mainframes to personal computers, and computing moved from desktops to smartphones, AI appears ready for its own transition to personal devices. Meta’s bet is that developers will embrace this change, creating applications that blend the convenience of mobile apps with the intelligence of AI.

Success isn’t guaranteed. These models still need powerful phones to run well. Developers must weigh the benefits of privacy against the raw power of cloud computing. And Meta’s competitors, particularly Apple and Google, have their own visions for AI’s future on phones.

But one thing is clear: AI is breaking free from the data center, one phone at a time.


Bluesky raises $15M Series A, plans to launch subscriptions

Decentralized social app Bluesky announced on Thursday that it has raised a $15 million Series A round, following its $8 million seed raise last year. This funding comes as Bluesky sees increased growth, in part from X users who are troubled by recent changes to the block feature, as well as the move to allow third parties to train AI on users’ public posts. Within the last month alone, Bluesky has added around 3 million new users, bringing its total user base to about 13 million.

Bluesky was initially incubated inside Twitter as former CEO Jack Dorsey’s vision for what the future of social media should look like. But the social network and developer of the open source AT Protocol is no longer affiliated with Dorsey, who left the startup’s board earlier this year. Still, many of the initial goals for Bluesky remain consistent: like Mastodon, Bluesky’s AT Protocol is decentralized, meaning that individual people will be able to set up their own social servers and apps, and people outside of the company have transparency into how and what is being developed.

“With this fundraise, we will continue supporting and growing Bluesky’s community, investing in Trust and Safety, and supporting the ATmosphere developer ecosystem,” Bluesky’s blog announcement reads. “In addition, we will begin developing a subscription model for features like higher quality video uploads or profile customizations like colors and avatar frames.”

The Bluesky team has been quick to tell users that this paid tier will not be like X, where subscribers get exclusive blue check marks and algorithmic up-ranking, making their posts more visible.

“The way twitter did subscriptions was basically a blueprint for how bluesky shouldn’t do them,” Bluesky developer Paul Frazee posted. “‘Pay to win’ features like getting visibility or having a bluecheck because youre a subscriber is just wrong, and ruins the network for everyone.”

The Series A round is led by Blockchain Capital with participation from Alumni Ventures, True Ventures, SevenX, Darkmode’s Amir Shevat, and Kubernetes co-creator Joe Beda. The presence of a crypto-focused firm might alarm skeptics, especially since CEO Jay Graber used to be a software engineer for a crypto company, Zcash, but Bluesky has proactively assured users that the company is not pivoting to web3.

“Our lead, Blockchain Capital, shares our philosophy that technology should serve the user, not the reverse — the technology being used should never come at the expense of the user experience,” Bluesky said in its announcement. “This does not change the fact that the Bluesky app and the AT Protocol do not use blockchains or cryptocurrency, and we will not hyperfinancialize the social experience (through tokens, crypto trading, NFTs, etc.)”

Graber also announced that Kinjal Shah, a General Partner at Blockchain Capital, will be joining the board of Bluesky.

“[Shah] shares our vision for a social media ecosystem that empowers users and supports developer freedom, and it’s been a great experience working with her. With her support, we are well positioned to grow,” Graber wrote.

Bluesky is working on a subscription, but it won’t give you a blue check

Bluesky is working on a premium subscription that will add features like higher-quality video uploads and some profile customization options. Unlike the premium subscription offered by X, however, Bluesky’s paid tier won’t boost the visibility of your posts, nor will it give your account a “verified” status. Bluesky, in a post on its blog, also notes that the service “will always be free to use.”

“Subscription revenue helps us improve the app, grow the developer ecosystem, and gives us time to explore business models beyond traditional ads,” Bluesky chief operating officer Rose Wang wrote in a post. “Paid subscribers won’t get special treatment elsewhere in the app, like upranking premium accounts or blue checks next to their names.”

This cheap mini PC packs an Intel Core i3, four 10GbE and 2.5GbE Ethernet ports and can even run Windows — so could it be the perfect home web server?

The iKOOLCORE R2 Max is a compact yet powerful mini PC that comes with either the Intel N100 or the more powerful Intel Core i3-N305, making it capable of handling various tasks such as content creation, virtualization, and office work.

Despite its small size, measuring just 15.7 x 11.8 x 4 cm, the R2 Max is well-equipped with four high-speed Ethernet ports – two 10GbE ports powered by Marvell AQC113C-B1-C chips and two 2.5GbE ports running on Intel i226-v controllers.

Copyright © 2024 WordupNews.com