On the chart-topping photo-editing app, women discover their selfies have been turned into AI-generated nudes?

Xinzhiyuan Report

Editor: Xin Peng

[Xinzhiyuan Introduction] AI-generated social media avatars are the latest trend, but did you know that some AI apps will not only give you the flattering portraits you asked for, but may also generate nude photos of you?

For many internet trendsetters, having an AI-generated profile picture on their social platforms is a cool thing.

However, professional image-generation software is either too expensive or too hard for newcomers to learn, which puts many people off.

Now suppose there were an affordable, easy-to-use generative app that could fulfill that wish. But if the software might also spontaneously generate nude photos of you, would you still use it?

The photo-editing app in question is called Lensa AI, and it launched in 2018. Last month it became all the rage after releasing its ‘Magic Avatar’ feature, which has users upload 10 photos and then generates portraits of them in various digital art styles using Stable Diffusion.
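Lensa’s actual pipeline is not public (it reportedly fine-tunes a model on each user’s uploads), but the general idea of restyling a photo with Stable Diffusion can be sketched with the open-source diffusers library. The checkpoint, prompt and parameters below are illustrative assumptions, not Lensa’s real settings.

```python
# Illustrative sketch only: restyling a user photo with Stable Diffusion img2img
# via Hugging Face diffusers. Lensa's real "Magic Avatar" pipeline is not public;
# the model, prompt and strength here are assumptions for demonstration.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # publicly available SD 1.5 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The user's uploaded selfie serves as the reference image.
init_image = Image.open("selfie.jpg").convert("RGB").resize((512, 512))

result = pipe(
    prompt="digital art portrait, fantasy style, highly detailed",
    image=init_image,
    strength=0.6,        # how far the output may drift from the input photo
    guidance_scale=7.5,  # how strongly the prompt steers generation
)
result.images[0].save("avatar.png")
```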

Earlier this month, the app topped the iOS App Store’s ‘Photo & Video’ category.

However, this explosively popular AI art app has since been dragged into controversy over ‘maliciously generating nude photos of users’.

Overly clever AI: nude photos generated without any intimate input

Lensa AI’s function is to artistically transform the reference photos that users upload. But several users reported that the machine learning system had, without being asked, generated nude photos of them.

‘I put 20 carefully selected photos into Lensa, and along with those 20 stylized selfies it came back with a bunch of AI-generated nudes,’ one user wrote on Twitter. ‘To be clear, none of the photos I submitted contained nudity, which the app specifically forbids!’

More than one person has received this kind of ‘surprise’. Dozens of people have complained online, most of them women.

They said Lensa automatically generated sexualized or intimate images of them, even though the photos they uploaded contained no intimate content at all.

While it’s unclear how often Lensa generates nude images without prompting, multiple users have reported that this is the case for them.

One user posted on Twitter: ‘It’s weird that I didn’t submit any intimate photos, but it ended up generating nude pics, WTF?’

Of greater concern to users is whether Lensa somehow accessed photos from their phones that hadn’t been uploaded, and whether the app’s privacy policy allows the data it generates to be used by third-party companies such as Google Cloud and AWS.

‘Lensa users: have any of you received overly sexy images of yourself in your avatar packs?’ one troubled user wrote on Twitter.

‘I received a fully nude image of myself and I’m worried now. I’m concerned about whether the app has access to other images on my phone and whether it has the right to use them.’

While Andrey Usoltsev, CEO and co-founder of Lensa’s parent company Prisma Labs, said Lensa ‘can’t accidentally produce’ such images, he acknowledged that the AI could be ‘deliberately induced’ into generating nude images.

‘To enhance what Lensa does, we’re building NSFW filters. They will effectively blur any images detected as explicit. It is at the user’s discretion whether to open or save such images.’
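Prisma Labs has not published how this filter works, so the sketch below only illustrates the ‘blur anything flagged’ idea; looks_explicit is a hypothetical placeholder for whatever NSFW classifier the app actually uses.

```python
# Sketch of the "blur anything flagged as NSFW" behavior described above.
# `looks_explicit` is a hypothetical stand-in for Lensa's unpublished classifier;
# only the blur-and-let-the-user-decide step is illustrated.
from PIL import Image, ImageFilter

def looks_explicit(image: Image.Image) -> bool:
    """Hypothetical NSFW check - replace with a real classifier."""
    raise NotImplementedError("plug in a real NSFW classifier here")

def deliver_avatar(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGB")
    if looks_explicit(img):
        # Heavily blur flagged images so the user only sees them if they choose to.
        img = img.filter(ImageFilter.GaussianBlur(radius=30))
    img.save(dst_path)
```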

Too-wild AI: explicit images are all too easy to make

To verify whether Lensa generates images it shouldn’t, foreign media ran a controlled experiment, creating two sets of Lensa portraits:

The first set used 15 photos of a well-known actor.

The second set used the same 15 photos, plus 5 more in which the actor’s head had been Photoshopped onto a model’s body.

The first set of results matched the style of the Lensa AI portraits seen before. The second set, however, was far more explicit than expected.

It turns out that once intimate photos are fed into Lensa, the AI appears to treat them as permission to generate pornographic content, and the explicitness of the results goes well beyond what the NSFW filter catches.

What’s even more frightening is that of the 100 generated images, 11 were high-quality nudes rather than images with obvious traces of AI manipulation.

If this problem is not fixed, the trouble it causes will be endless.

Someone with bad intentions needs only a phone, the app and $7.99 to generate hundreds of realistic nude portraits with ease.

They can generate such images of almost anyone (at least, anyone whose photo they can get). Just imagine: as long as you have pictures of yourself on public social platforms, you run the risk of strangers using them to produce soft pornography.

The CEO’s explanation: it wasn’t us, we didn’t do it

Facing public concern, Usoltsev said such behavior would only happen if the AI were ‘tricked’, noting that this violates Lensa’s terms of use.

In a media interview, he said: ‘Our terms of use (Section 6) and Stability AI’s terms of service (prompt guidelines) expressly prohibit using this tool to engage in any harmful or harassing behavior.

The generation and widespread use of such content could lead to legal action, as the sharing of explicit content and images without consent is a crime in both the US and UK.

We provide guidelines that clearly define our image requirements, and explicit depictions of any kind are strictly prohibited. We want Lensa users to follow the guidelines to get the best results.’

At the same time, Usoltsev also explained why Lensa generates NSFW images at all: he said this behavior stems from Stability AI’s underlying technology.

‘The Stable Diffusion neural network is trained on vast amounts of unfiltered data from the internet. Neither we nor Stability AI could consciously apply any representational bias.

To be more precise, this unfiltered, man-made data introduces humanity’s existing biases into the model. The creators acknowledge that the model can be socially biased, and so do we.’

‘There is no doubt that there will be a need for a broader conversation around the use and regulation of artificial intelligence in the near future, and we are eager to be part of it.’

‘We want to provide all the necessary guidance and appropriate warnings so people have the best possible experience with the Magic Avatar feature,’ Usoltsev said. ‘However, any tool can become a weapon in the hands of someone determined to engage in harmful behavior.’

References:

https://ift.tt/edNQiAB

https://ift.tt/e79Dasi

https://ift.tt/npTt2vD



This article is reproduced from: https://finance.sina.com.cn/tech/csj/2022-12-11/doc-imxwhsxu8721640.shtml