

Snapchat are opening up the Lens Studio tool to users, enabling anyone to create their own AR face Lenses for the first time, in a way similar to the World Lenses that had previously been offered. For many people who wish to experiment with creating immersive content, Snapchat Lenses can offer a simple and easily accessible way to get to grips with the idea. Like the custom World Lenses that had previously been made available, user-created face Lenses can be sent to Snapchat for approval before being made available for use by other Snapchat users.

Seven templates for users to experiment with will be included in Lens Studio to start with:

Face Paint: Designed for Lenses that show off make-up, costumes and accessories. Concentrates on face substitution, mapping a face to create facial art.
Photo: Similar to Face Paint, but only requires the user to have a single, face-on photo of the subject.
Distort: Stretch a face in all directions, make bulging eyes or crooked noses.
Trigger: Different facial movements trigger various actions, such as smiling, blinking or raising your eyebrows.
2D Objects: Attach 2D images to a head shot, such as adding sprite ears or glasses. Experience with scripting or 3D animation can be helpful.

Social media lives and dies on its content. In this post, I want to provide a quick behind-the-scenes look at how the cloning Lens works and how we leveraged state-of-the-art AI models with Snap Lens Studio and SnapML to create the cloning effect.

The first step of any AI/ML problem is to define the task. In this case, we wanted to look at an image, or a section of an image, and separate the foreground object from the background. This task is generally known as saliency detection and is closely related to image segmentation (with two classes: background and foreground). Using our own expertise in designing small, efficient neural networks, along with some inspiration from the impressive U²-Net model, we created a saliency model that produces high-quality segmentations and also fits well under Lens Studio's 10 MB asset limit (a rough sketch of this kind of network follows at the end of this section).

With a model in hand, we needed to build the rest of the cloning experience around it in Lens Studio. We started with a list of user experience requirements: manipulate a cropping box with just one finger, and move the cloned object in either 2D or 3D space.

To achieve this experience, we used a fairly complicated render pipeline in Lens Studio:

A perspective camera with device tracking builds a 3D map of the world, identifies horizontal planes, and renders what it sees to a render target.
An orthographic camera starts with the image from the world camera and overlays the UI onto it before rendering to a separate render target that is used for Live views; this way, the UI won't be seen when watching a recorded Snap.
When the box is tapped, a screen crop texture crops the capture target and copies the frame, saving it to another texture that is used to create our cloned object.
The freeze frame texture is then fed into our SnapML model to create an output texture, which becomes the alpha channel masking out the background around our object.
A custom material graph combines the original freeze frame texture with the alpha mask to produce a texture that contains only the pixels belonging to our cloned object (see the second sketch below).
That material is then applied to a Screen Image (in 2D mode) or a Mesh in Snap's Cutout prefab (in 3D mode).
Touch and manipulation controls then allow the user to move the cloned sticker around the scene.
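The post doesn't include the architecture of the saliency model itself, so the following is only a minimal sketch of what a compact, U²-Net-inspired encoder-decoder might look like, written in PyTorch. Every layer width, depth and name here (TinySaliencyNet, ConvBlock, base=16) is an assumption chosen to keep the parameter count far below a 10 MB budget; it is not Snap's actual model.

```python
# Illustrative sketch only: a compact encoder-decoder saliency network loosely
# in the spirit of U^2-Net. All layer sizes are assumptions, not Snap's model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBlock(nn.Module):
    """3x3 convolution + batch norm + ReLU, optionally dilated."""
    def __init__(self, in_ch, out_ch, dilation=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.conv(x)))


class TinySaliencyNet(nn.Module):
    """Downsample, widen the receptive field with dilated convs, then upsample
    with skip connections and predict a single-channel foreground mask."""
    def __init__(self, base=16):
        super().__init__()
        self.enc1 = ConvBlock(3, base)
        self.enc2 = ConvBlock(base, base * 2)
        self.enc3 = ConvBlock(base * 2, base * 4)
        # Dilated convolutions enlarge context without further downsampling.
        self.mid = nn.Sequential(ConvBlock(base * 4, base * 4, dilation=2),
                                 ConvBlock(base * 4, base * 4, dilation=4))
        self.dec3 = ConvBlock(base * 8, base * 2)
        self.dec2 = ConvBlock(base * 4, base)
        self.dec1 = ConvBlock(base * 2, base)
        self.out = nn.Conv2d(base, 1, 1)  # per-pixel saliency logit

    def forward(self, x):
        e1 = self.enc1(x)                                    # full resolution
        e2 = self.enc2(F.max_pool2d(e1, 2))                  # 1/2 resolution
        e3 = self.enc3(F.max_pool2d(e2, 2))                  # 1/4 resolution
        m = self.mid(e3)
        d3 = self.dec3(torch.cat([m, e3], dim=1))
        d3 = F.interpolate(d3, scale_factor=2, mode="bilinear", align_corners=False)
        d2 = self.dec2(torch.cat([d3, e2], dim=1))
        d2 = F.interpolate(d2, scale_factor=2, mode="bilinear", align_corners=False)
        d1 = self.dec1(torch.cat([d2, e1], dim=1))
        return torch.sigmoid(self.out(d1))                   # foreground probability map


if __name__ == "__main__":
    net = TinySaliencyNet()
    params = sum(p.numel() for p in net.parameters())
    print(f"{params:,} parameters (~{params * 4 / 1e6:.1f} MB as float32)")
    mask = net(torch.rand(1, 3, 256, 256))
    print(mask.shape)  # torch.Size([1, 1, 256, 256])
```

At these widths the network has roughly 150k parameters, well under a megabyte as float32. In practice a model like this would be trained on a saliency or two-class segmentation dataset and then exported (for example to ONNX) before being imported as a SnapML asset.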

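The material-graph compositing step is built visually in Lens Studio rather than in code, but the operation it performs is simple enough to show as a CPU-side analogue. The sketch below is purely illustrative: the function and array names (cutout_rgba, freeze_frame, alpha_mask) are hypothetical, and in the real Lens this combination happens on the GPU per fragment.

```python
# Conceptual NumPy analogue of the material graph described above: combine the
# captured freeze frame with the model-predicted mask to produce an RGBA cutout.
import numpy as np


def cutout_rgba(freeze_frame: np.ndarray, alpha_mask: np.ndarray,
                threshold: float = 0.5) -> np.ndarray:
    """freeze_frame: (H, W, 3) uint8 image captured when the user taps the box.
    alpha_mask:   (H, W) float array in [0, 1] from the saliency model.
    Returns an (H, W, 4) RGBA image whose alpha channel hides the background."""
    alpha = (alpha_mask >= threshold).astype(np.uint8) * 255
    # Alternatively, keep soft edges instead of a hard threshold:
    # alpha = (np.clip(alpha_mask, 0.0, 1.0) * 255).astype(np.uint8)
    return np.dstack([freeze_frame, alpha])


if __name__ == "__main__":
    frame = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
    mask = np.random.rand(256, 256)
    rgba = cutout_rgba(frame, mask)
    print(rgba.shape, rgba.dtype)  # (256, 256, 4) uint8
```

Whether the alpha is thresholded hard or kept soft is a design choice; soft edges blend the cloned sticker more naturally into the scene, while a hard cutoff gives a crisper sticker-like outline.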