Google Pixel 3’s Photobooth Mode To Detect When You Are Kissing Someone In A Selfie
Google Pixel Photobooth’s new update recognizes five key facial expressions that should trigger capture: smiles, tongue-out, kissy/duck face, puffy cheeks, and surprise.
Google has taken a step ahead by updating its Pixel camera app’s Photobooth mode. The new update introduces some interesting features: it now recognizes five key facial expressions that should trigger capture, namely smiles, tongue-out, kissy/duck face, puffy cheeks, and surprise. It will even detect when you are kissing someone in a selfie.
Prior to this kissing feature, Google had updated other features, including Night Sight and Super Res Zoom, which were widely liked by consumers. The AI kiss detection works in Photobooth on version 6.2 of the Google Camera app. You can download Google Pixel Camera version 6.2 from here.
Until now, Photobooth used artificial intelligence and machine learning to detect when the people in the frame (one or more) had their eyes open and showed one of five expressions: tongue out, smiling, puffy cheeks, a kissy/duck face, or a look of surprise. When one of these occurred, it automatically took the shot.
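In rough terms, that capture trigger can be sketched as below. The Face structure, score fields, and thresholds are illustrative assumptions for the sketch, not Google’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Face:
    eyes_open: float                 # 0..1 confidence that both eyes are open
    expressions: dict[str, float] = field(default_factory=dict)  # name -> 0..1

EXPRESSIONS = ("smile", "tongue_out", "kissy_face", "puffy_cheeks", "surprise")
EYES_OPEN_THRESHOLD = 0.8            # hypothetical cut-off
EXPRESSION_THRESHOLD = 0.7           # hypothetical cut-off

def should_capture(faces: list[Face]) -> bool:
    """Fire the shutter only when every face has open eyes and at least
    one face shows one of the five trigger expressions."""
    if not faces:
        return False
    if any(f.eyes_open < EYES_OPEN_THRESHOLD for f in faces):
        return False
    return any(
        f.expressions.get(e, 0.0) >= EXPRESSION_THRESHOLD
        for f in faces
        for e in EXPRESSIONS
    )

# A solo subject making a kissy face with eyes open triggers a capture.
print(should_capture([Face(eyes_open=0.95, expressions={"kissy_face": 0.9})]))  # True
```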
The update makes it easier to shoot selfies that capture you at your best, whether solo, as a couple, or in a group. Once you enter Photobooth mode and tap the shutter button, the camera automatically takes a photo when it is steady and sees that the subjects have good expressions with their eyes open.
As part of the Pixel Camera update, the app also helps users know when they’re looking their best for a photo. A white bar on the left side of the display responds to users’ actions: when everyone is looking at the camera and making a nice face, it expands to the full width of the display and the phone takes a picture.
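As a loose illustration, the bar can be thought of as a frame-quality score mapped onto the screen width, with the shutter firing once the bar is full. The score range, display width, and threshold here are assumed for the sketch.

```python
DISPLAY_WIDTH_PX = 1080        # hypothetical screen width in pixels
CAPTURE_THRESHOLD = 1.0        # the bar must span the full display to capture

def bar_width(frame_score: float) -> int:
    """Map a 0..1 frame-quality score onto the feedback bar's pixel width."""
    clamped = min(max(frame_score, 0.0), 1.0)
    return int(clamped * DISPLAY_WIDTH_PX)

def update_feedback(frame_score: float) -> bool:
    """Redraw the bar; return True when the shutter should fire."""
    width = bar_width(frame_score)
    # draw_bar(width) would update the on-screen bar here (placeholder).
    return width >= int(CAPTURE_THRESHOLD * DISPLAY_WIDTH_PX)

print(update_feedback(0.4))    # False: bar only partially filled
print(update_feedback(1.0))    # True: bar spans the display, photo taken
```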
People who kiss don’t necessarily look toward the camera or keep their eyes open, so a separate model had to be trained for kiss detection. Google’s engineers have explained how each frame is scored: more weight is given to action in the foreground than in the background, and a buffer is used to check whether one of the next frames scores higher than a previous one. That way, only the highest-scoring frames are saved.
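A minimal sketch of that idea, assuming face-area weighting for the foreground and a fixed-size look-ahead buffer (neither detail confirmed by Google), might look like this:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class ScoredFrame:
    image: object                # stand-in for the actual pixel data
    score: float                 # aggregate 0..1 quality score for the frame

def frame_score(faces: list[tuple[float, float]]) -> float:
    """Combine per-face (expression_score, face_area) pairs, weighting each
    score by relative area so foreground faces count more than background."""
    total_area = sum(area for _, area in faces) or 1.0
    return sum(score * (area / total_area) for score, area in faces)

BUFFER_SIZE = 5                  # hypothetical look-ahead window

def best_frames(stream, keep_threshold=0.8):
    """Buffer recent frames and yield only local score maxima above a
    threshold, so that only the highest-scoring frames get saved."""
    buf: deque[ScoredFrame] = deque(maxlen=BUFFER_SIZE)
    for frame in stream:
        buf.append(frame)
        if len(buf) == BUFFER_SIZE:
            candidate = buf[BUFFER_SIZE // 2]        # middle of the window
            if (candidate.score >= keep_threshold
                    and candidate.score == max(f.score for f in buf)):
                yield candidate

# A large foreground face dominates the weighted score.
print(frame_score([(0.9, 400.0), (0.2, 100.0)]))     # 0.76

scores = [0.2, 0.5, 0.9, 0.6, 0.3, 0.85, 0.4, 0.1, 0.2, 0.3]
stream = (ScoredFrame(image=None, score=s) for s in scores)
print([f.score for f in best_frames(stream)])        # [0.9, 0.85]
```

Comparing each frame against its neighbors before committing is what lets the camera skip a good frame when an even better one arrives a moment later.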
“We’re excited by the possibilities of automatic photography on camera phones,” Google’s engineers wrote in a blog post.