Execs from Instagram and Pinterest have been ordered to appear before an inquest into the death of a 14-year-old girl, Molly Russell. Russell took her own life after using the apps to view extensive material on self-harm, depression, and suicide.
It’s the latest development in growing concerns about the impact of social media platforms on the mental health of teenagers, girls especially …
Background
Concerns about the impact of social networks on the mental health of teenagers were crystalized last year, when an internal report carried out by Instagram concluded that it was harmful to as many as 20% of teenage girls using the app. Most worryingly of all, it was shown to increase the risk of suicide.
It can increase anxieties about physical attractiveness, social image, and money, and even increase suicide risk, according to Facebook’s own research […]
For the past three years, Facebook has been conducting studies into how its photo-sharing app affects its millions of young users. Repeatedly, the company’s researchers found that Instagram is harmful for a sizable percentage of them, most notably teenage girls.
“We make body image issues worse for one in three teen girls,” said one slide from 2019, summarizing research about teen girls who experience the issues.
“Teens blame Instagram for increases in the rate of anxiety and depression,” said another slide. “This reaction was unprompted and consistent across all groups.”
Among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the desire to kill themselves to Instagram, one presentation showed.
Instagram owner Meta said that the report only highlighted the worst-case scenarios, but the company subsequently “paused” its Instagram for Kids project and pledged to make the app healthier for teens.
Apple CEO Tim Cook is among those who have expressed concern about the potential harm technology can do to mental health.
Molly Russell inquest
BBC News reports on the latest development.
Almost five years after she took her own life, the inquest into the death of teenager Molly Russell is due to begin.
Molly, 14, killed herself in 2017 after viewing material about self-harm, suicide and depression, on social media sites such as Instagram and Pinterest […]
In the last six months of her life, Molly used her Instagram account up to 120 times a day, liking more than 11,000 pieces of content. She is thought to have used the image-sharing site Pinterest more than 15,000 times over the same period.
The coroner, Andrew Walker, has already been warned that some of the content is “pretty dreadful” and difficult even for adults to look at for extended periods of time […]
Meta, which owns Instagram, and Pinterest are officially taking part in the inquest, which is due to last two weeks. It will hear evidence from executives from both companies, after they were ordered by the coroner to appear in person.
Meta is likely to be questioned about a number of internal documents revealed by the former employee and whistleblower Frances Haugen. These include research carried out by the company into the impact of the platform on the mental health of young people.
While the inquest is taking place in the UK, many believe the case will receive close attention in the US and elsewhere. Matthew Bergman, a lawyer from the Social Media Victims Law Centre, says that Meta execs being questioned is an important development.
Regardless of the outcome, the fact that Meta senior personnel have been forced to testify in a proceeding like this one is a significant step toward accountability.
It’s not just Meta that is in the spotlight over this issue. An investigation last year revealed how TikTok’s algorithm can send people deeper and deeper into dark places.
Help is available
If you are considering self-harm, or would simply like someone to talk to, there are people ready to help. You do not need to be considering suicide to call.
The 988 Suicide and Crisis Lifeline is available 24 hours a day, seven days a week. You can either phone or text from anywhere in the US. You can also find mental health resources on the organization’s website.
In the UK, the Samaritans are also available 24/7. Call 116-123, or text SHOUT to 85258.
In other countries, Google “Suicide helpline” to find local help.
Photo: Max Bender/Unsplash
Learn How To Build Face Detection System
This article was published as a part of the Data Science Blogathon.
In previous articles, we explored different pre-trained models for image-classification tasks, built a fair understanding of object detection, and looked at the different architectures that can be used for solving object detection problems.
To summarize, object detection involves identifying objects along with their locations. In this article, we'll understand one of the use cases of object detection, which is face detection.
Table of Contents
1. Introduction to Face Detection
2. Application of Face Detection
3. Understanding the Problem Statement: WIDER FACE
4. Converting the annotations of the WIDER FACE dataset as per Detectron2
5. Steps to solve the Face Detection problems
Let's start…
Introduction to Face Detection
Similar to object detection, in face detection problems, the purpose is to identify faces from the image, along with their locations.
An image may contain a single face or multiple faces. In this article, we'll learn how to handle both of these tasks: detecting the face when there is a single face in the image, and doing the same when there are multiple faces in the image.
So by the end of this article, you’ll be able to build models that can detect either a single face or multiple faces from the image.
Let’s first look at some interesting applications of face detection.
Application of Face Detection
Face detection can be used to check if a person is wearing a mask or not. The steps for building such a model would be to detect the faces from the image first and then predict if the faces contain a mask or not.
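As a rough illustration of that two-stage idea (separate from the Detectron2 pipeline built later in this article), the sketch below uses OpenCV's bundled Haar-cascade face detector for stage one and assumes a hypothetical mask_classifier function, trained separately, that labels each face crop as mask or no mask.

import cv2

# stage 1: OpenCV's bundled frontal-face Haar cascade
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_and_classify(image_path, mask_classifier):
    # mask_classifier is a hypothetical, separately trained model that
    # takes a face crop and returns "mask" or "no mask" (stage 2)
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # (x, y, w, h) boxes for every detected face
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    results = []
    for (x, y, w, h) in faces:
        face_crop = img[y:y + h, x:x + w]
        results.append(((x, y, w, h), mask_classifier(face_crop)))
    return results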
This is just one of the use cases of face detection. Face Recognition can be another use case. Here, we have to identify the person present in the image. So, to solve this problem, we’ll first have to detect the faces from the image and then identify the faces.
There can be multiple other applications, like installing CCTV cameras that can identify people to prevent ATM robberies, or building a user-friendly check-in system at airports.
Understanding the Problem Statement: WIDER FACE
So in this section, we'll understand the problem statement that we have picked for this article.
The objective of this article is to build a model that can detect faces from images. By the end of this article, we'll be able to build a model that takes an image as input, detects the faces in it, and returns bounding boxes for the locations of those faces.
So we’ll be training the model, and we know that in order to train the model, we need the training data. So let’s look at the dataset that we’ll be using in order to train the face detection model.
In this article, we'll be using the WIDER FACE dataset, which is a benchmark dataset for face detection tasks. It contains more than 32,000 images comprising approximately 0.4 million faces, and the images and faces have a large variation in scale, occlusion, pose, and so on.
So scale basically refers to the image size, and based on the size of the image, the dataset is grouped into three scales: small, medium and large.
Similarly, occlusion has three categories: no occlusion, partial occlusion, and heavy occlusion. The WIDER FACE dataset contains images from a range of 60 event categories like parade, festival, football, tennis, and many more.
So the diversity of the dataset is huge, and these event categories are further divided into three sets based on the ease of detection: easy, medium, and hard.
In the dataset's listing of event categories, the categories marked in green belong to the easy set, those in red belong to the medium set, and the blue ones belong to the hard set. In this article, we'll be working with images belonging to the easy set.
Also, we'll be using the Detectron2 library to build the face detection model, since it provides state-of-the-art implementations for object detection tasks.
And by now, we know that in order to work with Detectron2, our dataset must follow a specific format. So in the next section, we'll first look at the format of the WIDER FACE dataset and then see how the dataset should be pre-processed in order to use it with the Detectron2 library.
Converting the Annotations of the WIDER FACE Dataset as per Detectron2
Till now we have understood the face detection problem and the dataset which we'll be using to build the face detection model. In the above section, we discussed that we'll be using Detectron2 to build the model.
So, let's quickly have an overview of Detectron2. Detectron2 is a platform for object detection and segmentation tasks, created by the research team of Facebook AI. It implements state-of-the-art architectures like Faster R-CNN, RetinaNet, and so on.
Now, let's look at the format of the dataset we currently have. This is what the annotations of the WIDER FACE dataset look like: we have images containing faces, and corresponding to these images, we have the annotations.
The first field represents the file name, then we have the number of faces present in that particular image; since this image has a single face, the value here is one. Finally, we have the bounding box coordinates for this image. The bounding boxes follow a fixed format: the x-min and y-min values, followed by the width and height of the bounding box. We also have values representing the different measures of variability discussed in the above section, including blur, expression, illumination, occlusion, and so on.
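For reference, a single entry in the raw annotation text file looks roughly like this (the file name and numbers below are illustrative, not taken from the dataset); the bounding box line lists x-min, y-min, width, and height, followed by the attribute flags such as blur, expression, illumination, occlusion, and pose:

0--Parade/0_Parade_marchingband_1_849.jpg
1
449 330 122 149 0 0 0 0 0 0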
So this is what the annotations of the WIDER FACE dataset look like. Now here is another example from the dataset.
So first of all, we have the image name, followed by the number of faces in the image (3 in this case), followed by the bounding box coordinates. The bounding box for each face is listed separately, so for this image, since we have three faces, there are three sets of bounding box values.
Now, the above is an annotation from the WIDER FACE dataset. In order to use this dataset with Detectron2, we must convert the annotations to the format shown below.
We first have a dictionary for each bounding box, and instead of the x-min, y-min, width, and height values, Detectron2 expects the x-min, y-min, x-max, and y-max values. The category id here represents the category of the object (a face in our case). Then we have to pass the file name, which will be the name of the image, followed by the height, width, and image id.
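Concretely, a single record in that target format looks like the sketch below (the field values are placeholders; the exact construction from our dataframe is shown later in the article):

# one image in the Detectron2 dataset-dict format (placeholder values)
from detectron2.structures import BoxMode

record = {
    "file_name": "WIDER_train/images/0--Parade/0_Parade_marchingband_1_849.jpg",
    "image_id": 0,
    "height": 768,
    "width": 1024,
    "annotations": [
        {
            "bbox": [449.0, 330.0, 571.0, 479.0],  # x-min, y-min, x-max, y-max
            "bbox_mode": BoxMode.XYXY_ABS,
            "category_id": 0,  # single class: face
            "iscrowd": 0,
        }
    ],
}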
So this is the format that detectron2 expects as input and hence we’ll have to pre-process our data accordingly. So while creating the face detection model we’ll be pre-processing these input formats into the format required for detectron2.
So that's all on the data preparation part. In the next section, we'll discuss the steps that we'll be following to build the face detection model, and then finally we will work on the implementation.
Steps to Solve the Face Detection Problem
In this section, we will look at the steps that we'll be following while building the face detection model using Detectron2. So we'll start with these steps:
Install Dependencies
Loading and pre-processing the data
Creating annotations as per Detectron2
Register the dataset
Fine Tuning the model
Evaluating model performance
I have already uploaded the dataset to Google Drive, and in order to run the notebooks at your end, you should also upload the dataset to your drive.
So, here are the files which are available in the dataset. We have an easy.txt file that includes the names of the classes belonging to the easy set. Then we have the wider_face_split folder, which contains the annotations for images in the training and the validation sets. WIDER_train contains the training images and WIDER_val contains the validation images.
Building a face detection model to detect single faces
Till now, we have understood what face detection is, how we have to annotate the dataset as per Detectron2, and the steps that we'll be following to build our face detection model.
Now as already discussed, there can be scenarios where we have only a single face in the image and there can be situations where the task is to detect multiple faces in the images.
So in this article, we'll solve both of these tasks. We'll first build a model that can detect a single face from the image, and then we'll build another model which will be able to detect multiple faces from the image. For now, our focus will be on building a model that can detect a single face, and we'll follow the steps discussed in the last section.
1. Install Dependencies
We'll first start with installing all the dependencies. First of all, we install version 5.1 of the pyyaml library. This is a prerequisite for Detectron2, and if we have an older version of this library, some of the functionalities of Detectron2 might not work correctly.
Then we are installing the detectron2 library and we have seen this step earlier while we were implementing the RetinaNet model.
# install required version
!pip install pyyaml==5.1

# installing detectron2 (wheel index for CUDA 10.1 / torch 1.5)
!pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.5/index.html

2. Loading and pre-processing the data
The next step is to load the dataset and preprocess it. Since the dataset is present in Google Drive, we have to first mount the drive.
# mounting drive
from google.colab import drive
drive.mount('drive')

Next, we are extracting the files, which are in zip format. So we are extracting our training, validation, and annotation archives. Now we are reading the annotation files for both the training and the validation sets.
# extracting files
!unzip '/content/drive/My Drive/Wider_dataset/WIDER_train.zip'
!unzip '/content/drive/My Drive/Wider_dataset/WIDER_val.zip'
!unzip '/content/drive/My Drive/Wider_dataset/wider_face_split.zip'

To read the files, we'll be using the pandas library, and hence we import it first. Then we specify the paths for the annotation files of the training and the validation sets and read them using the read_csv function of pandas.
# reading files
import pandas as pd

# specify path of the data
path_train = '/content/wider_face_split/wider_face_train_bbx_gt.txt'
path_val = '/content/wider_face_split/wider_face_val_bbx_gt.txt'

# reading data
train = pd.read_csv(path_train, header=None)
val = pd.read_csv(path_val, header=None)

Let's quickly look at the CSV files. In the first 10 rows of the train CSV file, we have the file names, followed by the number of faces in each image, and then the bounding box coordinates, with the same pattern repeated. Currently, this format is not readable, so let's convert it to a more meaningful and readable form: we want the names in a separate column, the number of faces as a separate variable, and then the bounding box coordinates. Hence, we'll reformat the dataset accordingly.
# pre-processing data
# this function accepts the dataframe and returns a modified dataframe
def reformat(df):
    # fetch values of first column
    values = df[0].values
    # creating empty lists
    names = []
    num_faces = []
    bbox = []
    # fetch values into corresponding lists
    for i in range(len(values)):
        # if an image
        if ".jpg" in values[i]:
            # no. of faces
            num = int(values[i+1])
            # append image name to list
            names.append(values[i])
            # append no. of faces to list
            num_faces.append(num)
            # create bbox list
            box = []
            for j in range(i+2, i+2+num):
                box.append(values[j])
            # append bbox list to list
            bbox.append(box)
    return pd.DataFrame({'name': names, 'num_faces': num_faces, 'bbox': bbox})

We are going to convert the training and the validation datasets and store them as train and val.
# pre-processing the data
train = reformat(train)
val = reformat(val)

Let's look at the first few rows of these datasets. We are first printing the head of train; you can see that this dataset is now in the required format.
# first 5 rows of the pre-processed data
train.head()
Let us also look at the shape of the training data and the shape of the validation data. We can see that the training data has 12,880 rows and the validation data has 3,226 rows.
# shape of the training data
train.shape

The output is: (12880, 3)

# shape of validation data
val.shape

The output is: (3226, 3)
Next, we'll do some pre-processing on this dataset. Here we are adding the complete path before the file name: for training, we add the path WIDER_train/images, and for validation, the path will be WIDER_val/images.
# adding full path
train['name'] = train['name'].apply(lambda x: 'WIDER_train/images/'+x)
val['name'] = val['name'].apply(lambda x: 'WIDER_val/images/'+x)

After applying this, the new dataset would look something like the output below; we'll have the complete path for all of these images.
# first 5 rows
train.head()
Next, we are converting the bounding box coordinates to floating point using the np.float_ function of the numpy library. Hence, we first import the numpy library here, then convert the bounding box coordinates for both the training and the validation sets, and again print the first five rows of the dataset.
# converting bbox to floating point
import numpy as np

train['bbox'] = train['bbox'].apply(lambda row: [np.float_(annos.split()) for annos in row])
val['bbox'] = val['bbox'].apply(lambda row: [np.float_(annos.split()) for annos in row])

# first 5 rows
train.head()
The output confirms that the bounding boxes have been converted to floating point; you can see bbox column values like 27.0, 26.0, and so on.
Next, we'll extract the class (event category) of each image from its file path, for the training as well as the validation datasets.
# extracting class names
train['class'] = train['name'].apply(lambda x: x.split("/")[2].split("--")[1])
val['class'] = val['name'].apply(lambda x: x.split("/")[2].split("--")[1])

# first 5 rows
train.head()
You can see that we now have a new column, class, that holds the class for each of these images.
Next, we are going to read the easy.txt file, which contains the names of the classes belonging to the easy set. We read this file using the read_csv function and print the values that we get.
# reading class names
easy_df = pd.read_csv('drive/My Drive/Wider_dataset/easy.txt', header=None)
easy_labels = easy_df[0].values

# easy labels
easy_labels

Here is the list of classes that belong to the easy set; we have a total of 20 classes.
Now we’ll select only those images which belong to any of these above-mentioned classes. So here we are running a for loop for only the easy classes and fetching the rows which have the easy categories.
# creating empty dataframes
train_df, val_df = pd.DataFrame(), pd.DataFrame()

# fetching rows of easy classes only
for i in easy_labels:
    train_df = pd.concat([train_df, train[train['class'] == i]])
    val_df = pd.concat([val_df, val[val['class'] == i]])

We have taken a subset from the dataset for both training and validation. As discussed earlier, the aim of this article is to build a face detection model that works for single faces, and hence we only select those images which have a single face; we do this for both training and validation.
Now before we go ahead, let’s quickly check the train shape and the validation shape.
# shape of dataframes
train_df.shape, val_df.shape

So we have 1,000 images in the training data and 274 images in the validation data. Next, we will see how to convert the annotations of the WIDER FACE dataset into the annotation format of Detectron2.
3. Creating annotations as per Detectron2
Now, we'll convert the annotations of our dataset as per the requirements of the Detectron2 library. Here is what our current annotations look like: we first have the file name, then the number of faces in that image, followed by the bounding box coordinates, which are of the format x-min, y-min, width, and height, and finally the class.
We want to convert these into the Detectron2 format described earlier. We'll first see how to get this format for a single image, and then we'll write a generalized function that converts the annotation format for all of the images.
# custom annotation format
idx = 0
values = train_df.values[idx]
print(values)
# for dealing with images
import cv2

# create annotation dict
record = {}

# image name
filename = values[0]

# height and width of an image
height, width = cv2.imread(filename).shape[:2]

# creating fields
record["file_name"] = filename
record["image_id"] = 0
record["height"] = height
record["width"] = width

# different ways to represent bounding box
from detectron2.structures import BoxMode

# create bbox list
objs = []

# for every face in an image
for i in range(len(values[2])):
    # fetch bbox coordinates
    annos = values[2][i]
    # unpack values
    x1, y1, w, h = annos[0], annos[1], annos[2], annos[3]
    # find bottom right corner
    x2, y2 = x1 + w, y1 + h
    # create bbox dict
    obj = {
        "bbox": [x1, y1, x2, y2],
        "bbox_mode": BoxMode.XYXY_ABS,
        "category_id": 0,
        "iscrowd": 0
    }
    # append bbox dict to bbox list
    objs.append(obj)

# assigning bbox list to annotation dict
record["annotations"] = objs

# standard annotation format
record

The output is the standard annotation dictionary for this image.
def create_annotation(df):
    # create list to store annotation dicts
    dataset_dicts = []
    # for each image
    for idx, v in enumerate(df.values):
        # create annotation dict
        record = {}
        # image name
        filename = v[0]
        # height and width of an image
        height, width = cv2.imread(filename).shape[:2]
        # assign values to fields
        record["file_name"] = filename
        record["image_id"] = idx
        record["height"] = height
        record["width"] = width
        # create list for bbox
        objs = []
        for i in range(len(v[2])):
            # bounding box coordinates
            annos = v[2][i]
            # unpack values
            x1, y1, w, h = annos[0], annos[1], annos[2], annos[3]
            # find bottom right corner
            x2, y2 = x1 + w, y1 + h
            # create bbox dict
            obj = {
                "bbox": [x1, y1, x2, y2],
                "bbox_mode": BoxMode.XYXY_ABS,
                "category_id": 0,
                "iscrowd": 0
            }
            # append bbox dict to a bbox list
            objs.append(obj)
        # assign bbox list to annotation dict
        record["annotations"] = objs
        # append annotation dict to list
        dataset_dicts.append(record)
    return dataset_dicts

# create standard annotations for training and validation datasets
train_annotation = create_annotation(train_df)
val_annotation = create_annotation(val_df)

# standard annotation of an image
train_annotation[0]
4. Register the dataset
To let Detectron2 know how to obtain a dataset, we will implement a function that returns the items in the dataset and then tell Detectron2 about this function. For this, we will follow these steps:
1. Register your dataset (i.e., tell detectron2 how to obtain your dataset).
2. Optionally, register metadata for your dataset.
We are going to register it with Detectron2. We first import the required functions from detectron2.data, which are DatasetCatalog and MetadataCatalog. We register the data and name it face_train.
from detectron2.data import DatasetCatalog, MetadataCatalog

# register dataset
DatasetCatalog.register("face_train", lambda d="train": create_annotation(train_df))

We are also registering the metadata, where we define the class as face, since we want to detect faces.
# register metadata
MetadataCatalog.get("face_train").set(thing_classes=["face"])

Next, let us visualize a few samples from the training set. For that, we first import the Visualizer from Detectron2 and cv2_imshow. We also import random, since we are going to randomly pick the images. Then we get the names of the classes from the metadata catalog and print them.
# for drawing bounding boxes on images
from detectron2.utils.visualizer import Visualizer

# for displaying an image
from google.colab.patches import cv2_imshow

# for randomly selecting images
import random

# get the name of the classes
face_metadata = MetadataCatalog.get("face_train")
print(face_metadata)

We randomly pick five images from the training set using random.sample, read each image using cv2.imread, and use the Visualizer to draw the bounding boxes with draw_dataset_dict. Finally, we use cv2_imshow to display each image along with its bounding boxes.
# randomly select images
for d in random.sample(train_annotation, 5):
    # read an image
    img = cv2.imread(d["file_name"])
    # create visualizer
    visualizer = Visualizer(img[:, :, ::-1], metadata=face_metadata, scale=0.5)
    # draw bounding box on image
    vis = visualizer.draw_dataset_dict(d)
    # display an image
    cv2_imshow(vis.get_image()[:, :, ::-1])

With that, the data preparation part is done. In the next section, we'll train our face detection model.
5. Fine-Tuning the model
Now our dataset is ready; let's train the model. We will take a pre-trained model and fine-tune it as per our problem. We'll be using the RetinaNet pre-trained model, which is trained on the COCO dataset. Since our WIDER FACE dataset is different from the COCO dataset, we'll retrain the entire architecture of the pre-trained model as per our problem.
So here we are first importing a few helpers: model_zoo, in order to load the pre-trained model; DefaultTrainer, which will be used to train the model; and get_cfg, which will be used to get the configurations of the pre-trained model.
Next, we define the configuration instance. First of all, we specify the path of the pre-trained model in this configuration file, and then we load the weights of the RetinaNet pre-trained model, which is trained on the COCO-Detection dataset.
# to obtain pretrained models
from detectron2 import model_zoo

# to train the model
from detectron2.engine import DefaultTrainer

# set up the config
from detectron2.config import get_cfg

# interact with os
import os

# define configure instance
cfg = get_cfg()

# Get a model specified by relative path under Detectron2's official configs/ directory.
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/retinanet_R_50_FPN_1x.yaml"))

# load pretrained weights
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/retinanet_R_50_FPN_1x.yaml")

Next, we have to define the names of the train and the test datasets in the configuration file, and these datasets must be registered in Detectron2. We registered the training dataset as face_train, and hence we give that name here. Currently, we do not want any test data, so we keep this blank.
# List of the dataset names for training. Must be registered in DatasetCatalog
cfg.DATASETS.TRAIN = ("face_train",)
cfg.DATASETS.TEST = ()

Now, we define a few hyperparameters for our model. Once the model is defined and the hyperparameters are set, it's time to train the model.
# no. of images per batch
cfg.SOLVER.IMS_PER_BATCH = 2

# set base learning rate
cfg.SOLVER.BASE_LR = 0.001

# no. of iterations
cfg.SOLVER.MAX_ITER = 1000

# only has one class (face); for this RetinaNet config, the RETINANET key is the one that applies
cfg.MODEL.RETINANET.NUM_CLASSES = 1
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1

Next, we create a directory which will save the weights of the model as the training progresses. The exist_ok flag means that no error is raised if the directory already exists.
Now, using the modified configuration file, we create a trainer using the DefaultTrainer class, and using resume_or_load, we can set whether we want to start the training from scratch or resume from previously saved weights.
If the resume is set to true, and the last checkpoint exists, it will load the checkpoints, and start training on top of that.
If this is set to False, which is our case, the training will start from the first iteration. Since we are training the model from the first iteration, we set resume equal to False.
The summary of the model will be printed after every iteration, which will include the total loss, the classification loss, regression loss, the time taken to train for a few iterations, the learning rate after that particular iteration, and the maximum memory used.
# create directory to save weights
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)

# create a trainer with given config
trainer = DefaultTrainer(cfg)

# If resume==True, and last checkpoint exists, resume from it, load all checkpointables (eg. optimizer and scheduler) and update iteration counter.
# Otherwise, load the model specified by the config (skip all checkpointables) and start from the first iteration.
trainer.resume_or_load(resume=False)

# train the model
trainer.train()

The training log is printed as the model trains.
We can see that the training is complete. As the training progressed, the loss kept decreasing with every subsequent iteration; if you observe both the classification loss and the regression loss, you will see a decreasing trend.
To visualize the results of this training more conveniently, we'll use TensorBoard. TensorBoard provides the visualizations needed for machine learning and deep learning: we can track and visualize metrics like loss and accuracy, visualize the parameters learned by the model, and so on.
# Look at training curves in tensorboard:
%load_ext tensorboard
%tensorboard --logdir output

We are going to use this visualization tool to visualize the training of our model.
So let us first visualize the total loss, and here we can see that we have plots for the regression loss and the classification loss. We can see that both for regression as well as for classification the loss keeps decreasing with the increasing number of iterations.
And if we check the plot for the learning rate and how it changes with the number of iterations, we can see that the learning rate keeps increasing with the increasing number of iterations. Similarly, we can visualize other factors as well.
6. Evaluating model performance
Finally, we'll evaluate the performance of this model on the validation set. In order to use any dataset in Detectron2, we must register it first, so we are going to register the validation dataset similar to how we registered the training dataset.
We are going to name it face_val. We also register the metadata, where we define the class present in this dataset, which is face.
# register validation dataset
DatasetCatalog.register("face_val", lambda d="val": create_annotation(val_df))

# register metadata
MetadataCatalog.get("face_val").set(thing_classes=["face"])

Next, we load the weights of the model, which were saved as model_final.pth during the training, and we keep the score threshold at 0.8.
# load the final weights
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")

# set the testing threshold for this model
cfg.MODEL.RETINANET.SCORE_THRESH_TEST = 0.8

# List of the dataset names for validation. Must be registered in DatasetCatalog
cfg.DATASETS.TEST = ("face_val",)

Next, we create a predictor and pass the updated configuration file to it.
# set up predictor
from detectron2.engine import DefaultPredictor

# Create a simple end-to-end predictor with the given config that runs on a single device for a single input image.
predictor = DefaultPredictor(cfg)

Let us now visualize the predictions on a few of the images from the validation set. We randomly pick three images from the validation set, read these images, take the predictions from the predictor, and then create the visualizations for these images.
We draw the predicted bounding boxes over the images and plot the output using the cv2_imshow function. You can see that in the first image the face is clearly detected with a 95% confidence score.
# create standard annotations for validation data
dataset_dicts = create_annotation(val_df)

# randomly select images
for d in random.sample(dataset_dicts, 3):
    # read an image
    im = cv2.imread(d["file_name"])
    # make predictions
    outputs = predictor(im)
    # create visualizer
    v = Visualizer(im[:, :, ::-1], metadata=face_metadata, scale=0.5)
    # draw predictions on the image
    v = v.draw_instance_predictions(outputs["instances"].to("cpu"))
    # display image
    cv2_imshow(v.get_image()[:, :, ::-1])

Similarly, on another image, the face is detected with 98% probability.
Let us now check the performance on the entire validation set. For that, we import COCOEvaluator and inference_on_dataset; these two are present within the detectron2.evaluation module. In order to load the images from the validation set, we use build_detection_test_loader, which we have imported here as well.
Here we use the COCOEvaluator that we imported to create the evaluator. We pass it the validation dataset name and the cfg file, and with this, the COCO evaluator will be able to evaluate the performance on the validation set. val_loader is the loader for the validation set, and the evaluator is used to evaluate it; we pass val_loader and the evaluator to the inference_on_dataset function.
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader

# create an evaluator using COCO metrics
evaluator = COCOEvaluator("face_val", cfg, False, output_dir="./output/")

# create a loader for test data
val_loader = build_detection_test_loader(cfg, "face_val")

# runs the model on each image in the test data and produces the results
inference_on_dataset(trainer.model, val_loader, evaluator)
The results print the total inference time taken and then the model performance. We got an average precision of approximately 93% at an IoU of 0.5, and approximately 70% at an IoU of 0.75. So this is how we can build a face detection model that can detect single faces from images.
Conclusion
In this article, we solved the problem of detecting a single face in an image. The fundamentals of face detection are critical for solving the business challenge and developing the necessary model. When it comes to working with image data, the most difficult task is figuring out how to detect faces from images in a form that can be fed to the model. While working on image data, you have to handle tasks such as face detection and bounding box prediction.
I hope the article helped you understand how to deal with image data and how to detect faces from images. This technique can be applied in domains such as medical imaging and sports analysis.
In real-life scenarios, there can be situations when we have multiple faces in a single image, so in the next article we will work on building a model that can detect multiple faces from images. Hope you enjoyed reading this article on building face detection systems.
Thank you.
About the Author
Hi, I am Kajal Kumari. I have completed my Master's from IIT (ISM) Dhanbad in Computer Science & Engineering. As of now, I am working as a Machine Learning Engineer in Hyderabad. Here is my LinkedIn profile if you want to connect with me.
End Notes
Thanks for reading!
If you want to read my previous blogs, you can read Previous Data Science Blog posts from here.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.
Are Meta Descriptions A Google Ranking Factor?
Meta descriptions can also help search engines like Google understand what a web page is all about.
If you run a site audit using one of many SEO tools, you may find a flag or warning about missing or duplicate meta descriptions.
This could suggest that you need to make sure each page has a unique meta description, as part of your SEO strategy.
But are meta descriptions actually a factor in Google’s search ranking algorithm?
The Claim: Meta Descriptions As A Ranking Factor
The idea here is that if you write an optimized meta description, it will help you rank higher in Google search results.
Since we’re talking about a field with fairly limited space, the conventional wisdom is that you should use your target keyword phrase in the meta description as SEO best practice.
Yoast is considered one of the definitive experts as far as meta descriptions go.
At the time of publication, the Yoast WordPress SEO plugin was in use on over 7.9 million sites.
And here’s what Yoast recommends as far as meta descriptions go:
Keep it up to 155 characters.
Use your focus keyword.
Make sure it matches the content of the page.
If and how often you use the focus keyword in your meta description is part of the SEO evaluation Yoast provides:
But does Google actually use it to determine your ranking?
The Evidence For Meta Descriptions As A Ranking Factor
In a video published on the Google Search Central channel in August 2023, Google's Search Advocate Martin Splitt said of meta descriptions:
“Please don’t forget to add them to your mobile pages. They matter a lot for Googlebot, as well.”
Almost immediately, an SEO professional tweeted Splitt asking for any additional details.
Splitt responded that the meta description and page title not only provide searchers a first impression but also:
“…helps Google Search to get a short summary of what you consider important about the page.”
Now, this caught a bit of attention.
The widely-held belief among SEO pros is that meta descriptions lost any ranking value they may have had a long time ago.
As Ann Donnelly wrote even back in 2011,
“Most of us know that while the search engines no longer consider the meta description in their ranking factors, this element of your page is still important in getting traffic to your site.”
Could it be that after all this time, Google actually does use meta descriptions as a ranking factor?
No.
Here’s why.
The Evidence Against Meta Descriptions As A Ranking Factor
John Mueller was quick to clarify:
Now, there’s a healthy skepticism amongst SEO pros that Google perhaps isn’t entirely honest and open about ranking factors. Maybe you choose not to take Mueller at his word.
Even so, meta descriptions as a ranking signal just doesn’t hold water.
First, it’s ridiculously easy to manipulate. Just put the keywords you want to rank for in there and voilà!
Instant signal to Google that you should rank for that keyword phrase.
Back then, on-page optimization was quite formulaic and you could literally change up keywords in your title, meta description, subheadings, etc., and see rankings change dramatically.
And that’s exactly why the meta description lost any value as a ranking signal.
Matt Cutts’ 2009 explanation of why meta keywords were removed from the algorithm sheds some light on their thinking around meta descriptions at the time, as well:
“About a decade ago, search engines judged pages only on the content of web pages, not any so-called ‘off-page’ factors such as the links pointing to a web page.
…Because the keywords meta tag was so often abused, many years ago Google began disregarding the keywords meta tag.
Even though we sometimes use the description meta tag for the snippets we show, we still don’t use the description meta tag in our ranking.”
Even today, the meta description you assigned to that page might not appear in search results.
In fact, a 2023 experiment by the team at Yoast found that Google “often” came up with its own description to use in the search snippet.
There didn’t seem to be any rhyme or reason as to why Google found some of the meta descriptions provided inadequate, either.
Michiel Heijmans noted:
“It didn’t matter if we’d created long or short meta descriptions and whether the description was written with a high or low keyword density.”
They also found that in two-thirds of cases, Google used content from the first paragraph on the page to populate the search snippet.
More recently, Portent found that Google rewrites meta descriptions over 70% of the time.
Meta Descriptions As A Ranking Factor: Our Verdict
Google does not use the meta description as a search ranking signal and hasn't since sometime between 1999 and 2003-04.
That doesn’t mean they aren’t an important element of your SEO strategy.
The direct benefits of meta descriptions can include:
Helping to differentiate your content from competitors in the SERPs.
Intriguing and engaging searchers, compelling them to check you out.
Brand exposure.
Indirectly, the additional user behavior signals resulting from more – and more engaged – site visitors can support your SEO.
But on their own, meta descriptions aren’t a ranking factor and haven’t been for a long time.
See Brian Harnish’s SEO Best Practices: How to Create Awesome Meta Descriptions for helpful tips.
Featured Image: Paulo Bobita/Search Engine Journal
Meta Report: Secondhand Hardware A ‘Risk E
Through 2005, the secondhand server market will account for less than 5 percent of installed production platforms. In a buyer’s market, as server platforms become more commoditized, they become easily substituted; therefore, secondhand value will deteriorate nearer to 35 percent to 40 percent compounded (from about 25 percent) per year.
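To put those depreciation rates in perspective (illustrative figures, not from the report): a server bought for $10,000 would be worth roughly $10,000 × 0.75³ ≈ $4,200 after three years at a 25 percent annual decline, but only about $10,000 × 0.6³ ≈ $2,160 at a 40 percent decline.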
However, the secondhand market will pressure server vendors to reduce margins and increase discounts as platforms become less differentiable. Likewise, the inventory of computer scrap is becoming more of an issue for vendors. Purchase decisions for electronic devices are increasingly influenced by environmental concerns.
In Germany, for example, legislation has been enacted that forces computer manufacturers to recycle existing equipment when upgrading a customer to new systems. In many companies worldwide, waste disposal and resource preservation issues are being made high priorities. We expect component recycling and reuse to become more of a manufacturer’s legal requirement and this, too, will increase the cost of goods and erode margins.
From 2002, despite the global economic recovery, we do not expect hardware prices to recover or the secondhand market to be an attractive proposition for production platforms.
We believe users should not consider the secondhand market for production systems, because this will have negative warranty, maintenance, and service-level impact, as well as increased costs. Although discounts may initially look attractive, users must balance potential savings against increased risks and opportunity costs. These include difficulty in finding particular configurations or current product lines, and unpredictable time-to-in-service-deployment (due to shipping, reinstallation, and vendor certification).
We recommend targeting any savings toward improving infrastructure planning and developing more unified (e.g., replicable) platforms and services, thereby minimizing future integration and life-cycle costs. However, legitimate refurbished or resold hardware from the vendor or its channels is a viable option.
The following is a summary checklist of the potential savings from street prices and the levels of risk from different channels of the secondhand hardware. These are:
Refurbished platforms from the manufacturer: Using traditional ex-demo or evaluation (refurbished) hardware is a good way of saving. The technology may not be the hottest from the manufacturer (almost always it’s the previous season’s stock), but some savings can be had. Additionally, complex projects that require a consistent platform deployed to many sites during a set time frame will require a level of consistency not provided by the piecemeal availability of refurbished hardware. However, point-project or development platforms can only be fulfilled using this source if a full warranty is provided.
In-channel used inventory with reseller warranty: In-channel inventory returned to a dealer or distributor is fine if not opened! However, any other condition moves the warranty from the manufacturer to the dealer on a case-by-case basis with the dealer or distributor. This secondhand market should not be encouraged, as users will be too exposed to the dealer's support without necessary escalation to the manufacturer.
Brokerage with limited warranty: A secondhand dealer will not be able to support the platforms at all, even though upfront costs may be attractive. Brokers should not be considered for infrastructure platforms.
Auction with hardware at reduced book value: A limited sale or return warranty may be available from the auction house, but the risk associated with this as a source of secondhand hardware is too high.
Cold call from brokerage “one-off offer”: These offers by cold call that appear too good to be true are. Clients should not consider this as a source of secondhand platforms.
Infrastructure planning and development staff should assist traditional finance procurement departments in assessing the cost of the secondhand hardware. A glut of nearly new hardware is available on the market that looks attractive to finance departments (compared to new), but several issues must be considered when using secondhand hardware for production platforms. These are:
E-business fit: The rules for secondhand platforms align to the e-business platform layers. Too much risk is associated with secondhand hardware for production database and application server platforms. Extreme caution must be used for commoditized Web servers. In-house development systems are a legitimate target for secondhand hardware as long as the management and risk are addressed by the developer community or infrastructure operations.
Mechanical devices – storage and printers: The nearly new disk, tape, and printer market may appear lucrative, but these devices are purely mechanical, and no easy way of tracking previous use exists.
Non-transferable warranty: Infrastructure is not like a car! The warranty is generally not transferable from dealership to dealership or owner to owner. Therefore, any secondhand hardware must be from a legitimate channel. Risk-takers using secondhand hardware in production environments will be exposed to unsupported infrastructure with invalid warranties that offset potential savings.
Service levels and maintenance, risk reduction: Non-maintained or out-of-warranty hardware is at high risk when setting service-level agreements. The client’s support channel will not set support levels on illegitimate hardware, and non-declaration will invalidate any support contract. This will also affect any legitimate hardware and software licenses from the vendor in client infrastructure portfolios.
It is natural that finance departments will be attracted to secondhand hardware savings. But to expect them to select the best-used platforms and best fit for infrastructure layers from the best source is unrealistic. Infrastructure developers must assist finance departments in selecting the equipment and determining the true overall cost.
Business Impact: In general, users should not consider the secondhand hardware market for production systems. They should, however, leverage the surplus in the resale channel to strengthen their hardware platform negotiations. Vendors' surpluses and refurbishments with full warranty can be considered for internal or development systems.
Bottom Line: The initial savings of secondhand hardware is offset by the cost of integration and maintenance (including hidden opportunity costs).
Philip Dawson is a consultant for META Group, an IT consulting firm based in Stamford, Conn.
Best iPhone 14 And 14 Pro Cases With Stand In 2023
Do you want protection against shocks while reading, binge-watching movies, or FaceTiming for your new iPhone 14? You can get all these by using an iPhone 14 case with a stand. The stand will enable you to go hands-free and view your iPhone in landscape or portrait mode. Also, these cases are sturdy enough to withstand drops. I have rounded up the best kickstand cases for iPhone 14 and 14 Pro. Check out below!
1. Spigen tough armor case – Editor’s choice
Spigen is the most well-known brand in the iPhone accessories industry. The Tough Armor with MagSafe compatibility offers multi-layered protection for better drop safety. Besides, it has Spigen’s signature Air Cushion Technology and Extreme Protection Tech of military-grade shock absorption standards. Also, it’s made of PC, TPU, and Impact Foam for longevity.
The raised edges and lips safeguard the screen and camera from scratches and damage. Also, the tactile buttons provide you with reliable feedback and simple pressing. The built-in kickstand is durable and convenient for hands-free viewing. I liked its added grip at the corners and dual-tone matte finish. However, the hole cutout design didn’t work for me.
Pros
Extra grip
Optimal slimness
Lightweight
Cons
Weak magnets to hold on car vents
2. OtterBox Defender Series case – Just classic
OtterBox Defender Series, renowned for its tough case, comes with a multi-layer structure. Its DROP+ protection can endure 4X more drops than the military standards. Besides, the polycarbonate shell with synthetic rubber slipover is shock-absorbing and has 50% recycled plastic. So, your iPhone is safe from damaging drops, scratches, and bumps.
I liked the port covers that prevent dust and dirt buildup. Though there are no built-in magnets, the case supports Qi and MagSafe wireless charging. Additionally, the supplied polyester holster is a 2-in-1 belt clip and a hands-free kickstand. The material is fully appropriate for 5G networks and includes a lifetime limited OtterBox guarantee.
Pros
Textured edges
Added bumpers
Port covers
Cons
No magnets in case
Bulky holster
3. ESR metal kickstand case – Crystal clear
This ESR crystal clear case is durable, thanks to the scratch-resistant acrylic back. Its Air-Guard corners, raised screen edges, and camera guard provide certified protection against drops, shocks, and bumps. Also, the soft shock-absorbing and non-slip polymer sides of this iPhone 14 Pro kickstand case provide a nice grip and great in-hand feel.
Besides, its highly modifiable kickstand offers 3 stand modes. So, you can adjust the hands-free viewing angle to 60 degrees and use your phone in landscape or portrait mode. Also, your stand will remain steady for a longer period thanks to a sturdy hinge and aluminum alloy patented design. This compact case offers wireless charging, so you can juice your phone without removing it.
Pros
1.2mm Raised edges and 0.5mm camera lip
Reinforced air-guard corners
Scratch-resistance
Cons
Turns yellow
4. SUPCASE Unicorn Beetle Pro case – 360° Full-body protection
SUPCASE Unicorn Beetle Pro case has a back cover, screen protector, and a holster with a belt clip. It’s the winner of CNET’s “Best Case Scenario” drop test (20ft protection). Actually, the case is made of dual-layer hybrid polycarbonate back and shock-absorbing TPU bumper for extreme durability.
Besides, the front cover has a built-in screen protector. So, your display is shielded against scratches without compromising on touch sensitivity. The built-in kickstand enables both portrait and landscape viewing. There is a rotating and removable belt clip with a swivel for simple usage. What’s more? It is compatible with wireless charging.
Pros
20ft Drop protection
Reliable screen protector
Cons
Bulky
5. Encased kickstand case – Thin screen protection
Encased Kickstand Series armor case has a scratch-resistant clear PC backplate and multi-layer protection design. Also, the mil-standard shockproof and ultra-protective bumper safeguards your iPhone from 10 ft drop damage. The reinforced camera guards keep the camera frame away from the surface.
Besides, the supplied high-clarity screen protector is made of 2x toughened tempered glass. The case offers a seamless fit and wraps your iPhone to protect it from all sides. Also, the sturdy metal kickstand is made to endure and is constructed for robustness. Therefore, it won’t pop out or break as plastic patterns do.
Pros
10ft Drop protection
Reinforced corners
Durable metal hinge
Cons
Tempered glass screen protector cracks easily
6. TORRAS MarsClimber case – Premium matte finish
TORRAS iPhone 14 Pro case with stand has a contemporary bezel design, side laser texture, and translucent matte coating imported from Germany. So, it offers an outstanding ergonomic grip and a smooth feel without gathering lint. Besides, the back panel is covered with a nano-oleophobic and hydrophobic coating.
Therefore, it prevents smudges, fingerprints, and scratches. The best part is the case is only 0.04 inches and is lightweight. Also, the dark grey hue will never fade or get filthy, so it’s long-lasting. You will get 8ft mil-grade drop protection thanks to the flexible TPU frame, 4 corners with internal X-SHOCK tech, and 360° honey-comb anti-shock airbags on both sides.
Pros
3D Airbags design
Internal anti-shock cushion
360° Honeycomb pattern
Cons
Slippery in hand
7. MyBat Pro Stealth Series case – With ring holder
The MyBat Pro Stealth iPhone 14 Pro case with kickstand is tough and has dual-layered military-grade protection to withstand shocks and bumps. Its non-slip surface provides a good grip, and the elevated bezel edge prevents scratches. Besides, the case guards against germs and bacteria thanks to the anti-microbial lining.
You may use the stylish and unobtrusive ring holder as a vertical kickstand. So, enjoy hands-free movie watching or FaceTiming your buddies. With the case’s integrated metal plate, you can easily attach your iPhone with any magnetic mount. But it doesn’t support wireless charging. Besides, the snug fit ensures easy access to all buttons and ports.
Pros
Non-slip bumper grip
Anti-microbial lining
Built-in ring holder
Cons
Not wireless charging compatible
8. SHEILDON wallet case – For leather aficionados
SHEILDON high-quality genuine leather case has precisely crafted oil wax cowhide leather. Therefore, the surface is scratch-resistant, shinier, and more streamlined. Your phone is protected from scratches, drops, and bumps thanks to a soft, full-body casing with a shockproof edge. Also, the thicker lips surrounding the lens safeguard your camera.
You can store 4 cards and bills in wallet slots. The magnetic closure and RFID-blocking technology keep everything secure. Besides, the folding stand enables viewing in landscape orientation. The precise cutouts enable you to access all functionalities conveniently. But the magnetic closing mechanism will become less effective if you insert many cards.
Pros
Genuine cowhide leather
Invisible kickstand
Magnetic closure
Cons
Not suitable for MagSafe chargers
So, that’s all for today, folks!
The iPhone 14 or 14 Pro kickstand cases are best for using your device in hands-free mode. Some cases come with built-in screen protectors, but sometimes they are bulky or do not support MagSafe or wireless charging. Besides, there are several cases for iPhone 14 and 14 Pro. Check these out before making a purchase.
Explore more…
Author Profile
Ava
Ava is an enthusiastic consumer tech writer coming from a technical background. She loves to explore and research new Apple products & accessories and help readers easily decode the tech. Along with studying, her weekend plan includes binge-watching anime.
Daily Authority: ⚖ Qualcomm And Arm Face Off
Kris Carlon / Android Authority
😾 Good morning! Have you ever tried giving eye drops to a cat? We’ve had to start doing this as our feline is recovering from a minor eye ailment. Thank goodness she doesn’t choose violence, but it’s still a two-person job.
Chip giants call to Arms
Kris Carlon / Android Authority
A major news story broke late yesterday when Arm announced that it was suing Qualcomm and its subsidiary Nuvia. This could have long-term repercussions for both the mobile and Arm PC space.
Qualcomm is a major chipmaker in the mobile and computing spaces thanks to its Snapdragon family of processors.
The company used to rely on custom CPU designs for its flagship smartphone processors but switched to semi-custom CPU tech based on Arm IP in 2017.
In other words, it went from designing its own CPU tech to taking Arm’s existing CPU designs and tweaking them.
However, Qualcomm officially acquired Nuvia last year as part of a plan to return to using custom CPU technology.
Nuvia was formed by former Apple chip engineers and initially focused on Arm-powered datacenter processors. You can read about them over here.
But Qualcomm sees Nuvia as a key tool that will help it beat Apple’s processors in the computer and smartphone spaces.
After all, Apple’s processors, particularly in the computer segment, have been miles ahead of Qualcomm’s chips.
And yes, current Qualcomm computer chips are based on Arm CPU technology.
Spanner in the works
Arm's announcement of a lawsuit might throw a spanner into the Qualcomm/Nuvia works.
Arm said in a statement that Qualcomm and Nuvia breached “certain license agreements” and committed trademark infringement.
Arm asserts that the two companies should “destroy certain Nuvia designs” per a contractual agreement.
The release alleges Qualcomm tried to transfer Nuvia’s Arm licenses without Arm’s consent.
However, Arm also claimed that Nuvia's licenses for Arm tech expired in March 2022, presumably due to Nuvia's acquisition.
Arm says Qualcomm therefore “breached the terms of the Arm license agreement by continuing development under the terminated licenses.”
A source dished out more details to Android Authority, echoing some of Arm’s claims.
The source reiterated that Qualcomm would be required to destroy certain Nuvia designs and start over if Arm didn’t give consent for the next phase of development.
It seems consent wasn’t forthcoming for this next phase but that the chip designs remained the same.
Arm’s Phil Hughes also told Android Authority that Nuvia’s Arm licenses barred the startup from being acquired without Arm’s consent.
It’s alleged that Arm’s consent wasn’t sought for Qualcomm to acquire Nuvia.
What next for Qualcomm and Nuvia?
Qualcomm asserted that Arm has no right, “contractual or otherwise,” to interfere with company efforts.
“Arm’s complaint ignores the fact that Qualcomm has broad, well-established license rights covering its custom-designed CPUs, and we are confident those rights will be affirmed,” the company added.
Qualcomm was planning to start sampling the first PC chips with Nuvia CPUs to OEMs in H2 2022, with device launches slated for 2023.
If Arm gets its way and forces Nuvia to destroy some CPU designs, this timeline could be thrown out of the window.
Qualcomm would be forced to keep using its current PC chips. This wouldn’t be ideal as these chips have a reputation for being underpowered compared to Apple’s SoCs.
We’re also expecting Nuvia CPUs to arrive in smartphones after the first Nuvia laptop chips launch, but this timeline could theoretically be pushed back too.
Here's hoping Arm and Qualcomm reach a resolution that doesn't involve any major delays, as this could put the Windows-on-Arm and Android ecosystems as a whole on the back foot compared to Apple.
For what it’s worth, Nuvia was previously under fire from Apple after the Cupertino giant filed a lawsuit against it.
Roundup
Thursday Thing
Amazon has Amazon Prime, but it turns out Disney is considering a similar concept. The Wall Street Journal reports that Disney is exploring its own membership program, citing people familiar with discussions.
The program is referred to as Disney Prime internally, although that apparently won’t be the final name.
Still, the name does serve perhaps as evidence of the Amazon Prime inspiration.
The idea is to offer discounts and perks for streaming, merchandise, theme parks, and resorts.
Disney confirmed to WSJ that it was exploring the idea of a membership program, without further details.
The company is apparently working to bring merch buying integration to Disney Plus as well.
Either way, it seems like membership programs are the next digital frontier. Even Walmart offers Walmart Plus, complete with Paramount Plus streaming.
Have a great day!
Hadlee Simons, Editor.