Transformers For Image Recognition At Scale


This article was published as a part of the Data Science Blogathon

Introduction

While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks or used to replace certain components of convolutional networks while keeping their overall structure in place. The paper shows that this reliance on CNNs is not necessary: a pure transformer can perform very well on image classification tasks when applied directly to sequences of image patches.

How many words is an image worth?

Is a picture really worth a thousand words? Words alone cannot fully describe a picture, yet the paper tells us that an image is worth 16×16 words. In this blog, I will explain image recognition using transformers. It is a really interesting paper, published by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby on 22 Oct 2020. The authors borrowed the most dominant attention-based architecture in Natural Language Processing, introduced in the paper "Attention Is All You Need" by Ashish Vaswani et al. They did not modify the attention layers of the transformer architecture. The most important trick is to break an image into small patches (16×16, as in the title). But how are these patches created?

What’s special about this paper?

It is special because it does not use any convolutional layers at all. It relies on a standard transformer encoder to perform the image recognition task. Transformers make very few assumptions about the structure of their input, whereas CNNs build in strong assumptions about images. In fact, transformers were originally designed for NLP; I would recommend reading the article by Jay Alammar for background.

Using a transformer for image processing is more challenging than for text. In NLP, we pass a sequence of tokens as input; here, we need to pass an image instead. Fitting an image into a transformer is not straightforward, so in the paper the image is divided into small patches that are passed to the transformer as a sequence.

It is a simple, scalable architecture that achieves state-of-the-art results, especially when pre-trained on large datasets such as JFT-300M. It is also relatively cheap to pre-train. In NLP, transformers have already largely replaced LSTMs.

Self-attention to images

How do we apply self-attention to images? In NLP, each word attends to every other word to capture the relations between words. Applying the same idea directly to an image would mean letting every pixel attend to every other pixel. For an image of 4096 x 2160 pixels (DCI 4K), that is computationally prohibitive: the cost of the attention layer grows quadratically with the number of pixels.

Because of this quadratic scaling, the cost explodes as images get larger: an image with 10 times as many pixels costs roughly 100 times as much in the self-attention layer.
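To make the scaling concrete, here is a rough back-of-the-envelope count of attention weights (one per token pair). The 224×224 input and 16×16 patch size used for the patch-level figure are illustrative assumptions borrowed from the usual ViT setup.

```python
# Back-of-the-envelope illustration: self-attention cost grows with the square
# of the number of tokens, so pixel-level attention is impractical.
def attention_pairs(num_tokens: int) -> int:
    return num_tokens ** 2                      # one attention weight per token pair

pixels_4k = 4096 * 2160                         # DCI 4K image, one token per pixel
patches = (224 // 16) ** 2                      # illustrative: 224x224 image, 16x16 patches
print(f"pixel-level pairs: {attention_pairs(pixels_4k):.2e}")   # ~7.8e13
print(f"patch-level pairs: {attention_pairs(patches):.2e}")     # ~3.8e04
```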

How Vision Transformers work

First, split the image into patches. Image patches are treated like words in NLP. A patch-embedding layer turns each patch into a vector, and this sequence of vectors is the input to the transformer blocks. The image therefore becomes a sequence of vectors, just as a sentence is a sequence of word embeddings; hence "an image is worth 16×16 words".

Vision Transformers (ViT)

As discussed earlier, the image is divided into small patches, say 9 of them, where each patch contains 16×16 pixels. The input sequence consists of the flattened (2D to 1D) pixel values of each 16×16 patch. Each flattened patch is fed into a linear projection layer that produces what the authors call the "patch embedding".

Position embeddings are then added to the sequence of patch embeddings so that the patches retain their positional information. They inject information about the relative or absolute position of the image patches in the sequence.

An extra learnable (class) embedding is prepended to the sequence of patch embeddings. After being updated by self-attention through the encoder, this class embedding is used to predict the class of the input image.

The classification is performed by just stacking an MLP Head on top of the Transformer, at the position of the extra learnable embedding that we added to the sequence.

Patch embedding

The most important part of this paper is how to break down the image into patches. An image is represented as

3D image x ∈ R^(H×W×C)

We reshape the 3D image into a sequence of flattened 2D patches:

Patch sequence x_p ∈ R^(N×(P^2·C))

where the sequence length N = HW/P^2 and (P, P) is the resolution of each image patch.

Each flattened patch is mapped to a D-dimensional vector by a trainable linear projection E (the patch embedding).
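As a minimal sketch of this step (assuming PyTorch; the class name is mine and the sizes match ViT-Base/16): the paper describes flattening each patch and applying a linear projection E, and the strided convolution below is a common, mathematically equivalent way to implement exactly that.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into P x P patches and project each to a D-dimensional vector."""
    def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2      # N = H*W / P^2
        # Conv2d with kernel = stride = P is equivalent to flatten + linear projection E
        self.proj = nn.Conv2d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                       # x: (B, C, H, W)
        x = self.proj(x)                        # (B, D, H/P, W/P)
        return x.flatten(2).transpose(1, 2)     # (B, N, D): sequence of patch embeddings

tokens = PatchEmbedding()(torch.randn(1, 3, 224, 224))
print(tokens.shape)                             # torch.Size([1, 196, 768])
```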

[class] token

Similar to BERT's [class] token, we prepend a learnable embedding to the sequence of embedded patches (z_0^0 = x_class).

z_0 = [x_class; x_p^1 E; x_p^2 E; · · · ; x_p^N E] + E_pos,    E ∈ R^((P^2·C)×D), E_pos ∈ R^((N+1)×D)

Here x_class is the learnable class token and x_p^i (i = 1, ..., N) are the flattened image patches.

When pre-training with the transformer encoder, the class token always sits at the 0th position: when we pass the patch embeddings as inputs, we prepend the classification token as the first element of the sequence, as shown in the figure.
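Putting these pieces together, a hedged sketch of forming z_0 (prepend the learnable [class] token, then add the position embeddings E_pos) could look like this; the shapes correspond to ViT-Base/16 on a 224×224 image and the variable names are illustrative.

```python
import torch
import torch.nn as nn

B, N, D = 1, 196, 768                               # batch, number of patches, embedding dim
patch_tokens = torch.randn(B, N, D)                 # output of the patch-embedding layer

cls_token = nn.Parameter(torch.zeros(1, 1, D))      # learnable x_class
pos_embed = nn.Parameter(torch.zeros(1, N + 1, D))  # E_pos: one row per token, incl. [class]

z0 = torch.cat([cls_token.expand(B, -1, -1), patch_tokens], dim=1)  # [x_class; x_p^1 E; ...]
z0 = z0 + pos_embed                                 # add position information
print(z0.shape)                                     # torch.Size([1, 197, 768])
```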

Positional encodings / Embeddings

Since Transformers need to learn the inductive biases for the task they are being trained for, it is always beneficial to help that learning process by all means. Any inductive bias that we can include in the inputs of the model will facilitate its learning and improve the results.

Position embeddings are added to the patch embeddings to retain positional information. In Computer Vision, these embeddings can represent either the position of a feature in a 1-dimensional flattened sequence or they can represent a 2-dimensional position of a feature.

1-dimensional:  a sequence of patches, works better

2-dimensional: X-embedding and Y-embedding

Relative: Define the relative distance of all possible pairs.

(Figure: position embedding formula as per the attention mechanism)
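For reference, the fixed sinusoidal formulation from "Attention Is All You Need", which the caption above refers to, is the following, where pos is the token position, i indexes the embedding dimension, and D is the embedding dimension. Note that ViT itself simply learns its position embeddings rather than using this fixed formula.

PE(pos, 2i)   = sin( pos / 10000^(2i/D) )
PE(pos, 2i+1) = cos( pos / 10000^(2i/D) )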

Model architecture

If we do not provide the transformer with positional information, it has no idea of the order of the patch sequence (which patch comes first and which ones follow). This sequence of patch embeddings is then fed into the transformer encoder.

The encoder consists of alternating multi-headed self-attention (MSA) and MLP blocks, with layer normalization (LN) applied before every block and residual connections after every block. The MLP contains two layers with a GELU non-linearity. Finally, an extra learnable classification module (the MLP Head) is added on top of the transformer encoder, giving the network's output classes.

GELU is the Gaussian Error Linear Unit.

z′_ℓ = MSA(LN(z_(ℓ−1))) + z_(ℓ−1),        ℓ = 1 . . . L

z_ℓ  = MLP(LN(z′_ℓ)) + z′_ℓ,              ℓ = 1 . . . L
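A minimal PyTorch sketch of one encoder block implementing these two equations (layer normalization before each sub-block, a residual connection after each, and a two-layer MLP with GELU) might look like the following; the sizes are those of ViT-Base and the class name is mine.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, dim=768, num_heads=12, mlp_dim=3072):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.msa = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(                # two layers with a GELU non-linearity
            nn.Linear(dim, mlp_dim), nn.GELU(), nn.Linear(mlp_dim, dim))

    def forward(self, z):
        h = self.norm1(z)                        # z'_l = MSA(LN(z_{l-1})) + z_{l-1}
        z = self.msa(h, h, h, need_weights=False)[0] + z
        z = self.mlp(self.norm2(z)) + z          # z_l  = MLP(LN(z'_l)) + z'_l
        return z

out = EncoderBlock()(torch.randn(1, 197, 768))   # (B, N+1, D) in, same shape out
```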

Hybrid architecture

As an alternative to raw image patches, the input sequence can also be formed from the feature maps of a CNN; the patch embedding projection is then applied to patches extracted from these feature maps. The classification input embedding and position embeddings are added exactly as described above.

z_0 = [x_class; x_p^1 E; x_p^2 E; · · · ; x_p^N E] + E_pos,    E ∈ R^((P^2·C)×D), E_pos ∈ R^((N+1)×D)

Fine-tuning and Higher resolution

Supervised learning is used to pre-train on large datasets (e.g., ImageNet). For fine-tuning, the pre-trained prediction head is removed and a zero-initialized D × K feedforward layer is attached, where K is the number of downstream classes (e.g., K = 1000 for ImageNet).
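A rough sketch of that head swap (assuming PyTorch; D and K below are placeholders):

```python
import torch.nn as nn

D, K = 768, 1000                    # embedding dim, number of downstream classes (e.g. ImageNet)
head = nn.Linear(D, K)              # replaces the pre-trained prediction head
nn.init.zeros_(head.weight)         # zero-initialized, as described above
nn.init.zeros_(head.bias)
```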

It is often beneficial to fine-tune at a higher resolution than the one used for pre-training. When feeding images of higher resolution, the patch size is kept the same, which results in a larger effective sequence length.

The Vision Transformer can handle arbitrary sequence lengths (up to memory constraints); however, if the sequence becomes longer than during pre-training, the pre-trained position embeddings may no longer be meaningful.

2D interpolation of the pre-trained position embeddings is performed, according to their location in the original image. Note that this resolution adjustment and patch extraction are the only points at which an inductive bias about the 2D structure of the images is manually injected into the Vision Transformers.
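A hedged sketch of this 2D interpolation (assuming PyTorch; the function name, the bicubic mode, and the grid sizes are illustrative choices, not the paper's exact code):

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed, old_grid, new_grid):
    """pos_embed: (1, 1 + old_grid**2, D) -- [class] token row followed by the patch grid."""
    cls_pe, patch_pe = pos_embed[:, :1], pos_embed[:, 1:]
    D = patch_pe.shape[-1]
    patch_pe = patch_pe.reshape(1, old_grid, old_grid, D).permute(0, 3, 1, 2)   # back to a 2D grid
    patch_pe = F.interpolate(patch_pe, size=(new_grid, new_grid),
                             mode="bicubic", align_corners=False)               # 2D interpolation
    patch_pe = patch_pe.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, D)
    return torch.cat([cls_pe, patch_pe], dim=1)

# e.g. 224px pre-training (14x14 patches) -> 384px fine-tuning (24x24 patches)
new_pe = resize_pos_embed(torch.randn(1, 1 + 14 * 14, 768), 14, 24)
print(new_pe.shape)                 # torch.Size([1, 577, 768])
```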

Datasets

Dataset        Classes   Images
ImageNet       1,000     1.3 million
ImageNet-21k   21,000    14 million
JFT            18,000    303 million

The authors of the paper have trained the Vision Transformer on a private Google JFT-300M dataset containing 300 million (!) images, which resulted in state-of-the-art accuracy on a number of benchmarks ( Image below).

Model variants

Model       Layers   Hidden size D   MLP size   Heads   Params
ViT-Base    12       768             3072       12      86M
ViT-Large   24       1024            4096       16      307M
ViT-Huge    32       1280            5120       16      632M

Details of Vision Transformer model variants

The "Base" and "Large" models are directly adopted from BERT, and the larger "Huge" model is added on top. The notation encodes the patch size: for instance, ViT-L/16 means the "Large" variant with a 16×16 input patch size. The transformer's sequence length is inversely proportional to the square of the patch size, so models with a smaller patch size are computationally more expensive.
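A quick sanity check of the relation N = HW/P^2 for an illustrative 224×224 input:

```python
# Halving the patch size quadruples the sequence length (and hence the attention cost).
for patch in (32, 16, 14):
    n = (224 // patch) ** 2
    print(f"224x224 image, {patch}x{patch} patches -> sequence length N = {n}")
# 32 -> 49, 16 -> 196, 14 -> 256
```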

Comparison to state-of-the-art

The authors compare their largest models, ViT-H/14 and ViT-L/16, to state-of-the-art CNNs from the literature: Big Transfer (BiT), which performs supervised transfer learning with large ResNets, and Noisy Student, a large EfficientNet trained using semi-supervised learning on ImageNet and JFT-300M with the labels removed. At the time, Noisy Student was the state of the art on ImageNet and BiT-L on the other datasets reported here. All models were trained on TPUv3 hardware, and ViT required substantially fewer TPUv3-core-days (about 2,500) to pre-train.

Model size vs data size

ImageNet, ImageNet-21k, and JFT-300M serve as the small, medium, and huge pre-training datasets respectively. On the small dataset, ResNet (BiT) performs better, but as the pre-training data scales up, ViT pulls ahead and performs very well when pre-trained on JFT-300M. Regularization and optimization techniques such as learning-rate decay, dropout, and SGD with momentum are used during training on the very large dataset.

ResNets perform better with smaller pre-training datasets but plateau sooner than ViT, which performs better with larger pre-training. ViT-b is ViT-B with all hidden dimensions halved.

Scaling Data Study

The figure above shows transfer performance versus total pre-training compute. A few patterns can be observed.

First, Vision Transformers dominate ResNets on the performance/compute trade-off, using roughly 2 to 4 times less compute to attain the same performance (average over 5 datasets).

Second, hybrids slightly outperform ViT at small computational budgets, but the difference vanishes for larger models. This result is somewhat surprising since one might expect convolutional local feature processing to assist ViT at any size.

Third, Vision Transformers appear not to saturate within the range tried, motivating future scaling efforts.

Attention pattern analysis and self-supervised pre-training

To analyze what the model attends to, attention maps are computed from the output token back to the input space using Attention Rollout. The authors also perform a preliminary exploration of self-supervised pre-training via masked patch prediction: it yields a significant improvement of 2% over training from scratch, but is still 4% behind supervised pre-training. The exploration of contrastive pre-training is left to future work.

Summary / Conclusion

Transformers are no longer limited to NLP; they can also solve computer vision tasks.

Huge models (ViT-H) generally do better than large models (ViT-L) and win against state-of-the-art methods.

Vision transformers work better on large-scale data.

Attention Rollouts are used to compute the attention maps.

Like the GPT-3 and BERT models, the Vision Transformer also scales well.

Large-scale training outperforms inductive bias.

Convolutions are translation-invariant and locality-sensitive, but they lack a global understanding of the image.

So does this mean that CNNs are extinct? No! CNNs are still very effective for tasks like object detection and image classification. Since ViT needs very large datasets to shine, we can continue to rely on state-of-the-art convolutional architectures such as ResNet and EfficientNet for small and medium-sized datasets. However, transformers have been a breakthrough in natural language processing tasks such as language translation, and they show great promise in the field of computer vision as well.

Please do share if you like my post.

Reference

Images are taken from Google Images and published papers.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author's discretion.

Related


Chatgpt For Market Research: Gather Insights From Customer Reviews At Scale

As the digital world continues to expand, businesses now have greater access to customer feedback. With increased connectivity, customers are more willing than ever to express their opinions. However, the abundance of feedback and its unorganized nature present challenges in deriving meaningful insights. Consequently, businesses are finding it difficult to keep up with the real-time analysis of customer data.

Unstructured data holds immense value for businesses, as it offers insights into customer behavior, preferences, and emotions that structured data fails to capture. Nonetheless, extracting these insights requires the implementation of appropriate tools and strategies.

In this sense, artificial intelligence (AI) has recently emerged as a potent force, with one technology standing out as a game changer: ChatGPT.

Chatgpt: Groundbreaking Technology To Completely Transform The Market Research Landscape

ChatGPT is an artificial intelligence (AI) platform that blends natural language processing (NLP) and machine learning (ML) to create hypotheses, insights, and exceptional analysis at scale. It can help to expedite market research, condense and analyze massive amounts of data, and give a firm foundation of audience insights to inform marketing strategy.

Its real-time user interactions, personalized coaching, and lightning-fast replies set it apart from other technologies that have assisted us in our work. This technology has the ability to completely transform the market research landscape.

ChatGPT, however, is more than simply a tool; it is a way of thinking about how to blend human creativity with data-driven technology.

Unlocking Insights: How ChatGPT is changing Market Research

In almost no time, AI can integrate hundreds of data sources, detect patterns, and isolate essential facts and findings. As a result, the time constraints connected with this type of research are essentially eliminated.

Handle large volumes of data:

With the exponential growth of digital data, businesses need to be able to analyze and process vast amounts of information quickly and efficiently.

The natural language processing and machine learning capabilities of ChatGPT allow it to analyze and process huge amounts of unstructured data, such as customer feedback and social media posts, in real time.

For example, a global e-commerce company can use ChatGPT to analyze customer reviews and feedback from multiple countries and languages. By using ChatGPT's language translation capabilities, the company can gather insights from a larger pool of customers and identify global trends and patterns. This data can then be used to adjust its marketing strategies and product offerings in each market.

This enables businesses to gain meaningful insights into customer behavior, preferences, and trends, leading to improved decision-making and better customer experiences.
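As a purely illustrative sketch of how such an analysis could be wired up (this is not an official ChatGPT integration; the model name, prompt, and sample reviews are assumptions, and it requires an OpenAI API key):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

reviews = [
    "Shipping took three weeks and support never replied.",
    "Love the fabric quality, but sizes run small.",
]

prompt = (
    "Summarize the key themes and overall sentiment in the following customer reviews. "
    "Return a short bulleted list.\n\n" + "\n".join(f"- {r}" for r in reviews)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",                                   # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)                 # themes + sentiment summary
```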

Analyze unstructured data:

However, this type of data can be challenging to analyze using traditional methods. ChatGPT allows businesses to analyze unstructured data quickly and efficiently, identifying key themes, sentiments, and trends.

By leveraging ChatGPT’s ability to analyze unstructured data, brands can gain a deeper understanding of customer needs and preferences, leading to improved decision-making and better customer experiences.

Real-Time insights

In the world of market research, time is of the essence. To stay well ahead of the competition, organizations must understand their customers’ needs and preferences as quickly as possible.

Imagine you are a company that has just launched a new product. You want to know how your customers are responding to it, and more importantly, why.

Personalize market research:

The capacity of ChatGPT to understand natural language and handle massive volumes of data makes it an excellent tool for customized marketing. Companies can use ChatGPT to evaluate customer data, learn their preferences, and personalize marketing campaigns to specific customers. Customers tend to respond positively to marketing communications that are targeted particularly to them, which can lead to more effective marketing and higher consumer engagement.

For example, a clothing retailer can use ChatGPT to gather personalized insights into their customers' fashion preferences. By asking questions about their customers' style and clothing choices, ChatGPT can provide the retailer with valuable data on which styles are most popular and why. This sort of data can then be used to inform future product designs and marketing campaigns, resulting in increased customer satisfaction and sales.

Less expensive, more readily available

AI is making market research less expensive and more accessible to businesses of all sizes. For example, chatbots powered by AI can gather customer feedback and provide insights at a fraction of the cost of hiring a market research firm. This means that small businesses can now benefit from market research insights without breaking the bank.

Real-life Examples of ChatGPT Adoption in Market Research

1. Coca-Cola

The Coca-Cola Company will use OpenAI’s generative AI technology for marketing and customer experiences, among other things, making it one of the first large consumer products corporations to officially reveal usage of the much-touted technology.

2. Expedia

Expedia announced the beta launch of a new in-app travel planning experience powered by ChatGPT. Expedia already incorporates artificial intelligence (AI) and machine learning (ML) into its platform to provide a unified experience from planning to post-booking. AI and machine learning are utilized to provide personalized and relevant trip options to travelers based on characteristics such as hotel location, room type, date ranges, price points, and much more.

ChatGpt’s Role in Market Research

In short, it appears that ChatGPT and market research have a long, happy, and productive future together. As AI improves at understanding human emotions, expressions, speech, and language usage, it will become more accurate and valuable. AI is already assisting in information gathering, processing, and analysis; we can expect it to continue and expand into a crucial market research ally.


Acronis True Image 2023 Review: Fast And Powerful Image Backup

We recommend True Image for SMB and pro-sumers, and the anti-ransomware features will be of value to some, but the average user might be better off with something easier and more lightweight.

It would be quite easy to simply say that Acronis True Image 2023 New Generation is the best backup program: it's the fastest all-around imaging program we've tested, it's dead reliable, and it sports every imaginable option. But backup in the real world isn't about feature sets. While we love True Image in enterprise or SMB venues, its heavy system footprint and cornucopia of options are likely a wee bit much for the home user.

Design and features

True Image 2023 has an attractive interface that’s relatively easy to use—once you’re familiar with it. It’s light years better than it was before a re-design several years ago, but there are still little oddities, such as thin scroll bars with light-colored sliders on a dark background (the exact opposite of the Windows standard) that require more cognitive effort than they should.

IDG

It’s pretty, but don’t grab the dark portion of the slider—that’s the background. Little time-wasters like this appear throughout the interface.

Design oddities or no, True Image 2023 is a titan when it comes to features. In addition to imaging your system, whole disks, partitions, and groups of files, there’s a boot-time startup recovery option and an optional hidden partition for images. There’s every conceivable backup option: incremental, differential, super flexible scheduling, pre/post operation commands, email notifications, just to name a few. There’s also a one-way (mirroring) function for syncing a folder to a destination, as well as client apps for your mobile devices to keep those backed up.

Those are just the features in the Essential version, which costs $50. The $40-yearly Plus version offers 50GB of online storage as well as backup of your Facebook page, phone support, and updates. The $100-yearly Premium version ups that to 1TB of online storage plus Active Protection, which checks images and the program itself to see if anything has been altered, to fend off ransomware. Those prices reflect the fact that the Plus and Premium versions are true subscription software: only the restore functions are available if you don't re-up. $30 will buy you the local backup features in perpetuity.

IDG

Acronis Active Protection is a hedge against ransomware, but it’s available only in the $100 Premium version.

The Premium version also features Asign (not a misspelling), which signs your backups for authentication and stores the record with Blockchain, an online service and repository for digital assets. (Note that blockchain, or block chains, also refers to the technology/methodology employed—the company simply uses the name.)

All True Image versions provide bootable recovery media with the ability to restore to dissimilar hardware, i.e., not the same type of hardware that the backup was created on.

IDG

Be careful buying True Image. When Acronis says subscription, they mean it. A perpetual license is another $30.

Performance

On the other hand, True Image's speed and hefty feature set come at a price: a whopping six processes running in the background by default. While any performance hit will largely be unnoticeable on a relatively modern, multi-core system, loading them noticeably lengthened our boot times. There's a lot of stuff in the system tray as well. We'd very much like the ability to configure True Image to use fewer resources and keep a lower profile.

Asleep At The Wheel: Searching For Super

LAS VEGAS—Cool, intelligent car? Check. Controller wristwatch? Check. Now all you need is the leather jacket and 1980s perm to be Michael Knight.

Mobile device connections, active safety features and autonomous driving are turning cars into your own “personal robot,” as Nvidia CEO Jen-Hsun Huang describes it.

”The car will be your most important personal computer,” he told reporters at Nvidia’s press conference on Sunday. The company wants its upcoming 192-core Tegra K1 graphics chip to be used for HD video playback and 3D gaming for passengers, as well as for driver assistance programs such as collision avoidance. Along with GM, Honda and other carmakers, Nvidia is part of Google’s Open Automotive Alliance (OAA), announced Monday, that will bring the Android platform to cars in 2014 in a standardized infotainment ecosystem.

”There will be no big bang to get an autonomously driving car.”

Audi, another OAA member, showed off the second generation of its zFAS car “brain,” a tablet-sized piece of hardware that piloted an A7 sedan onto the stage during an Audi keynote presentation. The device was also parking Audis all by itself outside the Las Vegas convention center.

When viewed through rose-tinted spectacles, all the zFAS needs is a prissy accent and a turbo boost, and you’d have your own personal KITT.

Baby steps

Car enthusiasts at CES who are looking forward to super-intelligent, self-driving cars want to know when they’ll be able to fall asleep at the wheel while “driving” to work.

The most important step for BMW is high-resolution map data, Frickenstein said after speaking at a panel on how technology is changing driving. “Then, we can drive autonomously on the road.”

Induct Technology’s autonomous Navia shuttle.

Since then, production cars have been getting autonomous features such as driver assist, but cars at CES were taking the next step.

Just outside the convention center, France’s Induct Technology was demonstrating its just-launched Navia, a $250,000 self-driving shuttle designed to ferry people around university campuses, airports and other zones with limited traffic. The company calls it the world’s first self-driving commercially available vehicle. The Las Vegas Monorail zipped by overhead, of course, but it uses a purpose-built track.

”We use mainly SLAM (simultaneous location and mapping) lasers to map and detect obstacles,” Induct marketing and communications director Max Lefevre said as he ushered me into the all-electric shuttle. Soon it was silently transporting us around a test track. “The lasers see up to 200 yards, and the vehicle knows to either slow down or stop if there’s an obstacle.”


Some Induct customers will have a Navia fleet this year, Lefevre said, but he wouldn’t identify them. The shuttle has been extensively tested in areas full of pedestrians, he said, adding that legislative changes are needed for wider deployment.

I sat in the rear seat as the Taurus hurtled toward an intersection while another Ford vehicle to the right approached at speed from behind cars blocking the view. In what seemed like a second or two before impact, the Taurus alerted its driver to stop with flashing LEDs projected on the windshield, a sound alarm and vibrations in the seats. He then slammed on the brakes.

The NHTSA has been evaluating V2V tests and is expected to announce a policy for bringing it to commercial implementation in a few weeks, according to Farid Ahmed-Zaid, a technical expert in Ford’s Active Safety Department. While the technology could reduce fatal collisions dramatically, Ahmed-Zaid admitted that, “If GPS fails, then you don’t have anything.”

Eliminating the little hassles

Bosch’s self-parking car. 

Some industry observers are concerned that making cars smarter, more aware and more independent could erode driver skills. That could become an inevitable effect of automobile evolution, just as fewer people today can operate a manual transmission than in motoring’s early days.

There is also concern that loading smart cars with even more navigation features, cloud-linked data services and social media functions will only increase distracted driving. But those features are also seen as desirable, because as cars drive themselves more, drivers will need to be entertained. Android apps in the new OAA alliance will soon be competing with apps under the iOS in the Car standard announced by Apple last summer.

BMW’s i3 electric production car, available in the second quarter 2014 with a list price starting at $41,350, can already link with driver smartphones via the BMW i Remote app, sharing info on battery charge, whether doors are open or closed and other vehicle features. In a spin on this, still at the concept stage, BMW and Samsung showed how the phone maker’s Galaxy Gear smartwatch can link to the i3 and display information on drivers’ wrists (pictured at top), allowing them to command the car’s horn to sound if they’ve lost their i3 in a large parking lot.

If CES 2014 is anything to judge by, cars are getting increasingly connected to drivers and increasingly autonomous. This new relationship between car and driver evokes many science-fiction scenarios, but if you ask automotive insiders when the future of completely self-driving cars will arrive, don’t hold your breath.

5 Interesting Benefits Of Automatic License Plate Recognition

Over the past few years, we have seen several improvements within the security industry – one of which is automatic license plate recognition. ALPR technology works by reading registration numbers in seconds.

Firstly, the image is captured and enhanced with various manipulation techniques. Then, OCR (optical character recognition) is used to read the different letters and numbers.
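As a minimal sketch of that capture-enhance-read pipeline (assuming OpenCV and the pytesseract wrapper for Tesseract OCR; real ALPR systems also localize the plate in the frame first, which is skipped here, and "plate.jpg" is an illustrative cropped plate image):

```python
import cv2
import pytesseract

img = cv2.imread("plate.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)               # drop colour information
gray = cv2.bilateralFilter(gray, 11, 17, 17)               # denoise while keeping edges sharp
_, thresh = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # boost contrast for OCR

# --psm 7 tells Tesseract to expect a single line of text (the registration number)
text = pytesseract.image_to_string(thresh, config="--psm 7")
print("Detected plate:", text.strip())
```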

While some of us may have seen a plate reader in action, not many know their different uses. From assisting the police force to collecting tolls – you would be surprised at how frequently we cross paths with them.

Want to learn more? In this article, we will talk about five interesting benefits of automatic license plate recognition.

Let’s get started.

24/7 Monitoring

An ALPR system keeps watch around the clock, logging every vehicle that enters or leaves the premises. If you pair it with video surveillance, you have even better security. All of this can be used to assist law enforcement (which we will discuss further below).

Accessibility

Alongside using ALPR to assist with breaches and monitoring, it’s also a great preventative measure. For example, most criminals are less likely to target a facility with sound technology installed.

Efficiency

We all know how frustrating congestion can be in a crowded car park. By using an automatic reader, vehicles will be able to enter at a much faster rate, improving traffic flow significantly.

Efficiency can also be shown in the requirements of security personnel. Instead of having to walk around the entire premises, they can easily monitor who is inside via included CCTV footage.

Law Enforcement Assistance

Law enforcement offices use ALPR in many different ways. As well as gathering evidence for cases, it can be used to check vehicle registration quickly.

It’s also important to note that these systems are incredibly accurate. Eyewitness reports aren’t always the most reliable, but with ALPR, you can easily find the license plate number, vehicle make, and color involved in a crime.

Cost-Effective Solution

While you might think that implementing this type of technology is more expensive, that's actually not the case. In most situations, you'll be able to reduce the need for physical security personnel, thus cutting down your wage costs.

Final Words

As you can see, many great benefits are associated with automatic license plate recognition. In fact, it will be interesting to see how far this incredible technology will go in the future!

Google Offers Image Search Indexing Tips

In a Google Webmaster Hangout, a publisher asked if it makes a difference whether an image is published using a regular image tag or by displaying an image via CSS as a background image.

The response from Google's John Mueller was interesting because it could explain why some images don't perform well in Google Image Search. Here is what he said:

“…from our point of view, for image search, we would use the image tag with the source attribute pointing at the image… and as far as I know we don’t use CSS images at all for image search. So for normal web search it doesn’t matter. You can use whatever works best for you. If you want to have these images indexed in image search then I would definitely use a normal image tag for that.”

Takeaway:

If that's true, then for publishers who wish to keep an image out of Google Image Search, this represents yet another way to do that: the pages themselves are still indexed for normal web search, but the images won't surface in Google Image Search. For publishers who do want their images displayed in Google Image Search, this is a wake-up call to use standard image tags and avoid using CSS to display images as background images.

Will This Harm My Featured Snippets? Implications for Site Auditing

I don’t know how many search marketing professionals check how images are displayed, via regular HTML image tags or CSS background images, but this may be yet another issue to look into.  I myself am going to pay increased attention to the default image handling by CMS themes. If you or your client want your images to be indexed by Google Image Search, then you may wish to check how those images are coded.
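A small auditing sketch along those lines (assuming the requests and BeautifulSoup libraries; the URL is a placeholder): it lists images referenced by regular <img> tags versus images that only appear as CSS backgrounds.

```python
import re
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/").text
soup = BeautifulSoup(html, "html.parser")

# Images in standard <img> tags (eligible for Google Image Search per Mueller's comment)
img_srcs = [img.get("src") for img in soup.find_all("img") if img.get("src")]

# Background images declared inline or in <style> blocks
css_text = " ".join(tag.get("style", "") for tag in soup.find_all(style=True))
css_text += " " + " ".join(style.get_text() for style in soup.find_all("style"))
css_backgrounds = re.findall(r"background(?:-image)?\s*:\s*url\(([^)]+)\)", css_text)

print("Images in <img> tags:", img_srcs)
print("Images referenced only via CSS backgrounds:", css_backgrounds)
```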

What is Google Image Search and Should I worry?

Google image search is one of several kinds of specialized searches Google provides. For some publishers it’s an important source of traffic. But for many it’s not a big concern. You may wish to review your traffic logs and traffic analytics to see how much traffic is coming from Google Images and if that traffic is important.

Screenshot by Author, modified by Author
