Image recognition has probably been one of the hottest topics of 2014, with announcements such as the launch of the Amazon Firefly app and several million dollars of VC investment and M&A activity in this space. Image recognition has the potential to become ubiquitous in our day-to-day interactions with real-world objects that are connected to the digital world.
This talk will be divided into four parts. First, it will cover basic aspects of the technology: the different approaches, the types of objects that are recognized, and the limitations of each technique, through demonstrations. Second, the audience will be guided through the steps required to embed an image recognition solution into an app or service. Third, a number of vendor solutions will be described to give hands-on pointers to those willing to start integrating such solutions. Finally, the talk will discuss the future of image recognition in different fields.
You can watch the video of the presentation here: https://www.youtube.com/watch?v=ilbTvfchtQY
2. The visual recognition market is estimated to grow from $9.65 billion in 2014 to $25.65 billion by 2019. Source: Image Recognition Market, MarketsandMarkets, May 2014.
12. Choose the IR mode that fits best
Cloud Service On-Device SDK
13. Choose the IR mode that fits best
                     | Cloud Service      | On-Device SDK
IR requires Internet | Yes                | No
IR speed             | Depends on network | Controlled
Content updates      | Immediate          | Require local sync
Analytics            | Latest available   | Rely on app connection
14. Outline
What works with image recognition
How to put image recognition into your app
Vendor comparison
Trends
23. Takeaways
1. Image recognition is the door to a broad range of applications and services
2. Improve performance with better image databases
3. Choose on-device or cloud IR depending on your use case
4. Catchoom is already behind 420M interactions and looking to meet upcoming trends
27. Challenges with benchmarks
Label a database with both reference and test images
Identify infrastructure differences
Understand performance is not necessarily optimized for your use case
28. How to benchmark
Small dataset: 1. Contact the vendor
Full test:     1. Contact the vendor
               2. Label your database
               3. Use APIs
Editor's Notes
The visual recognition market is growing extremely quickly.
The two main reasons for this growth are kind of obvious:
There is a big proliferation of images on the Internet and;
There has also been a big expansion in the use of mobile for searching and purchasing
In December 1975, Kodak engineer Steve Sasson invented the digital camera. Ever since we have been able to process images and videos digitally, we’ve been developing visual recognition, trying to make machines understand the environment.
Visual Recognition at large is a field of activity that has many branches. It is important to know that each one uses different computer vision approaches and there is not yet one ring to rule them all.
The most prominent branches are Image Recognition, Face Recognition, Object Classification, and Optical Character Recognition, and each one has a different level of maturity.
Image Recognition enables a fast search for images in a database to match an image taken by a smartphone or tablet. The image match pulls up related content, and users can interact, shop or rate products.
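As a toy illustration of that database lookup, assuming each image has already been reduced to a feature vector by some extraction step (real engines use far more sophisticated descriptors and indexes, and the names below are hypothetical), the match can be a simple nearest-neighbour search:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_match(query_vec, database, threshold=0.9):
    """database: dict mapping reference image id -> feature vector.
    Returns the id of the closest reference image, or None if no
    match is confident enough to pull up related content."""
    best_id, best_score = None, threshold
    for image_id, ref_vec in database.items():
        score = cosine_similarity(query_vec, ref_vec)
        if score > best_score:
            best_id, best_score = image_id, score
    return best_id
```

A matched id would then be used to fetch the related content the user can interact with; a `None` result means the snapped photo is not in the reference database.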
Face Recognition is basically the same, but instead of comparing against images of arbitrary objects, it focuses on faces. Most face recognition solutions work by training a system with very large databases of previously labelled images of faces. The main use cases are security and photo album organization.
Object classification is a bit different in scope. Instead of searching for a very specific match in a database, it tries to understand the elements present in a picture. This is the closest to what a kid does: this is a chair, this is a dog, or more complex descriptions like "this is a steam train beneath the Swiss Matterhorn". The use case is simple: Google.
Optical Character Recognition identifies letters and numbers in an image. It is used, for instance, in digitizing ancient books.
In this tutorial, I’ll talk about Image Recognition and give you an overview of the technology, guidelines to build apps and services, and trends that we see in the market.
Why am I talking about IR in an AR conf?
Image Recognition is the door to most AR interactions in the world.
Via Image Recognition, a machine can tell what the user is seeing through her camera. If we know that, we can provide limitless options connected to the digital world.
For instance, we can augment the environment with an immersive experience that helps the user make a better decision.
Computer Vision tries to understand what is there and what is happening in the world via images and videos.
Let’s take a look at the world with the eyes of a machine and try to see what will make us suffer.
In the first row, you can find samples of objects that differ in the amount of visual pattern available for recognition.
In the second row, you see two kinds of objects that differ greatly in how many different samples can exist of the very same object.
It is important to set the expectations right with respect to the kinds of objects that I showed before and the technology that is available.
If an object has a lot of texture, it has a higher probability of being more distinguishable within a large set of images, for instance, book covers.
It does not work so well when two hundred objects have no distinguishing pattern and are all the same shade of grey.
On the other hand, if the goal is to say “this is a blue shirt”, object classification works smoothly.
If an object is deformable, we could create a database with tons of samples, but that becomes unmanageable if you want to do it for a hundred thousand objects.
On the other hand, you can still train a classification system with many samples of that object in different deformations.
What happens if an object is transparent?
Let me tell you a story: when Logitech launched a mouse that could work over glass surfaces a few years ago,... well, rumor has it that on the day of the demo, they had to scratch the glass to make it work. The reason was that the sensor needed to "see" the dirty dots and scratches to translate that into motion.
As another example, time-of-flight cameras like Kinect see through glass, or in other words, they do not see the glass in front of them.
These examples showcase the challenge that glass puts into any sensing.
-------
I’ve been restrictive here; for instance, Catchoom’s IR engine works with deformable objects, as long as they are textured. Textureless objects are possible, but it depends on the size of the database and how close two objects can be.
In this second part, I’ll cover fundamental aspects of project development and discuss the pieces that are necessary to deploy an app that includes image recognition.
There are three elements that you need for an Image Recognition app to be built.
The base of the pyramid is the image database. This is something that is often overlooked at the beginning. Sometimes we find customers who only think about the collection of images that will trigger experiences after they’ve already spent resources on building the app. We suggest spending as much time on the reference images as possible to get the best experience for your users.
The second piece is the technology component. There are many options here and I’ll give you some pointers in a minute.
And last but not least, Content is always king. Make sure your app is valuable to your users. Image recognition is impressive, but even more impressive is when users want to repeat and come back to your app.
Imagine you prepared your database with any of the images below. Then you try to recognize that logo with a query image like the one on top.
For different reference images, you’ll get very different results.
The message here is to devote time to the image database. Typically, you’ll learn what works and what doesn’t, but it is good to chat with us to know what will work and what may be an issue.
One of our customers augments tattoos. You definitely want to get it right before tattooing your skin.
On-device IR makes sense especially in cases where it is preferable to offload the server infrastructure and provide quick responses to users. This is the case for second-screen environments where the user gets content or offers in sync with a TV show.
Cloud IR on the other hand is very well suited for magazines or any content that is frequently updated and has a rather uniform traffic.
Let’s compare both at the feature level.
While on-device IR looks technically more appealing, it has some limitations when it comes to enabling common business interests like content updates or analytics.
In general, you will achieve the same results with both, so it depends on the use case or even your business model.
I’ll give you an overview of the vendors in the AR and outside the AR space that can help you with that.
In this list we have AR-vendors.
AR vendors offer IR that is used to trigger AR experiences at scale. In other words, they allow you to search through larger databases than would fit on a smartphone by relying on the cloud.
The disadvantage of most AR vendors who offer cloud IR, is that they’re designed only for AR and are not that flexible when used for non-AR use cases.
Also, for augmented reality it is now commonly known that patterns need well-spread texture. Image recognition is not as demanding, but benefits from curation.
In this list, we have vendors that offer the core service, independently of how you want to use it whether it is to render an AR experience, compare products or anything you’d like to do.
The table shows one additional column, “On Premises”. Instead of a SaaS, some vendors, including Catchoom, license the core server technology to allow others to build entire platforms. For example, Times of India, the largest publisher in India, runs Catchoom inside its servers, as do other AR browsers.
As you can see from this and the previous slide, Catchoom is the only vendor that offers solutions in both spaces, AR and IR, and also has the full set of options.
But the real reason why I like Catchoom is that we have a unique combination of ingredients in our magic sauce.
First, our image recognition tests are performed using pictures snapped by users in real world environments – so our technology knows how to handle difficult angles, blurry images, low light conditions and reflections.
Second, our passion for seamless interactions. Catchoom was built to give users an easy, seamless image recognition experience – with no knowledge of the technology required. They just keep snapping photos like they always do.
Third, the results speak for themselves. An independent benchmark study using images taken by real users rated Catchoom 20% higher on image recognition than our competitors. We also ensure a response within half a second regardless of your location thanks to our servers in the US and EU.
And last, you can build entire platforms. Whether you use our service or an on-premises installation, our image recognition software is designed to deliver outstanding performance regardless of the traffic or size of your database. From hundreds of requests per second, to millions of images, we’ve engineered our software to be prepared.
Catchoom is, in fact, already one of the most used IR engines.
Even though you may not have heard of the brand Catchoom, our solution is already behind 420 million image recognitions globally.
And now I’m getting to the last part of the talk to discuss some of the trends that we see in this space.
There are a number of businesses with long product lists whose popularity has a head and a long tail. This is typically the case for eCommerce sites.
What we see is an increasing demand to search on-device on a subset of images and if there is no match, continue with a cloud request.
We have patented technology to support this kind of environment without cutting any corners on performance.
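A minimal sketch of that hybrid flow, with the on-device subset and the cloud lookup as illustrative stand-ins (this is not Catchoom’s actual API):

```python
def search_on_device(query, local_index):
    """Look the query up in a small on-device index (the 'head' of the catalogue)."""
    return local_index.get(query)

def search_cloud(query, cloud_index):
    """Stand-in for a cloud IR request covering the long tail of the catalogue."""
    return cloud_index.get(query)

def hybrid_search(query, local_index, cloud_index):
    """Try on-device first for an instant response; on a miss, fall back to the cloud."""
    result = search_on_device(query, local_index)
    if result is not None:
        return result, "on-device"
    result = search_cloud(query, cloud_index)
    return result, ("cloud" if result is not None else "no match")
```

The design point is that popular items resolve instantly with no network round trip, while the cloud request only fires for the long tail.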
Imagine you’re a technician that has to repair a very specific part in a Star Destroyer.
How can you search through the whole catalogue of parts in a fraction of a second just by scanning that part?
This is another research line that Catchoom is working on right now.
Fashion is one of the most exciting sectors for image recognition.
Being able to recognize a pair of shoes, a handbag or a complete look is in the mindset of thousands of fashionistas around the world.
Catchoom is investing in recent advances in the field of computer vision using a technique that is called deep learning. Deep learning allows neural networks to learn the visual properties of certain objects and be able to classify them with very high precision.
-----
Those three are the main trends that we see in the IR space, and Catchoom Labs is heavily investing in building the technology that will make them possible in the near future.
1. Image recognition is the door to a broad range of applications and services in a fast growing market.
2. You can significantly improve the performance with better image databases.
3. Choose on-device or cloud depending on your technical and business needs.
4. Catchoom is already behind 420M interactions and is working on the current trends to meet them in the near future.
Please visit our booth in the next couple of days for live demos.
Thank you very much for your time!
There are a number of challenges when trying to compare the performance of image recognition vendors.
1. How many of you have around 100,000 images on both sides of the equation, references and test images?
That’s probably around the number you need as you scale toward 1M images.
2. Is the infrastructure showing the real experience that your users will have?
Let me give you an example, Catchoom has servers in the US and in EU that allow apps to connect to the closest server wherever you are in the world. Is your app global, or simply your customer is in another continent? Take that into account.
3. Performance is not necessarily optimized for a specific use case. So the question is: does that vendor really perform that well, or that poorly, for your case?
Most vendors provide the same experience to all customers because they cannot fine-tune parameters, but rather offer performance that is on average good for a large variety of cases.
If you use 100,000 images, you probably have multiple use cases represented, but if you just have a few, you may not show the full benchmark of that solution.
You’re probably in one of two situations:
Situation #1: you have a customer with very few images, and you just want it to work like a charm.
Situation #2: you’re building a self-service offering, where your customers or partners will upload images without any supervision.
In both cases, my suggestion is to contact the vendor to learn exactly what is possible and what is not, and whether some tweaks here and there can significantly improve the results.
For instance, at Catchoom, we look at particular cases in your results to try to identify improvements, or simply different profiles of the internal parameters that can be tuned for your case.
But the reality is that unless you have an On Premises license, you won’t be able to fine-tune any parameter, as all cloud service providers apply the same settings across all customers.
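The benchmarking steps discussed above (label a database, then use the APIs) can be sketched as a loop over a labelled test set that measures both accuracy and latency. The `recognize` function here is a hypothetical stand-in for whatever vendor API you integrate:

```python
import time

def recognize(query_image_id, reference_db):
    """Stand-in for a vendor IR call: returns the matched reference id or None.
    In a real benchmark this would be an HTTP request to the vendor's API."""
    time.sleep(0.001)  # simulate network/processing latency
    return reference_db.get(query_image_id)

def benchmark(test_set, reference_db):
    """test_set: list of (query_image_id, expected_reference_id) pairs,
    i.e. the labelled images on both sides of the equation.
    Returns (accuracy, average latency in milliseconds)."""
    correct, latencies = 0, []
    for query_id, expected in test_set:
        start = time.perf_counter()
        result = recognize(query_id, reference_db)
        latencies.append(time.perf_counter() - start)
        if result == expected:
            correct += 1
    accuracy = correct / len(test_set)
    avg_latency_ms = 1000 * sum(latencies) / len(latencies)
    return accuracy, avg_latency_ms
```

Running the same harness against each vendor, from the same region your users are in, is what makes the infrastructure differences mentioned earlier visible.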