The web has become an endless repository of images that startups, enterprises, and academics are increasingly looking to decode and leverage to drive customer engagement and increase sales. StileEye’s co-founders Vwani Roychowdhury, a professor at UCLA, and Sudhir Singh, a PhD graduate from UCLA, believe they may have cracked the code when it comes to doing just that for the fashion industry. The startup today announced the launch of its platform for the web, iOS, and Android, designed to be a visual engine for fashion lovers that lets shoppers ‘window shop’ through an online image catalog.
Leveraging computer vision algorithms and a patented proprietary framework, developed by Roychowdhury’s research group at UCLA, to crawl and break down visual information on the web, the duo aims to bring together consumers, trendsetters, and online retailers on a single platform. “We started thinking about how do we understand visual cues, visual content, and images from the large-scale data from the web. We came up with the platform where we can crawl the images on the web and understand them like a human would,” said Roychowdhury in an interview with BetaKit. “The challenge is really understanding patterns, shapes, soft objects, like fashion objects, and there’s a need in fashion.”
Built on the premise that when consumers see something they like, they either want exactly that item or something close to it, StileEye lets a user take a photo of any dress or bag, or use an existing image of the item, and upload it to instantly search for other items that match its shape and style with a high degree of accuracy. If they see something they like, the company maintains an online inventory of more than one million products, which shoppers can purchase via partner online merchants.
The technology can also break down images by pattern, color, brand, and price range for advanced filtering and browsing. A typical scenario might be a user casually flipping through a magazine and noticing a handbag a celebrity was photographed with: the user can take a photo and get search results for bags that look just like it. They can then browse several options, filter for a different color or pattern, and, when the right one comes along, hit the shopping cart icon to purchase it online.
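StileEye has not published how its matching works, but the search-then-filter flow described above is typically built on embedding-based similarity: each catalog image is reduced to a feature vector by a vision model, a query photo is embedded the same way, and results are ranked by vector similarity within any metadata filters the shopper applies. The sketch below is purely illustrative — the catalog, random stand-in embeddings, and function names are hypothetical, not StileEye’s actual system.

```python
import numpy as np

# Hypothetical product catalog: each item carries a precomputed feature
# embedding (random stand-ins here for real vision-model features) plus
# metadata such as color for faceted filtering.
rng = np.random.default_rng(0)
catalog = [
    {"name": f"bag-{i}", "color": color, "embedding": rng.normal(size=64)}
    for i, color in enumerate(["black", "red", "black", "tan", "red"] * 20)
]

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def visual_search(query_embedding, catalog, color=None, top_k=5):
    """Rank catalog items by visual similarity to the query image,
    optionally restricted to a metadata attribute such as color."""
    candidates = [it for it in catalog if color is None or it["color"] == color]
    ranked = sorted(
        candidates,
        key=lambda it: cosine_similarity(query_embedding, it["embedding"]),
        reverse=True,
    )
    return ranked[:top_k]

# The shopper's photo, reduced to an embedding by the same hypothetical model.
query = rng.normal(size=64)
matches = visual_search(query, catalog, color="red", top_k=3)
```

In a production system the brute-force scan would be replaced by an approximate nearest-neighbour index so a million-item inventory can be searched in milliseconds, but the shape of the pipeline is the same.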
The monetization strategy behind StileEye is two-fold. The first piece is revenue sharing: taking a small percentage of each transaction that goes through the service. The second is licensing the technology to other online retailers for their own product search, so that a shopper who wanted something similar to a dress they liked could easily get results from the retailer’s image database.
Though existing services like Polyvore and Pose also incorporate image recognition technology, with clothing curated by fashion insiders, Singh said the platform’s main distinguishing factor is its ability to process and dissect user-generated images. “They [Polyvore] have created a very social network of users, so there are trend setters, and selectors, who manually make up sets, it’s a purely social place. So we’re only using the aspect that people like to follow people and like to browse…imagine bringing in the web on top of it because of the unique technological breakthrough we have made and making it scalable,” Roychowdhury added.
Although currently limited to dresses and bags, the company plans to add a number of other categories, including skirts, jewelry, tops, and shoes, in the coming months. Singh also mentioned that in addition to licensing the technology to other businesses, the company plans to open it up to third-party developers so they can build additional apps and features for the platform. Fashion is certainly the first vertical in which the technology has immediate application, but it will be interesting to see which other industries the company looks to next.