Google Blog: “At Search On today, we shared how we’re getting closer to making search experiences that reflect how we as people make sense of the world, thanks to advancements in machine learning. With a deeper understanding of information in its many forms — from language, to images, to things in the real world — we’re able to unlock entirely new ways to help people gather and explore information. We’re advancing visual search to be far more natural than ever before, and we’re helping people navigate information more intuitively. Here’s a closer look.

Helping you search outside the box – With Lens, you can search the world around you with your camera or an image. (People now use it to answer more than 8 billion questions every month!) Earlier this year, we made visual search even more natural with the introduction of multisearch, a major milestone in how you can search for information. With multisearch, you can take a picture or use a screenshot and then add text to it — similar to the way you might naturally point at something and ask a question about it. Multisearch is available in English globally, and will be coming to over 70 languages in the next few months.

At Google I/O, we previewed how we’re supercharging this capability with “multisearch near me,” enabling you to snap a picture or take a screenshot of a dish or an item, then find it nearby instantly. This new way of searching will help you find and connect with local businesses, whether you’re looking to support your neighborhood shop, or just need something right now. “Multisearch near me” will start rolling out in English in the U.S. later this fall…”
See also The Verge: Google is trying to reinvent search — by being more than a search engine. “The internet is more visual and more interactive than ever. So how does the world’s biggest search engine change to fit the times? By redefining the whole idea.”