vip posted on 2019-11-14 09:32:00

The simplest way is the best: Google Search's AI has taken another big step forward. It's significant, and it affects SEO.

Understanding searches better than ever before
Pandu Nayak
Google Fellow and Vice President, Search
Published Oct 25, 2019
If there’s one thing I’ve learned over the 15 years working on Google Search, it’s that people’s curiosity is endless. We see billions of searches every day, and 15 percent of those queries are ones we haven’t seen before--so we’ve built ways to return results for queries we can’t anticipate.
When people like you or I come to Search, we aren’t always quite sure about the best way to formulate a query. We might not know the right words to use, or how to spell something, because oftentimes, we come to Search looking to learn--we don’t necessarily have the knowledge to begin with.
At its core, Search is about understanding language. It’s our job to figure out what you’re searching for and surface helpful information from the web, no matter how you spell or combine the words in your query. While we’ve continued to improve our language understanding capabilities over the years, we sometimes still don’t quite get it right, particularly with complex or conversational queries. In fact, that’s one of the reasons why people often use “keyword-ese,” typing strings of words that they think we’ll understand, but aren’t actually how they’d naturally ask a question.
With the latest advancements from our research team in the science of language understanding--made possible by machine learning--we’re making a significant improvement to how we understand queries, representing the biggest leap forward in the past five years, and one of the biggest leaps forward in the history of Search.
Applying BERT models to Search
Last year, we introduced and open-sourced a neural network-based technique for natural language processing (NLP) pre-training called Bidirectional Encoder Representations from Transformers, or as we call it--BERT, for short. This technology enables anyone to train their own state-of-the-art question answering system.
This breakthrough was the result of Google research on transformers: models that process words in relation to all the other words in a sentence, rather than one-by-one in order. BERT models can therefore consider the full context of a word by looking at the words that come before and after it—particularly useful for understanding the intent behind search queries.
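To make the "full context" idea concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the public bert-base-uncased checkpoint (my choice for illustration; Google's production Search models are not public). A masked-language model has to use the words on both sides of the blank to fill it in:

```python
# Hedged illustration: BERT predicts a masked word from context on BOTH
# sides, unlike a left-to-right language model.
# Requires: pip install transformers torch
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The right-hand context ("to enter the country") is what makes words
# like "visa" or "passport" likely; a left-to-right model cannot see it.
for pred in fill("The traveler needs a [MASK] to enter the country.")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```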
But it’s not just advancements in software that can make this possible: we needed new hardware too. Some of the models we can build with BERT are so complex that they push the limits of what we can do using traditional hardware, so for the first time we’re using the latest Cloud TPUs to serve search results and get you more relevant information quickly.
Cracking your queries
So that’s a lot of technical details, but what does it all mean for you? Well, by applying BERT models to both ranking and featured snippets in Search, we’re able to do a much better job helping you find useful information. In fact, when it comes to ranking results, BERT will help Search better understand one in 10 searches in the U.S. in English, and we’ll bring this to more languages and locales over time.
Particularly for longer, more conversational queries, or searches where prepositions like “for” and “to” matter a lot to the meaning, Search will be able to understand the context of the words in your query. You can search in a way that feels natural for you.
To launch these improvements, we did a lot of testing to ensure that the changes actually are more helpful. Here are some of the examples that showed up in our evaluation process that demonstrate BERT’s ability to understand the intent behind your search.
Here’s a search for “2019 brazil traveler to usa need a visa.” The word “to” and its relationship to the other words in the query are particularly important to understanding the meaning. It’s about a Brazilian traveling to the U.S., and not the other way around. Previously, our algorithms wouldn't understand the importance of this connection, and we returned results about U.S. citizens traveling to Brazil. With BERT, Search is able to grasp this nuance and know that the very common word “to” actually matters a lot here, and we can provide a much more relevant result for this query.
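As a rough illustration of why the direction of "to" is recoverable (a sketch with the public bert-base-uncased checkpoint, not Google's ranking system): a bag-of-words model sees the two queries below as identical, while a contextual model gives them different representations.

```python
# Hedged sketch: the two queries contain exactly the same words, but
# BERT encodes them differently because "to" points in opposite directions.
# Requires: pip install transformers torch
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(query: str) -> torch.Tensor:
    # Mean-pool the final hidden states into a single query vector.
    inputs = tokenizer(query, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)

a = embed("2019 brazil traveler to usa need a visa")
b = embed("2019 usa traveler to brazil need a visa")
# Same bag of words, similarity well below 1.0: word order is encoded.
print(torch.cosine_similarity(a, b, dim=0).item())
```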

Let’s look at another query: “do estheticians stand a lot at work.” Previously, our systems were taking an approach of matching keywords, matching the term “stand-alone” in the result with the word “stand” in the query. But that isn’t the right use of the word “stand” in context. Our BERT models, on the other hand, understand that “stand” is related to the concept of the physical demands of a job, and display a more useful response.
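The same contextual behavior shows up at the level of a single token. In this sketch (again using the public checkpoint as a stand-in for Google's models), the vector assigned to "stand" depends on the sentence around it, which is why "stand-alone" stops looking like a match for a query about standing at work:

```python
# Hedged sketch: the same surface token "stand" gets different contextual
# vectors in different sentences. Requires: pip install transformers torch
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def stand_vector(text: str) -> torch.Tensor:
    # Return the contextual embedding of the first "stand" token.
    inputs = tokenizer(text, return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return hidden[0, tokens.index("stand")]

work = stand_vector("do estheticians stand a lot at work")
posture = stand_vector("nurses stand on their feet all day")
alone = stand_vector("this is a stand-alone software package")

# Expect the two physical-posture uses to be closer to each other than to
# "stand-alone"; exact numbers depend on the checkpoint.
print(torch.cosine_similarity(work, posture, dim=0).item())
print(torch.cosine_similarity(work, alone, dim=0).item())
```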

Here are some other examples where BERT has helped us grasp the subtle nuances of language that computers don’t quite understand the way humans do.



Improving Search in more languages
We’re also applying BERT to make Search better for people across the world. A powerful characteristic of these systems is that they can take learnings from one language and apply them to others. So we can take models that learn from improvements in English (a language where the vast majority of web content exists) and apply them to other languages. This helps us better return relevant results in the many languages that Search is offered in.
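As a rough public analogue (bert-base-multilingual-cased; Google's internal models are not released), a single multilingual BERT shares one vocabulary and one set of weights across languages, so the same model can fill in masked words in, say, English and Portuguese:

```python
# Hedged sketch: one multilingual checkpoint, one vocabulary, many languages.
# Requires: pip install transformers torch
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

# The same weights handle both an English and a Portuguese sentence.
print(fill("Paris is the capital of [MASK].")[0]["token_str"])
print(fill("Paris é a capital da [MASK].")[0]["token_str"])
```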
We’re also using a BERT model to improve featured snippets in the two dozen countries where this feature is available, and seeing significant improvements in languages like Korean, Hindi and Portuguese.
Search is not a solved problem
No matter what you’re looking for, or what language you speak, we hope you’re able to let go of some of your keyword-ese and search in a way that feels natural for you. But you’ll still stump Google from time to time. Even with BERT, we don’t always get it right. If you search for “what state is south of Nebraska,” BERT’s best guess is a community called “South Nebraska.” (If you've got a feeling it's not in Kansas, you're right.)
Language understanding remains an ongoing challenge, and it keeps us motivated to continue to improve Search. We’re always getting better and working to find the meaning in -- and most helpful information for -- every query you send our way.
      

vip posted on 2019-11-14 09:49:00

BERT-style search: natural searching
Google's latest algorithm change, BERT, materially affects 10 percent of all queries. Google now pays attention to the "trivial" short words it used to discard as "stop words." Words such as "at," "to," "as," and "if" now contribute to meaning (try searching shanghai to nyc). With BERT, Google Search is becoming more semantic and less database-like. (For those who want a "database-like" search experience, Google keeps its Verbatim search option.)
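A toy sketch of the shift described above (illustrative only, not Google's actual pipeline): classic keyword retrieval throws stop words away, so two opposite-direction queries collapse into the same bag of terms.

```python
# Toy illustration: once stop words like "to" are discarded, the two
# opposite queries below become indistinguishable; a contextual model
# such as BERT keeps them apart.
STOP_WORDS = {"to", "at", "as", "if", "a", "the", "for"}

def keywordese(query: str) -> set:
    # Old-style normalization: lowercase, split, drop stop words.
    return {t for t in query.lower().split() if t not in STOP_WORDS}

print(keywordese("shanghai to nyc"))  # {'shanghai', 'nyc'} (order may vary)
print(keywordese("shanghai to nyc") == keywordese("nyc to shanghai"))  # True
```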
What BERT tells us is to search in natural language, especially for short questions. For example, start a query with "what is," "how much," "top companies in," "competitors of," and the like.
BERT (as one part of a broader shift toward semantic search) teaches us to embrace and fit into Google's semantic search. To get better results on Google, keep your searches as simple as possible. It is better to lean on what machine learning can do than to force a search with long OR strings or other Boolean operators. Of course, if you are expert enough, advanced operators still work; it is just that the OR operator is becoming obsolete.
There is no textbook for Google queries. In developing the skill of searching, it is human "natural intelligence" that matters.
      

vip posted on 2019-11-14 09:51:00

Understanding Google BERT
https://blog.csdn.net/qq_33373858/article/details/85063980
https://blog.csdn.net/qq_39521554/article/details/83062188
On the technical level, foreign-trade people like us cannot fully follow it anyway; we only care about how the results change. I believe traditional SEO will change because of this.
Take the search in the screenshot above as an example: a page that merely stuffs the words shanghai and newyork can no longer win this search. BERT judges that a user typing shanghai to nyc is asking how to fly from Shanghai to New York and is looking for flight information, so the results are almost all Shanghai-to-New-York flights.
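A hedged sketch of the same point with the open sentence-transformers library (a public stand-in; Google's ranking stack is not public): a semantic encoder can match shanghai to nyc to a flight page even though the literal string "nyc" never appears on it.

```python
# Hedged sketch: semantic matching beyond literal keyword overlap.
# Requires: pip install sentence-transformers
# all-MiniLM-L6-v2 is a public checkpoint chosen for illustration only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "shanghai to nyc"
pages = [
    "Cheap flights from Shanghai (PVG) to New York (JFK), daily departures.",
    "Shanghai weather forecast and travel photography blog.",
]
scores = util.cos_sim(model.encode(query), model.encode(pages))
print(scores)  # expect the flight page to score noticeably higher
```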
How can SEO adapt to this change? Do your own work well. In another thread there is a quote from the Google Search team that is really addressed to SEO practitioners as well:
Is the site the best source of information for the relevant content?
What is the site's main purpose in providing this information?
Who is the site trying to help by providing this information?
Is the information on the site consistent with other trusted sources?
What is the site's reputation? Are there reviews of the site from others? I believe the AI can read reviews, and will recognize paid shills.
http://waimaoluntan.com/thread-8001715-1-1.html
      

vip posted on 2019-11-14 10:08:00

懒人工具 will experiment with BERT-style test searches; if there is progress, the tool will be improved or new search options added directly.
      

bosunhu posted on 2019-11-14 10:11:00

Impressive, AI!
      

vip posted on 2019-11-14 10:13:00

Another thing worth mentioning: in this Google Search VP's article, the demo screenshots all come from mobile search, not desktop search.
Mobile search input is more often natural-language voice input rather than the keyword-style queries people are used to on desktop, and that has pushed the adoption of this new technology.