Automating The Google Knowledge Graph With Google's Knowledge Vault

Sep 3, 2014 - 8:29 am
Filed Under Google

The New Scientist reports that Google is building a version of the knowledge graph that expands its knowledge through algorithms at mass scale - Google calls it the Knowledge Vault.

Google is building the largest store of knowledge in human history – and it's doing so without any human help.

Instead, Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it.

I honestly thought the Knowledge Graph wasn't built by hand either. Dumb me. Okay, I am not that dumb — the Knowledge Graph was by no means built entirely by hand. I am confident Google didn't hire armies of people to copy and paste content into a database for them.

The Knowledge Vault, in my opinion, is just better at the automated part. As Google continued to revamp and improve the Knowledge Graph, it became better at picking off content from your website and storing it in a more structured fashion, which Google can then use as answers without credit.

A statement like this from the article makes me go wow:

This existing base, called Knowledge Graph, relies on crowdsourcing to expand its information. But the firm noticed that growth was stalling; humans could only take it so far.

Really? That cannot be accurate.

So Google decided it needed to automate the process. It started building the Vault by using an algorithm to automatically pull in information from all over the web, using machine learning to turn the raw data into usable pieces of knowledge.

I find this hard to believe.

Google used algorithms to pick off data from sources such as "Wikipedia, subject-specific resources like Weather Underground, publicly available data from Freebase.com, and Google search data." In fact, on that page, Google says it gathers data for the Knowledge Graph in an "automated" fashion, so there can be problems, and it wants them reported:

The information in these sections is compiled by automated systems, so there's always a chance that some of the information is incorrect or no longer relevant.

I assume the Knowledge Vault is simply better at crawling, indexing, and borrowing content from more sources, in a more automated fashion, than the Knowledge Graph.

So are you concerned now? When does this become more than a Swiss Army knife and leave you out of the equation?

Forum discussion at WebmasterWorld.

Image credit to BigStockPhoto for vault
