Automating The Google Knowledge Graph With Google's Knowledge Vault

Sep 3, 2014 - 8:29 am
Filed Under Google

The New Scientist reports that Google is building a version of the Knowledge Graph that expands its knowledge through algorithms at massive scale - Google calls it the Knowledge Vault.

Google is building the largest store of knowledge in human history – and it's doing so without any human help.

Instead, Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it.

I honestly thought the Knowledge Graph wasn't built by hand either. Dumb me. Okay, I am not that dumb. The Knowledge Graph was by no means assembled entirely by hand. I am confident Google didn't hire armies of people to copy and paste content into a database for it.

The Knowledge Vault, in my opinion, is just better at the automated part. As Google continued to revamp and improve the Knowledge Graph, it became better at picking off content from your web site and storing it in a more structured fashion, which Google can then use as answers without credit.
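To make the "picking off content" part concrete: many sites annotate their pages with schema.org markup, and once a crawler has fetched the HTML, lifting those facts out is almost trivial. Here is a minimal, hypothetical sketch in Python - this is not Google's actual pipeline, just an illustration of how embedded JSON-LD can be pulled straight out of a page:

# Minimal sketch (not Google's pipeline): collect the schema.org facts
# a site embeds in <script type="application/ld+json"> blocks.
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.facts = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs from the parser
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld:
            try:
                self.facts.append(json.loads(data))
            except json.JSONDecodeError:
                pass  # malformed markup is common on the web; skip it

page = """<script type="application/ld+json">
{"@type": "Organization", "name": "Example Co", "foundingDate": "1998"}
</script>"""

extractor = JSONLDExtractor()
extractor.feed(page)
print(extractor.facts)
# [{'@type': 'Organization', 'name': 'Example Co', 'foundingDate': '1998'}]

Once a site hands its facts over in that form, it is easy to see how they end up surfaced as direct answers, credited or not.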

A statement like this from the article makes me go wow:

This existing base, called Knowledge Graph, relies on crowdsourcing to expand its information. But the firm noticed that growth was stalling; humans could only take it so far.

Really? That cannot be accurate.

So Google decided it needed to automate the process. It started building the Vault by using an algorithm to automatically pull in information from all over the web, using machine learning to turn the raw data into usable pieces of knowledge.

I find this hard to believe.

Google used algorithms to pick off data from sources such as "Wikipedia, subject-specific resources like Weather Underground, publicly available data from Freebase.com, and Google search data." In fact, on that page, Google says it gets data for the Knowledge Graph in an "automated" fashion, so there can be problems and it wants them reported.

The information in these sections is compiled by automated systems, so there's always a chance that some of the information is incorrect or no longer relevant.

I assume the Knowledge Vault is simply better at crawling, indexing and borrowing content from more sources, in a more automated fashion, than the Knowledge Graph.
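For what it's worth, Google's 2014 Knowledge Vault paper describes exactly this kind of fusion: facts are stored as (subject, predicate, object) triples, and confidence scores from many different extractors are merged into one. The toy sketch below uses a simple noisy-OR combination - my own simplification, not the learned classifier the paper describes:

# Toy illustration (my simplification, not Google's actual model): fuse
# the same fact seen by several extractors into one confidence score.
# Noisy-OR: a fact is wrong only if every extractor that saw it is wrong.
from collections import defaultdict

def fuse(extractions):
    """extractions: list of ((subject, predicate, object), confidence)."""
    p_all_wrong = defaultdict(lambda: 1.0)
    for triple, confidence in extractions:
        p_all_wrong[triple] *= 1.0 - confidence
    return {triple: 1.0 - p for triple, p in p_all_wrong.items()}

extractions = [
    (("Barack Obama", "bornIn", "Honolulu"), 0.7),  # free-text extractor
    (("Barack Obama", "bornIn", "Honolulu"), 0.9),  # HTML-table extractor
    (("Barack Obama", "bornIn", "Kenya"), 0.1),     # one noisy source
]

for triple, score in fuse(extractions).items():
    print(triple, round(score, 2))
# ('Barack Obama', 'bornIn', 'Honolulu') 0.97
# ('Barack Obama', 'bornIn', 'Kenya') 0.1

The real system also reportedly weighs each candidate fact against priors learned from existing Freebase data, which the toy above ignores entirely.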

So are you concerned now? When does this become more than a Swiss Army knife and leave you out of the equation?

Forum discussion at WebmasterWorld.

Image credit to BigStockPhoto for vault