Automating The Google Knowledge Graph With Google's Knowledge Vault

Sep 3, 2014 - 8:29 am
Filed Under Google

New Scientist reports that Google is building a version of the Knowledge Graph that expands its knowledge through algorithms at massive scale - Google calls it the Knowledge Vault.

Google is building the largest store of knowledge in human history – and it's doing so without any human help.

Instead, Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it.

I honestly thought the Knowledge Graph was not built by hand either. Dumb me. Okay, I am not that dumb. The Knowledge Graph was by no means built by hand; I am confident Google didn't hire armies of people to copy and paste content into a database for them.

The Knowledge Vault, in my opinion, is just better at the automated part. As Google continued to revamp and improve the Knowledge Graph, it got better at picking off content from your web site and storing it in a more structured fashion, which Google can then use as answers without crediting you.
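For what it's worth, here is a minimal sketch of how a crawler can lift structured facts straight out of a page via its schema.org JSON-LD markup. This is my own illustration using only Python's standard library, not how Google's extractors actually work:

```python
# Toy example: extract schema.org JSON-LD blocks from raw HTML.
# Illustrative only - not Google's actual extraction code.
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> tags."""

    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.facts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self.in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_jsonld = False

    def handle_data(self, data):
        if self.in_jsonld and data.strip():
            self.facts.append(json.loads(data))

html = '''<script type="application/ld+json">
{"@type": "Person", "name": "Ada Lovelace", "birthDate": "1815-12-10"}
</script>'''

extractor = JSONLDExtractor()
extractor.feed(html)
print(extractor.facts)
# [{'@type': 'Person', 'name': 'Ada Lovelace', 'birthDate': '1815-12-10'}]
```

Once facts are sitting in your markup like that, no human has to copy anything anywhere - which is exactly the point.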

A statement like this from the article makes me go wow:

This existing base, called Knowledge Graph, relies on crowdsourcing to expand its information. But the firm noticed that growth was stalling; humans could only take it so far.

Really? That cannot be accurate.

So Google decided it needed to automate the process. It started building the Vault by using an algorithm to automatically pull in information from all over the web, using machine learning to turn the raw data into usable pieces of knowledge.

I find this hard to believe.
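Hard to believe or not, the research paper behind the Vault describes assigning each extracted fact a probability of being true by fusing signals from multiple extractors. The formula below is my own toy stand-in (a naive log-odds combination), not the paper's actual model:

```python
# Toy stand-in for probabilistic knowledge fusion: combine several
# extractors' confidence scores into one belief via log-odds.
# The weighting here is invented purely for illustration.
import math

def fact_score(confidences, prior=0.5):
    """Naive log-odds combination of independent confidence estimates."""
    log_odds = math.log(prior / (1 - prior))
    for c in confidences:
        c = min(max(c, 1e-6), 1 - 1e-6)  # clamp away from 0 and 1
        log_odds += math.log(c / (1 - c))
    return 1 / (1 + math.exp(-log_odds))

# Two confident extractors outvote one skeptic:
print(round(fact_score([0.9, 0.8, 0.3]), 3))  # ~0.939
```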

Google used algorithms to pick off data from sources such as "Wikipedia, subject-specific resources like Weather Underground, publicly available data from Freebase.com, and Google search data." In fact, on that page, Google says it gets data for the Knowledge Graph in an "automated" fashion, so there can be problems, and it wants them reported.

The information in these sections is compiled by automated systems, so there's always a chance that some of the information is incorrect or no longer relevant.

I assume the Knowledge Vault is simply better at crawling, indexing and borrowing content from more sources, in a more automated fashion, than the Knowledge Graph.
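If that guess is right, "borrowing content from more sources" boils down to cross-source corroboration. A crude sketch, entirely my own framing: treat each extracted statement as a (subject, predicate, object) triple and keep the ones that multiple independent sites agree on:

```python
# Crude sketch of cross-source corroboration: keep only the
# (subject, predicate, object) triples supported by more than one
# independent source. My own framing, for illustration only.
from collections import defaultdict

def corroborate(extractions, min_sources=2):
    support = defaultdict(set)
    for subj, pred, obj, source in extractions:
        support[(subj, pred, obj)].add(source)
    return {t: s for t, s in support.items() if len(s) >= min_sources}

extractions = [
    ("Eiffel Tower", "located_in", "Paris", "site-a.example"),
    ("Eiffel Tower", "located_in", "Paris", "site-b.example"),
    ("Eiffel Tower", "located_in", "Rome", "site-c.example"),  # noise
]
print(corroborate(extractions))
# {('Eiffel Tower', 'located_in', 'Paris'): {'site-a.example', 'site-b.example'}}
```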

So are you concerned now? When does this become more than a Swiss Army knife and leave you out of the equation?

Forum discussion at WebmasterWorld.

Image credit to BigStockPhoto for vault

 
