We are proud to announce the initial stable release, v1, of our Natural Language Understanding API. On the technical side, we now offer a stable and scalable service with a guaranteed uptime of 99%. On the knowledge graph side, we integrated Wikidata, which allows us to provide entity identifiers that are unique and stable over time. When it comes to analyzing texts, we significantly improved the entity linking quality for German: we now find more names and link them more accurately to the correct entities in the knowledge graph.
Stable API & Enterprise Plan
Our latest version, v1, of the Natural Language Understanding API (NLU API) is ready to be integrated into your production environment. It is stable, scales to your needs, and comes with a guaranteed uptime of 99%. Whenever we release an updated schema, we will continue to support the v1 schema for at least 6 months, allowing for a smooth and predictable transition on your end. We offer the API as a pay-as-you-go subscription with the following pricing.
We also launched our new Enterprise Plan, which allows you to customize the NLU API to your usage requirements. Contact us to learn how to integrate your own knowledge into our NLU API!
Linked Data Cloud: Wikidata Integration
Starting now, you will never have to process a document twice. How come? We integrated Wikidata, which provides stable identifiers for all entities. Until now, we were using Wikipedia titles as entity identifiers, which could change at the whim of an editor. As Wikipedia is continuously edited by, well, all of us, identifiers changed constantly. That is not very useful if you process a text now and again in half a year: all the identifiers might have changed, even though the actual entities are still the same. From v1 onwards, all entity identifiers returned by the Entity Linking Service are Wikidata identifiers and thus stable.
For example: when a Wikipedia editor decided that the article about the People’s Republic of China would from then on be titled China, the entity identifier in all our Beta versions changed from “YAGO3:&lt;People%27s_Republic_of_China&gt;” to “YAGO3:&lt;China&gt;”. Now it simply stays Q148, no matter what the title of the Wikipedia page looks like.
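The difference is easy to see in a few lines. The following is an illustrative sketch, not our API: the two dictionaries stand in for hypothetical knowledge-base snapshots before and after the Wikipedia rename, showing why a title-based identifier goes stale while the Wikidata QID keeps resolving.

```python
# Title-based identifiers break when an article is renamed;
# Wikidata QIDs are independent of the page title.

# Hypothetical snapshots of a title-based knowledge base over time.
titles_before = {"YAGO3:<People%27s_Republic_of_China>": "entity data"}
titles_after = {"YAGO3:<China>": "entity data"}  # same entity, new title

old_id = "YAGO3:<People%27s_Republic_of_China>"
print(old_id in titles_after)  # False: annotations made earlier are now stale

# The QID survives any number of renames.
qids = {"Q148": "People's Republic of China"}
print("Q148" in qids)  # True, regardless of the current Wikipedia title
```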
But there’s more to the Wikidata integration: as a central hub of the Linked Data Cloud, the new Wikidata entity identifiers give you immediate access to all the knowledge available there.
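For instance, every QID can be dereferenced through Wikidata’s public Linked Data Interface, which serves each entity under a fixed URL pattern in several serializations (JSON, Turtle, RDF/XML). A minimal sketch of building those URLs:

```python
# Build the Wikidata Linked Data Interface URL for an entity identifier.
# Pattern: https://www.wikidata.org/wiki/Special:EntityData/<QID>.<format>

def entity_data_url(qid, fmt="json"):
    """Return the dereferencing URL for a Wikidata QID."""
    return f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.{fmt}"

print(entity_data_url("Q148"))
# https://www.wikidata.org/wiki/Special:EntityData/Q148.json
print(entity_data_url("Q148", "ttl"))  # Turtle, for RDF tooling
```

Fetching that URL gives you labels, aliases, descriptions, and statements for the entity, plus links onward into the rest of the Linked Data Cloud.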
Improved Entity Linking Quality for German
We significantly improved our NLU API for German texts. The new version finds more entities mentioned in your texts, especially name variants that were problematic before, e.g. “Universität Würzburg” or “TU Darmstadt”. It is also more accurate when deciding between all the potential entities a name might refer to, especially when spelling variants are involved. Both for simple typos and for grammatical variations (e.g. the genitive “s”), you will see significantly better results.
In case you are wondering what comes next, here is a teaser for you to think about:
What if you could easily add your own custom entities or connect your own knowledge graph to our Natural Language Understanding API?