Even though general-purpose AI is years away, you can still knowably contribute to speeding up AI development right now, even if you're not a programmer. That's because most AI projects rely on the same few centralized resources to bootstrap their language manipulation, generalized domain knowledge about the world, and common-sense reasoning. These include:

- DBpedia
- Freebase
- NELL (the Never-Ending Language Learner)
Each of these systems has some or all of its databases and code available for inspection and improvement. Programmers and semi-technical contributors can help most by submitting patches to DBpedia, but non-coders can still do interesting things like voting on merges in Freebase to improve performance by eliminating duplicate entries, or following NELL on Twitter and replying with corrections when it accidentally learns an incorrect fact.
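To make the payoff concrete, here is a minimal sketch of pulling structured facts from DBpedia's public SPARQL endpoint with the SPARQLWrapper Python library. The entity and properties queried are only illustrative examples, not anything this post prescribes, but whatever gets patched or de-duplicated upstream is exactly what shows up in result sets like this:

# Minimal sketch: query DBpedia's public SPARQL endpoint for structured facts
# about one entity. The entity (Alan Turing) and the properties queried are
# only illustrative; the point is that downstream reasoning systems consume
# the same triples that contributors patch and de-duplicate.
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install SPARQLWrapper

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>

    SELECT DISTINCT ?birthPlace ?field WHERE {
        dbr:Alan_Turing dbo:birthPlace ?birthPlace ;
                        dbo:field ?field .
    }
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["birthPlace"]["value"], "|", row["field"]["value"])

A duplicate Freebase topic or a malformed DBpedia entry shows up directly as noise in results like these, which is why the cleanup work above feeds straight into any system built on top of them.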
If you’re a strong programmer and want an even higher chance that your contributions speed up the development of AI, then improve one of the critical open-source components that these systems rely on:
Or if you like writing and organizing data more than programming, you can contribute by improving one of the top-level sources of curated content that currently feed these centralized AI resources:
Adding entries to Wiktionary, fixing broken citations on Wikipedia, adding new data to Wikipedia infoboxes, and standardizing poorly formatted infoboxes are all high-value activities that will almost certainly improve eventual automated reasoning systems.
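Those infobox edits become machine-readable almost immediately. Here is a minimal sketch, using the standard MediaWiki API, of how extraction pipelines see the "| field = value" lines that editors standardize; the article title and the parameter-matching heuristic are just illustrative assumptions:

# Minimal sketch: fetch an article's raw wikitext via the MediaWiki API and
# list the template parameters that are filled in on their own lines. Infobox
# fields follow this "| field = value" pattern, and extraction pipelines such
# as DBpedia's read these same lines, so standardizing them improves the
# structured data downstream. The article title is only an example.
import re
import requests

resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "query",
        "titles": "Alan Turing",
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",
        "format": "json",
    },
)
page = next(iter(resp.json()["query"]["pages"].values()))
wikitext = page["revisions"][0]["slots"]["main"]["*"]

# Crude heuristic: parameter names that start a line and have a non-empty value.
filled = re.findall(r"^\s*\|\s*([\w ]+?)\s*=\s*\S", wikitext, flags=re.M)
print(sorted(set(filled)))

The same fields a human editor standardizes are the ones parsers key on, so inconsistent parameter names and missing values degrade every dataset derived from them.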
7 Responses to “Improving Centralized AI Resources”
August 3
Sean O HEigeartaigh: Interesting info, thanks Louie. Now, for the question of whether I want to speed up general purpose AI development or not…
August 3
Sarah Constantin: Commenting to bookmark.
August 3
Austin James Parish: Agree with Sean: why do you think it’s a good idea to try to speed up these developments?
August 3
Michele Reilly: Really good to see your list, Louie. That’s certainly how I see it… http://www.turingsolutions.com
We at Turing Inc. are now partnering with Cloudera to improve the Hadoop ecosystem. Jeff Hammerbacher is advising us on how we can make high-integrity engineering improvements and on use cases for the life sciences.
August 4
Mike Howard: Agree with Austin, unless you’re speeding up FAI specifically significantly more than AGI generally.
August 5
Louie Helm: If human values are complex, then we’re going to need systems that bootstrap from a decently complex prior in order to locate those values.
I look at this work largely as “sharpening the priors of future AIs”.
August 5
Jonathan Weissman: I would like to see much stronger arguments that a particular intervention advances FAI relative to AGI than “FAI needs complexity, and this intervention provides complexity.” Really, an FAI needs things like the ability to keep its goals stable while recursively self-improving, and the ability to recognize humans and load values from them, which cost complexity. We want interventions that provide those things, not something else that also costs complexity.