A new AI tool can concurrently scan thousands of citations on Wikipedia, the free multilingual online encyclopaedia, to help assess and validate the material they support.
Request For Citations
Wikipedia draws on a database of more than 4 million citations. Readers rely on these citations for proof of the assertions an article makes. For example, a Wikipedia entry states that President Obama flew to Europe and afterwards to Kenya, where he met his paternal family for the first time. Citations and hyperlinks are needed to demonstrate that this material is accurate and comes from a reliable source.
Although hyperlinks do not substantiate every point, they are still helpful in supporting articles. The problem is that in many cases hyperlinks lead to unrelated pages with no relevant information. Readers then either stop reading or abandon the topic for another one.
Meta Starts Working on AI Tool
Wikipedia has a page dedicated to Joe Hipp, the first Native American heavyweight boxer to challenge for the WBA title. The source cited in the entry, however, mentioned neither Joe Hipp nor boxing, even though it was supposed to support the claim that he was the first Native American boxer to challenge for the WBA championship.
The Joe Hipp example shows how Wikipedia can lead people to accept a claim even when its citations do not actually support it. Misinformation could spread across the world this way. Because of this, Facebook owner Meta set Meta AI (the social media giant's research lab) to work with the Wikimedia Foundation on the problem. They say it is the first machine learning model to automatically scan a large number of citations simultaneously.
This will help save time as checking each citation manually would take too long.
Meta AI Efforts
Fabio Petroni, research tech lead manager for the Meta AI team, told Digital Trends:
At the end of the day, I believe we were motivated by curiosity. We were interested in pushing the technological envelope. We had no certainty that this AI could accomplish anything significant here. Nothing similar had ever been attempted before.
He further highlighted how this technology will work:
Using these models, we were able to create an index of all of these webpages by grouping them into passages and providing a precise representation for each one that did not reflect the text word for word but rather its meaning. This means that in the resultant n-dimensional space where all of these passages are recorded, two pieces of text with comparable meanings will be represented in a relatively near place.
Petroni stresses that what the team has created is still only a proof of concept. Right now, it is not really usable: for the tool to be useful, it would need a new index covering far more data than it presently holds.
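The indexing idea Petroni describes can be sketched in a few lines: each passage is mapped to a vector that represents its meaning, and a claim is verified by finding the nearest passage vector. The toy vectors and passage texts below are invented purely for illustration (a real system like the one described would use a learned encoder and an index of millions of web passages), and cosine similarity stands in for whatever distance the actual model uses.

```python
import math

# Hypothetical hand-made "meaning" vectors for a few passages.
# In the real system these would come from a trained encoder;
# the numbers here are invented purely to illustrate the idea.
passage_index = {
    "Hipp challenged for the WBA heavyweight title": [0.9, 0.1, 0.2],
    "Hipp fought for the world heavyweight championship": [0.85, 0.15, 0.25],
    "Obama travelled to Kenya to meet relatives": [0.1, 0.9, 0.3],
}

def cosine(a, b):
    # Standard cosine similarity: 1.0 means identical direction,
    # i.e. (in this sketch) near-identical meaning.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_passage(claim_vec):
    # Passages with comparable meanings sit close together in the
    # vector space, so the best candidate evidence for a claim is
    # simply its nearest neighbour in the index.
    return max(passage_index.items(),
               key=lambda kv: cosine(claim_vec, kv[1]))[0]

# Encoder output (invented) for the claim being verified.
claim_vec = [0.88, 0.12, 0.22]
print(nearest_passage(claim_vec))
```

At Wikipedia scale the dictionary would be replaced by an approximate nearest-neighbour index, but the retrieval step is the same: embed the claim, look up the closest passages, and check whether they support it.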
Looking further ahead, the team hopes the tool will not be limited to text but will eventually support multimedia, helping on platforms that host images and videos as well.