WorldTree Corpus of Explanation Graphs for Elementary Science Questions

Wesbury Lab Usenet Corpus: anonymized compilation of postings from 47,860 English-language newsgroups, 2005-2010 (40 GB)

Wesbury Lab Wikipedia Corpus: snapshot of all the articles in the English part of Wikipedia, taken in April 2010. It was processed, as described in more detail below, to remove all links and irrelevant material (navigation text, etc.). The corpus is untagged, raw text. Used by Stanford NLP (1.8 GB).

WorldTree Corpus of Explanation Graphs for Elementary Science Questions: a corpus of manually-constructed explanation graphs, explanatory role ratings, and an associated semi-structured tablestore for most publicly available elementary science exam questions in the US (8 MB)

Wikipedia Extraction (WEX): a processed dump of English-language Wikipedia (66 GB)

Wikipedia XML Data: complete copy of all Wikimedia wikis, in the form of wikitext source and metadata embedded in XML. (500 GB)

Yahoo! Answers Comprehensive Questions and Answers: Yahoo! Answers corpus as of 10/25/2007. Contains 4,483,032 questions and their answers. (3.6 GB)

Yahoo! Answers Questions Asked in French: subset of the Yahoo! Answers corpus from 2006 to 2015, consisting of 1.7 million questions posed in French, and their matching answers. (3.8 GB)

Yahoo! Answers Manner Questions: subset of the Yahoo! Answers corpus from a 10/25/2007 dump, selected for their linguistic properties. Contains 142,627 questions and their answers. (104 MB)

Yahoo! HTML Forms Extracted from Publicly Available Webpages: a small sample of pages that contain complex HTML forms; contains 2.67 million complex forms. (50+ GB)

Yahoo N-Gram Representations: This dataset contains n-gram representations. The data may serve as a testbed for the query rewriting task, a common problem in IR research, as well as for word and phrase similarity tasks, which are typical in NLP research. (2.6 GB)

Yahoo! N-Grams, version 2.0: n-grams (n = 1 to 5), extracted from a corpus of 14.6 million documents (126 million unique sentences, 3.4 billion running words) crawled from over 12,000 news-oriented sites (12 GB)
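To illustrate the kind of representation such n-gram corpora provide, here is a minimal sketch (not the dataset's own tooling) that counts all word n-grams from n = 1 to 5 over whitespace-tokenized text; real corpora like the one above apply proper tokenization and frequency cutoffs:

```python
from collections import Counter

def extract_ngrams(text, max_n=5):
    """Count all word n-grams (n = 1..max_n) in a whitespace-tokenized text."""
    tokens = text.lower().split()
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts

counts = extract_ngrams("the cat sat on the mat")
print(counts[("the",)])        # 2 -- the unigram "the" occurs twice
print(counts[("the", "cat")])  # 1 -- the bigram "the cat" occurs once
```

Counts like these are the raw material for the query rewriting and phrase-similarity tasks mentioned above.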

Yahoo! Search Logs with Relevance Judgments: anonymized Yahoo! search logs with relevance judgments (1.3 GB)

Yahoo! Semantically Annotated Snapshot of the English Wikipedia: English Wikipedia dated from 2006-11-04, processed with a number of publicly available NLP tools. 1,490,688 entries. (6 GB)

Yelp: including restaurant rankings and 2.2M reviews (on demand)

Youtube: metadata for 1.7 million YouTube videos (torrent)

  • Awesome public datasets/NLP (includes more lists)
  • AWS Public Datasets
  • CrowdFlower: Data for Everyone (lots of small surveys they conducted and data obtained by crowdsourcing for a specific task)
  • Kaggle 1, 2 (make sure, though, that the Kaggle competition data can be used outside of the competition!)
  • Open Library
  • Quora (mainly annotated corpora)
  • /r/datasets (endless list of datasets, though much of it is scraped by amateurs and not properly documented or licensed)
  • Rs.io (another big list)
  • Stackexchange: Opendata
  • Stanford NLP group (primarily annotated corpora and TreeBanks, or actual NLP tools)
  • Yahoo! Webscope (also contains papers that use the provided data)
  • SaudiNewsNet: 31,030 Arabic newspaper articles along with metadata, extracted from various online Saudi newspapers. (2 MB)
  • Collection of Urdu datasets for POS, NER and NLP tasks.

German Political Speeches Corpus: collection of recent speeches held by top German representatives (25 MB, 11 MTokens)

NEGRA: A Syntactically Annotated Corpus of German Newspaper Texts. Available for free to all universities and non-profit organizations. You need to sign and send a form to obtain it. (on demand)

Ten Thousand German News Articles Dataset: 10,273 German-language news articles categorized into nine classes for topic classification. (26.1 MB)

100k German Court Decisions: Open Legal Data releases a dataset of 100,000 German court decisions and 444,000 citations (772 MB)
