This is a fork of the LDSpider Linked Data web crawler that is being tuned to crawl the 'Australian' Linked Data Web.
The following information, other than the Contacts, is from the original LDSpider codebase.
Grant Burgess
Lead Developer
Griffith University Industrial Placement Student at CSIRO Land & Water
[email protected]
Nicholas Car
Product Owner
Senior Experimental Scientist
CSIRO Land & Water
[email protected]
The LDSpider project provides a web crawling framework for the Linked Data web.
The requirements and challenges of crawling the Linked Data web differ from those of regular web crawling, so LDSpider offers a web crawler adapted to traverse and harvest content from the Linked Data web.
Due to changes at Google Code, the downloads page can no longer be maintained, so you have to browse the repository for both code and jars. Note that you can use Maven with the Google Code repository: the groupId is com.ontologycentral and the artifactId is ldspider.
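If you use Maven, the coordinates above can be declared roughly as follows; the version number is a placeholder assumption, so check the repository for the latest release (you may also need to add the project's repository to your POM):

```xml
<!-- LDSpider Maven coordinates from above; the version is an assumed
     placeholder, check the repository for the actual latest release -->
<dependency>
  <groupId>com.ontologycentral</groupId>
  <artifactId>ldspider</artifactId>
  <version>1.3</version>
</dependency>
```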
The project is a co-operation between Andreas Harth at AIFB and Juergen Umbrich at DERI. Aidan Hogan, Tobias Kaefer and Robert Isele are contributing.
Cite as:
@inproceedings{ldspider,
  author    = {Robert Isele and J\"{u}rgen Umbrich and Chris Bizer and Andreas Harth},
  title     = {{LDSpider}: An open-source crawling framework for the Web of Linked Data},
  year      = {2010},
  booktitle = {Proceedings of the 9th International Semantic Web Conference (ISWC 2010) Posters and Demos},
  url       = {http://iswc2010.semanticweb.org/pdf/495.pdf}
}
- Content handlers for different formats:
  - Handlers to read RDF/XML, N-TRIPLES and N-QUADS;
  - Any23 handlers for other RDF serialisations, e.g. RDFa;
  - A simple interface for implementing your own handlers (e.g. to support additional formats); see the sketch after this list.
- Different crawling strategies:
  - Breadth-first crawl;
  - Depth-first crawl;
  - Optionally, schema information (TBox) can also be crawled.
- Crawling scope:
  - The crawl can easily be restricted to specific pages, e.g. pages with a specific domain prefix.
- Output formats - the crawled data can be written in various ways:
  - The output can be written to files in different formats, such as RDF/XML or N-QUADS;
  - The crawler can write all statements to a triple store using SPARQL/Update, optionally using named graphs to group the statements by their source page;
  - Optionally, the output can include provenance information.
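To illustrate the "own handlers" point above: a content handler is essentially a class that states which MIME types it accepts and parses the fetched stream into statements. The sketch below is only a stand-in; the interface and method names (ContentHandler, canHandle, handle) are hypothetical, not LDSpider's actual types, which are documented in the API.

```java
import java.io.InputStream;
import java.net.URI;

// Hypothetical stand-in for LDSpider's content handler interface;
// the real interface lives in the LDSpider API and may differ.
interface ContentHandler {
    /** Whether this handler understands the given MIME type. */
    boolean canHandle(String mimeType);

    /** Parse the fetched content and hand extracted statements downstream. */
    void handle(URI source, String mimeType, InputStream content) throws Exception;
}

/** Skeleton handler for an additional RDF serialisation, e.g. Turtle. */
class TurtleHandler implements ContentHandler {
    @Override
    public boolean canHandle(String mimeType) {
        return "text/turtle".equals(mimeType);
    }

    @Override
    public void handle(URI source, String mimeType, InputStream content) throws Exception {
        // Parse the stream with a Turtle parser of your choice and
        // forward each statement to the crawler's output sink here.
    }
}
```

A real handler would additionally be registered with the crawler so that parsed statements reach the configured output.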
LDSpider can be used in two ways:
- Through a command line application; see Getting started (CLI).
- Through a flexible API, which provides various hooks to extend the behavior of the crawler; see Getting started (API).
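As a flavour of what such a hook can do, the self-contained sketch below shows a domain-prefix link filter of the kind used to restrict the crawling scope (see the feature list above). It uses a plain java.util.function.Predicate as a hypothetical stand-in for LDSpider's link-filter hook, and the example.org.au prefix and URIs are made up.

```java
import java.net.URI;
import java.util.List;
import java.util.function.Predicate;

public class ScopeFilterExample {
    public static void main(String[] args) {
        // A hypothetical link-filter hook: keep only links under an assumed prefix.
        Predicate<URI> inScope =
                uri -> uri.toString().startsWith("http://example.org.au/");

        // Links a crawl round might have discovered (made-up URIs).
        List<URI> discovered = List.of(
                URI.create("http://example.org.au/dataset/1"),
                URI.create("http://elsewhere.example.com/resource/2"));

        // A crawler would consult such a filter before adding links to the
        // frontier; here only the in-scope URI is printed.
        discovered.stream().filter(inScope).forEach(System.out::println);
    }
}
```

In LDSpider itself, the equivalent hook would be registered on the crawler so that out-of-scope links are never added to the frontier.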
Sign up to the LDSpider mailing list via the web interface or by emailing [email protected].
YourKit supports open source projects with its full-featured Java Profiler. YourKit, LLC is the creator of YourKit Java Profiler and YourKit .NET Profiler, innovative and intelligent tools for profiling Java and .NET applications.