Thanks, I'll try it.

Dave Beckett <Dave.Beckett@bristol.ac.uk> wrote:

> On Wed, 2005-08-17 at 12:17 -0400, Christopher Schmidt wrote:
> > On Wed, Aug 17, 2005 at 05:23:55PM +0200, Irvine Keans wrote:
> > > Hi there,
> > >
> > > I've done some tests with Redland and got some unexpected results. I
> > > hope you can help me.
> > >
> > > I tried to parse 3 ontologies, once into main memory and once into a
> > > MySQL database.
> > >
> > > The unexpected thing is that parsing into the DBMS takes only a
> > > fraction of the time that parsing into main memory needs. But why?
> >
> > Probably because a large chunk of the work is in building efficient
> > in-memory structures so that statements can be accessed, which, if
> > you're using a database, is taken care of for you by the backing store.
> > The in-memory store is, in my experience, designed mostly to be used
> > for small models (<5k statements), at which point it works relatively
> > well, but anything beyond that and you're going to run into problems.
> >
> > I could, of course, be wrong, but that's been my experience. dajobe can
> > probably offer more technical advice on the topic.
>
> That's correct.
>
> http://librdf.org/docs/storage.html goes over the tradeoffs and
> describes how to get a faster indexed in-memory store:
>
> "The memory store is not suitable for large in-memory models since it
> does not do any indexing. For that, use the hash indexed store with
> hash-type of memory."
>
> (this doc will be moving into the main redland reference document)
>
> Dave
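For reference, here is a minimal (untested) sketch of parsing into that
hash-indexed in-memory store with the Redland C API, using the "hashes"
storage type with the option hash-type='memory' as in the doc quoted
above. The filename ontology.rdf is just a placeholder:

  /* Sketch: parse an RDF/XML file into the hash-indexed in-memory store
     ("hashes" storage, hash-type='memory'), which indexes statements and
     scales better than the plain unindexed "memory" store. */
  #include <stdio.h>
  #include <redland.h>

  int main(void)
  {
    librdf_world *world = librdf_new_world();
    librdf_world_open(world);

    /* hash-indexed in-memory store instead of the plain "memory" store */
    librdf_storage *storage = librdf_new_storage(world, "hashes", "test",
                                                 "hash-type='memory'");
    librdf_model *model = librdf_new_model(world, storage, NULL);

    librdf_parser *parser = librdf_new_parser(world, "rdfxml", NULL, NULL);
    librdf_uri *uri = librdf_new_uri(world,
                          (const unsigned char *)"file:ontology.rdf");

    /* placeholder file; any RDF/XML ontology will do */
    librdf_parser_parse_into_model(parser, uri, NULL, model);
    fprintf(stderr, "parsed %d statements\n", librdf_model_size(model));

    librdf_free_uri(uri);
    librdf_free_parser(parser);
    librdf_free_model(model);
    librdf_free_storage(storage);
    librdf_free_world(world);
    return 0;
  }

Compile against Redland (e.g. with redland-config --cflags --libs). The
MySQL case I was testing differs, as far as I can tell, only in the
librdf_new_storage() call (the "mysql" storage type plus its database
options string).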