How do I efficiently parse 200,000 XML files in Java?
I have 200,000 XML files I want to parse and store in a database.
Here is an example: https://gist.github.com/902292
This is about as complex as the XML files get. This will also run on a small VPS (Linode) so memory is tight.
What I would like to know is:
1) Should I use a DOM or SAX parser? DOM seems easier and faster since each XML is small (a minimal DOM sketch follows this list).
2) Where is a simple tutorial on said parser? (DOM or SAX)
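For reference, this is roughly what the DOM route looks like: a minimal sketch, assuming a hypothetical single-record file named listing.xml containing an <id> element; the file name and element are placeholders, not the actual schema from the gist.

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class DomExample {
    public static void main(String[] args) throws Exception {
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        // parse() reads the whole file into an in-memory tree
        Document doc = builder.parse(new File("listing.xml")); // hypothetical file
        // pull a single element's text out of the tree
        String id = doc.getElementsByTagName("id").item(0).getTextContent();
        System.out.println("id = " + id);
    }
}
```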
Thanks
EDIT
I tried the DOM route even though everyone suggested SAX, mainly because I found an "easier" tutorial for DOM and I figured that, with an average file size of about 3k - 4k, each file would easily fit in memory.
However, I wrote a recursive routine to handle all 200k files; it gets about 40% of the way through them and then Java runs out of memory.
Here's part of the project: https://gist.github.com/905550#file_xm_lparser.java
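Running out of memory at 40% with 3-4k files usually points to references being retained across iterations (or unbounded recursion), not to any single document. Below is a sketch of a flat, memory-friendly loop under that assumption; the directory name and process() helper are placeholders, not code from the gist.

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class BatchParser {
    public static void main(String[] args) throws Exception {
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        File[] files = new File("xml-dir").listFiles(); // flat list, no recursion
        if (files == null) return;
        for (File f : files) {
            builder.reset();                 // reuse one builder for every file
            Document doc = builder.parse(f); // a 3-4k tree is tiny on its own
            process(doc);
            // nothing keeps a reference to doc past this point, so each
            // iteration's tree is eligible for garbage collection
        }
    }

    private static void process(Document doc) {
        // placeholder: extract the fields you need and queue the DB insert
    }
}
```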
Should I ditch DOM now and just use SAX? It just seems like, with files this small, DOM should be able to handle it.
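For comparison, a minimal SAX sketch: the parser streams the file and fires callbacks, so only the current element's text is ever held in memory. The "id" element and file name are again placeholders.

```java
import java.io.File;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class SaxExample {
    public static void main(String[] args) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new File("listing.xml"), new DefaultHandler() {
            private boolean inId = false;

            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes attrs) {
                inId = "id".equals(qName); // track when we enter <id>
            }

            @Override
            public void characters(char[] ch, int start, int length) {
                if (inId) {
                    System.out.println("id = " + new String(ch, start, length));
                }
            }

            @Override
            public void endElement(String uri, String localName, String qName) {
                inId = false;
            }
        });
    }
}
```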
Also, the speed is "fast enough": it takes about 19 seconds to parse 2000 XML files (before the Mongo insert), which works out to roughly half an hour for all 200k.
Thanks
Recommended answer
SAX always beats DOM at speed. But since you say the XML files are small, you may proceed with the DOM parser. One thing you can do to speed things up is create a thread pool and do the database operations in it. Multithreaded updates will significantly improve performance.
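A sketch of that suggestion, assuming an ExecutorService with a small fixed pool feeding per-file parse-and-insert tasks; the pool size, directory name, and the parse() and insertIntoMongo() helpers are illustrative placeholders. Note that DocumentBuilder is not thread-safe, so each worker thread would need its own parser (e.g. via a ThreadLocal).

```java
import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelLoader {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4); // keep small on a small VPS
        File[] files = new File("xml-dir").listFiles();
        if (files == null) return;
        for (File f : files) {
            pool.submit(() -> insertIntoMongo(parse(f))); // one file per task
        }
        pool.shutdown();                          // stop accepting new tasks
        pool.awaitTermination(1, TimeUnit.HOURS); // wait for the queue to drain
    }

    // placeholders: parse with a per-thread parser and write the record out
    private static Object parse(File f) { return null; }
    private static void insertIntoMongo(Object record) { }
}
```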
- Lalith