I'm building a Django site and I am looking for a search engine.
Some candidates:
Lucene/Lucene with Compass/Solr
Sphinx
Postgresql built-in full text search (see the sketch after this list)
MySQL built-in full text search
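To make the Postgresql option more concrete, here is a minimal sketch using Django's django.contrib.postgres.search module (a newer Django feature than this question assumes); the Article model and its field names are hypothetical:

```python
# Minimal sketch: ranked full-text search with Postgresql's built-in engine,
# via django.contrib.postgres.search. Article, title and body are hypothetical.
from django.contrib.postgres.search import SearchQuery, SearchRank, SearchVector

from myapp.models import Article  # hypothetical model


def search_articles(terms):
    # Weight title matches higher than body matches, then rank by relevance.
    vector = SearchVector("title", weight="A") + SearchVector("body", weight="B")
    query = SearchQuery(terms)
    return (
        Article.objects
        .annotate(rank=SearchRank(vector, query))
        .filter(rank__gt=0.0)
        .order_by("-rank")
    )
```

The appeal of this option is that the data and the index live in the same database, so there is no separate indexing daemon to keep in sync.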
Selection criteria:
Anyone who has had experience with the search engines above, or other engines not in the list -- I would love to hear your opinions.
As for indexing needs, as users keep entering data into the site, that data would need to be indexed continuously. It doesn't have to be real time, but ideally new data would show up in the index with no more than a 15-30 minute delay.
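Whichever engine is picked, that freshness requirement is usually met with a periodic delta-update rather than real-time hooks. A minimal sketch, assuming a hypothetical updated_at timestamp on the model and a hypothetical index_document() helper for whatever engine ends up being used:

```python
# Minimal sketch of a periodic delta-index job (not real time, but well within
# a 15-30 minute window). Article, updated_at and index_document() are
# hypothetical stand-ins for the chosen model and search engine client.
from datetime import timedelta

from django.utils import timezone

from myapp.models import Article          # hypothetical model with updated_at
from myapp.search import index_document   # hypothetical engine-specific helper


def update_search_index(window_minutes=20):
    """Re-index everything modified since the last run, with a little overlap."""
    cutoff = timezone.now() - timedelta(minutes=window_minutes)
    for article in Article.objects.filter(updated_at__gte=cutoff).iterator():
        index_document(article)
```

Scheduled every 15 minutes from cron or a Celery beat entry, this keeps the index within the stated delay without touching the write path.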
Good to see someone's chimed in about Lucene - because I've no idea about that.
Sphinx, on the other hand, I know quite well, so let's see if I can be of some help.
I've no idea how applicable to your situation this is, but Evan Weaver compared a few of the common Rails search options (Sphinx, Ferret (a port of Lucene for Ruby) and Solr), running some benchmarks. Could be useful, I guess.
I've not plumbed the depths of MySQL's full-text search, but I know it doesn't compete speed-wise nor feature-wise with Sphinx, Lucene or Solr.
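For comparison, querying a Sphinx index from Python is not much harder than the MySQL case, since searchd can expose a SphinxQL listener that speaks the MySQL wire protocol. A minimal sketch, assuming a listener on port 9306 and an index named articles_idx (both assumptions, not part of the original answer):

```python
# Minimal sketch: querying Sphinx over SphinxQL with an ordinary MySQL client.
# The port, index name and credentials are assumptions; searchd normally
# ignores the username/password on its SphinxQL listener.
import pymysql


def sphinx_search(terms, limit=20):
    conn = pymysql.connect(host="127.0.0.1", port=9306, user="", charset="utf8")
    try:
        with conn.cursor() as cursor:
            # Matches come back ranked by relevance by default.
            cursor.execute(
                "SELECT id FROM articles_idx WHERE MATCH(%s) LIMIT %s",
                (terms, limit),
            )
            return [row[0] for row in cursor.fetchall()]
    finally:
        conn.close()
```

The matched document ids are then fetched from the database with a normal Django queryset, which is the usual pattern with Sphinx since it stores only the index, not the documents themselves.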