I am getting the following error when doing a SELECT through a stored procedure in MySQL.
Illegal mix of collations (latin1_general_cs,IMPLICIT) and (latin1_general_ci,IMPLICIT) for operation '='
Any idea on what might be going wrong here?
The collation of the table is latin1_general_ci, while the column used in the WHERE clause has the collation latin1_general_cs.
This is generally caused by comparing two strings of incompatible collation or by attempting to select data of different collation into a combined column.
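Before fixing anything, it helps to confirm which collations are actually in play. The queries below are a sketch against information_schema; `your_db` and `your_table` are placeholders for your actual schema and table names:

```sql
-- Collation of each text column in the table:
SELECT COLUMN_NAME, COLLATION_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'your_db'
  AND TABLE_NAME = 'your_table';

-- Default collation of the table itself:
SELECT TABLE_COLLATION
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_db'
  AND TABLE_NAME = 'your_table';
```

Any column whose collation differs from the one it is compared against is a candidate for the error above.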
The COLLATE clause allows you to specify the collation used in the query.
For example, the following WHERE clause will always give the error you posted:
WHERE 'A' COLLATE latin1_general_ci = 'A' COLLATE latin1_general_cs
Your solution is to specify a shared collation for the two columns within the query. Here is an example that uses the COLLATE clause:
SELECT * FROM table ORDER BY key COLLATE latin1_general_ci;
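Applied to a comparison like the one failing in your stored procedure, the same idea might look like this (a sketch; `col_a` and `col_b` are placeholders for the mismatched columns). An explicit COLLATE on one side of the `=` overrides the implicit collations, so the two sides no longer conflict:

```sql
SELECT *
FROM table
WHERE col_a = col_b COLLATE latin1_general_ci;
```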
Another option is to use the BINARY operator:
BINARY str is shorthand for CAST(str AS BINARY).
Your solution might look something like this:
SELECT * FROM table WHERE BINARY a = BINARY b;
Or:
SELECT * FROM table ORDER BY BINARY a;
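If the collation mismatch is unintentional, a more permanent option (beyond the per-query fixes above, so verify it suits your schema first) is to convert the column or the whole table to a single collation. `your_table`, `col_a`, and the column type are placeholders:

```sql
-- Convert one column (adjust the type to match your schema):
ALTER TABLE your_table
  MODIFY col_a VARCHAR(255)
  CHARACTER SET latin1 COLLATE latin1_general_ci;

-- Or convert every text column in the table at once:
ALTER TABLE your_table
  CONVERT TO CHARACTER SET latin1 COLLATE latin1_general_ci;
```

After this, equality comparisons between the table's columns no longer need COLLATE or BINARY in each query.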