After reading a couple of answers and comments on some SQL questions here, and also hearing that a friend of mine works at a place which has a policy which bans them, I'm wondering if there's anything wrong with using backticks around field names in MySQL.
i.e.:
SELECT `id`, `name`, `anotherfield` ...
-- vs --
SELECT id, name, anotherfield ...
Using backticks permits you to use alternative characters in identifiers. That's not such a problem when writing queries by hand, but if one assumes backticks are always available, it lets you get away with ridiculous stuff like
SELECT `id`, `my name`, `another field` , `field,with,comma`
Which does, of course, produce badly named columns.
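To make the point concrete, here's a small sketch of how quoting lets a badly chosen name slip through, and what it costs you afterwards. It uses Python's sqlite3 as a stand-in (SQLite accepts MySQL-style backticks for compatibility); the table and column names are made up for illustration:

```python
import sqlite3

# In-memory database; SQLite honours MySQL-style backtick quoting.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Backticks make a column name containing a space legal...
cur.execute("CREATE TABLE demo (`id` INTEGER, `my name` TEXT)")
cur.execute("INSERT INTO demo VALUES (1, 'alice')")

# ...but every later query must quote it too.
cur.execute("SELECT `id`, `my name` FROM demo")
print(cur.fetchall())  # [(1, 'alice')]

# Unquoted, the same name no longer refers to that column:
# "my name" parses as column `my` aliased to `name`, which fails.
try:
    cur.execute("SELECT my name FROM demo")
except sqlite3.OperationalError as e:
    print("error:", e)
```

Once the column exists, you're committed to quoting it in every statement that touches it.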
If you're just being consistent about it, I don't see a problem. You'll also note that if you run a query such as
EXPLAIN EXTENDED SELECT foo, bar, baz FROM some_table;
the generated warning that comes back will have backticks and fully qualified table names. So if you're using query-generation features and automated rewriting of queries, backticks make whatever parses your code less likely to get confused.
However, I think that instead of mandating whether or not you can use backticks, they should have a standard for names. It solves more 'real' problems.
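A naming standard matters because quoting doesn't just enable odd characters; a name that collides with a reserved word forces quoting in every query forever after. A minimal sketch, again using sqlite3 as a stand-in for MySQL-style backticks, with a hypothetical column named `order`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Unquoted, a reserved word is rejected as a column name.
try:
    cur.execute("CREATE TABLE sales (order INTEGER)")
except sqlite3.OperationalError as e:
    print("error:", e)

# Backticks make it legal, but now quoting is mandatory everywhere.
cur.execute("CREATE TABLE sales (`order` INTEGER)")
cur.execute("INSERT INTO sales VALUES (42)")
cur.execute("SELECT `order` FROM sales")
print(cur.fetchall())  # [(42,)]
```

A naming standard that simply avoids reserved words and special characters makes the whole backtick question moot.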