What is the difference between:
CREATE PROCEDURE [dbo].[MyProcedure]
@MyArgument INT NULL
and
CREATE PROCEDURE [dbo].[MyProcedure]
@MyArgument INT = NULL
I used the first one and it worked fine in SQL Server 2016, but SQL Server 2012 did not accept it. Both work on SQL Server 2016, and I am now using the second one without any problem. It would still be interesting to know the difference.
Thanks!
They don't do the same thing. The second one defines a default value for the case where the caller doesn't specify one. The first one doesn't.
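A minimal sketch of the difference, using a trivial placeholder body for illustration:

CREATE PROCEDURE [dbo].[MyProcedure]
    @MyArgument INT = NULL
AS
BEGIN
    SELECT @MyArgument AS ArgumentValue;
END
GO

-- Works: @MyArgument falls back to its default of NULL
EXEC [dbo].[MyProcedure];

-- Also works: an explicit value overrides the default
EXEC [dbo].[MyProcedure] @MyArgument = 42;

Without "= NULL" the parameter has no default, so the first call fails because the argument was not supplied.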
The "Transact-SQL Syntax for Natively Compiled Stored Procedures" grammar allows parameter data types to be declared as allowing NULL or NOT NULL. This was introduced for Hekaton (memory-optimized tables).
Though it isn't documented as supported by the grammar in "Transact-SQL Syntax for Stored Procedures", a regular stored procedure appears to allow NULL but balks at NOT NULL and throws an error:
The parameter '@MyArgument' has been declared as NOT NULL. NOT NULL parameters are only supported with natively compiled modules, except for inline table-valued functions.
There is no value in specifying NULL explicitly; it is the default and the only option. There is no declarative syntax for regular stored procedures to indicate that a parameter must be NOT NULL.