I am trying to validate the input of an email address that already exists, but only when the company_id is the same as the company_id that is passed in with the request.
I am getting this error...
SQLSTATE[42S22]: Column not found: 1054 Unknown column '1' in 'where clause' (SQL: select count(*) as aggregate from company_users where email_address = myemail.com and 1 <> company_id)
I have read online that the way to do this is to reference the table and the column inside the validation rule, which is what I am doing.
This is my current code...
'required|email|unique:company_users,email_address,company_id,' . $request->company_id
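As an aside on why this produces that SQL: in the string syntax, the parameters of unique are table, column, except, idColumn, so here company_id is being parsed as the id value to ignore and the request's company_id (1 in this case) as the name of the id column, which yields 1 <> company_id in the where clause. If you wanted to stay with the string syntax, parameters after idColumn are treated as additional column/value where pairs; a rough sketch of that (passing NULL for except and assuming the primary key is id) would be:

'required|email|unique:company_users,email_address,NULL,id,company_id,' . $request->company_id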
Here is a rough idea of how you can achieve what you want.
You can use the Rule class (Illuminate\Validation\Rule) to customize the validation rule.
'email' => ['required', 'string', 'email', 'max:191',
    // Treat the email as taken only within the requesting company.
    Rule::unique('company_users', 'email_address')->where(function ($query) use ($request) {
        return $query->where('company_id', $request->company_id);
    }),
],
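For context, here is a minimal sketch of the rule inside a hypothetical controller action (the store method name and the company_id field rule are assumptions for illustration, not part of the original question):

use Illuminate\Http\Request;
use Illuminate\Validation\Rule;

public function store(Request $request)
{
    // Validate the incoming request; the unique check is scoped to the company.
    $validated = $request->validate([
        'company_id' => ['required', 'integer'],
        'email' => ['required', 'string', 'email', 'max:191',
            Rule::unique('company_users', 'email_address')->where(function ($query) use ($request) {
                return $query->where('company_id', $request->company_id);
            }),
        ],
    ]);

    // $validated now contains only the fields that passed validation.
}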
Hope this helps!