I am trying to import a .sql file and it's failing on creating tables.
Here is the query that fails:
CREATE TABLE `data` (
  `id` int(10) unsigned NOT NULL,
  `name` varchar(100) NOT NULL,
  `value` varchar(15) NOT NULL,
  UNIQUE KEY `id` (`id`,`name`),
  CONSTRAINT `data_ibfk_1` FOREIGN KEY (`id`) REFERENCES `keywords` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
I exported the .sql from the same database, dropped all the tables, and now I'm trying to import it. Why is it failing?
MySQL: Can't create table './dbname/data.frm' (errno: 150)
From the MySQL documentation on FOREIGN KEY constraints:
If you re-create a table that was dropped, it must have a definition that conforms to the foreign key constraints referencing it. It must have the correct column names and types, and it must have indexes on the referenced keys, as stated earlier. If these are not satisfied, MySQL returns Error 1005 and refers to Error 150 in the error message, which means that a foreign key constraint was not correctly formed. Similarly, if an ALTER TABLE fails due to Error 150, this means that a foreign key definition would be incorrectly formed for the altered table.
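In this case that means the import fails because `keywords` either does not exist yet at the point where `data` is created, or its `id` column no longer matches `int(10) unsigned` or lacks an index on `id`. Below is a minimal sketch of a referenced table that the constraint above could point at (the exact `keywords` definition is an assumption; only the matching column type and the index matter), followed by the usual workaround of disabling foreign key checks while the dump runs:

-- Hypothetical sketch: the referenced table must exist before `data`,
-- be InnoDB, and have an index on a column of the exact same type
-- (int(10) unsigned) as the referencing column.
CREATE TABLE `keywords` (
  `id` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

-- If the dump creates the tables in the wrong order, foreign key
-- checks can be switched off for the duration of the import:
SET FOREIGN_KEY_CHECKS = 0;
-- ... source the dump here ...
SET FOREIGN_KEY_CHECKS = 1;

mysqldump normally writes the FOREIGN_KEY_CHECKS statements at the top and bottom of the dump itself; if the file was edited or tables were exported one by one, they may be missing. When the cause is still unclear, SHOW ENGINE INNODB STATUS prints a "LATEST FOREIGN KEY ERROR" section that names the exact mismatch.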