I was trying to set up MySQL master-slave replication with Ansible for a host group consisting of 2 MySQL hosts.
Here is my scenario:
I run one task on the 1st host and skip the 2nd host, so the 1st task (i.e. fetching the master replication status) returns values like Position, File, etc.
Then I run another task on the 2nd host (skipping the 1st host); this task uses the return values of the 1st task, such as master.Position and master.File.
Now, when I run the playbook, the variables registered by the 1st task do not seem to be available in the 2nd task.
Inventory file
[mysql]
stagmysql01 ansible_host=1.1.1.1 ansible_ssh_user=ansible ansible_connection=ssh
stagmysql02 ansible_host=1.1.1.2 ansible_ssh_user=ansible ansible_connection=ssh
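The play that applies the role is not shown; a minimal sketch, assuming the role is simply applied to the whole mysql group, would be:

# Sketch (assumption, not shown in the original post):
# apply the Mysql_Base role to every host in the [mysql] group.
- hosts: mysql
  become: true
  roles:
    - Mysql_Base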
Task on the master
- name: Mysql - Check master replication status.
  mysql_replication: mode=getmaster
  register: master

- debug: var=master
Task on the slave
- name: Mysql - Configure replication on the slave.
  mysql_replication:
    mode: changemaster
    master_host: "{{ replication_master }}"
    master_user: "{{ replication_user }}"
    master_password: "{{ replication_pass }}"
    master_log_file: "{{ master.File }}"
    master_log_pos: "{{ master.Position }}"
  ignore_errors: True
Master output
TASK [Mysql_Base : Mysql - Check master replication status.] ****************
skipping: [stagmysql02]
ok: [stagmysql01]
TASK [Mysql_Base : debug] ***************************************************
ok: [stagmysql01] => {
    "master": {
        "Binlog_Do_DB": "",
        "Binlog_Ignore_DB": "mysql,performance_schema",
        "Executed_Gtid_Set": "",
        "File": "mysql-bin.000003",
        "Is_Master": true,
        "Position": 64687163,
        "changed": false,
        "failed": false
    }
}
ok: [stagmysql02] => {
    "master": {
        "changed": false,
        "skip_reason": "Conditional result was False",
        "skipped": true
    }
}
Slave output
TASK [Mysql_Base : Mysql - Configure replication on the slave.] *************
skipping: [stagmysql01]
fatal: [stagmysql02]: FAILED! => {"failed": true, "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'File'\n\nThe error appears to have been in '/root/ansible/roles/Mysql_Base/tasks/replication.yml': line 30, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Mysql - Configure replication on the slave.\n ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'dict object' has no attribute 'File'"}
...ignoring
As you can see above, the 2nd task failed on the 2nd host because of an undefined variable. The values it needs do exist, but only in the result of the 1st task on the 1st host: registered variables are stored per host, so on stagmysql02 the master variable holds nothing but the skip result.
How do I use the variables returned on the 1st host in another task running on the 2nd host?
P.S.: I have seen the approach of using {{ hostvars['inventory_hostname']['variable'] }}. However, I'm quite confused by this approach, since the inventory hostname or IP address has to be hardcoded. I was looking for a generic pattern that can be used across different inventory files and playbooks.
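For reference, the hostname can also be derived from the inventory instead of being hardcoded, e.g. via the groups magic variable. This is only a sketch, and it assumes the master is always the first host of the mysql group:

# Sketch: read the "master" result registered on the group's first host,
# instead of hardcoding a hostname (assumes the master is groups['mysql'][0]).
master_log_file: "{{ hostvars[groups['mysql'][0]]['master']['File'] }}"
master_log_pos: "{{ hostvars[groups['mysql'][0]]['master']['Position'] }}"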
I was able to solve my problem by saving the values as facts on a new dummy host and then reading them across the playbook with hostvars.
A similar solution was already mentioned in one of the answers to "How do I set register a variable to persist between plays in ansible?", but I did not notice it until after I posted this question.
Here is what I did in the Ansible tasks:
Task on the master
- name: Mysql - Check master replication status.
  mysql_replication: mode=getmaster
  register: master

# Store the master's binlog coordinates as facts on a dummy host,
# so any host in the play can read them back through hostvars.
- name: Add master return values to a dummy host
  add_host:
    name: "master_value_holder"
    master_log_file: "{{ master.File }}"
    master_log_pos: "{{ master.Position }}"
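Worth noting: add_host bypasses the play's host loop and runs only once per play, so the dummy host is created exactly once even though the play targets both MySQL hosts.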
Tasks on the slave
- name: Mysql - Displaying master replication status
  debug: msg="Master Bin Log File is {{ hostvars['master_value_holder']['master_log_file'] }} and Master Bin Log Position is {{ hostvars['master_value_holder']['master_log_pos'] }}"

- name: Mysql - Configure replication on the slave.
  mysql_replication:
    mode: changemaster
    master_host: "{{ replication_master }}"
    master_user: "{{ replication_user }}"
    master_password: "{{ replication_pass }}"
    master_log_file: "{{ hostvars['master_value_holder']['master_log_file'] }}"
    master_log_pos: "{{ hostvars['master_value_holder']['master_log_pos'] }}"
  when: ansible_eth0.ipv4.address != replication_master and not slave.Slave_SQL_Running
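The when condition above relies on a slave variable registered earlier in the role (that task is not shown in this post); a hypothetical version of it would be:

# Hypothetical (not in the original post): register the slave status so that
# slave.Slave_SQL_Running is defined for the "when" condition above.
- name: Mysql - Check slave replication status.
  mysql_replication: mode=getslave
  register: slave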
Output
TASK [Mysql_Base : Mysql - Check master replication status.] ****************
skipping: [stagmysql02]
ok: [stagmysql01]
TASK [AZ-Mysql_Base : Add master return values to a dummy host] ****************
changed: [stagmysql01]
TASK [AZ-Mysql_Base : Mysql - Displaying master replication status] ************
ok: [stagmysql01] => {
"msg": "Master Bin Log File is mysql-bin.000001 and Master Bin Log Position is 154"
}
ok: [stagmysql02] => {
"msg": "Master Bin Log File is mysql-bin.000001 and Master Bin Log Position is 154"
}
TASK [AZ-Mysql_Base : Mysql - Configure replication on the slave.] *************
skipping: [stagmysql01]
skipping: [stagmysql02]
As you can see from the output above, the master's replication status is now available on both hosts.