I have the following update procedure that updates a MySQL database over the internet between a source machine running Linux CentOS (on my local network behind a DMZ, with real IP A.B.C.D) and a target web server running Linux Fedora (www.myweb.com). It runs every day at a fixed time, 18:00, via a crontab entry on the source Linux server:
server (source) --- DMZ --- ASA --- Router --- Internet --- Hosting company --- Myweb (target)

[root@source]# mysql -u updatex -p -h www.myweb.com test < sample.SQL
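For reference, the crontab entry on the source server looks roughly like this (a sketch only; the path to sample.SQL and the use of a ~/.my.cnf option file for the password are illustrative, not necessarily my exact setup):

# crontab -e on the source server (sketch)
# the password comes from a [client] section in ~/.my.cnf, so it is not on the command line
0 18 * * * /usr/bin/mysql -u updatex -h www.myweb.com test < /root/sample.SQL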
[root@source]$ mysql -u updatex -p -h www.myweb.com test < sample.SQL
Enter password: *****
CURTIME()
19:41:44
CURTIME()
19:50:09

[root@source]$ mysql -u updatex -p -h www.myweb.com test < sample.SQL
Enter password: *****
CURTIME()
08:26:08
CURTIME()
08:26:34
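Each run above prints CURTIME() at the start and at the end of the import, which is where the duration figures come from. The wall-clock duration could also be captured on the client side so that cron logs it for every run, something like this (sketch, assuming the password is supplied via ~/.my.cnf):

# wrap the import with time and append the result to a log file
{ time mysql -u updatex -h www.myweb.com test < sample.SQL ; } 2>> /var/log/sample_import_time.log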
I ran the above procedure several times at different times of the day. Its duration varies from 22 seconds to 10 minutes (see above); until recently it ran consistently in about 30 seconds. I checked with my ISP, the hosting company and our own network: nothing has changed in the structure/configuration.
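To narrow down whether the delay is on the network path or on the target MySQL server, I could time a trivial query separately from the full import (a sketch; mtr may need to be installed on the source):

# path latency / packet loss from the source to the hosting company
mtr --report --report-cycles 20 www.myweb.com

# time a no-op query: if this stays fast while the import is slow,
# the bottleneck is more likely the target server than the network
time mysql -u updatex -p -h www.myweb.com -e "SELECT 1;"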
[root@source]# lsof -i -P | grep 3306
mysqld   3806 mysql  11u IPv4  10926 TCP *:3306 (LISTEN)
mysql   15150 user    3u IPv4 297528 TCP 192.168.10.5:8376->www.myweb.com:3306 (ESTABLISHED)
[root@target]# netstat -a | grep mysql
tcp   0  0 *:mysql           *:*            LISTEN
tcp   0  0 www.myweb.:mysql  A.B.C.D:8366   TIME_WAIT
tcp   0 11 www.myweb.:mysql  A.B.C.D:8372   ESTABLISHED

I have also attached the TCP connection state between the two nodes, taken on the source and the target as shown above. Can anyone help explain why I see this behavior and how I can fix the delay? I am thinking about applying QoS, or about cleaning up and doing the import by remote execution on the target at that time (see the sketch below).
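The remote-execution idea would look roughly like this (a sketch; key-based SSH access, the remote user name, and MySQL credentials in ~/.my.cnf on the target are assumptions):

# copy the dump to the target, then run the import locally on the target,
# so each SQL statement no longer pays a full internet round trip
scp sample.SQL user@www.myweb.com:/tmp/sample.SQL
ssh user@www.myweb.com 'mysql -u updatex test < /tmp/sample.SQL'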
Thanks