Improve MySQL import speed


I have a large database of 22 GB. I used to take backups with the mysqldump command as a gzipped file.

When I extract the gz file, it produces a .sql file of 16.2 GB.

When I try to import the database on my local server, it takes around 48 hours to complete. Is there a way to increase the import speed?
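For context, the backup and restore steps described above would typically look something like the following (the database name, file name, and credentials are placeholders, not taken from the question):

# Backup: dump the database and compress the output on the fly
mysqldump -u root -p mydb | gzip > mydb.sql.gz

# Restore: decompress and feed the SQL back into the server
gunzip < mydb.sql.gz | mysql -u root -p mydb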

I would also like to know whether I need to make any hardware changes to improve the performance.

Current system configuration:

 Processor: 4th Gen i5
 RAM: 8GB

# Update

My .cnf file is as follows:

#
# The MySQL database server configuration file.
#
# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
# 
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html

# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# escpecially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port        = 3306
socket      = /var/run/mysqld/mysqld.sock

# Here is entries for some specific programs
# The following values assume you have at least 32M ram

# This was formally known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket      = /var/run/mysqld/mysqld.sock
nice        = 0

[mysqld]
#
# * Basic Settings
#
user        = mysql
pid-file    = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port        = 3306
basedir     = /usr
datadir     = /var/lib/mysql
tmpdir      = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address        = 127.0.0.1
#
# * Fine Tuning
#
key_buffer      = 16M
max_allowed_packet  = 512M
thread_stack        = 192K
thread_cache_size       = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover         = BACKUP
#max_connections        = 100
#table_cache            = 64
#thread_concurrency     = 10
#
# * Query Cache Configuration
#
query_cache_limit   = 4M
query_cache_size        = 512M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file        = /var/log/mysql/mysql.log
#general_log             = 1
#
# Error log - should be very few entries.
#
log_error = /var/log/mysql/error.log
#
# Here you can see queries with especially long duration
#log_slow_queries   = /var/log/mysql/mysql-slow.log
#long_query_time = 2
#log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
#       other settings you may need to change.
#server-id      = 1
#log_bin            = /var/log/mysql/mysql-bin.log
expire_logs_days    = 10
max_binlog_size         = 100M
#binlog_do_db       = include_database_name
#binlog_ignore_db   = include_database_name
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem



[mysqldump]
quick
quote-names
max_allowed_packet  = 512M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completition

[isamchk]
key_buffer      = 512M

#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/

The import has been running for 3 days now, and 9.9 GB has been imported so far. The database contains both MyISAM and InnoDB tables. What can I do to improve the import performance?

I tried exporting each table separately as a gz file with mysqldump and importing each table with a PHP script that executes the following code:

$dir="./";
$files = scandir($dir, 1);
array_pop($files);
array_pop($files);
$tablecount=0;
foreach($files as $file){
    $tablecount++;
    echo $tablecount."     ";

    echo $file."\n";
    $command="gunzip < ".$file." | mysql -u root -pubuntu cms";

    echo exec($command);
}

Can you accept the MySQL server being down for a few seconds? If so, back up the MySQL data files directly; to restore, just copy them back. Both operations require taking the MySQL server offline. It's an unsafe but efficient approach. - Frederick Zhang
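As a rough sketch of the file-copy approach mentioned in this comment (assuming the default Debian data directory /var/lib/mysql shown in the config above, the same MySQL version on both machines, and a placeholder destination path):

# Stop the server so the data files are consistent on disk
sudo service mysql stop

# Copy the entire data directory: MyISAM .MYD/.MYI files as well as the
# InnoDB ibdata/ib_logfile files; -a preserves ownership and permissions
sudo cp -a /var/lib/mysql /backup/mysql-datadir

sudo service mysql start

# Restoring is the reverse: stop mysqld, copy the directory back, start mysqld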
How many tables do you have? - Alex
Could you add more information about the problem - whether the bottleneck is CPU or disk, whether a specific table is causing the slow import, and if so, what its structure is, how many rows it has, and so on. - VolenD
We had some big tables (10 GB) that were too much for MySQL to import/export. What helped was moving the large log tables into MongoDB. I know this won't solve your problem, but one day you may have to make that decision. - Zdenek Machek
I'm voting to close this question because it should be posted on dba.
11 Answers

When your SQL file has one INSERT statement per line, you can make the import faster by combining multiple inserts into a single statement.
For example, change this:
INSERT INTO abc VALUES (1,2,3);
INSERT INTO abc VALUES (4,5,6);
INSERT INTO abc VALUES (7,8,9);

into this:

INSERT INTO abc VALUES(1,2,3),(4,5,6),(7,8,9);

Recent versions may already commit less often; check the documentation.
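Note that mysqldump already writes multi-row INSERTs by default (the --extended-insert option, also implied by the default --opt), so rewriting statements by hand is mainly needed when the dump was created with --skip-extended-insert. Re-dumping along these lines (database and file names are placeholders) produces the batched form directly:

# --extended-insert is on by default and packs many rows into each INSERT
mysqldump --extended-insert -u root -p mydb | gzip > mydb.sql.gz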
