$ sudo apt-get update
$ sudo apt-get install -y mysql-server-5.7
$ sudo mysql_secure_installation
#1 VALIDATE PASSWORD PLUGIN can be used to test passwords...
Press y|Y for Yes, any other key for No: N (my choice)

#2 Please set the password for root here...
New password: 123456 (enter the password)
Re-enter new password: 123456 (re-enter it)

#3 By default, a MySQL installation has an anonymous user, allowing anyone to log into MySQL without having to have a user account created for them...
Remove anonymous users? (Press y|Y for Yes, any other key for No) : N (my choice)

#4 Normally, root should only be allowed to connect from 'localhost'. This ensures that someone cannot guess at the root password from the network...
Disallow root login remotely? (Press y|Y for Yes, any other key for No) : Y (my choice)

#5 By default, MySQL comes with a database named 'test' that anyone can access...
Remove test database and access to it? (Press y|Y for Yes, any other key for No) : N (my choice)

#6 Reloading the privilege tables will ensure that all changes made so far will take effect immediately.
Reload privilege tables now? (Press y|Y for Yes, any other key for No) : Y (my choice)
Check whether the MySQL service is running
$ sudo service mysql status
If you see output like the following, the MySQL service is already running
hadoop@slave2:~$ sudo service mysql status
● mysql.service - MySQL Community Server
   Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2021-01-20 04:02:50 UTC; 11min ago
  Process: 4325 ExecStart=/usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysqld.pid (code=exited, status=0/SUCCESS)
  Process: 4294 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
 Main PID: 4327 (mysqld)
    Tasks: 28 (limit: 4215)
   CGroup: /system.slice/mysql.service
           └─4327 /usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysqld.pid

Jan 20 04:02:49 slave2 systemd[1]: Starting MySQL Community Server...
Jan 20 04:02:50 slave2 systemd[1]: Started MySQL Community Server.
If the MySQL service is not running, start it with the following command
$ sudo service mysql start
Configure remote access
$ sudo mysql -uroot -p123456
> use mysql;
> create user hadoop identified by 'hadoop';
> grant all privileges on *.* to 'hadoop'@'%' identified by '123456' with grant option;
> flush privileges;
> exit;

Note that in MySQL 5.7 the IDENTIFIED BY '123456' clause of the GRANT statement resets the hadoop user's password to 123456, which is the password that hive-site.xml will use.
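Before moving on, it is worth confirming that the grant actually works over the network. A hypothetical check, run from another node such as master or slave1 (it assumes the mysql client package is installed on that node):

```shell
# Hypothetical check from master or slave1: confirm that the hadoop user
# can log in to MySQL on slave2 over the network. If this fails with a
# "Communications link failure", see the bind-address fix later in this post.
mysql -h slave2 -P 3306 -u hadoop -p123456 -e 'SELECT VERSION();'
```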
(2) Install the Hive server (on the slave1 node)
Use Xftp to upload the apache-hive-2.1.1-bin.tar.gz package to ~/software/, then extract it
$ tar -zxvf ~/software/apache-hive-2.1.1-bin.tar.gz -C ~/servers
Rename the extracted directory
$ mv ~/servers/apache-hive-2.1.1-bin ~/servers/hive
Edit the environment variables
$ vim ~/.bashrc
Append at the end of the file
export HIVE_HOME=/home/hadoop/servers/hive
export PATH=$PATH:$HIVE_HOME/bin
Apply the environment variables
$ source ~/.bashrc
Configure the hive-env.sh file
$ cp ~/servers/hive/conf/hive-env.sh.template ~/servers/hive/conf/hive-env.sh
$ vim ~/servers/hive/conf/hive-env.sh
Append at the end of the file
# Hadoop installation path
HADOOP_HOME=/home/hadoop/servers/hadoop
# Hive configuration directory
export HIVE_CONF_DIR=/home/hadoop/servers/hive/conf
# Hive auxiliary JAR path
export HIVE_AUX_JARS_PATH=/home/hadoop/servers/hive/lib
Configure the hive-site.xml file
$ vim ~/servers/hive/conf/hive-site.xml
Add
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://slave2:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false&amp;useUnicode=true&amp;characterEncoding=UTF-8</value>
    <description>MySQL connection URL (ampersands must be XML-escaped as &amp;amp;)</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>JDBC connection driver</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hadoop</value>
    <description>MySQL login user name</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>MySQL login password</description>
  </property>
</configuration>
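One detail that is easy to miss: inside an XML value, a literal ampersand must be escaped as &amp;, otherwise Hive fails to parse hive-site.xml at startup. A small stand-alone demonstration using python3's XML parser:

```shell
# An XML parser rejects a raw "&" in element content but accepts the
# escaped form "&amp;" - the same rule applies to the JDBC URL above.
raw='<v>a=1&b=2</v>'
escaped='<v>a=1&amp;b=2</v>'

python3 -c "import sys,xml.etree.ElementTree as ET; ET.fromstring(sys.argv[1])" "$raw" 2>/dev/null \
  && echo "raw: ok" || echo "raw: rejected"        # prints "raw: rejected"
python3 -c "import sys,xml.etree.ElementTree as ET; ET.fromstring(sys.argv[1])" "$escaped" 2>/dev/null \
  && echo "escaped: ok" || echo "escaped: rejected" # prints "escaped: ok"
```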
Use Xftp to upload the mysql-connector-java-5.1.47-bin.jar driver package to ~/software/, then copy the MySQL connector JAR into the lib directory under the Hive installation path
$ cp ~/software/mysql-connector-java-5.1.47-bin.jar ~/servers/hive/lib
Distribute Hive to the client node
$ scp -r ~/servers/hive master:~/servers
(3) Install the Hive client (on the master node)
Configure the hive-site.xml file
$ rm -f ~/servers/hive/conf/hive-site.xml
$ vim ~/servers/hive/conf/hive-site.xml
Add
<configuration>
  <property>
    <name>hive.metastore.local</name>
    <value>false</value>
    <description>Whether to use a local metastore instead of a remote one</description>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://slave1:9083</value>
    <description>URI of the remote Metastore server</description>
  </property>
  <property>
    <name>hive.cli.print.current.db</name>
    <value>true</value>
    <description>Show the current database name in the CLI prompt</description>
  </property>
  <property>
    <name>hive.cli.print.header</name>
    <value>true</value>
    <description>Print column headers in query results</description>
  </property>
</configuration>
Edit the environment variables
$ vim ~/.bashrc
Append at the end of the file
export HIVE_HOME=/home/hadoop/servers/hive
export PATH=$PATH:$HIVE_HOME/bin
Apply the environment variables
$ source ~/.bashrc
At this point, the remote Hive deployment is complete.
Starting Hive
(1) Start the Hadoop cluster (on the master node). If you have not yet set up a Hadoop cluster, reply with the keyword hadoop入门 in the official account's backend to get an illustrated tutorial.
$ start-all.sh
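You can confirm the daemons came up with jps on each node (the exact process list is illustrative and depends on your cluster layout):

```shell
# On master, expect entries such as NameNode and ResourceManager;
# on the slave nodes, DataNode and NodeManager.
jps
```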
(2) Start the MySQL service (on the slave2 node): see the first step of the Hive deployment section above; it is not repeated here.
(3) Start the Metastore service (on the slave1 node)
Before the first start, initialize the metastore schema
$ schematool -dbType mysql -initSchema
If you see output like the following, the initialization succeeded
hadoop@slave1:~$ schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/servers/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/servers/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:        jdbc:mysql://slave2:3306/hive?createDatabaseIfNotExist=true&useSSL=false&useUnicode=true&characterEncoding=UTF-8
Metastore Connection Driver :    com.mysql.jdbc.Driver
Metastore connection User:       hadoop
Starting metastore schema initialization to 2.1.0
Initialization script hive-schema-2.1.0.mysql.sql
Initialization script completed
schemaTool completed
However, if you see the following output, the initialization failed
hadoop@slave1:~$ schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/servers/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/servers/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:        jdbc:mysql://slave2:3306/hive?createDatabaseIfNotExist=true&useSSL=false&useUnicode=true&characterEncoding=UTF-8
Metastore Connection Driver :    com.mysql.jdbc.Driver
Metastore connection User:       hadoop
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
Underlying cause: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException : Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
SQL Error code: 0
Use --verbose for detailed stacktrace.
*** schemaTool failed ***
To fix the "The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server." error, MySQL on slave2 must accept connections from the network. Edit its configuration file:
$ sudo vim /etc/mysql/mysql.conf.d/mysqld.cnf
Change bind-address=127.0.0.1 to bind-address=0.0.0.0, restart MySQL with $ sudo service mysql restart, and then re-run the schema initialization on slave1.
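If you prefer a non-interactive edit, the same change can be made with sed. The snippet below rehearses it on a throwaway temp file; on slave2 you would point sed (with sudo) at /etc/mysql/mysql.conf.d/mysqld.cnf instead:

```shell
# Rehearse the bind-address change on a temporary copy of the config line
cnf=$(mktemp)
echo 'bind-address            = 127.0.0.1' > "$cnf"
sed -i 's/^bind-address.*/bind-address            = 0.0.0.0/' "$cnf"
grep 'bind-address' "$cnf"   # prints: bind-address            = 0.0.0.0
rm -f "$cnf"
```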
Then start the Metastore service on slave1 (nohup keeps it running in the background)
$ nohup hive --service metastore &
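To confirm the Metastore is actually up, you can look for its process and its Thrift port (9083 by default, matching the client's hive.metastore.uris). A hypothetical check on slave1, assuming jps and ss are available:

```shell
# The metastore runs inside a RunJar process and listens on port 9083
jps | grep RunJar
ss -lnt | grep 9083
```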
(4) Start the Hive client (on the master node)
Launch Hive directly
$ hive
hadoop@master:~$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/servers/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/servers/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Logging initialized using configuration in jar:file:/home/hadoop/servers/hive/lib/hive-common-2.1.1.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive (default)>
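As a quick smoke test that the client really talks to the remote Metastore, you can also run a query non-interactively (a hypothetical session; it assumes the Metastore on slave1 is up):

```shell
# Should list at least the built-in "default" database
hive -e 'show databases;'
```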






