CDH6: Workaround for Not Being Able to Disable Auto-TLS When Adding New Hosts

Pitfall 1: TLS must be disabled before adding new hosts, but it cannot be turned off

See the official documentation:

https://www.cloudera.com/documentation/enterprise/6/6.1/topics/cm_mc_adding_hosts.html

Adding hosts with the Add Hosts wizard

You can use the Add Hosts wizard to install CDH, Impala, and the Cloudera Manager Agent on hosts.

  1. Disable TLS encryption or authentication
  2. Alternate method of installing the Cloudera Manager Agent without disabling TLS
  3. Use the Add Hosts wizard
  4. Enable TLS encryption or authentication
  5. Enable TLS/SSL for CDH components
  6. Enable Kerberos

Analysis:

https://www.cloudera.com/documentation/enterprise/6/6.1/topics/install_cm_server.html

Because the following was run during installation:

sudo JAVA_HOME=/usr/java/jdk1.8.0_141-cloudera /opt/cloudera/cm-agent/bin/certmanager setup --configure-services

Auto-TLS cannot simply be switched off afterwards.

Solution:

1. Turn it off in the Cloudera Manager admin console


2. Back up cm_init.txt, then empty the file's contents

cp /var/lib/cloudera-scm-server/certmanager/cm_init.txt{,.bak}
> /var/lib/cloudera-scm-server/certmanager/cm_init.txt

3. Modify the agent's config.ini on every node

vi /etc/cloudera-scm-agent/config.ini

Change use_tls = 1 to use_tls = 0, for example with the sketch below.
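A minimal sketch of the change, run on every agent node; it assumes the key appears as "use_tls = 1" (spaces around "=" optional) in config.ini:

cp /etc/cloudera-scm-agent/config.ini{,.bak}
sed -ri 's/^(use_tls[[:space:]]*=[[:space:]]*)1/\10/' /etc/cloudera-scm-agent/config.ini
grep '^use_tls' /etc/cloudera-scm-agent/config.ini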

4. Restart the services

systemctl restart cloudera-scm-server

systemctl restart cloudera-scm-agent


Deploying a Harbor Private Image Registry

Installing docker-ce



Step 1: Install some required system utilities

yum install -y yum-utils device-mapper-persistent-data lvm2

Step 2: Add the Docker repository mirror

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Step 3: Install Docker CE

yum -y install docker-ce

Step 4: Start the Docker service

systemctl start docker
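If Docker should also come up automatically after a reboot, enable the service as well:

systemctl enable docker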

Installing docker-compose

curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
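Verify the binary is executable and on the PATH:

docker-compose --version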

Installing the Harbor private registry

$ wget  --continue https://storage.googleapis.com/harbor-releases/release-1.5.0/harbor-offline-installer-v1.5.1.tgz
$ tar -xzvf harbor-offline-installer-v1.5.1.tgz
$ cd harbor
$ cp harbor.cfg{,.bak}
$ vim harbor.cfg
$ diff harbor.cfg{,.bak}
7c7
< hostname = 10.0.221.74
---
> hostname = reg.mydomain.com
11c11
< ui_url_protocol = https
---
> ui_url_protocol = http
$ ./install.sh

Other operations

The working directory for all of the commands below is the harbor directory created by extracting the offline installer.

$ # stop harbor
$ docker-compose down -v
$ # edit the configuration
$ vim harbor.cfg
$ # propagate the modified configuration into docker-compose.yml and the generated files
$ ./prepare
Clearing the configuration file: ./common/config/ui/app.conf
Clearing the configuration file: ./common/config/ui/env
Clearing the configuration file: ./common/config/ui/private_key.pem
Clearing the configuration file: ./common/config/db/env
Clearing the configuration file: ./common/config/registry/root.crt
Clearing the configuration file: ./common/config/registry/config.yml
Clearing the configuration file: ./common/config/jobservice/app.conf
Clearing the configuration file: ./common/config/jobservice/env
Clearing the configuration file: ./common/config/nginx/cert/admin.pem
Clearing the configuration file: ./common/config/nginx/cert/admin-key.pem
Clearing the configuration file: ./common/config/nginx/nginx.conf
Clearing the configuration file: ./common/config/adminserver/env
loaded secret from file: /data/secretkey
Generated configuration file: ./common/config/nginx/nginx.conf
Generated configuration file: ./common/config/adminserver/env
Generated configuration file: ./common/config/ui/env
Generated configuration file: ./common/config/registry/config.yml
Generated configuration file: ./common/config/db/env
Generated configuration file: ./common/config/jobservice/env
Generated configuration file: ./common/config/jobservice/app.conf
Generated configuration file: ./common/config/ui/app.conf
Generated certificate, key file: ./common/config/ui/private_key.pem, cert file: ./common/config/registry/root.crt
The configuration files are ready, please use docker-compose to start the service.
$ sudo chmod -R 777 common ## so the container processes have permission to read the generated configuration
$ # start harbor
$ docker-compose up -d
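After starting, the container status can be checked from the same harbor directory:

$ docker-compose ps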

Note

This is because, starting with Docker 1.3.2, the Docker client talks to registries over HTTPS by default. Since this Harbor registry is accessed over plain HTTP, docker login, pull, push and similar commands against the non-HTTPS registry will report an error. Solution:

cat >  /etc/docker/daemon.json <<EOF
{
  "insecure-registries": [
    "10.0.221.74"
  ]
}
EOF
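Restart Docker so the new daemon.json takes effect:

systemctl restart docker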


 

Accessing the management UI

http://10.0.221.74

Username: admin   Password: Harbor12345


Testing


1) Pull the hello-world image from Docker Hub

# docker run hello-world
2) Tag the image so it can be pushed to the private registry; library is the project Harbor provides by default

# docker tag hello-world 10.0.221.74/library/hello-world
3) Push the image

First log in to the private registry with the following command, then enter the username and password

# docker login 10.0.221.74
Username: admin
Password:       // enter the password
Login Succeeded
Push the image

# docker push 10.0.221.74/library/hello-world
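To verify the push, the image can be pulled back from any host that also lists 10.0.221.74 as an insecure registry:

# docker pull 10.0.221.74/library/hello-world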

References

https://blog.csdn.net/weixin_41465338/article/details/80146218

http://orchome.com/664


Fixing WordPress Not Displaying Line Breaks / Blank Lines Correctly

Edit formatting.php under wp-includes

//$allblocks = '(?:table|thead|tfoot|caption|col|colgroup|tbody|tr|td|th|div|dl|dd|dt|ul|ol|li|pre|form|map|area|blockquote|address|math|style|p|h[1-6]|hr|fieldset|legend|section|article|aside|hgroup|header|footer|nav|figure|figcaption|details|menu|summary)';
$allblocks = '(?:table|thead|menu|summary)';

Removing a tag from this definition prevents WordPress from applying its extra formatting to that tag, which in turn avoids the extra line breaks.


Connecting with the Impala JDBC Driver

Download the Cloudera JDBC Driver for Impala from the official website.


Connecting from DbVisualizer:



      jdbc:impala://10.0.19.48:21050;AuthMech=3;request_pool=development;

What the AuthMech parameter means:

  • 0 for No Authentication.
  • 1 for Kerberos.
  • 2 for User Name.
  • 3 for User Name And Password.
  • 6 for Hadoop Delegation Token.
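A command-line sketch of the same connection using beeline; the driver class name and the step of placing the Cloudera ImpalaJDBC41 jar where beeline can load it (for example into the Hive lib directory) are assumptions here, not defaults:

beeline -d "com.cloudera.impala.jdbc41.Driver" \
        -u "jdbc:impala://10.0.19.48:21050;AuthMech=3;request_pool=development;" \
        -n your_user -p your_password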


Testing Kubernetes PV/PVC with NFS

1. Prepare the storage (NFS)
$ sudo yum install nfs-utils

Set the NFS services to start at boot

$ sudo systemctl enable rpcbind
$ sudo systemctl enable nfs

Start the NFS services

$ sudo systemctl start rpcbind
$ sudo systemctl start nfs

Once the services are running, create the shared directories on the server
mkdir -p /data/volumes/linuxea-{1,2,3,4,5}

[root@kube-node1 volumes]# cat /etc/exports
/data/volumes/linuxea-1 10.0.0.0/8(rw,no_root_squash)
/data/volumes/linuxea-2 10.0.0.0/8(rw,no_root_squash)
/data/volumes/linuxea-3 10.0.0.0/8(rw,no_root_squash)
/data/volumes/linuxea-4 10.0.0.0/8(rw,no_root_squash)
/data/volumes/linuxea-5 10.0.0.0/8(rw,no_root_squash)

[root@kube-node1 volumes]#  exportfs -arv
exporting 10.0.0.0/8:/data/volumes/linuxea-5
exporting 10.0.0.0/8:/data/volumes/linuxea-4
exporting 10.0.0.0/8:/data/volumes/linuxea-3
exporting 10.0.0.0/8:/data/volumes/linuxea-2
exporting 10.0.0.0/8:/data/volumes/linuxea-1

[root@kube-node1 volumes]# showmount -e
Export list for kube-node1:
/data/volumes/linuxea-5 10.0.0.0/8
/data/volumes/linuxea-4 10.0.0.0/8
/data/volumes/linuxea-3 10.0.0.0/8
/data/volumes/linuxea-2 10.0.0.0/8
/data/volumes/linuxea-1 10.0.0.0/8
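Before creating the PVs, the exports can be sanity-checked from any node that will run pods, assuming nfs-utils is installed there (/mnt is just an example mount point):

mount -t nfs 10.0.19.152:/data/volumes/linuxea-1 /mnt
touch /mnt/nfs-write-test && ls /mnt
umount /mnt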

2. Create the PVs

[k8s@kube-node1 ~]$ cat pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
   name: linuxea-1
   labels:
     name: v1
spec:
   nfs:
     path: /data/volumes/linuxea-1
     server: 10.0.19.152
   accessModes: ["ReadWriteMany","ReadWriteOnce"]
   capacity:
     storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
   name: linuxea-2
   labels:
     name: v2
spec:
   nfs:
     path: /data/volumes/linuxea-2
     server: 10.0.19.152
   accessModes: ["ReadWriteMany","ReadWriteOnce"]
   capacity:
     storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
   name: linuxea-3
   labels:
     name: v3
spec:
   nfs:
     path: /data/volumes/linuxea-3
     server: 10.0.19.152
   accessModes: ["ReadWriteMany","ReadWriteOnce"]
   capacity:
     storage: 3Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
   name: linuxea-4
   labels:
     name: v4
spec:
   nfs:
     path: /data/volumes/linuxea-4
     server: 10.0.19.152
   accessModes: ["ReadWriteMany","ReadWriteOnce"]
   capacity:
     storage: 4Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
   name: linuxea-5
   labels:
     name: v5
spec:
   nfs:
     path: /data/volumes/linuxea-5
     server: 10.0.19.152
   accessModes: ["ReadWriteMany","ReadWriteOnce"]
   capacity:
     storage: 5Gi

After they are defined, apply the manifest

kubectl apply -f pv-demo.yaml 
persistentvolume/linuxea-1 created
persistentvolume/linuxea-2 created
persistentvolume/linuxea-3 created
persistentvolume/linuxea-4 created
persistentvolume/linuxea-5 created

3. Create the PVC

[k8s@kube-node1 ~]$  cat pvc-demo.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
   name: linuxea-pvc
   namespace: default
spec:
   accessModes: ["ReadWriteMany"]
   resources:
     requests:
       storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
   name: linuxea-pvc-pod
   namespace: default
spec:
   containers:
   - name: linuxea-pod1-pvc
     image: "marksugar/nginx:1.14.a"
     ports:
       - containerPort: 88
     volumeMounts:
     - name: linuxea-image
       mountPath: /data/wwwroot/
   volumes:
   - name: linuxea-image
     persistentVolumeClaim:
       claimName: linuxea-pvc

Create it with apply

[root@kube-node1]# kubectl apply -f pvc-demo.yaml 
persistentvolumeclaim/linuxea-pvc created
pod/linuxea-pvc-pod created

Use kubectl get pvc to confirm that the newly created PVC has been Bound

[root@kube-node1]# kubectl get pvc
NAME          STATUS    VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
linuxea-pvc   Bound     linuxea-5   5Gi        RWO,RWX                       6s

And the pod

[root@kube-node1]# kubectl get pods -o wide

NAME                                                  READY     STATUS    RESTARTS   AGE       IP             NODE
linuxea-pvc-pod                                       1/1       Running   0          3h        172.30.1.19    kube-node2

After the PVC is created, you can see that it has been bound to the linuxea-5 PV (the PV with capacity >= 5Gi)

[root@kube-node1]# kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                 STORAGECLASS   REASON    AGE
linuxea-1   1Gi        RWO,RWX        Retain           Available                                                  2m
linuxea-2   2Gi        RWO,RWX        Retain           Available                                                  2m
linuxea-3   3Gi        RWO,RWX        Retain           Available                                                  2m
linuxea-4   4Gi        RWO,RWX        Retain           Available                                                  2m
linuxea-5   5Gi        RWO,RWX        Retain           Bound       default/linuxea-pvc                            2m

You can also use kubectl describe pods linuxea-pvc-pod | grep root to check the mount

[root@kube-node1]# kubectl describe pods linuxea-pvc-pod|grep root
      /data/wwwroot/ from linuxea-image (rw)

PV write test

Access it from inside the cluster

[k8s@kube-node1 ~]$ curl 172.30.1.19
linuxea-linuxea-pvc-pod.com-127.0.0.1/8 172.30.1.19/24

Then go back to the NFS server and append to the file

[root@kube-node1 volumes]#  echo `date` >> /data/volumes/linuxea-5/index.html

Access it a second time from inside the cluster to verify

[k8s@kube-node1 ~]$ curl 172.30.1.19
linuxea-linuxea-pvc-pod.com-127.0.0.1/8 172.30.1.19/24
Wed Apr 3 10:45:11 CST 2019

The size of each PV may need to be planned in advance so that a PVC can find a PV that fits; the check below shows which PV a claim actually bound to.
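To see which PV a claim ended up on and its actual capacity, a jsonpath query works (these are standard PVC fields):

kubectl get pvc linuxea-pvc -o jsonpath='{.spec.volumeName} {.status.capacity.storage}{"\n"}'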


Common NFS Commands

Configuring the NFS server

Start the NFS-related services
# service rpcbind status  check whether the rpcbind service is running
# service nfs status  check whether the nfs service is running
# service rpcbind start
# service nfs start
# service nfslock start  start the file-locking (data consistency) service

Check the NFS registration with RPC
# rpcinfo -p localhost | grep nfs

Configure the permissions of the directories exported (shared) by the NFS server, for example /home/nfs
# vi /etc/exports
/home/nfs  192.168.100.0/24(rw,no_root_squash)
.....

Reload the configuration:
# exportfs -arv  re-export all shared directories

# exportfs -auv  unexport all shared directories

Or restart the service: service nfs restart
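To list what is currently exported, together with the effective options, without re-exporting anything:
# exportfs -v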


Information about clients currently connected to the NFS server is recorded in /var/lib/nfs/xtab;
the effective export options for each exported directory are recorded in /var/lib/nfs/etab.

Configuring the NFS client

List the NFS directories exported by a given host
# showmount -e nfs_server_ip

Mount an NFS shared directory
mount -t nfs nfs_server_ip:/var/ftp/pub /mnt/pub

When the device name has the form hostname:directory, mount recognizes the NFS type automatically, so the type can be omitted
mount nfs_server_ip:/var/ftp/pub /mnt/pub

Unmount
umount /mnt/pub

Mount automatically at boot
# vi /etc/fstab  add a line:
# nfs_server_ip:/var/ftp/pub /mnt/pub nfs ro 0 0

Predefine the mount options but do not mount automatically:
# vi /etc/fstab  add a line with the noauto option:
# nfs_server_ip:/var/ftp/pub /mnt/pub nfs ro,noauto,user 0 0
# mount /mnt/pub

Note: the user option allows any user to mount it with the mount command; otherwise only root can mount it.


Cloudera Manager (CM) Timezone Issue

Problem:

Cloudera Manager shows times in US Eastern (EST). Where can this be changed to Shanghai time (CST, UTC+8)?
Converting the EST times to CST by hand shows that the times themselves are correct; there is no related option in the system settings, and NTP is configured and synchronizing successfully.
The timezone configured on the hosts is correct.

Solution:


vim /opt/cloudera/cm/bin/cm-server

Add:

CMF_OPTS="$CMF_OPTS -Duser.timezone=GMT+08"
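Then restart the Cloudera Manager server so the new JVM option takes effect:

systemctl restart cloudera-scm-server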
