Installing the IK Analyzer for Solr on CDH 6.2.0. First locate Solr's WEB-INF directory:
find / -name WEB-INF
[root@bigdata-3 lib]# pwd
/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/solr/server/solr-webapp/webapp/WEB-INF/lib
The same directory is reachable through the CDH parcel symlink:
/opt/cloudera/parcels/CDH/lib/solr/server/solr-webapp/webapp/WEB-INF/lib
Copy ik-analyzer-7.5.0.jar into that lib directory and verify it is in place:
-rwxr-xr-x 1 root root 1184820 May 7 10:29 ik-analyzer-7.5.0.jar
Place the five configuration files from the resources directory into the Solr service's (Jetty or Tomcat) webapp/WEB-INF/classes/ directory:
① IKAnalyzer.cfg.xml
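A minimal sketch of placing the files, assuming the jar and the resources directory were unpacked into the current directory and Solr runs under the Jetty webapp path found above:
# copy the analyzer jar into WEB-INF/lib
cp ik-analyzer-7.5.0.jar /opt/cloudera/parcels/CDH/lib/solr/server/solr-webapp/webapp/WEB-INF/lib/
# create WEB-INF/classes if it does not exist yet and copy the config files there
mkdir -p /opt/cloudera/parcels/CDH/lib/solr/server/solr-webapp/webapp/WEB-INF/classes
cp resources/* /opt/cloudera/parcels/CDH/lib/solr/server/solr-webapp/webapp/WEB-INF/classes/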
Generate an instancedir config template, upload it to ZooKeeper, and create a one-shard collection that uses it:
solrctl instancedir --generate $HOME/test_collection_config
solrctl instancedir --create test_collection_config $HOME/test_collection_config
solrctl collection --create test_collection -s 1 -c test_collection_config
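A quick check that the instancedir was uploaded and the collection exists (assuming solrctl is on the PATH):
solrctl instancedir --list    # should list test_collection_config
solrctl collection --list     # should list test_collection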
cd /opt/cloudera/parcels/CDH/share/doc/solr-doc*/example/exampledocs
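From there the bundled sample documents can be indexed into the new collection with post.jar (a sketch; replace localhost with one of your Solr hosts):
java -Durl=http://localhost:8983/solr/test_collection/update -jar post.jar *.xml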
The collection's index data lives on HDFS and can be listed with:
hadoop fs -ls -R /solr/test_co*/
The default character encoding in a Linux shell configuration is UTF-8, so a file saved in another encoding (e.g. GBK) shows garbled Chinese text; converting it with iconv fixes this:
iconv -f GBK -t UTF-8 1.csv > 3.csv
The iconv command usage is as follows:
iconv [OPTION...] [FILE...]
Available options:
Input/output format specification:
-f, --from-code=NAME    encoding of the original text
-t, --to-code=NAME      encoding for the output
Information:
-l, --list    list all known coded character sets
Output control:
-c    omit invalid characters from the output
-o, --output=FILE    output file
-s, --silent    suppress warnings
--verbose    print progress information
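A small sketch combining these options to batch-convert every GBK .csv file in a directory (filenames are examples):
for f in *.csv; do
  # -c drops characters that cannot be converted instead of aborting
  iconv -c -f GBK -t UTF-8 -o "${f%.csv}-utf8.csv" "$f"
done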
Setting up JanusGraph Server with HBase + Solr
https://blog.csdn.net/goandozhf/article/details/80105895#2068-1524796329245
Create the janusgraph-hbase-solr-server.properties file:
gremlin.graph=org.janusgraph.core.JanusGraphFactory
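A minimal sketch of the remaining settings for an HBase + SolrCloud backend, written here via a heredoc (the conf/gremlin-server/ location follows the JanusGraph distribution layout; hostnames and the ZooKeeper chroot are placeholders to adjust for your cluster):
cat > conf/gremlin-server/janusgraph-hbase-solr-server.properties <<'EOF'
gremlin.graph=org.janusgraph.core.JanusGraphFactory
# HBase storage backend, located via its ZooKeeper quorum
storage.backend=hbase
storage.hostname=zk-host1,zk-host2,zk-host3
# Solr index backend running in SolrCloud mode
index.search.backend=solr
index.search.solr.mode=cloud
index.search.solr.zookeeper-url=zk-host1:2181/solr
EOF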
HBase benchmarking tools. PerformanceEvaluation drives a client workload against the cluster (here: 10 increment clients, 10 rows each, run as threads rather than MapReduce):
$ bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 --nomapred increment 10
LoadTestTool writes a synthetic load directly into a test table (here: roughly one million keys, no compression):
hbase org.apache.hadoop.hbase.util.LoadTestTool -compression NONE -write 8:8 -num_keys 1048576
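PerformanceEvaluation also has read-oriented commands; a read-side spot check along the same lines (adjust the client count to your cluster):
$ bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 --nomapred randomRead 10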
The Canary tool helps users probe the health of an HBase cluster. By default it tests every column family of every region of every table:
hbase canary
To run the Canary check against every column family of every region of specific tables:
hbase canary test-01 test-02
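The Canary also supports a RegionServer-oriented mode and repeated runs (a sketch; exact flags can vary slightly by HBase version):
hbase canary -regionserver               # probe each RegionServer instead of each region
hbase canary -daemon -interval 60 test-01   # keep probing the table every 60 seconds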
Cross-cluster copies with distcp. The commands below (collected into a #!/bin/bash script) copy Hive table data from the 192.168.81.30 cluster to 172.20.85.39, submitting to the xy_yarn_pool.development queue and allowing fallback to simple authentication:
hadoop distcp -D mapreduce.job.queuename=xy_yarn_pool.development -D ipc.client.fallback-to-simple-auth-allowed=true -i -overwrite hdfs://192.168.81.30:8020/user/xy_app_spark/tables/fi_gw_express_order_idcard1_encrypt/pk_year=2018/pk_month=2018-10 hdfs://172.20.85.39:8020/user/hive/warehouse/credit_mining.db/fi_gw_express_order_idcard1_encrypt/pk_year=2018/pk_month=2018-10
hadoop distcp -D mapreduce.job.queuename=xy_yarn_pool.development -D ipc.client.fallback-to-simple-auth-allowed=true -i -overwrite hdfs://192.168.81.30:8020/user/xy_app_spark/tables/fi_gw_express_order_idcard1_encrypt/pk_year=2018/pk_month=2018-11 hdfs://172.20.85.39:8020/user/hive/warehouse/credit_mining.db/fi_gw_express_order_idcard1_encrypt/pk_year=2018/pk_month=2018-11
hadoop distcp -D mapreduce.job.queuename=xy_yarn_pool.development -D ipc.client.fallback-to-simple-auth-allowed=true -i -overwrite hdfs://192.168.81.30:8020/user/xy_app_spark/tables/fo_payment_encrypt/pk_year=2018/pk_month=2018-10 hdfs://172.20.85.39:8020/user/hive/warehouse/credit_mining.db/fo_payment_encrypt/pk_year=2018/pk_month=2018-10
hadoop distcp -D mapreduce.job.queuename=xy_yarn_pool.development -D ipc.client.fallback-to-simple-auth-allowed=true -i -overwrite hdfs://192.168.81.30:8020/user/xy_app_spark/tables/fo_payment_encrypt/pk_year=2018/pk_month=2018-11 hdfs://172.20.85.39:8020/user/hive/warehouse/credit_mining.db/fo_payment_encrypt/pk_year=2018/pk_month=2018-11
hadoop distcp -D mapreduce.job.queuename=xy_yarn_pool.development -D ipc.client.fallback-to-simple-auth-allowed=true -i -overwrite hdfs://192.168.81.30:8020//user/hive/warehouse/xy_ods.db/t_serve_business_order_real_time_v2_encrypt hdfs://172.20.85.39:8020/user/hive/warehouse/xy_ods.db/t_serve_business_order_real_time_v2_encrypt
hadoop distcp -D mapreduce.job.queuename=xy_yarn_pool.development -D ipc.client.fallback-to-simple-auth-allowed=true -i hdfs://192.168.81.30:8020/user/hive/warehouse/xy_ods_db.db/credit_logprocessor_rocord hdfs://172.20.85.39:8020/user/hive/warehouse/xy_ods_db.db/credit_logprocessor_rocord
hadoop distcp -D mapreduce.job.queuename=xy_yarn_pool.development -D ipc.client.fallback-to-simple-auth-allowed=true -i -overwrite hdfs://192.168.81.30:8020/user/hive/warehouse/xy_ods_db.db/credit_logprocessor_rocord/pk_day=2018-11-11 hdfs://172.20.85.39:8020/user/hive/warehouse/xy_ods_db.db/credit_logprocessor_rocord/pk_day=2018-11-11
hadoop distcp -D mapreduce.job.queuename=xy_yarn_pool.development -D ipc.client.fallback-to-simple-auth-allowed=true -i -overwrite hdfs://192.168.81.30:8020/user/hive/warehouse/xy_ods.db/ods_verification_cardno_d_incr/pk_year=2017 hdfs://172.20.85.39:8020/user/hive/warehouse/xy_ods.db/ods_verification_cardno_d_incr/pk_year=2017
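Since only the partition changes between several of these commands, the repetition can be folded into a loop; a sketch for the fi_gw_express_order_idcard1_encrypt table using the same source and target paths as above:
#!/bin/bash
SRC=hdfs://192.168.81.30:8020/user/xy_app_spark/tables/fi_gw_express_order_idcard1_encrypt
DST=hdfs://172.20.85.39:8020/user/hive/warehouse/credit_mining.db/fi_gw_express_order_idcard1_encrypt
for month in 2018-10 2018-11; do
  # -i: ignore failures, -overwrite: replace existing files at the destination
  hadoop distcp \
    -D mapreduce.job.queuename=xy_yarn_pool.development \
    -D ipc.client.fallback-to-simple-auth-allowed=true \
    -i -overwrite \
    "$SRC/pk_year=2018/pk_month=$month" \
    "$DST/pk_year=2018/pk_month=$month"
done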
References on CentOS 7 memory usage and the "available" memory metric:
https://fivezh.github.io/2017/06/18/centos-7-memory-available/
https://blog.csdn.net/starshine/article/details/7434942
https://blog.51cto.com/12445535/2362407
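For a quick look at the figures those articles discuss (the MemAvailable estimate is present on CentOS 7 and newer kernels):
free -m                                                # the "available" column estimates memory usable without swapping
grep -E 'MemTotal|MemFree|MemAvailable' /proc/meminfo  # raw values the estimate is derived from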
Download address for historical Anaconda versions:
https://repo.continuum.io/archive/
https://blog.51cto.com/acevi/2103437
https://blog.51cto.com/moerjinrong/2155178
Installation
Silent (unattended) install:
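A minimal sketch of downloading and silently installing a specific archived version (the installer filename below is only an example; pick one from the archive listing above):
wget https://repo.continuum.io/archive/Anaconda3-5.3.1-Linux-x86_64.sh
# -b runs the installer in batch (silent) mode without prompts; -p sets the install prefix
bash Anaconda3-5.3.1-Linux-x86_64.sh -b -p $HOME/anaconda3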