FreeBSD
$ date -jf "%s" "1541563268" +"%Y-%m-%d %H:%M:%S"
2018-11-07 12:01:08
CentOS
$ date -d @1541563268 +"%Y-%m-%d %H:%M:%S"
2018-11-07 12:01:08
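Going the other way (a date string back to epoch seconds) uses the same tools in reverse; a quick sketch, assuming the same local timezone as the examples above:
FreeBSD
$ date -jf "%Y-%m-%d %H:%M:%S" "2018-11-07 12:01:08" +"%s"
1541563268
CentOS
$ date -d "2018-11-07 12:01:08" +"%s"
1541563268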
Speeding up repeated SSH logins
This uses the ControlMaster feature; how long the shared connection persists is configurable.
$ mkdir -p ~/.ssh/cm_socket
$ vi ~/.ssh/config
Host *
    ControlMaster auto
    ControlPath ~/.ssh/cm_socket/%r@%h:%p
    ControlPersist 5m
$ chmod 0600 ~/.ssh/config
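With this in place, the first ssh to a host opens a master connection and later logins reuse its socket for the next 5 minutes. You can verify or tear down the master with ssh's -O control commands (host and pid below are placeholders):
$ ssh -O check user@example.com
Master running (pid=12345)
$ ssh -O exit user@example.com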
Parsing JSON in bash
Here are three of the possible approaches.
1. Using python
echo "$DATA" | python -c "import json, sys; obj=json.load(sys.stdin); print(obj['name'])"
# echo '{"test":"123","test2":[{"test21":"456"}]}' | python -c "import json, sys; obj=json.load(sys.stdin); print(obj['test'])"
123
# echo '{"test":"123","test2":[{"test21":"456"}]}' | python -c "import json, sys; obj=json.load(sys.stdin); print(obj['test2'])"
[{'test21': '456'}]
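The same one-liner reaches nested values with ordinary Python indexing, e.g. pulling test21 out of the array:
# echo '{"test":"123","test2":[{"test21":"456"}]}' | python -c "import json, sys; obj=json.load(sys.stdin); print(obj['test2'][0]['test21'])"
456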
2. Using awk, sed, and tr
echo "$DATA" | sed "s/[{}]//g" | tr '[]' ' ' | sed 's/^ //' | awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}'
# echo '{"test":"123","test2":[{"test21":"456"}]}' | sed "s/[{}]//g" | tr '[]' ' ' | sed 's/^ //' | awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}'
"test":"123"
"test2": "test21":"456"
3. Using jq
# echo '{"test":"123","test2":[{"test21":"456"}]}' | jq -r '.test'
123
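Unlike the sed/awk pipeline, jq also resolves nested structures cleanly, e.g.:
# echo '{"test":"123","test2":[{"test21":"456"}]}' | jq -r '.test2[0].test21'
456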
Ref:
https://stackoverflow.com/questions/1955505/parsing-json-with-unix-tools
Handy ESXi tool: ghettoVCB
If you are not at a deep-pocketed enterprise with vCenter and shared storage to enjoy live migration, but a guest VM cannot be shut down to export an OVA and still has to be moved or copied to another ESXi host, this extremely handy tool can help.
Program
https://github.com/lamw/ghettoVCB
Currently supported up to ESXi 6.x.
VM export documentation
https://communities.vmware.com/docs/DOC-8760
VM import documentation
https://communities.vmware.com/docs/DOC-10595
The tool exports a VM via a snapshot, but every time I have used it the export takes the full provisioned space, so it is not a good fit for guest VMs occupying very large disks. The import also cannot assign a new guest VM name, so it checks for name conflicts instead.
The configuration file and a test run will be covered in the next post.
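In the meantime, a rough idea of what a backup run looks like from the ESXi shell; the VM name and paths below are placeholders, and the authoritative flag list is in the repo above:
# back up a single VM by name, with an explicit global config file
./ghettoVCB.sh -m MyGuestVM -g /vmfs/volumes/datastore1/ghettoVCB.conf
# or back up every VM named (one per line) in a text file
./ghettoVCB.sh -f /vmfs/volumes/datastore1/vms_to_backup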
Running K-Means clustering in R
Having talked about K-Means clustering for so long, I am finally publishing the commands over the New Year holiday.
The advantage of using R is that it reduces these complex mathematical computations to simple commands, so you can focus on the data analysis itself and on choosing the right method. The reference site below is admirably thorough and well worth studying carefully.
###
### Load data and remove the 1st and 2nd columns
###
data <- DATA_SOURCE_NAME[-1]  # drop the 1st column
data <- data[-1]              # the old 2nd column is now 1st; drop it too
View(data)
###
### Normalization (Z-score)
###
scale(data, center = TRUE, scale = TRUE)          # preview the scaled values first
data <- scale(data, center = TRUE, scale = TRUE)  # then store them
View(data)
write.table(data, file="DATA_Z-Score.csv", sep=",")
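A quick sanity check (my addition, not part of the original recipe): after Z-score scaling, every column should have mean ~0 and standard deviation 1.
round(colMeans(data), 10)  # means should all be ~0
apply(data, 2, sd)         # standard deviations should all be 1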
###
### K = 2
###
km <- kmeans(data, centers = 2, nstart = 10)
require(factoextra)
fviz_cluster(km, data = data, geom = c("point", "text"), ellipse.type = "norm")
(WSS <- km$tot.withinss)
(BSS <- km$betweenss)
(TSS <- BSS + WSS)
(ratio <- WSS / TSS)
> (WSS <- km$tot.withinss)
[1] 56253.93
> (BSS <- km$betweenss)
[1] 22738.07
> (TSS <- BSS + WSS)
[1] 78992
> (ratio <- WSS / TSS)
[1] 0.7121472
outdata <- table(DATA_SOURCE_NAME$Label, km$cluster)  # cross-tabulate original labels vs. cluster assignments
write.table(outdata, file="DATA_Z-Score-2.csv", sep=",")
###
### K = 3
###
km <- kmeans(data, centers = 3, nstart = 10)
require(factoextra)
fviz_cluster(km, data = data, geom = c("point", "text"), ellipse.type = "norm")
(WSS <- km$tot.withinss)
(BSS <- km$betweenss)
(TSS <- BSS + WSS)
(ratio <- WSS / TSS)
> (WSS <- km$tot.withinss)
[1] 40904.54
> (BSS <- km$betweenss)
[1] 38087.46
> (TSS <- BSS + WSS)
[1] 78992
> (ratio <- WSS / TSS)
[1] 0.5178315
outdata <- table(DATA_SOURCE_NAME$Label, km$cluster)  # cross-tabulate original labels vs. cluster assignments
write.table(outdata, file="DATA_Z-Score-3.csv", sep=",")
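Comparing WSS/TSS by hand for K = 2 and K = 3 is the elbow method in miniature; factoextra can scan a whole range of K in one call. A sketch on the same scaled data (this step is my addition, not part of the original recipe):
require(factoextra)
### Elbow plot: total within-cluster sum of squares for K = 1..10
fviz_nbclust(data, kmeans, method = "wss", k.max = 10)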
Ref:
R系列筆記 (R series notes)