
Tuesday, January 25, 2022

How to use darknet to calculate anchors for YOLOv4

This article builds on the following posts I wrote earlier:

💛YOLOv4 training: a complete tutorial (using mask detection as an example)

💛How to modify YoloR's yolor_p6.cfg for your own training data?


Use the script below.

It generates car_train81.txt and car_test81.txt,

but since I have no validation set, all the image paths end up in car_train81.txt.


import glob, os

# Folder that holds the images (and their YOLO label .txt files)
current_dir = '/home/g11016003/yolor/data/car_train81_total'

# Percentage of images to be used for the test set.
# I have no test set, so this is set to 100 so that every image path
# lands in car_train81.txt.
percentage_test = 100

# Create and/or truncate car_train81.txt and car_test81.txt
file_train = open('data/car_train81.txt', 'w')
file_test = open('data/car_test81.txt', 'w')

# Populate the two files with absolute image paths
counter = 1
index_test = round(100 / percentage_test)
for pathAndFilename in glob.iglob(os.path.join(current_dir, "*.jpg")):
    title, ext = os.path.splitext(os.path.basename(pathAndFilename))

    if counter == index_test:
        counter = 1
        file_train.write(current_dir + "/" + title + '.jpg' + "\n")
    else:
        file_test.write(current_dir + "/" + title + '.jpg' + "\n")
        counter = counter + 1

file_train.close()
file_test.close()

As for why this script is needed:

Darknet YOLOv4 is a bit troublesome here.

You must first put the images and their annotation .txt files in the same folder,

but the train and valid paths in your .data file cannot point directly at the folder that holds the images and .txt files.

Instead, you have to create a new .txt file,

write all of the image paths into that .txt file,

and then set that .txt file's path as train and valid in your .data file.

Car_train81.txt contains the paths of all the images


Once you've confirmed that car_train81.txt was generated,

and that it contains the paths of the images to train on (I use absolute paths here),

you can write the car_train81.txt path into the train and valid fields of car81.data!!

(Remember to set things like classes and names as well~)

The paths inside Car81.data must link to Car_train81.txt
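
For reference, here is a minimal sketch of what car81.data could look like (the backup path is my assumption; adjust every path to your own setup):

```
classes = 81
train  = data/car_train81.txt
valid  = data/car_train81.txt
names  = data/car81.names
backup = backup/
```

Since there is no separate validation set here, train and valid both point at the same list.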


Once you've completed the steps above,

you can run the command~

./darknet detector calc_anchors data/<your_file>.data -num_of_clusters <number_of_clusters> -width <width> -height <height>

Since I'm going to use the anchors with YoloR,

I set -num_of_clusters to 12 and give both width and height as 1280.

You can adjust this depending on which yolo version you are using!!

./darknet detector calc_anchors data/car81.data -num_of_clusters 12 -width 1280 -height 1280


Normally,

once you run the command,

your anchors will appear~



#counters_per_class >> 0 

A count of 0 means that class has no labels (too many 0s will cause IoU to drop and accuracy to fall)


Saturday, January 15, 2022

All I need are birds to accompany me

All I need are birds to accompany me; they're my little buddies, and they can sing a song together with me.

Every person might accompany you for a period of time, but not forever; some might destroy you, while others would treat you better.

When your glory has been faded by someone else, if you could have known it before you met him, I bet you would rather be alone.

Our education system tells you to be a good human: be kind, love each other, or even more, don't fight for yourself. But if you do so, what do you get? Usually, you get bullied and deceived. All the knowledge they taught us in our education system was meant to brainwash us, because they want to keep their authority, control us, and prevent us from replacing them.

If you don't protect yourself, who will protect you?

Finding a person to be with you is hard; please don't pick a random person for a relationship. You need to observe their behavior for a long time to make sure they're not a sociopath or a psychopath.

If you haven't found a lover, maybe you'll feel alone, but finding the wrong person is worse, and it will literally ruin your life, or even devastate you.

Rather than living with humans, maybe it would be better to live with birds; at least you would not feel unsafe anymore.

This article was inspired by Loni Willison; I hope you can understand something more from her story.

Written by Weibert小崴少









Tuesday, January 11, 2022

If I'm not woken up from this dream, then it's not a dream.

If I'm not woken up from this dream, then it's not a dream.

All the people who were gone come back together with me; you can achieve the things that you couldn't in reality, and make the dream become your reality.

As a rainbow, I can shine on my dream and the people I love.

With my lover, we can go anywhere we want to go, and we can experience anything that we want to experience.

Although one day you'll be woken up from your dream, just keep your dream as your reality; then it's true, because you really went through it.


Written by Weibert小崴少



Monday, January 10, 2022

How to install YoloR on ubuntu + intro to the train & detect commands

 ubuntu 18.04

****************************************************************************
https://github.com/WongKinYiu/yolor
1. Create a virtual environment to hold OpenCV 4.5, YoloR, and additional packages
mkvirtualenv myrgpu -p python3

workon myrgpu
pip install numpy
pip install opencv-contrib-python

git clone https://github.com/WongKinYiu/yolor (main)
cd yolor

# pip install required packages
pip install -r requirements.txt

pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install -U cython
pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI

(myrgpu) ubuntu@server:/data/Taiwan/yolorProjects$ python
Python 3.6.9 (default, Dec  8 2021, 21:08:43)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.9.0+cu111'
>>> torch.cuda.is_available()
True
>>> quit()

# install mish-cuda if you want to use mish activation
https://github.com/thomasbrandon/mish-cuda
https://github.com/JunnYu/mish-cuda
git clone https://github.com/JunnYu/mish-cuda
cd mish-cuda
python setup.py build install
cd ..

# install pytorch_wavelets if you want to use dwt down-sampling module
https://github.com/fbcotter/pytorch_wavelets
git clone https://github.com/fbcotter/pytorch_wavelets
cd pytorch_wavelets
pip install .
cd ..

Inference
(myrgpu) ubuntu@server:/data/Taiwan/yolorProjects/yolor$
python detect.py --source inference/images/horses.jpg --cfg cfg/yolor_p6.cfg --weights yolor_p6.pt --conf 0.25 --img-size 1280 --device 0

(myrgpu) ubuntu@server:/data/Taiwan/yolorProjects/yolor$
python detect.py --source /data/Taiwan/coco/C417/111/2021_12_07_16_24_36_844346.jpg --cfg cfg/yolor_p6.cfg --weights yolor_p6.pt --conf 0.25 --img-size 1280 --device 0

******************************************************************

 

💚Train💚 :

python -m torch.distributed.launch --nproc_per_node 2 --master_port 9527 train.py --batch-size 8 --img 1280 1280 --data car81.yaml --cfg cfg/yolor_p6.cfg --weights '' --device 1,2 --sync-bn --name gogogo1 --hyp hyp.scratch.1280.yaml --epochs 800

 


💚Detect💚 :

python detect.py --source data/car_test/images/test0.jpg --names data/car.names --cfg cfg/yolor_p6.cfg --weights runs/train/fight1/weights/best.pt --conf 0.25 --img-size 1280 --device 0


#add output #conf 0.25

python detect.py --source data/car_test/images/ --names data/car81.names --cfg cfg/yolor_p6_car81.cfg --weights runs/train/go81_2/weights/best.pt --output inference/output_go81_2 --conf 0.25 --img-size 1280 --device 0


#change conf as 0.5 #change output address

python detect.py --source data/car_test/images/ --names data/car81.names --cfg cfg/yolor_p6_car81.cfg --weights runs/train/go81_2/weights/best.pt --output inference/output_go81_3 --conf 0.5 --img-size 1280 --device 0

 

#change conf as 0.4 #change output address

python detect.py --source data/car_test/images/ --names data/car81.names --cfg cfg/yolor_p6_car81.cfg --weights runs/train/go81_2/weights/best.pt --output inference/output_go81_4 --conf 0.4 --img-size 1280 --device 0

 

When running detect, you must include --names in the command,

because YoloR's names default to coco.names,

so if you don't add --names when you detect,

the results will come out labeled apple, person, and so on; please be sure to watch out for this~
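
For reference, a .names file is simply one class name per line; a hypothetical sketch of data/car81.names (the class names here are made up):

```
car
truck
bus
motorcycle
```

The names are read in order, so the label on line i is used for class id i.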


Also, when detecting, --device can only be 0;

if you change it to 1, 2, ... it won't run.

Weibert has already tested this for you.


What does the parameter --conf 0.25 mean?

It means that boxes with confidence below 0.25 are not displayed~
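
The effect of --conf can be sketched in a few lines of Python (the detection tuples below are made up for illustration and are not YoloR's actual output format):

```python
# Hypothetical (label, confidence) detections, not YoloR's real data structure
detections = [("car", 0.92), ("car", 0.31), ("truck", 0.18), ("car", 0.55)]

conf_threshold = 0.25  # plays the same role as --conf 0.25

# Keep only the boxes whose confidence reaches the threshold
kept = [d for d in detections if d[1] >= conf_threshold]

print(kept)  # the 0.18 truck box is dropped
```

Raising the threshold (for example to 0.5, as in one of the commands above) keeps fewer but more confident boxes.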


Saturday, January 8, 2022

How to modify YoloR's yolor_p6.cfg for your own training data?

This article builds on these earlier posts of mine~~

💛YOLOv4 training: a complete tutorial (using mask detection as an example) 💛

💛How to use darknet to calculate anchors for YOLOv4 💛


# 207
[implicit_mul]
filters=258   #change

# 208
[implicit_mul]
filters=258   #change

# 209
[implicit_mul]
filters=258   #change

# 210
[implicit_mul]
filters=258  #change

# ============ Head ============ #

# YOLO-3

[route]
layers = 163

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=silu

[shift_channels]
from=203

[convolutional]
size=1
stride=1
pad=1
filters=258    #change
activation=linear

[control_channels]
from=207

[yolo]
mask = 0,1,2
anchors = 19,27,  44,40,  38,94,  96,68,  86,152,  180,137,  140,301,  303,264,  238,542,  436,615,  739,380,  925,792
classes=81    #change
num=12
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
scale_x_y = 1.05
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6


# YOLO-4

[route]
layers = 176

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=384
activation=silu

[shift_channels]
from=204

[convolutional]
size=1
stride=1
pad=1
filters=258  #change
activation=linear

[control_channels]
from=208

[yolo]
mask = 3,4,5
anchors = 19,27,  44,40,  38,94,  96,68,  86,152,  180,137,  140,301,  303,264,  238,542,  436,615,  739,380,  925,792
classes=81  #change
num=12
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
scale_x_y = 1.05
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6


# YOLO-5

[route]
layers = 189

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=silu

[shift_channels]
from=205

[convolutional]
size=1
stride=1
pad=1
filters=258 #change
activation=linear

[control_channels]
from=209

[yolo]
mask = 6,7,8
anchors = 19,27,  44,40,  38,94,  96,68,  86,152,  180,137,  140,301,  303,264,  238,542,  436,615,  739,380,  925,792
classes=81  #change
num=12
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
scale_x_y = 1.05
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6


# YOLO-6

[route]
layers = 202

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=640
activation=silu

[shift_channels]
from=206

[convolutional]
size=1
stride=1
pad=1 
filters=258    #change
activation=linear

[control_channels]
from=210

[yolo]
mask = 9,10,11
anchors = 19,27,  44,40,  38,94,  96,68,  86,152,  180,137,  140,301,  303,264,  238,542,  436,615,  739,380,  925,792
classes=81  #change
num=12
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
scale_x_y = 1.05
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6

# ============ End of Head ============ #

You just need to change the highlighted places above to what you want!!
Remember to delete the #change comments,
because you cannot write comments in a cfg file;
I just tested it, and it stops running once a comment is added.


The formula for filters is (classes + 5) x 3.
For details, see my yolov4 article,
or check the introduction on github at AlexeyAB/darknet~

💙 Remember, when you expand your classes, go into your .cfg file and adjust the parameters;
for example, if you have 62 classes, change classes to 62.
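
The filters formula can be checked with a couple of lines of Python (yolo_filters is just a helper name I made up):

```python
def yolo_filters(num_classes):
    # Each [yolo] scale predicts 3 anchors, and every anchor outputs
    # (x, y, w, h, objectness) plus one score per class.
    return (num_classes + 5) * 3

print(yolo_filters(81))  # 258, matching filters=258 in the cfg above
print(yolo_filters(62))  # 201
```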