This folder contains the code for the Real-time style transfer in a Zoom meeting blog post.
Please use conda to set up the environment for this project: conda env create -f env.yml.
In general, you need PyTorch, OpenCV, and Pillow. CUDA acceleration is highly recommended for training, but inference can be done without it.
We provide a pretrained ResNet-18 model file at 640x480 resolution. This is the most common webcam resolution, so you can use this model as the loss network when training style transfer models for webcams.
If you still want to train the ResNet at another resolution, download the ImageNet data and create a text file containing the paths of all images. Set the path to this text file in config.py.
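For example, one simple way to build that text file is to walk the dataset directory and write one image path per line; the directory layout and file names below are placeholders, not something this repo prescribes:

```python
# Sketch: collect all ImageNet image paths into one text file (one path per line).
# "imagenet/train" and "imagenet_paths.txt" are placeholder names -- adjust them to your setup.
from pathlib import Path

root = Path("imagenet/train")
extensions = {".jpg", ".jpeg", ".png"}
paths = sorted(p for p in root.rglob("*") if p.suffix.lower() in extensions)

with open("imagenet_paths.txt", "w") as f:
    for p in paths:
        f.write(f"{p}\n")
```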
We use knowledge distillation to create targets for the training images:
python3 precompute_targets.py
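Conceptually, precomputing distillation targets means running a large pretrained teacher over each training image and saving its outputs for the student ResNet-18 to match later. The sketch below illustrates that idea with a torchvision ResNet-50 teacher and per-image tensor files; the actual precompute_targets.py may use a different teacher and storage format.

```python
# Sketch: precompute soft targets with a large pretrained teacher network.
# The teacher (resnet50) and the per-image ".target.pt" files are illustrative assumptions.
import torch
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
teacher = models.resnet50(pretrained=True).eval().to(device)

preprocess = transforms.Compose([
    transforms.Resize((480, 640)),   # 640x480, the webcam resolution the loss network targets
    transforms.ToTensor(),
])

with open("imagenet_paths.txt") as f:
    paths = [line.strip() for line in f if line.strip()]

with torch.no_grad():
    for path in paths:
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        logits = teacher(img)                        # soft targets for the student ResNet-18
        torch.save(logits.cpu(), path + ".target.pt")
```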
# After precomputing targets, we train
python3 train_resnet.py
You should see the loss start to go down. You can also visualize the loss with TensorBoard:
tensorboard --logdir=./runs/
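The ./runs/ directory is where torch.utils.tensorboard's SummaryWriter writes by default, so any logging you add yourself will show up alongside the script's output. A minimal sketch (not the repo's actual logging code):

```python
# Sketch: logging a scalar loss to TensorBoard; SummaryWriter defaults to ./runs/<run-name>.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()
for step, loss_value in enumerate([2.3, 1.8, 1.1]):   # placeholder loss values
    writer.add_scalar("train/loss", loss_value, step)
writer.close()
```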
The trained model will be saved to disk. Set the path of the trained ResNet model you want to use as LOSS_NET_PATH in config.py.
Set the path of any image you want to use as the style target (STYLE_TARGET).
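Both settings live in config.py. The variable names below come from this repo, but the paths are placeholders; kanagawa.jpg is the style image included in this folder:

```python
# Example config.py entries -- the paths are placeholders, adjust them to your own files.
LOSS_NET_PATH = "models/resnet18_640x480.pth"   # trained or pretrained ResNet-18 loss network
STYLE_TARGET = "kanagawa.jpg"                   # any style image; kanagawa.jpg ships with the repo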
Train the style transfer network with
python3 stylenet.py
After training, create a virtual camera as explained in the blog post.
If you are on Windows/Mac, use
python3 livedemo_macwin.py
If you are on Linux, use
python3 livedemo.py
Once the script is running, you can join any Zoom/Skype/Teams meeting and choose the virtual camera. You will see the stylized output, and so will your friends in the meeting.
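Under the hood, the demo scripts follow the usual pattern: grab a webcam frame with OpenCV, run it through the trained style network, and push the result to the virtual camera. The sketch below shows that loop with pyvirtualcam, which is one way to feed a virtual camera on Windows/Mac; the repo's own scripts may use a different mechanism.

```python
# Sketch: webcam -> stylize -> virtual camera loop.
# pyvirtualcam is used here for illustration; the actual livedemo scripts may differ.
import cv2
import pyvirtualcam

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

with pyvirtualcam.Camera(width=640, height=480, fps=20) as cam:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (640, 480))
        stylized = frame  # placeholder: replace with a forward pass through the style network
        cam.send(cv2.cvtColor(stylized, cv2.COLOR_BGR2RGB))  # pyvirtualcam expects RGB frames
        cam.sleep_until_next_frame()
```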
Want to become an expert in AI? AI Courses by OpenCV is a great place to start.