RAFT: Optical Flow Estimation Using Deep Learning

This repository contains the code for the blog post RAFT: Optical Flow Estimation Using Deep Learning.


Installation

Running code locally

  1. To run the demo, first clone the RAFT repo into this working directory:
   git clone git@github.com:princeton-vl/RAFT.git

or

   git clone https://github.com/princeton-vl/RAFT.git

Please note: the authors may update their repository at any time, which could break our script. To avoid this, we saved a compatible version of the RAFT architecture in our GitHub, so you can download it from there instead.

  2. Create a virtual environment in your working directory:
   virtualenv -p python3.7 venv
   source venv/bin/activate

and install the required libraries:

   pip install -r requirements.txt
  3. (Optional) A pretrained weights file is already included in this repo, but you can download all of the authors' weights files with this command:
   ./RAFT/download_models.sh
  4. Now you can run the demo with RAFT (a rough sketch of the underlying inference step follows this list):
   python3 inference.py --model=./models/raft-sintel.pth --video ./videos/crowd.mp4

or with RAFT-S:

   python3 inference.py --model=./models/raft-small.pth --video ./videos/crowd.mp4 --small
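
For reference, the snippet below is a minimal sketch of what a single RAFT inference step roughly looks like. It is based on the upstream RAFT demo API (the RAFT class in RAFT/core/raft.py, InputPadder from RAFT/core/utils/utils.py, and the test_mode forward pass), not a drop-in replacement for inference.py:

   # Minimal sketch: estimate optical flow between the first two frames of a video.
   # Assumes the upstream repo layout after cloning (RAFT/core/raft.py, RAFT/core/utils/utils.py).
   import sys
   sys.path.append("RAFT/core")           # make the upstream modules importable

   import argparse
   import cv2
   import torch
   from raft import RAFT                  # RAFT model definition from the cloned repo
   from utils.utils import InputPadder    # pads frames so H and W are multiples of 8

   DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

   def load_model(weights_path, small=False):
       # RAFT reads its configuration from an args namespace with these flags.
       args = argparse.Namespace(small=small, mixed_precision=False, alternate_corr=False)
       model = torch.nn.DataParallel(RAFT(args))
       model.load_state_dict(torch.load(weights_path, map_location=DEVICE))
       model = model.module.to(DEVICE)
       model.eval()
       return model

   def to_tensor(frame_bgr):
       # OpenCV decodes frames as BGR; RAFT expects RGB tensors in the 0-255 range.
       rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
       return torch.from_numpy(rgb).permute(2, 0, 1).float()[None].to(DEVICE)

   cap = cv2.VideoCapture("./videos/crowd.mp4")
   _, frame1 = cap.read()
   _, frame2 = cap.read()
   cap.release()

   model = load_model("./models/raft-sintel.pth")
   with torch.no_grad():
       image1, image2 = to_tensor(frame1), to_tensor(frame2)
       padder = InputPadder(image1.shape)
       image1, image2 = padder.pad(image1, image2)
       # test_mode returns the 1/8-resolution flow and the upsampled full-resolution flow.
       flow_low, flow_up = model(image1, image2, iters=20, test_mode=True)
       print("Flow field shape:", flow_up.shape)   # [1, 2, H, W]: dx and dy per pixel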

Running using Docker

Follow the instructions here to quickly run the RAFT example code using a pre-configured Docker image.

Troubleshooting

If you have two GPUs and you see a UserWarning like:

UserWarning:
    There is an imbalance between your GPUs. You may want to exclude GPU 1 which
    has less than 75% of the memory or cores of GPU 0. You can do so by setting
    the device_ids argument to DataParallel, or by setting the CUDA_VISIBLE_DEVICES
    environment variable.

followed by an error such as:

TypeError: forward() missing 2 required positional arguments: 'image1' and 'image2'

one solution is to set the CUDA_VISIBLE_DEVICES environment variable yourself:

$ export CUDA_VISIBLE_DEVICES=0

where 0 is the ID of one of your GPUs.
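
If you prefer not to export the variable in your shell, the same effect can be achieved from inside Python, as long as the variable is set before CUDA is initialized (for example, at the very top of the script):

   import os
   # Hide every GPU except GPU 0 from PyTorch. This must run before CUDA is
   # initialized, so place it before importing torch (or at least before any
   # torch.cuda call).
   os.environ["CUDA_VISIBLE_DEVICES"] = "0"

   import torch
   print(torch.cuda.device_count())   # should now report 1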

AI Courses by OpenCV

Want to become an expert in AI? AI Courses by OpenCV is a great place to start.