diff --git a/docs/install.md b/docs/install.md
index 662c6739..67d09736 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -2,10 +2,11 @@
 
-- [Requirements](#requirements)
-- [Prepare environment](#prepare-environment)
-- [Install MMHuman3D](#install-mmhuman3d)
-- [A from-scratch setup script](#a-from-scratch-setup-script)
+- [Installation](#installation)
+  - [Requirements](#requirements)
+  - [Prepare environment](#prepare-environment)
+  - [Install MMHuman3D](#install-mmhuman3d)
+  - [A from-scratch setup script](#a-from-scratch-setup-script)
 
@@ -54,6 +55,12 @@
 conda install pytorch=1.8.0 torchvision cudatoolkit=10.2 -c pytorch
 
 **Important:** Make sure that your compilation CUDA version and runtime CUDA version match. Besides, for RTX 30 series GPU, cudatoolkit>=11.0 is required.
 
+To make sure that you installed the right PyTorch version, check that you get `True` when running the following commands:
+```
+import torch
+torch.cuda.is_available()
+```
+If you get `False`, install a different PyTorch build that matches your CUDA runtime.
 
 d. Install PyTorch3D from source.
 
@@ -150,6 +157,11 @@
 cd mmdetection
 pip install -r requirements/build.txt
 pip install -v -e .
 ```
+To check that mmdet is compatible with your PyTorch installation, verify that the following command runs without errors:
+```
+from mmdet.apis import inference_detector, init_detector
+```
+If it raises errors, check your PyTorch and mmcv versions and install compatible ones.
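Reviewer note on the install.md hunks above: both added sanity checks (`torch.cuda.is_available()` and the `mmdet.apis` import) die with a raw traceback when the package itself is not installed. A small sketch, assuming only the standard library, that probes importability first before running the guide's deeper checks; `torch` and `mmdet` are simply the module names the guide assumes:

```python
# Illustrative helper: report whether the modules the install guide
# relies on can be imported at all, before running its deeper checks.
import importlib.util


def module_available(name: str) -> bool:
    """True if `name` resolves to an importable module in this environment."""
    return importlib.util.find_spec(name) is not None


# torch/mmdet are the packages the guide assumes; adjust as needed.
for mod in ("torch", "mmdet"):
    status = "found" if module_available(mod) else "missing"
    print(f"{mod}: {status}")
```

Any other dependency (e.g. `mmcv`) can be probed the same way before attempting the imports the patch describes.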
 - mmpose (optional)
 
 ```shell
diff --git a/docs/preprocess_dataset.md b/docs/preprocess_dataset.md
index 481a6ec4..0537a77e 100644
--- a/docs/preprocess_dataset.md
+++ b/docs/preprocess_dataset.md
@@ -4,26 +4,35 @@
 
-- [Datasets for supported algorithms](#datasets-for-supported-algorithms)
-- [Folder structure](#folder-structure)
-  * [AGORA](#agora)
-  * [COCO](#coco)
-  * [COCO-WholeBody](#coco-wholebody)
-  * [CrowdPose](#crowdpose)
-  * [EFT](#eft)
-  * [GTA-Human](#gta-human)
-  * [Human3.6M](#human36m)
-  * [Human3.6M Mosh](#human36m-mosh)
-  * [HybrIK](#hybrik)
-  * [LSP](#lsp)
-  * [LSPET](#lspet)
-  * [MPI-INF-3DHP](#mpi-inf-3dhp)
-  * [MPII](#mpii)
-  * [PoseTrack18](#posetrack18)
-  * [Penn Action](#penn-action)
-  * [PW3D](#pw3d)
-  * [SPIN](#spin)
-  * [SURREAL](#surreal)
+- [Data preparation](#data-preparation)
+  - [Overview](#overview)
+  - [Datasets for supported algorithms](#datasets-for-supported-algorithms)
+  - [Folder structure](#folder-structure)
+    - [AGORA](#agora)
+    - [AMASS](#amass)
+    - [COCO](#coco)
+    - [COCO-WholeBody](#coco-wholebody)
+    - [CrowdPose](#crowdpose)
+    - [EFT](#eft)
+    - [GTA-Human](#gta-human)
+    - [Human3.6M](#human36m)
+    - [Human3.6M Mosh](#human36m-mosh)
+    - [HybrIK](#hybrik)
+    - [LSP](#lsp)
+    - [LSPET](#lspet)
+    - [MPI-INF-3DHP](#mpi-inf-3dhp)
+    - [MPII](#mpii)
+    - [PoseTrack18](#posetrack18)
+    - [Penn Action](#penn-action)
+    - [PW3D](#pw3d)
+    - [SPIN](#spin)
+    - [SURREAL](#surreal)
+    - [VIBE](#vibe)
+    - [FreiHand](#freihand)
+    - [EHF](#ehf)
+    - [FFHQ](#ffhq)
+    - [ExPose](#expose)
+    - [Stirling](#stirling)
 
 ## Overview
 
@@ -131,7 +140,8 @@ DATASET_CONFIGS = dict(
 ## Datasets for supported algorithms
 
-For all algorithms, the root path for our datasets and output path for our preprocessed npz files are stored in `data/datasets` and `data/preprocessed_datasets`. As such, use this command with the listed `dataset-names`:
+For all algorithms, the root path for our datasets and the output path for our preprocessed npz files are `data/datasets` and `data/preprocessed_datasets`, respectively.
+As such, use this command with the listed `dataset-names`:
 
 ```bash
 python tools/convert_datasets.py \
@@ -188,6 +198,11 @@ mmhuman3d
 ├── mpii_train.npz
 └── pw3d_test.npz
 ```
+Note that, to avoid regenerating npz files at every iteration during training, create a cache directory populated with the preprocessed files. To do so, run the following command:
+```
+cp -r data/preprocessed_datasets data/cache
+```
+Also, remember to use the `*_cache.py` config during training.
 
 For SPIN training, the following datasets are required:
 
 - [COCO](#coco)
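Reviewer note on the cache-seeding hunk above: `cp -r data/preprocessed_datasets data/cache` copies unconditionally, so running it twice on an existing cache would nest a copy inside it. A minimal sketch of the same seed-only-once logic, using a scratch directory and a hypothetical `seed_cache` helper so it is safe to run anywhere (the real paths are `data/preprocessed_datasets` and `data/cache`):

```python
# Illustrative sketch: seed the cache directory from the preprocessed
# npz files, but only if the cache does not exist yet.
import shutil
import tempfile
from pathlib import Path


def seed_cache(root: Path) -> list[str]:
    """Copy preprocessed files into the cache dir (only if absent); list them."""
    src = root / "data" / "preprocessed_datasets"
    dst = root / "data" / "cache"
    if not dst.exists():  # seed only once, unlike a bare `cp -r`
        shutil.copytree(src, dst)
    return sorted(p.name for p in dst.iterdir())


with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "data" / "preprocessed_datasets").mkdir(parents=True)
    # stand-in for a real preprocessed file such as pw3d_test.npz
    (root / "data" / "preprocessed_datasets" / "pw3d_test.npz").touch()
    print(seed_cache(root))  # -> ['pw3d_test.npz']
```

The existence check makes the step idempotent, which matters if the setup instructions are re-run on a machine that already has a populated cache.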