If your graphics card has at least **8GB** of VRAM, follow these steps to test CUDA-accelerated parsing performance:

> ❗ 8GB of VRAM is barely sufficient for this application, so close all other programs that use VRAM before running it to ensure the full 8GB is available.
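To check how much VRAM is actually free before you start, you can query the GPU with `nvidia-smi`, which ships with the NVIDIA driver:

```bash
nvidia-smi --query-gpu=memory.total,memory.used,memory.free --format=csv
```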
1. **Modify the value of `"device-mode"`** in the `magic-pdf.json` configuration file located in your home directory, as in the excerpt after this list.
2. **Overwrite the installation of torch and torchvision** with builds that support CUDA, as in the pip sketch after this list.
3. **Download a sample file from the repository and test it** to confirm that parsing now runs on the GPU.
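For step 1, a minimal excerpt of `magic-pdf.json` showing only the key in question; setting it to `"cuda"` is an assumption consistent with enabling CUDA here, and all other keys stay as they were:

```json
{
  "device-mode": "cuda"
}
```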
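For step 2, a sketch of the reinstall. The CUDA wheel index and whether to pin specific versions depend on your driver and on the torch versions your magic-pdf release supports; the cu118 index below is an assumption:

```bash
pip install --force-reinstall torch torchvision --index-url https://download.pytorch.org/whl/cu118
```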
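For step 3, a sketch of a test run; the sample file name and output directory are assumptions, so substitute any PDF from the repository's demo files:

```bash
magic-pdf -p small_ocr.pdf -o ./output
```

If CUDA is active, parsing should be noticeably faster than the CPU run, and `nvidia-smi` will show the process occupying VRAM.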
## 1. Models downloaded via Git LFS
> Some users reported that model files downloaded via `git lfs` were incomplete or corrupted, so this method is no longer recommended.
For magic-pdf 0.8.1 and earlier, if you previously downloaded the model files via Git LFS, you can navigate to the original download directory and update the models using the `git pull` command, as sketched below.
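A minimal sketch of that update; the directory path is an assumption standing in for wherever you originally cloned the model repository:

```bash
cd /path/to/your/model-download-directory  # assumption: your original git lfs clone
git pull
```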
> For versions 0.9.x and later, due to the repository change and the addition of the layout sorting model in PDF-Extract-Kit 1.0, the models cannot be updated using the `git pull` command. Instead, a Python script must be used for one-click updates.
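A sketch of what the one-click update looks like, assuming the download script published in the project repository; the script name and URL below are assumptions, so check the project documentation for the exact path:

```bash
# Script name and URL are assumptions; see the project docs for the exact path
wget https://github.com/opendatalab/MinerU/raw/master/scripts/download_models_hf.py -O download_models_hf.py
python download_models_hf.py
```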
## 2. Models downloaded via Hugging Face or ModelScope