This folder contains an example of how to run a PP-PicoDet model on bare metal Cortex(R)-M55 CPU using Arm Virtual Hardware.
Case 1: If the demo is run in an Arm Virtual Hardware Amazon Machine Image (AMI) instance hosted by AWS/AWS China, the required software will be installed through the configure_avh.sh script. It is installed automatically when you run the application through the run_demo.sh script. You can refer to this guide to launch an Arm Virtual Hardware AMI instance.
Case 2: If the demo is run in the ci_cpu Docker container provided with TVM, then the following software will already be installed.
Case 3: If the demo is not run in the ci_cpu Docker container, then you will need to install the required software yourself, including the Python libraries listed in the requirements.txt of this directory:
```bash
pip install -r ./requirements.txt
```
In case 2 and case 3, you will need to update your PATH environment variable to include the path to cmake 3.19.5 and the FVP. For example, if you have installed these in /opt/arm, then you would do the following:
```bash
export PATH=/opt/arm/FVP_Corstone_SSE-300/models/Linux64_GCC-6.4:/opt/arm/cmake/bin:$PATH
```
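As a quick sanity check, you can confirm that both tools are now visible on your PATH. This is a minimal sketch; the FVP executable name below is an assumption based on the usual Corstone-300 FVP package layout and may differ in your installation:
```bash
# Optional sanity check of the PATH setup above.
# FVP_Corstone_SSE-300_Ethos-U55 is the assumed executable name shipped in the
# Corstone-300 FVP package -- adjust if your package uses a different name.
cmake --version
which FVP_Corstone_SSE-300_Ethos-U55
```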
You will also need TVM, which can either be built from source (see the TVM Install from Source guide) or installed from TLCPack.
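If you choose to build from source, a minimal sketch looks like the following. The repository URL is the official TVM repository; the exact cmake options you need (for example, enabling USE_MICRO in config.cmake for microTVM support) depend on your setup and are assumptions here, so refer to the official guide for the authoritative steps:
```bash
# Minimal sketch of building TVM from source; see the official
# "Install from Source" guide for the full, authoritative instructions.
git clone --recursive https://github.com/apache/tvm.git
cd tvm
mkdir build && cp cmake/config.cmake build
# Assumption: edit build/config.cmake and set USE_MICRO to ON for microTVM support.
cd build && cmake .. && make -j"$(nproc)"
```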
Once the environment is set up, type the following command to run the bare metal object detection application (src/demo_bare_metal.c):
```bash
./run_demo.sh
```
If you are not able to use an Arm Virtual Hardware Amazon Machine Image (AMI) instance hosted by AWS/AWS China, set the --enable_FVP argument to 1 to make the application run on local Fixed Virtual Platform (FVP) executables:
```bash
./run_demo.sh --enable_FVP 1
```
If the Ethos(TM)-U platform and/or CMSIS have not been installed in /opt/arm/ethosu, then the locations for these can be specified as arguments to run_demo.sh, for example:
```bash
./run_demo.sh --cmsis_path /home/tvm-user/cmsis \
    --ethosu_platform_path /home/tvm-user/ethosu/core_platform
```
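These arguments can also be combined. For example, the sketch below runs on local FVP executables with custom CMSIS and Ethos(TM)-U platform locations; the paths are placeholders taken from the examples above:
```bash
# Combined example with placeholder paths
./run_demo.sh --enable_FVP 1 \
    --cmsis_path /home/tvm-user/cmsis \
    --ethosu_platform_path /home/tvm-user/ethosu/core_platform
```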
When you run the demo application through run_demo.sh, it will set up the running environment (installing the required prerequisites automatically when running in an Arm Virtual Hardware AMI instance), prepare and compile the PP-PicoDet model, build the demo application, and run it on Arm Virtual Hardware.
The convert_image.py script takes a single command-line argument: the path of the image to be converted into an array of bytes for consumption by the model. The demo can be modified to use an image of your choice by changing the following line in run_demo.sh:
```bash
python3 ./convert_image.py path/to/image
```
In this demo, the model we used is based on PP-PicoDet. Because of its excellent performance, PP-PicoDet is well suited for deployment on mobile devices or CPUs, and it is released by PaddleDetection.