Update build commands
Tested:

  bazel build -c opt --fat_apk_cpu=x86,x86_64,arm64-v8a,armeabi-v7a \
    //tensorflow/lite/java:tensorflow-lite

with NDK r17c.

PiperOrigin-RevId: 295248025
Change-Id: Ice7895ffb5ef53f5885532fdf66ae6ee9d012892
commit dd02edce8b
parent 0d1db12677
@@ -30,7 +30,7 @@ provided. Assuming a working [bazel](https://bazel.build/versions/master/docs/in
 configuration, this can be built as follows:

 ```sh
-bazel build -c opt --cxxopt=--std=c++11 //tensorflow/lite/c:tensorflowlite_c
+bazel build -c opt //tensorflow/lite/c:tensorflowlite_c
 ```

 and for Android (replace `android_arm` with `android_arm64` for 64-bit),
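For a quick sanity check of the host build, the library can be located under Bazel's output tree. A minimal sketch, assuming the default `bazel-bin` convenience symlink; the exact file name and suffix are platform-dependent:

```sh
# Build the TensorFlow Lite C library for the host.
bazel build -c opt //tensorflow/lite/c:tensorflowlite_c
# List the package's output directory; the shared-library suffix
# (.so/.dylib/.dll) depends on the host platform.
ls -l bazel-bin/tensorflow/lite/c/
```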
@@ -9,14 +9,13 @@ Plugin, and placed in Assets/TensorFlowLite/SDK/Plugins/. For the editor (note
 that the generated shared library name and suffix are platform-dependent):

 ```sh
-bazel build -c opt --cxxopt=--std=c++11 //tensorflow/lite/c:tensorflowlite_c
+bazel build -c opt //tensorflow/lite/c:tensorflowlite_c
 ```

 and for Android (replace `android_arm` with `android_arm64` for 64-bit):

 ```sh
-bazel build -c opt --cxxopt=--std=c++11 --config=android_arm \
-  //tensorflow/lite/c:tensorflowlite_c
+bazel build -c opt --config=android_arm //tensorflow/lite/c:tensorflowlite_c
 ```

 If you encounter issues with native plugin discovery on Mac ("Darwin")
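As the note above says, the 64-bit Android build differs only in the config name. A sketch of both variants with the updated flags:

```sh
# 32-bit ARM build of the C library.
bazel build -c opt --config=android_arm //tensorflow/lite/c:tensorflowlite_c
# 64-bit ARM build (android_arm replaced with android_arm64).
bazel build -c opt --config=android_arm64 //tensorflow/lite/c:tensorflowlite_c
```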
@@ -144,8 +144,7 @@ Once Bazel is properly configured, you can build the TensorFlow Lite AAR from
 the root checkout directory as follows:

 ```sh
-bazel build --cxxopt='-std=c++11' -c opt \
-  --fat_apk_cpu=x86,x86_64,arm64-v8a,armeabi-v7a \
+bazel build -c opt --fat_apk_cpu=x86,x86_64,arm64-v8a,armeabi-v7a \
   //tensorflow/lite/java:tensorflow-lite
 ```

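The commit message records this command as tested with NDK r17c. A sketch of running it and locating the archive, assuming the default `bazel-bin` layout and that the output takes the target's name (the `tensorflow-lite.aar` file name is inferred, not shown in this hunk):

```sh
# Multi-ABI AAR build, as recorded in the commit message (NDK r17c).
bazel build -c opt --fat_apk_cpu=x86,x86_64,arm64-v8a,armeabi-v7a \
  //tensorflow/lite/java:tensorflow-lite
# Inferred output location; the file name mirrors the Bazel target.
ls -l bazel-bin/tensorflow/lite/java/tensorflow-lite.aar
```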
@@ -148,9 +148,9 @@ AAR to your local Maven repository:

 ```sh
 mvn install:install-file \
-  -Dfile=bazel-bin/tensorflow/lite/java/tensorflow-lite-with-select-tf-ops.aar \
+  -Dfile=bazel-bin/tensorflow/lite/java/tensorflow-lite-select-tf-ops.aar \
   -DgroupId=org.tensorflow \
-  -DartifactId=tensorflow-lite-with-select-tf-ops -Dversion=0.1.100 -Dpackaging=aar
+  -DartifactId=tensorflow-lite-select-tf-ops -Dversion=0.1.100 -Dpackaging=aar
 ```

 Finally, in your app's `build.gradle`, ensure you have the `mavenLocal()`
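After `mvn install:install-file`, the artifact should appear in the local repository laid out from the coordinates above. A sketch assuming Maven's default `~/.m2/repository` location:

```sh
# groupId org.tensorflow maps to org/tensorflow, followed by artifactId/version.
ls ~/.m2/repository/org/tensorflow/tensorflow-lite-select-tf-ops/0.1.100/
```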
@@ -20,7 +20,7 @@ JAVA_SRCS = glob([

 # Building tensorflow-lite.aar including 4 variants of .so
 # To build an aar for release, run below command:
-# bazel build --cxxopt='-std=c++11' -c opt --fat_apk_cpu=x86,x86_64,arm64-v8a,armeabi-v7a \
+# bazel build -c opt --fat_apk_cpu=x86,x86_64,arm64-v8a,armeabi-v7a \
 # tensorflow/lite/java:tensorflow-lite
 aar_with_jni(
     name = "tensorflow-lite",
@@ -26,7 +26,7 @@ BASEDIR=tensorflow/lite
 CROSSTOOL="//external:android/crosstool"
 HOST_CROSSTOOL="@bazel_tools//tools/cpp:toolchain"

-BUILD_OPTS="--cxxopt=--std=c++11 -c opt"
+BUILD_OPTS="-c opt"
 CROSSTOOL_OPTS="--crosstool_top=$CROSSTOOL --host_crosstool_top=$HOST_CROSSTOOL"

 test -d $BASEDIR || (echo "Aborting: not at top-level build directory"; exit 1)
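The hunk only touches the option variables; elsewhere in the script they are presumably expanded into the actual `bazel build` call. An illustrative sketch using the definitions above (the target shown is a placeholder, not taken from this script):

```sh
CROSSTOOL="//external:android/crosstool"
HOST_CROSSTOOL="@bazel_tools//tools/cpp:toolchain"
BUILD_OPTS="-c opt"
CROSSTOOL_OPTS="--crosstool_top=$CROSSTOOL --host_crosstool_top=$HOST_CROSSTOOL"

# Placeholder target for illustration; the script's real targets are outside this hunk.
bazel build $BUILD_OPTS $CROSSTOOL_OPTS //tensorflow/lite/java:tensorflow-lite
```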
@@ -41,8 +41,7 @@ code to merge.
 2. Build the app with Bazel. The demo needs C++11:

 ```shell
-bazel build -c opt --cxxopt='--std=c++11' \
-  //tensorflow/lite/java/demo/app/src/main:TfLiteCameraDemo
+bazel build -c opt //tensorflow/lite/java/demo/app/src/main:TfLiteCameraDemo
 ```

 3. Install the demo on a
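Step 3 is truncated in this hunk; installing the built demo typically goes through `adb`. A sketch in which the APK path is assumed to mirror the Bazel target path (the file name is an assumption):

```sh
# Assumed output location: bazel-bin mirrors the target path for android_binary rules.
adb install -r bazel-bin/tensorflow/lite/java/demo/app/src/main/TfLiteCameraDemo.apk
```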
@@ -37,9 +37,9 @@ unzip -j /tmp/ovic.zip -d tensorflow/lite/java/ovic/src/testdata/
 You can run test with Bazel as below. This helps to ensure that the installation is correct.

 ```sh
-bazel test --cxxopt=--std=c++11 //tensorflow/lite/java/ovic:OvicClassifierTest --cxxopt=-Wno-all --test_output=all
+bazel test //tensorflow/lite/java/ovic:OvicClassifierTest --cxxopt=-Wno-all --test_output=all

-bazel test --cxxopt=--std=c++11 //tensorflow/lite/java/ovic:OvicDetectorTest --cxxopt=-Wno-all --test_output=all
+bazel test //tensorflow/lite/java/ovic:OvicDetectorTest --cxxopt=-Wno-all --test_output=all
 ```

 ### Test your submissions
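Since Bazel accepts several targets in one invocation, both OVIC tests can also run together. A sketch with the updated flags:

```sh
# Run the classifier and detector tests in a single bazel invocation.
bazel test //tensorflow/lite/java/ovic:OvicClassifierTest \
  //tensorflow/lite/java/ovic:OvicDetectorTest \
  --cxxopt=-Wno-all --test_output=all
```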
@@ -51,7 +51,7 @@ Once you have a submission that follows the instructions from the [competition s
 You can call the validator binary below to verify that your model fits the format requirements. This often helps you to catch size mismatches (e.g. output for classification should be [1, 1001] instead of [1,1,1,1001]). Let say the submission file is located at `/path/to/my_model.lite`, then call:

 ```sh
-bazel build --cxxopt=--std=c++11 //tensorflow/lite/java/ovic:ovic_validator --cxxopt=-Wno-all
+bazel build //tensorflow/lite/java/ovic:ovic_validator --cxxopt=-Wno-all
 bazel-bin/tensorflow/lite/java/ovic/ovic_validator /path/to/my_model.lite classify
 ```

@@ -160,7 +160,7 @@ Note: You'll need ROOT access to the phone to change processor affinity.
 * Build and install the app.

 ```
-bazel build -c opt --cxxopt=--std=c++11 --cxxopt=-Wno-all //tensorflow/lite/java/ovic/demo/app:ovic_benchmarker_binary
+bazel build -c opt --cxxopt=-Wno-all //tensorflow/lite/java/ovic/demo/app:ovic_benchmarker_binary
 adb install -r bazel-bin/tensorflow/lite/java/ovic/demo/app/ovic_benchmarker_binary.apk
 ```

@@ -159,7 +159,6 @@ adb shell /data/local/tmp/imagenet_accuracy_eval \

 ```
 bazel run -c opt \
-  --cxxopt='--std=c++11' \
   -- \
   //tensorflow/lite/tools/accuracy/ilsvrc:imagenet_accuracy_eval \
   --model_file=mobilenet_quant_v1_224.tflite \
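Equivalently, the evaluation tool can be built once and the resulting binary invoked directly from `bazel-bin`. A sketch; the remaining flags of this command are truncated in the hunk, so only `--model_file` is shown:

```sh
# Build the accuracy-eval binary, then run it from Bazel's output tree.
bazel build -c opt //tensorflow/lite/tools/accuracy/ilsvrc:imagenet_accuracy_eval
bazel-bin/tensorflow/lite/tools/accuracy/ilsvrc/imagenet_accuracy_eval \
  --model_file=mobilenet_quant_v1_224.tflite
```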
@@ -28,7 +28,6 @@ to edit the `WORKSPACE` to configure the android NDK/SDK.
 ```
 bazel build -c opt \
   --config=android_arm64 \
-  --cxxopt='--std=c++11' \
   tensorflow/lite/tools/benchmark/android:benchmark_model
 ```

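Once built, the benchmark app is normally installed with `adb`. A sketch assuming the target produces `benchmark_model.apk` under `bazel-bin` (the APK name is an assumption, not shown in this hunk):

```sh
# Assumed APK name/path; bazel-bin mirrors the Bazel target path.
adb install -r bazel-bin/tensorflow/lite/tools/benchmark/android/benchmark_model.apk
```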
@@ -256,7 +256,6 @@ Optionally, you could also pass in the `--num_interpreter_threads` &

 ```
 bazel run -c opt \
-  --cxxopt='--std=c++11' \
   -- \
   //tensorflow/lite/tools/evaluation/tasks/coco_object_detection:run_eval \
   --model_file=/path/to/ssd_mobilenet_v1_float.tflite \
@@ -204,7 +204,6 @@ adb shell /data/local/tmp/run_eval \

 ```
 bazel run -c opt \
-  --cxxopt='--std=c++11' \
   -- \
   //tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification:run_eval \
   --model_file=mobilenet_quant_v1_224.tflite \
@@ -59,7 +59,6 @@ LIBS := \
 CXXFLAGS := -O3 -DNDEBUG -fPIC
 CXXFLAGS += $(EXTRA_CXXFLAGS)
 CFLAGS := ${CXXFLAGS}
-CXXFLAGS += --std=c++11
 LDOPTS := -L/usr/local/lib
 ARFLAGS := -r
 TARGET_TOOLCHAIN_PREFIX :=
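Because the Makefile folds `$(EXTRA_CXXFLAGS)` into `CXXFLAGS`, extra compiler options can be supplied at invocation time without editing the file. A sketch; the Makefile path and the example flag are assumptions:

```sh
# Pass additional compiler flags via the EXTRA_CXXFLAGS hook shown above.
# The Makefile location is assumed for illustration.
make -f tensorflow/lite/tools/make/Makefile EXTRA_CXXFLAGS="-march=native" -j4
```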
@@ -139,7 +139,6 @@ ext = Extension(
 '-I%s' % TENSORFLOW_DIR,
 '-module', 'interpreter_wrapper',
 '-outdir', PACKAGE_NAME],
-extra_compile_args=['-std=c++11'],
 include_dirs=[TENSORFLOW_DIR,
 os.path.join(TENSORFLOW_DIR, 'tensorflow', 'lite', 'tools',
 'pip_package'),
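This `Extension` feeds a standard setuptools build, so the wrapper can be packaged with the usual commands. A sketch; the working directory and the presence of a `setup.py` entry point are assumptions based on the `pip_package` path above:

```sh
# Build a wheel for the interpreter wrapper with setuptools.
# Directory and script name are assumed from the paths referenced in this hunk.
cd tensorflow/lite/tools/pip_package
python setup.py bdist_wheel
```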