Commit 77dc13e ("Improved")
1 parent 10d22e7

2 files changed: +71 -36 lines changed

compile_tensorflow_cpp.md

Lines changed: 48 additions & 24 deletions
@@ -2,38 +2,51 @@

Building the TensorFlow C++ API is very tricky and can be a pain, as there is not much information you can find about it, even in TensorFlow's official documentation. Below you will find step-by-step instructions showing how to build TensorFlow C++ v2 on Linux. They work well on my Ubuntu 20.04 machine running an AMD Ryzen processor.

On this page I will walk you through the steps to install the TensorFlow C++ API version 2.7.

## Dependencies

- Conda environment
- Python 3.9.0
- Bazel 3.7.2
- Protobuf 3.9.2 (must be compatible with the protobuf/protoc version that TensorFlow is built with)

---

## Install package dependencies

### 1. Environment setup & install Python

```bash
conda create -n tfcc
conda activate tfcc
conda install python==3.9
conda update --all -y
```
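
As an optional sanity check (not part of the original steps), confirm that the new environment resolves the expected interpreter:
```bash
python --version   # should report Python 3.9.x
which python       # should point inside the tfcc conda environment
```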

### 2. Install bazel

```bash
sudo apt install bazel-3.7.2
```
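
The versioned `bazel-3.7.2` package comes from Bazel's own apt repository rather than the stock Ubuntu archives. If that repository is not set up yet, here is a sketch along the lines of Bazel's published apt instructions (key and repository URLs are assumptions here; verify them against the current Bazel documentation):
```bash
# Assumed repository setup for the Bazel apt packages
sudo apt install apt-transport-https curl gnupg
curl -fsSL https://bazel.build/bazel-release.pub.gpg | gpg --dearmor | sudo tee /usr/share/keyrings/bazel-archive-keyring.gpg > /dev/null
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/bazel-archive-keyring.gpg] https://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
sudo apt update

# Confirm the pinned version after installing
bazel-3.7.2 --version
```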

### 3. Install Protobuf

I suggest installing protobuf after building TensorFlow, so that we can check which version of protobuf we have to use.

---

## Compile TensorFlow C++ and install libraries

### 1. Compile TensorFlow C++ shared library (with optimization)

Download or clone the GitHub repo to your system:
```bash
git clone https://github.com/tensorflow/tensorflow
cd tensorflow
git checkout r2.7
```

Let's compile using the bazel `build` rule:
```bash
export CC=gcc
export CXX=g++
bazel build --jobs=4 --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" -c opt \
@@ -44,6 +57,11 @@ bazel build --jobs=4 --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" -c opt \
//tensorflow/tools/pip_package:build_pip_package
```

You can use the following command to list all available build targets in each folder:
```bash
bazel query ...
```

Note:

1. Building TensorFlow uses a lot of memory, so I prefer a small number of parallel jobs (`--jobs`); see the resource-limiting sketch below.
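
As an illustration of that note (these are generic Bazel resource flags, not taken from the original text; tune the values to your machine), the build can be capped roughly like this:
```bash
# Cap parallelism and the RAM/CPU Bazel schedules against (example values)
bazel build --jobs=4 \
  --local_ram_resources=8192 \
  --local_cpu_resources=4 \
  --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" -c opt \
  //tensorflow/tools/pip_package:build_pip_package
```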
@@ -60,24 +78,25 @@ bazel test --jobs=4 --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" -c opt \
```bash
//tensorflow/tools/lib_package:libtensorflow_test
```

### 2. Install protobuf

1. Check the version of protobuf that TensorFlow is built with:
   ```bash
   bazel-bin/external/com_google_protobuf/protoc --version
   # output
   libprotoc 3.9.2
   ```
2. Download the protobuf source code from its GitHub releases page: https://github.com/protocolbuffers/protobuf/tags (a download sketch is given right after this list).
3. Compile and link:
   ```bash
   ./configure --prefix=/home/rangsiman/protobuf-3.9.2/
   make
   make check
   make install
   ```
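
For step 2 above, a minimal download-and-extract sketch (the release asset name is an assumption; pick the tarball that matches the `libprotoc` version reported in step 1):
```bash
# v3.9.2 matches the protoc bundled with this TensorFlow build; asset name assumed
wget https://github.com/protocolbuffers/protobuf/releases/download/v3.9.2/protobuf-cpp-3.9.2.tar.gz
tar -xzf protobuf-cpp-3.9.2.tar.gz
cd protobuf-3.9.2
```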

### 3. Copy required files into a single path for C++ linkage

```bash
sudo mkdir /usr/local/tensorflow
sudo cp -r bazel-bin/tensorflow/include/ /usr/local/tensorflow/
sudo cp -r /home/rangsiman/protobuf-3.9.2/include/google/ /usr/local/tensorflow/include/
@@ -86,12 +105,12 @@ sudo cp -r bazel-bin/tensorflow/*.so* /usr/local/tensorflow/lib
sudo cp -r /home/rangsiman/protobuf-3.9.2/lib/*.so* /usr/local/tensorflow/lib
```

### 4. Compiling the op library and example code

**Example-1**: Zero out

Create `zero_out.cpp`:
```cpp
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"

@@ -107,7 +126,7 @@ REGISTER_OP("ZeroOut")
```

Run the following:
```bash
g++ -Wall -fPIC -D_GLIBCXX_USE_CXX11_ABI=0 \
-shared zero_out.cpp -o zero_out.so \
-I/usr/local/tensorflow/include/ -L/usr/local/tensorflow/lib \
```
@@ -117,7 +136,7 @@ g++ -Wall -fPIC -D_GLIBCXX_USE_CXX11_ABI=0 \

**Example-2**: Call TF session

Create `session.cpp`:
```cpp
#include <tensorflow/core/platform/env.h>
#include <tensorflow/core/public/session.h>

@@ -139,7 +158,7 @@ int main()
```

Run the following:
```bash
g++ -Wall -fPIC -D_GLIBCXX_USE_CXX11_ABI=0 \
session.cpp -o session \
-I/usr/local/tensorflow/include/ -L/usr/local/tensorflow/lib \
```
@@ -150,12 +169,17 @@
To run the executable, you also need to add `/usr/local/tensorflow/lib/` into `LD_LIBRARY_PATH`:
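
A minimal sketch, reusing the export command shown in `load_model_tensorflow_cpp.md` below:
```bash
export LD_LIBRARY_PATH=/usr/local/tensorflow/lib/:$LD_LIBRARY_PATH
./session
```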

---

## Optional: Compile TensorFlow via pip (wheel) builder

Once you have built TensorFlow, you can also create a wheel file and install the TensorFlow Python library.

Create a wheel package:
```bash
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
```

Install the TensorFlow library using the created wheel:
```bash
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
```
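
As an optional check (not part of the original steps), confirm that the wheel installed correctly:
```bash
python -c "import tensorflow as tf; print(tf.__version__)"   # expected: 2.7.x
```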

load_model_tensorflow_cpp.md

Lines changed: 23 additions & 12 deletions
@@ -1,8 +1,11 @@

# Load model with TensorFlow C++ API

In the following you will learn how to save a TensorFlow or Keras model using Python and then load it with the TensorFlow C++ API `LoadSavedModel`.

## 1. Build neural network and save a model

```python
# keras_class.py
from sklearn.datasets import make_classification
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
@@ -20,27 +23,34 @@ sgd = SGD(learning_rate=0.001, momentum=0.8)
model.compile(optimizer=sgd, loss='binary_crossentropy')
# fit the model
model.fit(X, y, epochs=100, batch_size=32, verbose=1, validation_split=0.3)
# save the model to a folder
model.save('model')
```

Train the model:
```bash
python keras_class.py
```

This will create a folder `model` and save the model as protobuf files:
```bash
ls model/
# output
assets keras_metadata.pb saved_model.pb variables
```
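
Optionally (not in the original text), you can inspect the exported signatures with TensorFlow's `saved_model_cli` tool before loading the model from C++:
```bash
saved_model_cli show --dir model/ --all
```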

## 2. Load a model with C++ API

Create a new file, e.g., `load_model.cpp`:
```cpp
#include <tensorflow/cc/saved_model/loader.h>
#include <tensorflow/cc/saved_model/tag_constants.h>

using namespace tensorflow;

int main() {

// load the whole SavedModel folder
const std::string export_dir = "./model/";

// Load
@@ -56,22 +66,23 @@ if (!status.ok()) {
}
```

Compile the source code:
```bash
g++ -Wall -fPIC -D_GLIBCXX_USE_CXX11_ABI=0 \
load_model.cpp -o load_model.o \
-I/usr/local/tensorflow/include/ -L/usr/local/tensorflow/lib -ltensorflow_cc -ltensorflow_framework
```

Add the TensorFlow lib directory to `LD_LIBRARY_PATH`:
```bash
export LD_LIBRARY_PATH=/usr/local/tensorflow/lib/:$LD_LIBRARY_PATH
```

Run the executable:
```bash
./load_model.o

# output
2021-12-30 22:14:10.621434: I tensorflow/cc/saved_model/reader.cc:38] Reading SavedModel from: ./model/
2021-12-30 22:14:10.630099: I tensorflow/cc/saved_model/reader.cc:90] Reading meta graph with tags { serve }
2021-12-30 22:14:10.630299: I tensorflow/cc/saved_model/reader.cc:132] Reading SavedModel debug info (if present) from: ./model/
```
