Auto-Encoder/Decoder for Planar Filter Analysis/Synthesis
Abstract: In this paper, we present the first study that estimates the frequency response directly from the geometry of a planar filter and, conversely, synthesizes the planar filter geometry directly from a given frequency response using a convolutional neural network (CNN) based auto-encoder/decoder. We also explain how to generate an accurate and massive dataset for training the auto-encoder/decoder. In our experiments, the frequency response is estimated in 1.5 msec and the filter geometry is synthesized in 2.7 msec.
Keywords: Convolutional neural network, Edge model de-embedding/embedding, Electromagnetic simulation, F-parameter, Microwave
diagonal part of the table (W1 = W2) after halving their lengths. The edge model Fedge is the cascade product of the inverse of the corresponding lead-line F-parameter F1, the de-embedding target, and the inverse of F2. As a result, Fedge represents the non-uniformity of the current distribution as a frequency response, and its length is zero because the total length of the de-embedding target (L1 + L2) coincides with the total of the subtracted lengths (L1 and L2).

Figure 6 shows the random generation of a filter geometry. Since the filter geometry is symmetrical both vertically and horizontally, only a quarter of the geometry is generated as an image. Random widths (Wn) and lengths (Ln) are generated and connected until the accumulated length reaches half of the total length Ltotal. Conversely, the entire geometry is the image unfolded vertically and horizontally. The frequency response of the randomly generated geometry is calculated and stored in a dataset together with its quarter-sized image. Depending on the geometry, the line lengths are scaled as shown in Fig. 4, and edge models are embedded depending on the line widths as shown in Fig. 5(b). Since the edge model has no length, the embedding does not affect the total length; it affects only the frequency response.
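A minimal sketch of this random generation and unfolding is given below, assuming a 50 x 200 pixel raster of the 1 x 4 cm filter; the pixel grid, the width/length ranges, and the function names are our assumptions, not the authors' script.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raster: the 1 x 4 cm filter as a 50 x 200 pixel binary image (0.2 mm per pixel)
H, L_TOTAL = 50, 200                          # image height (width axis) and total length, in pixels
HALF_WIDTHS = np.arange(2, H // 2 + 1, 2)     # allowed half-widths W_n / 2 (assumption)

def random_quarter():
    """Connect TL sections of random half-width W_n/2 and half-length L_n/2
    until the accumulated length reaches half of the total length L_total."""
    quarter = np.zeros((H // 2, L_TOTAL // 2), dtype=np.uint8)
    pos = 0
    while pos < L_TOTAL // 2:
        w = int(rng.choice(HALF_WIDTHS))                        # random half-width
        l = min(int(rng.integers(4, 20)), L_TOTAL // 2 - pos)   # random half-length, clipped
        quarter[:w, pos:pos + l] = 1                            # metal = 1, bulk = 0
        pos += l
    return quarter

def unfold(quarter):
    """The entire geometry is the quarter image unfolded vertically and horizontally."""
    half = np.concatenate([quarter[::-1, :], quarter], axis=0)   # mirror over the width axis
    return np.concatenate([half, half[:, ::-1]], axis=1)         # mirror over the length axis

geometry = unfold(random_quarter())   # 50 x 200 binary geometry image for the dataset
```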
[Fig. 1 diagram: the path labelled "Auto-encoder (CNN)/EM simulation (human)" maps the 50 x 200 input image through convolution + sigmoid stages and a full-connection + sigmoid stage to the frequency response; the path labelled "Auto-decoder (CNN)/Artwork (human)" maps the frequency response back to the image through a full-connection + sigmoid stage and deconvolution + sigmoid stages.]
Fig. 1. Auto-encoder and decoder of a planar filter using convolutional neural networks (CNNs). Since the surface of a planar filter has a two-dimensional geometry, it can be treated as an image. An auto-encoder/decoder system is a special case of a neural network which converts an image into a similar image via a compact vector that captures a feature of the images. In this case, the compact vector is mapped to the frequency response of the planar filter. The path from an image to its feature vector is called the auto-encoder; the auto-decoder is the path from a feature vector to its related image. Since the images are of higher dimension than the feature vector, convolution and deconvolution are used for down- and up-sampling in the auto-encoder and decoder paths. A pair of fully connected neural networks bridges the convolved image to the feature vector and the feature vector to the deconvolved image. Non-linear sigmoid functions are inserted to extract logical relationships and to normalize the values. The auto-encoder and decoder paths also mimic human tasks: when designing a planar filter, a human creates the filter geometry from the desired frequency response, and afterward the frequency response is confirmed with an electromagnetic (EM) simulator to ensure that it meets the desired frequency.
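As an illustration of this topology, a PyTorch-style sketch is given below. The 50 x 200 input image and the 42 x 2 kernel with stride 1 x 2 come from the text, but the channel counts, the second kernel size, and the 61-point response length are read off the damaged figure and should be treated as assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

N_FREQ = 61  # assumed number of points in the frequency-response vector

class Encoder(nn.Module):
    """Image (1 x 50 x 200) -> frequency response: convolution + sigmoid stages, then full connection."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 3, kernel_size=(42, 2), stride=(1, 2)), nn.Sigmoid(),   # 1x50x200 -> 3x9x100
            nn.Conv2d(3, 6, kernel_size=(3, 14), stride=(1, 2)), nn.Sigmoid(),   # 3x9x100 -> 6x7x44
            nn.Flatten(),
            nn.Linear(6 * 7 * 44, N_FREQ), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Frequency response -> image (1 x 50 x 200): full connection, then deconvolution + sigmoid stages."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(N_FREQ, 6 * 7 * 44), nn.Sigmoid())
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(6, 3, kernel_size=(3, 14), stride=(1, 2)), nn.Sigmoid(),  # 6x7x44 -> 3x9x100
            nn.ConvTranspose2d(3, 1, kernel_size=(42, 2), stride=(1, 2)), nn.Sigmoid(),  # 3x9x100 -> 1x50x200
        )
    def forward(self, f):
        return self.deconv(self.fc(f).view(-1, 6, 7, 44))

x = torch.rand(1, 1, 50, 200)   # a binary geometry image would be fed here
f_hat = Encoder()(x)            # estimated frequency response (analysis path)
x_hat = Decoder()(f_hat)        # synthesized image (synthesis path)
```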
[Fig. 2 diagram: Ftotal ≠ Fleft ∙ Fright without the edge model; Ftotal ≈ Fleft ∙ Fedge ∙ Fright with the edge model.]
Fig. 2. The current density of connected transmission lines (TLs) whose widths are different. At the edge where the two TLs are connected, the current distribution is non-uniform. Therefore, the simple product of the left- and right-side F-parameters (Fleft, Fright) differs from the total one (Ftotal). For an accurate calculation of Ftotal, the non-uniformity of the connection (Fedge) should be considered.
Fig. 5. Edge model Fedge de-embedding (a) and embedding (b). The edge model is de-embedded using the inverse functions of TLs whose lengths and widths coincide with those of the target ones. Conversely, Fedge is embedded between TLs when the line widths are different.
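In code form, the de-embedding of Fig. 5(a) and the embedding of Fig. 5(b) are per-frequency 2 x 2 (ABCD/F-parameter) matrix products; the array shapes and function names below are our assumptions.

```python
import numpy as np

def deembed_edge(F1, F2, Ftarget):
    """Fedge = F1^-1 * Ftarget * F2^-1, evaluated per frequency point
    (all arrays of shape (n_freq, 2, 2))."""
    return np.linalg.inv(F1) @ Ftarget @ np.linalg.inv(F2)

def embed_edge(Fleft, Fedge, Fright):
    """Ftotal ~= Fleft * Fedge * Fright when the line widths differ (Fig. 5(b))."""
    return Fleft @ Fedge @ Fright
```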
[Figs. 3, 4, 6, and 7 graphics: TL geometries between Port1 and Port2 with widths W1, W2 and lengths L1, L2, Ltotal; the W1 = W2 and W1 < W2 cases of the width-combination table; the quarter geometry (W1/2, W2/2, W3/2, L3/2) unfolded vertically and horizontally; the EMPro model on a 0.8 mm thick polytetrafluoroethylene (PTFE) substrate with εr = 2.06 and tanδ = 0.0002.]
Fig. 4. Length scaling of a transmission line (TL). Since the propagation constant γ and the characteristic impedance Z0 do not depend on the length of the TL, the scaled F-parameter (F2) is calculated from F1 using the scaling factor n.
Fig. 7. Appearance of EMPro, an electromagnetic (EM) simulator provided by Agilent Technologies Inc. A signal line of 1.8 mm copper foil is placed on a 0.8 mm polytetrafluoroethylene (PTFE) bulk, which is shielded with a perfect electrical conductor (PEC). The signal line widths W1 and W2 are swept to generate the combination table of Fig. 3. The width sweep and S-parameter extraction are performed automatically by a built-in Python script.
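The scaling in Fig. 4 can be written explicitly with the F (ABCD) matrix of a uniform TL; this formulation is ours, since the paper states the property only in words:
\[
F(L) = \begin{pmatrix} \cosh\gamma L & Z_0 \sinh\gamma L \\ Z_0^{-1}\sinh\gamma L & \cosh\gamma L \end{pmatrix},
\qquad F(L_1)\,F(L_2) = F(L_1 + L_2),
\]
so the F-parameter of a line scaled by a factor n is F2 = F(nL) = F1^n, obtained directly from F1 without re-running the EM simulation.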
[Fig. 8(a) diagram: cascaded TL sections F1 to F5 with widths W1, W3, W1 between Port1 and Port2, joined by edge models Fedge1 to Fedge4.]
(a) Geometry.
Ftotal = F1 ∙ F2 ∙ F3 ∙ F4 ∙ F5
Ftotal = F1 ∙ Fedge1 ∙ F2 ∙ Fedge2 ∙ F3 ∙ Fedge3 ∙ F4 ∙ Fedge4 ∙ F5
[Fig. 8(b) plot: S11 and S21 versus frequency (0 to 15 GHz) for L1 = L2 = L3 = 4.0 mm, W1 = W3 = 0.4 mm, W2 = 3.2 mm; the EM simulation takes 1.5 hours on a Core i5; the response reconstructed without the edge model shows a discrepancy.]
(b) Frequency responses.
[Fig. 9 panels: image (quarter size of the geometry) and its frequency response.]
Fig. 8. Geometry of a planar filter and its frequency response. Dashed and solid lines are S11 and S21 when the geometry is analyzed with an electromagnetic simulator. Symbols denote the reconstructed responses calculated from F-parameters; circle (triangle) symbols are the cases ignoring (considering) the edge model.
Fig. 9. Part of a dataset. The dataset contains pairs of an image and its frequency response. The image is a randomly generated quarter-sized planar filter (Fig. 6), and its frequency response is calculated quickly and accurately taking the edge model into account (Fig. 5). Both the quantity and the quality of the dataset are important for training the auto-encoder/decoder (Fig. 1).
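As a sketch of how the reconstructed responses in Fig. 8(b) could be obtained from the cascade of Fig. 8(a), using the standard ABCD-to-S conversion; the 50-ohm reference, the number of frequency points, and the identity placeholders for F1 to F5 and the edge models are assumptions.

```python
import numpy as np

Z0 = 50.0          # reference impedance for the S-parameters (assumed 50 ohms)
n_freq = 101       # number of frequency points (assumption)

def cascade(*blocks):
    """Cascade F-parameter (ABCD) blocks, each of shape (n_freq, 2, 2)."""
    total = np.tile(np.eye(2, dtype=complex), (n_freq, 1, 1))
    for f in blocks:
        total = total @ f
    return total

def abcd_to_s(F, z0=Z0):
    """Standard ABCD -> S conversion for a two-port network."""
    A, B, C, D = F[:, 0, 0], F[:, 0, 1], F[:, 1, 0], F[:, 1, 1]
    den = A + B / z0 + C * z0 + D
    s11 = (A + B / z0 - C * z0 - D) / den
    s21 = 2.0 / den                     # valid for reciprocal networks (A*D - B*C = 1)
    return s11, s21

# Ftotal with edge models, as in Fig. 8(a): F1*Fedge1*F2*Fedge2*F3*Fedge3*F4*Fedge4*F5
# (F1..F5 and Fedge1..Fedge4 are identity placeholders here; the real ones come from the table)
F1 = F2 = F3 = F4 = F5 = np.tile(np.eye(2, dtype=complex), (n_freq, 1, 1))
Fe1 = Fe2 = Fe3 = Fe4 = np.tile(np.eye(2, dtype=complex), (n_freq, 1, 1))
Ftotal = cascade(F1, Fe1, F2, Fe2, F3, Fe3, F4, Fe4, F5)
s11, s21 = abcd_to_s(Ftotal)
s21_db = 20 * np.log10(np.abs(s21) + 1e-12)   # magnitude in dB, as plotted in Fig. 8(b)
```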
3. EXPERIMENTS
Figure 1 also shows the structure of the auto-encoder/decoder. The size of the target planar filter is 1 x 4 cm. Metal and bulk parts are converted into the geometry image. A 100-element matrix is converted into a 50 x 100 image using a 42 x 2 kernel (stride 1 x 2). The calculation costs of the auto-encoder and the auto-decoder are 302,652 and 7,323,372 product-sum operations, respectively.

The cascade product of F-parameters with edge models (triangles in Fig. 8) exactly traces the frequency response obtained by the EM simulation of the entire geometry (lines). On the other hand, the product without edge models shows a discrepancy. This result shows that the accuracy with edge models is comparable to that of the EM simulation, even though the calculation takes 2 msec while the EM simulation takes 1.5 hours.

Figure 9 shows a part of the dataset. The dataset contains pairs of a randomly generated image and its frequency response, which is calculated accurately considering the edge models. Taking advantage of the high-speed calculation, 30,000 pairs are stored in the dataset.

[Fig. 13 plot: cost function and RMSE2 versus epoch (0 to 400).]
Fig. 13. Cost function and the square of the RMS error (RMSE2) as a function of epoch, while the input and output of the auto-decoder (Fig. 1) are frequency responses and their images (Fig. 9), respectively.
[Fig. 11 plot: validation RMS error versus dataset size (1k, 3k, 30k); batch size 64, 400 epochs, 80% of the dataset for training and 20% for validation.]
Fig. 11. RMS error of the auto-encoder with respect to the dataset size.
[Fig. 12: (a) input image (50 x 200 pixels, 10 mm x 40 mm) and unfolded geometry; (b) S21 magnitude versus frequency [GHz] for the dataset pair and the auto-encoder output.]
[Fig. 14 diagram: desired frequency response, auto-decoder (CNN), 50 x 200 pixel (10 mm x 40 mm) gray-scale image, binarization, auto-encoder (CNN), estimated S21 magnitude versus frequency [GHz].]

Figure 10 shows a learning curve of the auto-encoder. Figure 11 shows the root-mean-square (RMS) error with respect to the dataset size. For each size, 80% (20%) of the dataset is used for training (validation). The quantity of the dataset improves the neural network. Figure 12 shows an input image and frequency response from the dataset together with the output of the auto-encoder when this image is applied. This pair of image and frequency response is not used for training and is used only for validation. The result shows that the auto-encoder can estimate the frequency response from the input image. Figure 13 shows a learning curve of the auto-decoder.
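A minimal training loop consistent with the reported setup (batch size 64, 400 epochs, 80%/20% training/validation split) could look like the sketch below; the MSE cost function, the Adam optimizer, and the random placeholder tensors are assumptions, and Encoder refers to the sketch after Fig. 1.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, random_split

# Placeholders for the real dataset: (N, 1, 50, 200) binary geometries and (N, N_FREQ) responses
images = torch.rand(30000, 1, 50, 200).round()
responses = torch.rand(30000, 61)

dataset = TensorDataset(images, responses)
n_train = int(0.8 * len(dataset))                          # 80% training / 20% validation
train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
val_loader = DataLoader(val_set, batch_size=64)

model = Encoder()                                          # from the sketch after Fig. 1
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # optimizer choice is an assumption
loss_fn = nn.MSELoss()                                     # cost function assumed to be MSE (= RMSE^2)

for epoch in range(400):
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():
        rmse2 = sum(loss_fn(model(x), y).item() * len(x) for x, y in val_loader) / len(val_set)
    # rmse2 corresponds to the validation RMSE^2 plotted against epoch in Figs. 10 and 13
```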
Fig. 14. An example of a synthesized filter image (upper right) decoded from the desired frequency response (dashed line in the graph) and the frequency response (solid line in the graph) encoded from the binarized image (bottom right). The desired frequency response is fed to the auto-decoder and a quarter-sized filter image is synthesized. Since the auto-decoder outputs a gray-scale image, it is binarized using an image library such as OpenCV. The binarized image is fed to the auto-encoder and its frequency response is estimated. The estimated frequency response tends to differ from the desired one if it is physically difficult to realize, as in this example of a brick-wall filter. The auto-decoder and auto-encoder take 2.7 and 1.5 msec, respectively.
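The flow of Fig. 14 can be scripted roughly as follows; Encoder and Decoder refer to the (here untrained) sketches after Fig. 1, and Otsu thresholding via OpenCV is only one possible binarization.

```python
import cv2
import numpy as np
import torch

decoder, encoder = Decoder(), Encoder()       # trained models would be loaded here

desired = torch.rand(1, 61)                   # desired frequency response (placeholder)

with torch.no_grad():
    gray = decoder(desired)[0, 0].numpy()     # auto-decoder output: gray-scale filter image

# Binarize the gray-scale image, e.g. with OpenCV's Otsu threshold
img8 = (gray * 255).astype(np.uint8)
_, binary = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Feed the binarized image back to the auto-encoder to estimate its frequency response
x = torch.from_numpy((binary / 255.0).astype(np.float32))[None, None]
with torch.no_grad():
    estimated = encoder(x)                    # compare against the desired response
```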
Acknowledgements
This work was supported by the Japan Society for the Promotion of Science (JSPS) through a Grant-in-Aid for Scientific Research (No. 18K04155). This work was also supported by the VLSI Design and Education Center (VDEC), The University of Tokyo, in collaboration with Agilent Technologies.