# -*- coding: utf-8 -*-
"""
(beta) Building a Convolution/Batch Norm fuser in FX
*******************************************************
**Author**: `Horace He <https://github.com/chillee>`_

**Translator**: `kozeldark <https://github.com/kozeldark>`_

In this tutorial, we are going to use FX, a toolkit for composable function
transformations of PyTorch, to do the following:

1) Find patterns of conv/batch norm in the data dependencies.
2) For the patterns found in 1), fold the batch norm statistics into the convolution weights.

Note that this optimization only works for models in inference mode (i.e. `model.eval()`).

We will be building the fuser that exists here:
https://github.com/pytorch/pytorch/blob/orig/release/1.8/torch/fx/experimental/fuser.py
"""

######################################################################
# First, let's get some imports out of the way (we will be using all
# of these later in the code).

from typing import Type, Dict, Any, Tuple, Iterable
import copy
import torch
import torch.fx as fx
import torch.nn as nn

######################################################################
# For this tutorial, we are going to create a model consisting of convolutions
# and batch norms. Note that this model has some tricky components - some of
# the conv/batch norm patterns are hidden within Sequentials and one of the
# BatchNorms is wrapped in another Module.

class WrappedBatchNorm(nn.Module):
    def __init__(self):
model.eval()

######################################################################
# Fusing Convolution with Batch Norm
# -----------------------------------------
# One of the primary challenges with trying to automatically fuse convolution
# and batch norm in PyTorch is that PyTorch does not provide an easy way of
# accessing the computational graph. FX resolves this problem by symbolically
# tracing the actual operations called, so that we can track the computations
# through the `forward` call, nested within Sequential modules, or wrapped in
# a user-defined module.

traced_model = torch.fx.symbolic_trace(model)
print(traced_model.graph)

######################################################################
# This gives us a graph representation of our model. Note that both the modules
# hidden within the sequential as well as the wrapped Module have been inlined
# into the graph. This is the default level of abstraction, but it can be
# configured by the pass writer. More information can be found at the FX
# overview https://pytorch.org/docs/master/fx.html#module-torch.fx
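To see this inlining concretely, here is a minimal sketch on a toy module of our own (`Tiny` is not the tutorial's model): after tracing, the Sequential's children appear as individual `call_module` nodes with qualified names.

```python
import torch.fx as fx
import torch.nn as nn

# A toy module (ours, for illustration) with a conv/bn pair hidden in a Sequential.
class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.seq = nn.Sequential(nn.Conv2d(1, 1, 1), nn.BatchNorm2d(1))

    def forward(self, x):
        return self.seq(x)

traced = fx.symbolic_trace(Tiny())
# The Sequential itself does not appear in the graph; each leaf module shows
# up as its own `call_module` node, named by its qualified path (`seq.0`, `seq.1`).
for node in traced.graph.nodes:
    print(node.op, node.target)
```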

####################################
# Fusing Convolution with Batch Norm
# ----------------------------------
# Unlike some other fusions, fusion of convolution with batch norm does not
# require any new operators. Instead, as batch norm during inference
# consists of a pointwise add and multiply, these operations can be "baked"
# into the preceding convolution's weights. This allows us to remove the batch
# norm entirely from our model! Read
# https://nenadmarkus.com/p/fusing-batchnorm-and-conv/ for further details. The
# code here is copied from
# https://github.com/pytorch/pytorch/blob/orig/release/1.8/torch/nn/utils/fusion.py
# for clarity purposes.
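The folding arithmetic described above can be checked numerically with a small sketch. The helper name `fold_bn_into_conv` is our own invention for illustration; in the tutorial, `fuse_conv_bn_eval` (below) and the elided `fuse_conv_bn_weights` play this role.

```python
import torch
import torch.nn as nn

def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    # In eval mode, bn(y) = gamma * (y - running_mean) / sqrt(running_var + eps) + beta,
    # a per-channel affine map that can be absorbed into the conv's parameters.
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding, bias=True)
    with torch.no_grad():
        # One scale factor per output channel.
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
        fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        conv_b = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
        fused.bias.copy_((conv_b - bn.running_mean) * scale + bn.bias)
    return fused
```

A quick sanity check: build a conv and a batch norm, run one training-mode pass so the running statistics are populated, switch both to eval mode, and the fused conv matches `bn(conv(x))` up to floating-point error.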
def fuse_conv_bn_eval(conv, bn):
    """
    Given a conv Module `A` and a batch_norm module `B`, returns a conv
    module `C` such that C(x) == B(A(x)) in inference mode.
    """
    assert(not (conv.training or bn.training)), "Fusion only for eval!"
    fused_conv = copy.deepcopy(conv)

####################################
# FX Fusion Pass
# ----------------------------------
# Now that we have our computational graph as well as a method for fusing
# convolution and batch norm, all that remains is to iterate over the FX graph
# and apply the desired fusions.

def _parent_name(target: str) -> Tuple[str, str]:
    """
    Splits a qualname into parent path and last atom.
    For example, `foo.bar.baz` -> (`foo.bar`, `baz`)
    """
    *parent, name = target.rsplit('.', 1)
    return parent[0] if parent else '', name

def fuse(model: torch.nn.Module) -> torch.nn.Module:
    model = copy.deepcopy(model)
    # The first step of most FX passes is to symbolically trace our model to
    # obtain a `GraphModule`. This is a representation of our original model
    # that is functionally identical to our original model, except that we now
    # also have a graph representation of our forward pass.
    fx_model: fx.GraphModule = fx.symbolic_trace(model)
    modules = dict(fx_model.named_modules())

    # The primary representation for working with FX are the `Graph` and the
    # `Node`. Each `GraphModule` has a `Graph` associated with it - this
    # `Graph` is also what generates `GraphModule.code`.
    # The `Graph` itself is represented as a list of `Node` objects. Thus, to
    # iterate through all of the operations in our graph, we iterate over each
    # `Node` in our `Graph`.
    for node in fx_model.graph.nodes:
        # The FX IR contains several types of nodes, which generally represent
        # call sites to modules, functions, or methods. The type of node is
        # determined by `Node.op`.
        if node.op != 'call_module':  # If our current node isn't calling a Module then we can ignore it.
            continue
        # For call sites, `Node.target` represents the module/function/method
        # that's being called. Here, we check `Node.target` to see if it's a
        # batch norm module, and then check `Node.args[0].target` to see if the
        # input `Node` is a convolution.
        if type(modules[node.target]) is nn.BatchNorm2d and type(modules[node.args[0].target]) is nn.Conv2d:
            if len(node.args[0].users) > 1:  # Output of conv is used by other nodes
                continue
            conv = modules[node.args[0].target]
            bn = modules[node.target]
            fused_conv = fuse_conv_bn_eval(conv, bn)
            replace_node_module(node.args[0], modules, fused_conv)
            # As we've folded the batch norm into the conv, we need to replace all uses
            # of the batch norm with the conv.
            node.replace_all_uses_with(node.args[0])
            # Now that all uses of the batch norm have been replaced, we can
            # safely remove the batch norm.
            fx_model.graph.erase_node(node)
    fx_model.graph.lint()
    # After we've modified our graph, we need to recompile our graph in order
    # to keep the generated code in sync.
    fx_model.recompile()
    return fx_model

######################################################################
# .. note::
#       We make some simplifications here for demonstration purposes, such as only
#       matching 2D convolutions. View
#       https://github.com/pytorch/pytorch/blob/master/torch/fx/experimental/fuser.py
#       for a more usable pass.

######################################################################
# Testing out our Fusion Pass
# -----------------------------------------
# We can now run this fusion pass on our initial toy model and verify that our
# results are identical. In addition, we can print out the code for our fused
# model and verify that there are no more batch norms.

fused_model = fuse(model)

######################################################################
# Benchmarking our Fusion on ResNet18
# ----------
# We can test our fusion pass on a larger model like ResNet18 and see how much
# this pass improves inference performance.
import torchvision.models as models
import time
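The `benchmark` helper used below is elided here. A minimal sketch of such a timing helper (the name `benchmark_sketch`, the explicit `inp` argument, and the exact protocol are our assumptions) might be:

```python
import time

import torch

def benchmark_sketch(model, inp, iters=20):
    # Warm up once so any lazy initialization isn't timed, then time
    # `iters` forward passes without autograd overhead.
    with torch.no_grad():
        model(inp)
        begin = time.time()
        for _ in range(iters):
            model(inp)
    return time.time() - begin
```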

print("Unfused time: ", benchmark(rn18))
print("Fused time: ", benchmark(fused_rn18))

######################################################################
# As we previously saw, the output of our FX transformation is
# (Torchscriptable) PyTorch code, so we can easily `jit.script` the output to
# try and increase our performance even more. In this way, our FX model
# transformation composes with Torchscript with no issues.

jit_rn18 = torch.jit.script(fused_rn18)
print("jit time: ", benchmark(jit_rn18))

############
# Conclusion
# ----------
# As we can see, using FX we can easily write static graph transformations on
# PyTorch code.
#
# Since FX is still in beta, we would be happy to hear any
# feedback you have about using it. Please feel free to use the
# PyTorch Forums (https://discuss.pytorch.org/) and the issue tracker
# (https://github.com/pytorch/pytorch/issues) to provide any feedback
# you might have.